Dataset schema:
- context: string (lengths 250 to 4.36k)
- A: string (lengths 250 to 4.85k)
- B: string (lengths 250 to 4.12k)
- C: string (lengths 250 to 3.69k)
- D: string (lengths 250 to 8.2k)
- label: string (4 classes)
Some parents (21%, N=4) and teens (32%, N=6) said they would not use the communication features at all because they already use other messaging tools (e.g., texting from a messaging app) to communicate. Using a separate app would be redundant and unnecessary.
Although teens did not monitor their parents’ app usage, they were often the providers of tech support in their families. When parents and teens discussed who was the most tech-savvy person in their family or whom they would consult regarding their app safety concerns, the pairs frequently mentioned gener...
The feature that garnered the most discussion among parents and teens was the ability to hide or show apps to one another. Overall, both parents and teens were concerned about this feature because it promoted secrecy and negated some of the purpose behind the app. It is important to note that when we a...
While most of the families were reluctant to use the communication features, about a quarter of parents (N=5, 26%) and teens (N=4, 21%) mentioned that these features could be useful to initiate discussion about app safety and data privacy. For example, parent P14 felt that this would help their entire family make bette...
Overall, we found that most parents and teens made few considerations toward their own online safety or privacy when installing new apps or granting permissions to the apps they installed (RQ1). Meanwhile, parents often manually monitored the apps their teens installed but gave little thought to the permissions granted...
C
Figure 12: Sample points of the cellular tower dataset (grey dots) and significant loops under subsample bootstrapping for different filtrations (red hollow circles and blue crosses). The black rectangle, which contains two holes detected by RDAD, is blown up and shown on the right subplot.
The most commonly used filtration is the distance filtration. While it can identify clean global topological signals, it is less useful for small and noisy features. To overcome this, multiple alternatives have been suggested. In this subsection, after briefly discussing the distance filtration, we review Bell et al.'s ...
The homology class picked up by the distance-to-measure filtration is a large sparsely populated area with few cellular towers if any. Those picked up by the RDAD filtration are comparatively smaller regions with an abrupt drop in density. The distance-to-measure filtration fails to pick up the smaller homology classes...
The two filtrations pick up completely different homology classes. The class picked up by the distance-to-measure filtration is near Steens Mountain Wilderness in Oregon. The three classes picked up by the RDAD filtration are Lake Michigan; Dallas, Texas; and the Texan region surrounded by Houston, Austin, and San Antonio....
We also apply our method to real data. The distance-to-measure filtration and the RDAD filtration are applied to an open dataset HIFLD21_cellurlar_towers of cellular tower locations recorded by the Federal Communications Commission (FCC). The two filtrations reveal uninhabited regions in the United States and regions...
D
Considering the semantic relations among AUs, some works (Wang et al. 2013; Walecki et al. 2017) make efforts in modeling such relations via probabilistic graphical models or graph neural networks. Wang et al. (Wang et al. 2013) introduced a restricted Boltzmann machine to model facial action units, thereby capturing n...
This is known as the subject variation problem, which makes it challenging for AU recognition models to generalize across subjects. Although previous works have noticed that the subject variation problem exists in the facial action unit recognition task, as far as we know, there have been few works focusing on answering the whys a...
Considering the semantic relations among AUs, some works (Wang et al. 2013; Walecki et al. 2017) make efforts in modeling such relations via probabilistic graphical models or graph neural networks. Wang et al. (Wang et al. 2013) introduced a restricted Boltzmann machine to model facial action units, thereby capturing n...
As for the subject variation problem, works such as (Chen et al. 2013) provide a solution for enhancing the generalizability of AU recognition models by training personalized AU classifiers for each subject, and works such as (Zen et al. 2016; Wang and Wang 2018) attempt to relieve the subject-related prediction bias t...
So far, we can see that the prediction bias of AU recognition is mainly caused by the differences among subjects’ customs of expressing emotions. The subject can essentially be regarded as a confounder, which misleads the AU recognition model to learn subject-specific AU semantic relations from subjects in the training data an...
C
BBP accelerates block propagation by generating consistent PPB among different nodes, proving advantageous in a network without malicious nodes, as demonstrated in Section 5.3.1. However, in the practical blockchain network, three types of potential attacks may interfere with the PPB generation to offset the benefit: ...
According to the measurement result, $p_f$ fluctuated within the range [0.053, 0.067], with an average value of 0.059. Additionally, the block interval was 13-15 s during this period, with an average value of 14 s. Using eq. (3), we compute that the 9...
BBP accelerates block propagation by generating consistent PPB among different nodes, proving advantageous in a network without malicious nodes, as demonstrated in Section 5.3.1. However, in the practical blockchain network, three types of potential attacks may interfere with the PPB generation to offset the benefit: ...
To validate the BBP robustness in the network with these attacks, we measure the proportion of non-synchronized PPBs for BBP and compare 90% block propagation times for BBP and BHP under the testbed network with various malicious nodes. Note that 90% block propagation time for BHP is only counted under the network with...
The experimental results are depicted in Fig. 7: (a) the non-synchronized PPBs and (b) 90% Block Propagation Time. From Fig. 7, it is evident that the average proportion of non-synchronized PPBs, in the absence of malicious nodes, is approximately 3.5%. Since 90% node propagation is good enough, this 96.5% proportion ...
C
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
In the context of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ...
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2...
C
There are some limitations in our study which are worth mentioning. We relied on three databases: ACM DL, IEEE Xplore, and Scopus; therefore, we may have missed relevant papers published in other databases. Another limitation is the inapplicability of quality appraisal methods such as the "Risk of Bias Assessment" in ou...
Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be due to the fact that articulation disorders are commonly found in persons with other SSDs (Flipsen Jr, 2015). The results show that most studies aim...
Researchers worldwide have amplified the debate between autonomy and human control due to the risks and concerns associated with AI and large-scale automation (Shneiderman, 2020). In this area concerning automation and AI in speech therapy, we studied the level of autonomy achieved by AI-based automated speech...
This systematic literature review was based on the PRISMA Statement to analyze papers on AI-based automated speech therapy tools for persons with SSD. We extracted relevant data from the included articles based on four predefined research questions: Types of SSD addressed; Level of autonomy achieved by such tools; Mode...
We conducted this systematic literature review based on a sample of 24 out of 678 research papers drawn from the Scopus, IEEE Xplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy to...
C
Our outlier detection is a simple process inspired by compression ratio. Intuitively, any data point that belongs to an underlying community should have large compression ratios with many points from the same community, whereas it will have a lower compression ratio with respect to inter-community points. On the other hand, outl...
This gives us initial theoretical evidence that in the random-mixture model with outliers, our simple outlier detection method can detect outliers when a non-negligible fraction of the points are outliers. Next, we use simulations of our model to test the efficacy of our outlier detection method and its impact on the ...
Then, our intuition is that if the data consist of many points from the high-dimensional mixture model, as well as several outlier points that don’t share a common signal (center), the outliers have a lower variance of compression ratio. We concretize this notion with the following simple detection algorithm (Algorithm 1).
We analyze this simple algorithm in an extension of the standard random vector mixture model. We also compare our algorithm with popular algorithms such as the Local Outlier Factor (LOF) method [BKNS00] and KNN-dist [RRS00] as well as more recent methods such as Isolation forest [LTZ08] and ECOD [LZH+22] through both s...
Outlier detection has been an active area of study in unsupervised learning, providing several influential algorithms. In a recent, comprehensive benchmarking of outlier detection algorithms, [HHH+22] compared the performance of several unsupervised learning algorithms on different datasets. They found that for unsuper...
C
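The variance-of-compression-ratio intuition above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's Algorithm 1: we assume zlib as the compressor and a simple pairwise ratio, and the names (`ratio`, `variance_scores`) are ours.

```python
import zlib
from statistics import pvariance

def ratio(x: bytes, y: bytes) -> float:
    # Pairwise compression ratio: noticeably above 1 when x and y share
    # structure, close to 1 when they do not.
    return (len(zlib.compress(x)) + len(zlib.compress(y))) / len(zlib.compress(x + y))

def variance_scores(points):
    # A community point compresses well with its peers and poorly with
    # outliers (high variance of ratios); an outlier compresses poorly
    # with everyone (low variance). Low scores flag outliers.
    scores = []
    for i, p in enumerate(points):
        rs = [ratio(p, q) for j, q in enumerate(points) if j != i]
        scores.append(pvariance(rs))
    return scores
```

On synthetic data with one community of near-identical strings plus incompressible random points, the random points receive the smallest scores.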
We formulate the dialog with $N_R$ rounds of QA interactions. Specifically, given the visual input data with partially missing visions, the AI system is given $N_R$ chances to ask ...
The results for the generated scene graphs from incomplete images incorporated with the proposed SI-Dial are also presented in Table 1. We observe that dialog can serve as an effective supplementary information source in case of missing visions, especially for the semantically masked visual input with the most severe missi...
Fig. 1: The overall architecture of our proposed SI-Dial framework. We first obtain the preliminary objects from the object detector based on the incomplete visual input, and propose to conduct an interactive dialog process. Note that the dashed lines denote the operations performed only after the dialog is completed, for the f...
Overall, our proposed SI-Dial takes the preliminary object representations (i.e., node and edge features from the object detector) as input, and outputs the updated representations with supplementary information incorporated from the dialog interactions:
Having obtained the interactive dialog $x_{his,N_R}$ as the supplementary source to the missing visual input, we update the preliminary objects $O'$ ...
C
In the one-dimensional facility location problem, agents are located on the real line, and a planner’s goal is to build one or more facilities on the line to serve the agents. The cost of an agent is her distance to the nearest facility. The problem asks to locate facilities to minimize the total cost of all agents (th...
In the one-dimensional facility location problem, agents are located on the real line, and a planner’s goal is to build one or more facilities on the line to serve the agents. The cost of an agent is her distance to the nearest facility. The problem asks to locate facilities to minimize the total cost of all agents (th...
The position of each agent is private information. We want to design strategyproof mechanisms that guarantee that the agents report their true positions and locate the facilities based on the reports such that either the total or the maximum cost approximates the optimal value of the corresponding optimization problem ...
A key conversion in the models is that now each agent becomes strategic and may misreport her position to decrease her cost. These new problems are called the facility location games, which require to design mechanisms that elicit the true positions of agents and output facility locations to (approximately) minimize th...
We use the same entrance fee function as that in the proof of Theorem 10, except that we set the entrance fee at the location of agent $0$ to $0$. Then, in order to get an approximation ratio less than or equal to $3$, one of the two facilities must always be located at the position of the new agent $0$ with probability $1$. T...
C
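For intuition on why strategyproofness and (approximate) optimality can coexist in this setting: with a single facility and the total-cost objective, the classical median mechanism achieves both exactly. This is a textbook sketch, not a mechanism from this paper; the function names are ours.

```python
def median_mechanism(reports):
    """Place one facility at the (left) median of the reported positions.

    The median minimizes the total distance cost, and no agent can move
    the facility closer to herself by misreporting, so truth-telling is
    a dominant strategy.
    """
    xs = sorted(reports)
    return xs[(len(xs) - 1) // 2]

def total_cost(facility, positions):
    # Sum of agents' distances to the nearest (here, only) facility.
    return sum(abs(p - facility) for p in positions)
```

For example, with agents at 0, 2, and 10, the facility is placed at 2; the agent at 10 cannot pull it toward herself by exaggerating her position.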
There is some more bibliography to add to the already vast literature [3, 10, 23, 31, 33, 35] on dominating induced matchings. As we have seen before the papers on perfect edge domination are less frequent. There is a paper [16] where the authors describe ILP formulations for the PED problem, together with some experim...
This technique does not work for our class under consideration since this operation adds 3 new vertices of degree 2 and they are not part of triangles. Instead of this operation we propose other alternatives to avoid induced cycles of size at most $k$ for any $k$. But all of them come with some cost. ...
We say that $G=(V,E)$ is a neighborhood star-free graph, NSF graph for short, if for every vertex $v\in V$ with degree at least 2, $G[N[v]]$ is not a star. In other words, every ...
We consider in this work five variants of the 1in3SAT problem, two of them are well-known NP-Complete problems, and we prove that the other three are NP-Complete. Then, we reduce them to the existence of DIMs on some subclasses of connected NSF graphs.
The polynomial reduction given in [30] transforms connected instances of PLANAR POSITIVE 1in3SAT to connected instances of CUBIC PLANAR POSITIVE 1in3SAT. The NP-Completeness of this variant is guaranteed by the NP-Completeness of CONNECTED CUBIC PLANAR POSITIVE 1in3SAT and the correctness of this reduction.
C
$\tfrac{1}{2}v^{\top}Hv + x^{\top}Pv$
Since the cost function in (15) is not strictly convex in $t^{(h)}$, it is not immediately obvious whether its parametric solution is single valued anywhere. However, we note that the objective is strictly convex over that part of its domai...
Under standard assumptions the problem (5) will always be feasible given the definition of a CLF, and its optimal solution will be unique. Unlike (4), however, it is not immediately obvious whether the controller $\Phi(\cdot)$ defined by (5) will be PWA. We will prove in §4 that this remains the case....
The function $\Phi_{\mathrm{NN}}(\cdot)$ is a continuous PWA mapping [42], but direct computation of its Lipschitz constant using (6) is generally not practicable since it would require the explicit description of its PWA representation as in Definit...
Note that standard methods for proving that the parametric solution of (10) in $x$ is PWA continuous cannot be applied because the right hand side $S(x)$ of the inequalities is not affine. We can be sure, however, that the problem has a solution for any $x\in\mathcal{S}$ ...
B
One disadvantage of the Hamiltonian is that canonical coordinates need to be used. To eliminate this constraint, other works use the Lagrangian to model the energy of the system. Since this formalism is more complex, [31] and [60] restrict the Lagrangian to the case of rigid-body dynamics to model systems with multipl...
Another problem of many previous approaches is that they do not allow for interpretation of individual learned system parameters. For example, [18] learns dynamics in the form of a general PDE in a latent space, which, like the aforementioned works based on energy functions, prohibits interpretation of the learned phy...
For example, [16, 10] and [53] use a neural network to parameterize the Hamiltonian of a system, which relates the total energy to the change of the state. This approach allows to infer the dynamics of systems with conserved energy, like an undamped pendulum. [48] augment the Hamiltonian by a learned Rayleigh dissipati...
While many approaches work with trajectories in state space, there are also several works that operate directly on videos. In this case, the information about physical quantities is substantially more abstract, so that uncovering dynamics from video data is a significantly more difficult problem. In their seminal work ...
Several of the previously mentioned works model physical systems using Lagrangian or Hamiltonian energy formulations [31, 16, 11, 10, 53, 59, 29, 60], or other general physics models [27]. While these are elegant approaches that allow the model to adapt to different physical systems, they have two drawbacks. First, ...
A
Nevertheless, with the application of quantum $k$-means clustering, which extracts only the pertinent semantic concepts for communication, the required amount of communication resources is significantly reduced, for instance, by approximately 50% at $|\mathcal{X}|=70$...
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi...
In Figure 3, we show the quantum semantic fidelity achieved against the amount of quantum communication resources used for $|\mathcal{X}|=500$. At low noise, to achieve a quantum semantic fidelity of 0.7, QSC requires around 50% quantum communicatio...
First, in Figure 2, we compare the quantum communication resources needed for QSC and semantic-agnostic frameworks. We observe that as the data traffic increases (represented by $|\mathcal{X}|$), the amount of semantic concepts extracted will increase, which causes the monotonic increase ...
Nevertheless, with the application of quantum $k$-means clustering, which extracts only the pertinent semantic concepts for communication, the required amount of communication resources is significantly reduced, for instance, by approximately 50% at $|\mathcal{X}|=70$...
B
To deal with the enforcement of concealability, we assume that the considered system is unconcealable in the remainder of this paper. In this section, we first introduce a defensive function to manipulate actual observations generated by the system in order to enforce concealability.
If concealability of the system does not hold, then we deal with the problem of concealability enforcement. The notion of $C$-enforceability characterizes whether an external defensive function has the capability to use an obfuscation strategy that manipulates the outputs generated by the system such that the ...
We are interested in hiding from an external observer (a curious eavesdropper) confidential information of the system that is represented as the occurrences of events from $E_S$, which is called the secret event set. Accordingly, the privacy of the s...
The defensive function proposed in this section can alter observable output events of the system $G$ by deletions, insertions, or replacements. The problem of enforcing concealability of the system aims to determine whether the defensive function is $C$-enforcing, i.e., given constraints in terms of h...
Then, the notion of $C$-enforceability is introduced, which characterizes the ability of a defensive function to manipulate the observations of output events such that the occurrences of secret events can be concealed from the eavesdropper regardless of system activity.
D
(2) [Lines 7-10] When the timer expires (and the warm-up period is concluded), $K$ consecutive policy function updates are performed. (3) [Lines 11-12] Using the updated policy, the agent $i$ interacts with the environment to collect the experience trajectory data and computes the values of associated...
A basic training cycle consists of the following steps, as shown in Fig. 5(a). The agents collect experience data by performing network operations under the existing policies, and send these data together with some local policy information to the central entity. The central entity puts the received data in the replay b...
The time between two updates (i.e., update-timer interval) directly impacts the convergence speed, but it is potentially arbitrary. Indeed, it can be set according to the length of an arbitrary episode, the time to process an update, or the replay-buffer sampling factor to generate a mini-batch. However, to strike a b...
The training process consists of a sequence of update instants, in each of which multiple DNN updates are performed by sampling $K$ random mini-batches from the replay buffer and updating the DNNs’ weights accordingly. Moreover, an initial transient period is needed to reach a steady state (i.e., stationary buff...
Once convergence is reached (e.g., after a maximum number of steps or when only minimal NN weight updates are performed), the training procedure stops and distributed agents continue the interaction with the environment based on local observations and fixed local policies. However, the training procedures can be re-acti...
B
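The cycle described above (warm-up until the replay buffer can supply a mini-batch, then K consecutive updates per timer expiry) can be sketched as follows. This is a generic illustration under our own naming (`ReplayBuffer`, `training_cycle`), not the paper's implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # Oldest transitions are evicted once capacity is reached.
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # Draw one random mini-batch for a single DNN update.
        return random.sample(list(self.buf), batch_size)

    def __len__(self):
        return len(self.buf)

def training_cycle(buffer, K, batch_size, update_fn):
    # Called when the update timer expires: perform K consecutive
    # policy updates, each on a fresh random mini-batch.
    if len(buffer) < batch_size:
        return 0  # still in the warm-up period
    for _ in range(K):
        update_fn(buffer.sample(batch_size))
    return K
```

Here `update_fn` stands in for one gradient step on the policy/value networks; the timer logic itself lives outside this sketch.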
On the other hand, the principal receives negative utility for false positives: $u(\theta_0, L) < 0$ for all $L > 0$, and $u(\theta_0, L)$ ...
Note that although this menu is infinite, it can still be easily implemented. Since it is simple to verify whether or not a contract is incentive-aligned, the principal asks the agent for their proposed incentive-aligned contract and then proceeds with that contract, provided it is indeed incentive-aligned. For the age...
The principal’s expected utility depends on the distribution over agent types, $Q$, and the menu offered, $\mathcal{F}$. The principal controls the latter, but may not know the former. Thus, we will seek a menu that performs well for many distributions $Q$. Our main result in what follow...
principal needs only to screen out agents who know that their product is ineffective, which the incentive-aligned statistical contract does. Moreover, a larger menu that is incentive-aligned is more attractive to the agents, so to maximize participation the principal should offer the largest incentive-aligned menu poss...
Our study of hypothesis testing in the principal-agent model has forged connections between statistical inference and ideas from the economic theory of mechanism design. The primary conclusion of this work is that the principal who does not know the distribution of agent types should deploy the menu of all $e$...
B
Here, the limitations of PPIR(FHE)-v1 on the bandwidth size are even more evident than in the affine case, since the bandwidth increases according to the number of parameters. This result gives a non-negligible burden to $party_1$ ...
Fig. A6: Qualitative results for cubic splines registration with SSD between 2D medical images. The red frame is the transformed moving image using Clear+GMS registration. The green and yellow frames are the transformed images using PPIR(MPC)+GMS and PPIR(FHE)-v1+GMS, respectively.
Brain MRI data and whole body PET data: non-linear registration (SSD). Table 3, comparing Clear and PPIR(MPC), PPIR(FHE)-v1 and v2, showcases the metrics resulting from spline-based non-linear registration between grey matter density images without the application of gradient approximation. Additionally, the table incl...
Whole body PET data: affine registration (SSD). Table 2 compares Clear, PPIR(MPC), PPIR(FHE)-v1 and v2, showcasing metrics resulting from the affine transformation of whole-body PET images. Notably, registration through PPIR(MPC) yields negligible differences compared to Clear in terms of the number of iterations, int...
Incorporating gradient approximation for handling whole-body PET data leads to similar conclusions as for the experiments on brain data. Qualitative results, reported in Supplementary Figure A6, show negligible differences between images transformed with Clear+GMS, PPIR(MPC)+GMS, and PPIR(FHE)-v1+GMS.
D
Experimental results show that MEKD can effectively protect the privacy of local data and models in the cloud, and it performs well under either soft or hard responses. At the same time, MEKD has robust results in the case of limited query samples and out-of-domain data.
Knowledge Distillation (KD) is a widely accepted approach to the problem of model compression and acceleration, which has received sustained attention from both the academic and industrial research communities [15, 38, 47, 17]. The goal of KD is to extract knowledge from a cumbersome model or an ensemble of models, kno...
Knowledge Distillation (KD). Hinton et al. [19] propose an original teacher-student architecture that uses the logits of the teacher model as the knowledge. Since then, some KD methods regard knowledge as final responses to input samples [3, 31, 58], some regard knowledge as features extracted from different layers of ...
One can only guess at the mapping process by using the responses to the input samples, the features of different network layers, or the relations between features, treating them as knowledge to guide the training of the student model [57]. However, in the black-box KD problem, the internal responses or relations between layers of the te...
The first two aim to derive the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model. Feature-based and relation-based methods [24, 57], d...
B
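Response-based distillation in the sense of Hinton et al., which the passage contrasts with feature- and relation-based variants, reduces to matching temperature-softened output distributions. Below is a minimal numpy sketch of our own, simplified to a single example and omitting the usual hard-label term and T^2 gradient scaling:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax; subtracting the max is for stability.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=4.0):
    # KL divergence between softened teacher and student distributions:
    # the "knowledge" is the teacher's full response, not just its argmax.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student reproduces the teacher's logits and positive otherwise; a higher temperature T exposes more of the teacher's "dark knowledge" about non-target classes.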
The second reason is efficient compression of smooth functions. It is known that for functions with $m$ continuous derivatives, the $n$-th coefficient is $O(n^{-m})$ for both Chebyshev [MH02, Theorem 5.14] and ...
It is important to understand that Equation 3 implies that SNO only realizes a mapping between two functions given by truncated series. If one needs to compute these functions on a finer grid, Chebyshev or trigonometric interpolation should be used. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser...
Our solution to both of the outlined problems is to detach the process of function approximation (interpolation) from the mapping between functional spaces, and explicitly fix the highest possible resolution. To do that, for both the domain and codomain of the neural operator we consider functions represented by finite series of t...
First, the output of the neural operator is a neural network, i.e., essentially a black-box function. In classical numerical methods such as FEM [Cia02], spectral methods [Boy01] and others, the parametrization allows for extracting bounds on function, its derivatives, and any other local or global information in a co...
All these properties combined make trigonometric and Chebyshev polynomials especially suitable for PDE solutions by spectral methods [Boy01] and adaptive lossless computations with functions [Tre07]. The latter goal is fully realized in the Chebfun software (https://www.chebfun.org). Chebfun demonstrates that computati...
D
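The coefficient-decay claim is easy to check numerically: a smooth function's Chebyshev interpolant has rapidly decaying coefficients, while a function with a kink (|x|, which is only C^0) decays slowly. A small sketch using numpy's Chebyshev class; this is our illustration, not the paper's code:

```python
import numpy as np

def cheb_coeffs(f, deg):
    # Chebyshev-series coefficients of the degree-`deg` interpolant of f
    # on [-1, 1] (numpy interpolates at Chebyshev points internally).
    return np.polynomial.chebyshev.Chebyshev.interpolate(f, deg).coef
```

For the analytic `np.exp`, the degree-20 coefficient is already at floating-point noise level, whereas for `np.abs` it is still of order 1e-3, illustrating the O(n^-m) decay rate from the passage.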
It is a trade-off between reducing computation overhead and summarizing fine-grained spatial information, since a bigger kernel would reduce the feature size along with more informative representation; e.g., when the feature map size is reduced to $1\times 1$, it degrades to ignoring the spatial information and posing o...
Figure 2: Illustration of our framework. (a) Target-aware Transformer. Conditioned on the teacher feature and the student feature, the transformation map Corr. is computed and then applied on the student feature to reconfigure itself, which is then asked to minimize the $L_2$ ...
We address the conundrum by the proposed anchor-point distillation. As shown in Figure 2 (c), we summarize the local area to a compact representation, referred to as an anchor, that is representative of the semantics of the given area, forming the new feature map of smaller size. Since the new feat...
The result exhibited in Table 8 shows that the amount of distillation calculation is greatly reduced with the increasing pooling size. On the other hand, excessive pooling range would omit useful and informative representation and damage the performance. We also report the mean training time. All experiments are conduc...
It is a trade-off between reducing computation overhead and summarizing fine-grained spatial information, since a bigger kernel would reduce the feature size along with more informative representation; e.g., when the feature map size is reduced to $1\times 1$, it degrades to ignoring the spatial information and posing o...
The full dataset contains over 7,000 entries with 46 dimensions; after removing entries with missing values we have 948 unique foods, which we use for our experiments. We walk through how one might use ENS-t-SNE for exploratory data analysis. We first utilize a standard t-SNE projection to get a sense ...
Figure 7: ENS-t-SNE applied to the USDA Food Composition dataset. (a): The 3D embedding found by ENS-t-SNE, where each of the three classes have been separated. (b): The first projection corresponding to the water+lipids subspace. The blue and orange clusters (grains and vegetables, fruits and drinks) have been separat...
We use separate visual channels to encode the different types of clusters. Specifically, to show the original clusters for the first perspective, we use colors (blue and orange), for the second perspective, we use the shape (circles and squares), and for the third perspective, we use texture, filled and not filled; see...
Manually examining the clusters for human-interpretable meaning shows that the first cluster (red) contains almost entirely meats, while the second and third clusters (blue and orange) appear to have a lot in common, which is unexpected given the k-means results and the t-SNE plot.
The meats have been strongly clustered, with the red cluster in the top left. The blue and orange clusters, which were distinct in the previous projection, have been mixed, indicating that they have largely similar protein and vitamin components. Notably, there are two blue/orange clusters. One s...
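For readers who want to reproduce this kind of exploration, the clustering step can be sketched with a minimal Lloyd's k-means (illustrative only; a standard library implementation would normally be used):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means: alternate nearest-center assignment
    and center recomputation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On well-separated groups (as with the meats cluster above) the assignment stabilizes after a few iterations.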
Related Work. Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational a...
In contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain significantly more challenging. Due to a lack of the Markov property, the low-dimensional feature of the observation at each step is insufficient for the prediction and control of the future (Sondik, 1971;...
Deep reinforcement learning demonstrates significant empirical successes in Markov decision processes (MDPs) with large state spaces (Mnih et al., 2013, 2015; Silver et al., 2016, 2017). Such empirical successes are attributed to the integration of representation learning into reinforcement learning. In other words, ma...
To learn a sufficient embedding for control, we utilize the low-rank transition of POMDPs. Our idea is motivated by the previous analysis of low-rank MDPs (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021). In particular, the state transition of a low...
We propose a novel policy optimization algorithm which leverages proximal causal inference for handling the confounding bias, and adopts pessimism to tackle the distributional shift. The core of our algorithm is a coupled sequence of confidence regions constructed via proximal causal inference and minimax estimation, w...
From a theoretical perspective, the identification result and the backward induction property of the bridge functions provide a way of decomposing the suboptimality of the learned policy in terms of statistical errors of the bridge functions. When combined with the pessimism and the fast statistical rates enjoyed by an...
Our work is closely related to the bodies of literature on (i) reinforcement learning in POMDPs, (ii) offline reinforcement learning (in MDPs), and (iii) OPE via causal inference. For a comparison, we summarize and contrast with the most related existing works in Table 1.
We prove that the proposed algorithm achieves n^{-1/2}-suboptimality under a partial coverage assumption on the offline dataset. We believe the novel algorithm design and analysis that leverage techniques from causal inference will be promisi...
In Section 4, we show that under some mild assumptions on the function classes 𝔹 and 𝔾, and under only a partial coverage assumption on the dataset 𝔻, the suboptimality (2.2) of Algorithm 1 decays at the fast statistical rate of ...
Additionally, if we only have local information about the constraint function c(x) (e.g., function evaluation and Jacobian) at any given point x, as in CUTEst benchmark nonlinear problems (cf. Section 5), the projection operator is not computa...
In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction in the merit function, which balances optimality and feasibi...
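The quadratic program at the heart of each SQP iteration can be sketched for the equality-constrained, deterministic case via its KKT linear system (a generic textbook sketch under the assumption that H is positive definite on the null space of J; `sqp_step` is a hypothetical name, not the paper's stochastic variant):

```python
import numpy as np

def sqp_step(g, H, J, c):
    """Solve the SQP subproblem
        min_p  g @ p + 0.5 * p @ H @ p   s.t.  J @ p + c = 0
    via the KKT system [[H, J^T], [J, 0]] [p; lam] = [-g; -c]."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # primal step p, multipliers lam
```

In the stochastic setting described above, g and H would be replaced by sample-based estimates before solving the same system.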
Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me...
To perform online statistical inference for Problem (1.1) without relying on projections, we draw inspiration from a growing body of recent literature in numerical optimization, which develops various stochastic sequential quadratic programming (StoSQP) methods for (1.1). SQP methods can be regarded as second-ord...
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal and Wright, 2006). This paper particularly considers solving constrained stochastic optimization problems vi...
Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family Q_k–Q_{k-1} on quadrilateral/hexahedral meshes focused on the case...
The present paper suffers from the same rather severe restrictions on hexahedral meshes in 3D as in previous work. The analysis of discrete inf-sup conditions for general hexahedral meshes remains an open problem. Another open problem is the analysis of isoparametric generalized Taylor-Hood families in 2D and 3D to co...
The discrete LBB condition could also be shown for the isogeometric generalized Taylor-Hood family, see [6], [7]. The proof there relies on a continuously differentiable parametrization of the domain Ω on each of a fixed number of patches, which does not cover general quadrilateral/hexahedral meshes.
In this paper we focus on the related generalized Taylor-Hood family Q_k–Q_{k-1} on quadrilateral/hexahedral meshes under the following assumptions.
We relate WaveMix to previous works in Section 2, where we delve further into the image priors modeled by various classes of neural architectures for vision, and the use of wavelet transform. Our key innovations – the WaveMix blocks, use of multi-level 2D-DWT in each block, channel mixing, and the preservation of featu...
Natural images have a number of priors that are not comprehensively exploited in any single type of neural network architecture. For instance, (1) convolutional neural networks (CNNs) only model shift-invariance using convolutional design elements [5, 6, 7, 8, 9, 10, 11, 12], (2) vision transformers (ViT) model long-ra...
What makes DWT an attractive tool for analysis of natural signals are its multi-resolution properties and treatment of spatio-temporally sparse discontinuities (edges). A 1D-DWT splits an input 1-D signal x of length H into two sub-bands roughly of length H/2 each [36]. The first one is ca...
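The sub-band split described above can be sketched with the Haar wavelet, the simplest DWT (an illustration only; WaveMix uses multi-level 2D-DWT, typically with other wavelet families):

```python
import numpy as np

def haar_dwt(x):
    """One level of the 1-D Haar DWT: split x (even length H) into
    approximation and detail sub-bands of length H/2."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass sub-band
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass sub-band
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x
```

Note that smooth regions of the signal produce near-zero detail coefficients, which is exactly the sparsity at edges that makes the DWT attractive.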
We performed ablation studies on WaveMix using the ImageNet-1k and CIFAR-10 datasets to understand the effect of each type of layer on performance: removing the 2D-DWT layer, or replacing it with a Fourier transform, random filters, or learnable wavelets. All of these led to a decrease in accuracy. Those methods tha...
Statistical properties of natural images that have been well-studied include shift-invariance, scale-invariance, high spatial auto-correlation, preponderance of certain colors, and spatial sparseness of edges [18, 19, 20, 21]. Shift-invariance, which is a form of stationarity of signals, arises due to the a...
$\dot{\Xi}_i$ only depends on $\bigcup_{\kappa=0}^{k+1}\Xi_{\kappa}$ ...

Recoverable fragments of the derivative bounds: $\left|\frac{\partial P}{\partial\xi}\,\middle|\,P\in\Sigma_{3};\ \xi\in\Xi_{3}\right|=\frac{1}{\cos(\beta)}$, a corresponding bound on $\left|\frac{\partial P}{\partial\xi}\,\middle|\,P\in\Sigma_{4};\ \xi\in\Xi_{3}\right|$ involving $\rho^{3}$, the quantity $\left(\frac{\partial P}{\partial\xi}\,\middle|\,P\in\Sigma_{2};\ \xi\in\Xi_{2}\right)$, and $\left|\frac{\partial P}{\partial x'}\,\middle|\,(P,x)\in\Sigma_{h_{0}}\times\Xi_{h_{0}-1}\right|$.
Multi-encoders and multi-decoders improve performance by combining alternative representations. Increasing the diversity and number of graph representations considered by a model is one way for researchers to improve model performance. Other impactful decisions are the quality of node embeddings learned through their ...
li2020graph learn the mapping between a heterogeneous graph representing the input problem, and an output tree. The graph does not consider mathematical elements of the text, and is instead constructed from two sources: a dependency parse tree representing relationships between words, and a constituency tree which cont...
Language models that transfer knowledge learned from auxiliary tasks rival models based on explicit graph representation of problem text. As a powerful alternative to encoding explicit relations through graphs, other work kim2020point; qin2021neural; liang2021mwp relies on pre-trained transformer-based models, and tho...
Increasing the number and diversity of text elements considered through graphs improves accuracy. Extracting and encoding dependency graphs between different aspects of word problems, and of mathematical text in general, is now common practice. Explicitly representing relationships between li...
To highlight this we consider the following comparison. shen2020solving and zhang2020graph each extract two graphs from problem text. One is a number comparison graph, and the other relates word-word pairs shen2020solving or word-number pairs zhang2020graph. They both encode two graphs rather than one heterogeneous gra...
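A number comparison graph of the kind used by these works can be sketched as a directed adjacency matrix (a guess at the construction for illustration; edge-direction conventions vary across papers):

```python
import numpy as np

def number_comparison_graph(nums):
    """Directed adjacency matrix: edge i -> j iff nums[i] > nums[j]."""
    v = np.asarray(nums, dtype=float)
    return (v[:, None] > v[None, :]).astype(int)
```

Such a graph lets a model reason about the relative magnitudes of the quantities mentioned in a word problem, independently of their surface order in the text.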
Another heterogeneous sub-collection of 23 networks. The 10 networks from dataset 157 (stream food webs from New Zealand) are divided between sub-collections B and D based on the type of ecosystem. The data from sub-collection B were collected in creeks, while those from sub-collection D were co...
The last sub-collection consists of 6 networks from various datasets. In the 7-block structure, the species of block 1 (represented in 4 of the 6 networks) prey on species from all other blocks with the exception of block 7. The basal species are separated between blocks 6 and 7 depending on w...
We obtain respectively Q̂_1 = 5 blocks for Martins, Q̂_2 = 3 blocks for Cooper and Q̂_3 = 4 ...
A small sub-collection of 6 networks with density ranging from .06 to .11. All networks are represented in 5 or 6 of the 7 blocks, including the first three blocks. The sub-collection consists of 3 of the 5 networks of dataset 48, the separation being based on the collecting s...
3D Virtual World: If we think of the real-world photos as the result of the interactions of mechanisms, such as foreground and background objects, lighting conditions, camera attributes, etc., then tasks based on real photos could also be tackled in the same manner as in this work.
Compositionality: In this work, covariate shift is introduced in the test set by intervening on only one mechanism (i.e. rotation). In the setting where multiple mechanisms are considered, it will be ideal if multiple E's could leverage the knowledge learned separately and cooperate with each other.
However, some preliminary results show that E's will not generalize well if the training is based only on interventions of the target mechanism while keeping the others fixed. This is in line with [26], where the generalization improves only if more combinations of two mechanisms (category and pose) are expose...
With the rapid development of computer graphics, photo-realistic synthetic datasets with 1) controlled interventions on target mechanisms and 2) automatic pixel-accurate annotations can be efficiently created with 3D rendering engines. As described in Section 3.1, if the mechanism is stable across both virtual and real...
Learner Architecture. Recall that our PU learner is parameterized in terms of an encoder g_B(·) with parameters B and a linear layer with parameters v. On PU-CIFAR(Animal/Vehicle) we perform experiment...
Contrastive Loss Baselines. We compare puNCE with several popular contrastive losses, including the unsupervised variants InfoNCE (Chen et al., 2020b) and Debiased Contrastive Learning (DCL) (Chuang et al., 2020), as well as variants that can leverage explicit supervision - Supervised Contrastive Learning (abbreviated Sup...
Contrastive learning with supervision. Supervised Contrastive Learning (SCL) (Khosla et al., 2020; Zhong et al., 2021; Graf et al., 2021; Assran et al., 2020) is a supervised variant of InfoNCE that considers multiple positive pairs from other samples belonging to the same class as the anchor, in addition to the aug...
To use the NNTuck to validate layer dependence in empirical multilayer networks, we define three likelihood ratio tests (LRTs) to test layer independence, layer redundancy, and layer dependence. Furthermore, we propose three methods for interpreting the third factor matrix of an NNTuck estimated for an empirical networ...
This work also lays the groundwork for diverse future work and applications. Given the observation in Section 6.2.4, that for many of the village multilayer networks we study the layers seem to be noisy observations from the same SBM, it would be interesting to explore how other models of network formation (e.g., the ...
If the layer dependence test determines that an empirical multilayer network has dependent layers, it is useful to investigate how they are related. In the examples in Section 3.2 above, the frontal slices of the deflated core tensor correspond exactly to the affinity matrix of one or more of the layers. As an example,...
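The correspondence between core-tensor frontal slices and per-layer affinity matrices can be sketched numerically (shapes and the exact factorization form here are assumptions based on the description above, not the paper's code):

```python
import numpy as np

# Hypothetical setup: U holds shared nonnegative node memberships, and each
# frontal slice G[:, :, l] of the core tensor plays the role of layer l's
# affinity matrix, so the expected adjacency of layer l is U @ G[:, :, l] @ U.T.
rng = np.random.default_rng(0)
n, k, L = 6, 2, 3                      # nodes, communities, layers
U = rng.random((n, k))                 # shared membership matrix
G = rng.random((k, k, L))              # core tensor, one frontal slice per layer
expected = np.einsum('ik,kql,jq->ijl', U, G, U)   # all layers at once
```

Inspecting how the slices `G[:, :, l]` differ across `l` is then one concrete way to interpret how the layers are related.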
In this network the test-AUC of the layer independent NNTuck is always higher than the test-AUC of either the layer dependent or layer redundant NNTuck. This performance difference indicates that the core tensor cannot be deflated without losing important information about the network’s layers. Interestingly, we observ...
In this work we use the nonnegative Tucker decomposition (NNTuck) with KL-divergence as an extension of the stochastic block model (SBM) to multilayer networks. The NNTuck allows for layers in the network to have latent structure, just as the SBM allows for latent structure in the nodes of a single-layer network. Usin...
To answer this kind of question, methods with reasoning mechanisms [5, 20, 30, 31, 23] have been proposed to infer the reasoning chains over KGs step by step. They typically commence with topic entities in given questions as anchors and try to infer reasoning chains by extending triples according to the semantics of th...
Baselines. We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task. Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods relying on complex reasoning mechanisms, we mainly choose reasoning-based QA methods as baselines:...
This extension is usually performed in two ways: label propagation from entities in the reasoning chain of the current step to other entities that can be added in the next step, and reinforcement learning-based decision-making on choosing entities to add. Given that the reasoning involves multiple steps and produces ex...
Note that QAGCN also outperforms the third-best method, NSM, by a large margin of 5.8%. This demonstrates that, on complex questions, the simple single-step reasoning of QAGCN can perform better than SOTA methods with complex multi-step label propagation.
A detailed analysis of the gate matrices in Figure 3 shows that the probability of measuring a state |1⟩ in the sum qubit q_2 is given by sin²(θ)cos²(φ) + cos²(θ)sin²(φ)...
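Assuming the two input qubits are independent with P(|1⟩) = sin²(θ) and sin²(φ) respectively, and the sum qubit records their XOR (half-adder semantics), the stated expression is exactly the classical probability that one and only one input measures |1⟩, which a quick numerical check confirms:

```python
import numpy as np

theta, phi = 0.7, 1.1                               # arbitrary rotation angles
p_a, p_b = np.sin(theta) ** 2, np.sin(phi) ** 2     # P(input qubit reads |1>)
p_sum = p_a * (1 - p_b) + (1 - p_a) * p_b           # exactly one input is |1>
closed_form = (np.sin(theta) ** 2 * np.cos(phi) ** 2
               + np.cos(theta) ** 2 * np.sin(phi) ** 2)
```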
and to avoid redundancy and get more information out of each dimension, more compact distributional vector embeddings are often preferred. The work by Alexander and Widdows alexander2022quantum details the method used to classify words from their vector embeddings using a quantum support vector machine (QSVM), and dem...
This is the case when the input rotations are X-rotations (nielsen2002quantum, §1.3.1). Using other fractional rotations as generators gives combinations with different algebraic properties, investigated more thoroughly in a mathematical paper by Widdows
When we say “I learned C++”, this indicates that a skill was acquired — so the verb learned in this case takes technologies as input and creates skills as output. It is easy to write down matrices that perform these operations (Table 3, right hand side).
This part of the process was run classically as a preprocessing step. The generated vectors were then used as input parameters for a feature map quantum circuit. Measurement of the feature map yields a value representing the relationship between two word vectors, which is stored in a kernel matrix (havlicek2019supervis...
Graph restructuring and rewiring.  GDC (Klicpera, Weißenberger, and Günnemann 2019) is one of the first works to propose rewiring edges in a graph. It uses diffusion kernels, such as the heat kernel and personalized PageRank, to redirect message passing beyond direct neighbours. Chamberlain et al. (2021) and Eliasof, Haber,...
We propose an approach to enhance GNN performance on less-homophilic graphs by restructuring the graph to maximize homophily. Our method is inspired by and closely related to Spectral Clustering (SC). It extends SC beyond the leading eigenvalues and learns the frequencies that are best suited to cluster a graph. To achie...
GNNs for heterophilic graphs.  Early GNNs assume homophily implicitly. Such an inductive bias results in degraded performance on less-homophilic graphs (Lim et al. 2021). Recently, homophily has also been shown to be an effective measure of a graph's robustness to both over-smoothing and adversarial attacks. Node repres...
We run GCN and SGC on the synthetic dataset with controlled homophily ranging from 0 to 1. The model performance versus homophily is plotted in Figure 4. As expected, a higher homophily level corresponds to better performance for both GCN and SGC. All models reach 100% accuracy where homophily is la...
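Edge homophily as controlled in such experiments is simply the fraction of edges whose endpoints share a label; a minimal sketch (`edge_homophily` is a hypothetical helper for illustration):

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose two endpoints carry the same label."""
    e = np.asarray(edges)
    y = np.asarray(labels)
    return float(np.mean(y[e[:, 0]] == y[e[:, 1]]))
```

A value near 1 corresponds to the homophilic regime where message-passing GNNs excel; values near 0 are the heterophilic regime discussed above.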
Now we propose an adaptive method for spectral clustering, which aligns the clustering structure with node labels by learning the underlying frequency patterns. This empowers us to restructure a graph to improve graph homophily while preserving the original graph structures as much as possible.
It is further asymptotically stable if lim_{t→∞} ‖α^t − α^{eq}‖ = 0 ...
Our results lay the groundwork for an investigation of the stochastic dynamics that occur for finite sample approximations to the risk or participation driven by decisions of individuals. Such behaviors are risk reducing in expectation, so we expect the noisy trajectories to converge with high probability to sets aroun...
For allocation dynamics like multiplicative weights, such configurations are clearly equilibria for any parameter choice Θ on the part of the learners. We thus consider the set of possible segmented equilibria and characterize which are asymptotically stable.
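A single multiplicative-weights allocation update can be sketched as follows (an illustrative textbook form; the paper's dynamics and the role of the parameters Θ may differ):

```python
import numpy as np

def mw_step(alpha, risks, eta=0.5):
    """One multiplicative-weights step on the simplex: exponentially
    down-weight higher-risk options, then renormalize."""
    w = np.asarray(alpha, float) * np.exp(-eta * np.asarray(risks, float))
    return w / w.sum()
```

Note that any allocation concentrated on a zero-risk option is a fixed point of this map, consistent with the segmented equilibria discussed above.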
Furthermore, characterizing stable equilibria sets the foundation for understanding high probability behavior of systems under noisy updates which are risk reducing only in expectation (Kushner, 1967). This sets the stage for finite sample risk minimization or multi-agent user models, a challenge which we leave to futu...
Second, it is a technically useful connection that will enable us to characterize and classify the stable equilibria for dynamics which are risk minimizing in the limit. We remark that Theorem 4.3 leaves open the question of stability for equilibria which are non-isolated minima of the total risk function.
In all of the cases above, we get an accurate result (up to tolerance γ) in the case of two labels. In the case of more than two labels, we get a heuristic result, since our minimization procedure is not guaranteed to converge to mindisc_β ...
In this section we report experiments on multiclass classifiers. In Section 9.2.1, we report experiments in which we calculate a lower bound and an upper bound on the unfairness of a classifier when confusion matrices are available, using the algorithm proposed in Section 7.2. In Section 9.2.2, we report experiments in...
In this section, we report experiments demonstrating the various algorithms proposed in this work, as well as the diversity of possible applications of bounding fairness from aggregate data. In Section 9.1, we report experiments with binary classifiers. In Section 9.2, we report experiments with multiclass classifiers....
We report experiments that demonstrate their success in various scenarios. Our experiments further demonstrate diverse applications of these tools for real-world scenarios, in applications such as election prediction, disease diagnosis, and analysis of access to health care. An implementation of the procedures propose...
We discuss related work in Section 2. The setting and notation are defined in Section 3. We extend the equalized odds criterion to multiclass classifiers in Section 4. In Section 5, we discuss lower-bounding the error using label proportions when the classifier is known to be fair. In Section 6, we discuss possible way...
ECG Heartbeats: This data set was used throughout this paper. It contains two top motif sets, namely calibration and heartbeats (Figure 9). We discuss only the top-1 motif, and our webpage shows the full results (k-Motiflets Source Code and Raw Results, 2022). Learn-l took 1.5 s, and learn-k took 0.5 s...
Given silver standard parameters, all competitor methods find the activation phase, e.g. the found motif set overlaps with the actual motifs, but with up to 100% larger extent. k-Motiflets and VALMOD are the only ones to identify both the activation phase as top-1 motif and the recovery phase as t...
Semi-Synthetic Data Sets with Gold Standard Labels: To measure the precision of the different MD methods we generated a semi-synthetic 25-dataset benchmark from (Dau et al., 2019) with implanted motif sets. For each method, we used the gold standard parameters as inputs, i.e. the size k for k...
The top-1 motif set found by the approximate k-Motiflets algorithm corresponds to the activation phase and the top-2 motif to the recovery phase. All methods find the activation phase, but with up to 100% larger extent. VALMOD and LM found the recovery phase, again with up to 100%...
Figure 14. Quality as a ratio of the extents of the top-1 motif sets of the approximate to the exact algorithm. Left: Boxplot of the ratio as a function of k ∈ [2, …, 9] with n = 10000. Right: Boxplot over fractions of the full length n, n′ ∈ [1/8, 1/7, …, 1...
(A2) The noises {v(k), ℱ(k), k ≥ 0} and {ξ(k), ℱ(k), k ≥ 0} are both martingale difference sequences and indepen...
Besides, we consider both additive and multiplicative communication noises in the process of the information exchange among nodes. All these challenges make it difficult to analyze the convergence and performance of the algorithm, and the methods in the existing literature are no longer applicable. For example, the met...
Different from [27]-[30], the measurement noises are only assumed to be a martingale difference sequence and independent of the graphs and regression matrices in Assumption (A2). In this paper, neither mutual independence nor spatio-temporal independence is assumed on the regression matrices and graphs. This is applica...
Here, by assuming that (i) the sequences of the graphs, regression matrices and noises are identically distributed, respectively; (ii) the mean graph is undirected; (iii) the sequences of regression matrices and noises are mutually independent, we obtain the above algorithm. In fact, even if the mean graphs are direct...
To overcome the difficulties mentioned above, we develop the nonnegative supermartingale inequality of the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs and algorithm gains, under which sufficient c...
$\delta_{k}\le\sum_{i=1}^{k}\lvert\hat{g}_{i}\rvert\,\lVert A\rVert_{2}\,\lVert\mathbf{x}^{i}-\mathbf{x}^{i-1}\rVert_{2},$
The following theoretical results provide error bounds on the affordable perturbation introduced into the least-squares problem by approximate calculations, ensuring that the residual of the AAR iterates on the linear system (7) decreases below a user-defined threshold ε.
In both cases, these approximate calculations could often be formulated as injections of errors in the residual terms R_k and r^k in the least-squares problems. T...
We provide rigorous theoretical bounds for AAR on linear fixed-point iterations that establish general guidelines to reduce the accuracy of the calculations performed by AAR while still ensuring that the final residual of the fixed-point scheme drops below a user-defined convergence threshold.
Guided by the theoretical bounds, we constructed a heuristic that dynamically adjusts the dimension of the projection subspace at each iteration. As the process approaches convergence, the backward error decreases, which allows for an increase of the inaccuracy of the least-squares calculations while still maintaining...
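For orientation, the projection step of plain Anderson acceleration (without the AAR alternation, the dynamic subspace heuristic, or the inexact least-squares studied here) can be sketched as:

```python
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    """Plain Anderson acceleration for the fixed point x = g(x):
    combine the last m iterates to minimize the residual in a least-squares
    sense. A sketch only; not the AAR variant analyzed in this paper."""
    xs = [np.asarray(x0, dtype=float)]
    fs = []
    for _ in range(iters):
        x = xs[-1]
        gx = g(x)
        fs.append(gx - x)                       # fixed-point residual
        if np.linalg.norm(fs[-1]) < tol:
            return x
        mk = min(m, len(fs))
        if mk == 1:
            x_new = gx                          # plain fixed-point step
        else:
            F = np.column_stack(fs[-mk:])
            dF = F[:, 1:] - F[:, :-1]           # residual differences
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            G = np.column_stack([g(xi) for xi in xs[-mk:]])
            dG = G[:, 1:] - G[:, :-1]
            x_new = gx - dG @ gamma             # extrapolated iterate
        xs.append(x_new)
    return xs[-1]
```

The least-squares solve on `dF` is exactly the place where, per the bounds above, controlled inaccuracy can be tolerated as the residual shrinks.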
The results are shown in Table 5. All models perform quite similarly in terms of ROUGE score in the oracle setup, while the best performance is achieved when tagging is combined with prepending, outperforming all the evaluated methods. We do not compute ROUGE scores in the non-oracle setup, as we lack a gold summary fo...
The experimental results reported in Table 2 show that topic control methods perform significantly better compared to the corresponding baseline methods that do not take into account the topic requested by the user. Furthermore, the proposed BART-based formulation significantly outperforms the topic-oriented PG approac...
In terms of STAS, in both setups prepending leads to much better results compared to token embeddings and tagging, which have similar scores. The best results are again obtained when tagging is combined with prepending. The high STAS score of the combined BARTpre+tag model in the non-oracle setup (70.09%) shows that t...
Even though the models have not seen the zero-shot topics during training, they can successfully generate topic-oriented summaries for these topics achieving similar results in terms of both ROUGE-1 score and STAS metric, with the BARTpre+tag method outperforming all the other methods. In addition, the results indicate...
The results are shown in Table 5. All models perform quite similarly in terms of ROUGE score in the oracle setup, while the best performance is achieved when tagging is combined with prepending, outperforming all the evaluated methods. We do not compute ROUGE scores in the non-oracle setup, as we lack a gold summary fo...
B
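The two conditioning mechanisms compared above, prepending the requested topic to the source and tagging topic-relevant sentences, can be illustrated on the input side. The function name `build_input`, the `<topic>` tag string, and the `=>` separator below are illustrative assumptions, not the paper's actual preprocessing.

```python
def build_input(topic: str, sentences, topic_sentence_ids):
    """Sketch of combining the two conditioning signals: prepend the
    requested topic to the source, and wrap sentences matched to that
    topic in tags (tag syntax is a made-up placeholder)."""
    tagged = [
        f"<topic> {s} </topic>" if i in topic_sentence_ids else s
        for i, s in enumerate(sentences)
    ]
    return f"{topic} => " + " ".join(tagged)

src = ["The team released a new model.", "Revenue grew 12%.",
       "The model tops benchmarks."]
print(build_input("research", src, {0, 2}))
```

A model fine-tuned on such inputs sees the topic both as a global prefix and as local markers, which is one plausible reading of why the combined variant helps.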
While the standard cells themselves – and the heuristics to design VLSI circuits with them – are valuable proprietary intellectual property for the large chipmakers (Intel, AMD, etc.) [8], open source options are available for academic and small-scale applications. Standard cell VLSI approaches thus enable the practic...
We introduce standard quantum cells, in the following also called tiles. Our method starts from the observation that qubit lattices, as well as the quantum circuit that will be compiled, are very regular structures (e.g. Fig. 2). We join tiles into designs that: a) represent the structure of the circuit to be compiled ...
In this work, we note the fact that quantum circuit tiles are naturally analogous to standard cells, and therefore propose that large-scale quantum circuit design apply a standard cell approach adapted from classical VLSI. We particularly note that NISQ devices are tilings of various polygons such as squares (Google S...
The optimal design of large-scale circuits – both quantum and classical – is not computationally tractable, necessitating the use of sub-optimal heuristics. Noting that the qubit layout of a quantum computer is generally regular and similar to a tiling, large-scale quantum circuit design challenges can be naturally ma...
Given the extreme complexity of VLSI circuit design, the use of “standard cells” has therefore become a mainstream technique for the efficient design of large-scale high-performance computing systems [5] within standard circuit design curricula [6]. In the conventional VLSI standard cell approach, a library of standar...
B
The parameters stacked in between the layers are shaped to match the size of the layer they are stacked onto. The reason we pass the parameters in each block is that if we only passed them in the first block, it would be difficult for the later blocks to retain them. This problem is somewhat similar to the degradation prob...
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p...
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ...
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low-dimensional representations of the i...
It can be observed that the default-to-param model achieves a higher mean PSNR and lower MAE compared to the param-to-param model. We believe this is due to the more complex nonlinearity associated with reparameterizing from an arbitrary parameter rather than from a fixed one.
A
In this paper, we develop structure-preserving EVNN schemes for simulating the $L^{2}$-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat...
In the numerical simulation, we take $\tau=0.01$ and use a 1-block ResNet with 20 nodes in each layer. The nonlinear activation function is chosen to be $\tanh$. The total number of parameters is 501. To achieve energy stability, we fix the training samples to eliminate the n...
Various numerical experiments are presented to demonstrate the accuracy and energy stability of the proposed numerical scheme. In our future work, we will explore the effects of different neural network architectures, sampling strategies, and optimization methods, followed by a detailed numerical analysis. Additionally...
In principle, the proposed numerical framework is independent of the choice of neural network architectures. However, different neural network architectures may lead to different numerical performances, arising from a balance of approximation (representation power), optimization, and generalization. In this subsection...
Given that the proposed numerical scheme employs neural networks with time-dependent parameters to approximate the solution of a gradient flow, there is no need to employ a deep neural network. In all numerical experiments for $L^{2}$-gradient flows, we...
B
So far, the main invariant that has been developed for multi-parameter persistence modules, efficiently implemented, and which enjoys the desired stability properties, is the fibered barcode [19] (which is equivalent to the rank invariant [6]). Roughly speaking, the fibered barcode of an $n$-parameter persisten...
One of the challenges of multi-parameter persistence is to provide a meaningful notion of distance between persistence modules which can be computed in a reasonable time complexity. Indeed, it has been shown that the usual interleaving distance between persistence modules is NP-hard to compute in the multi-parameter ca...
The fibered barcode has been successfully used in a variety of machine learning tasks as a summary of multi-parameter persistence modules [7]. Nevertheless, it is easy to build examples of $\gamma$-sheaves (hence persistence modules) with the same fibered barcodes (hence at matching distance zero) though they...
So far, the main invariant that has been developed for multi-parameter persistence modules, efficiently implemented, and which enjoys the desired stability properties, is the fibered barcode [19] (which is equivalent to the rank invariant [6]). Roughly speaking, the fibered barcode of an $n$-parameter persisten...
Nevertheless, there are several bottlenecks to the fibered barcode approach. First, it is easy to exhibit two persistence modules at matching distance zero but having arbitrarily large interleaving distance (see section 5.1). Second, computing and storing an entire $n$-parameter persistence module is time an...
D
As discussed, correct arc orientation is particularly important for CBNs. Besides the problem of identifying arc orientation in equivalent DAGs, another difficulty is that different MECs may have rather similar scores, and therefore the confidence in choosing one over the other is low. [23] therefore recommend consider...
We focus on networks with discrete categorical variables in this study since these are common in many domain areas such as healthcare, epidemiological data and survey data, for example. There is also a wide range of expert-specified discrete variable networks which provide a basis for making a structural evaluation of...
A second group of causes of inaccuracy relates to the assumptions that many algorithms rely on, and which frequently do not apply in the real world. These include assuming that there are no missing data values, latent confounders or measurement noise, as well as assumptions about the underlying statistical distributio...
We evaluate the effect of variable ordering on 16 discrete networks ranging from 8 to 109 variables. Many of these networks are widely used in the literature as case studies to evaluate structure learning algorithms. Table 1 lists them and their key characteristics. The Sports, Property, Formed and Diarrhoea networks a...
This study examines the impact that the arbitrary variable ordering within the dataset has on the accuracy of graphs learnt by commonly used structure learning algorithms using discrete categorical data. Whilst the importance of some aspects of variable ordering is well known, we are unaware of any other study that qu...
A
Additionally, the evaluation for OPT family models and LLaMA family models can be found in Appendix Table 6 and Table 7, respectively, providing a comprehensive overview. Latency measurements are conducted within the FasterTransformer framework, exploring different GPU configurations to assess potential speed-up gains ...
From our observations, we can conclude the following: 1) Reducing the group size ($g$) effectively decreases perplexity, even when employing a simple RTN quantization scheme, at the cost of a marginal increase in latency, 2) Increasing the number of GPUs (and, consequently, parallelism) does not significantly ...
As evidenced by the increase in the latency ratio of communication, such reductions in utilization indicate that some GPUs can be temporarily idle until all GPUs are synchronized. Accordingly, the speed-up that can be obtained by tensor parallelism is a lot smaller than the number of GPUs.
To address such a concern, researchers have proposed to use model parallelism, which distributes computations over multiple GPUs through GPU-to-GPU communication (Shoeybi et al., 2019; Narayanan et al., 2021). Nevertheless, it is worth noting that model parallelism introduces additional overheads, stemming from the int...
The results clearly indicate that LUT-GEMM provides lower latency as $q$ decreases, although an excessively small $g$ may have a marginal adverse impact on latency. All in all, by integrating LUT-GEMM and OPTQ at the expense of an acceptable increase in perplexity, it is possible to reduce the number ...
A
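The trade-off around the group size $g$ can be made concrete with a sketch of group-wise round-to-nearest (RTN) quantization: each group of $g$ consecutive weights per row shares one scale, so a smaller $g$ tracks local ranges better at the cost of storing more scales. This is illustrative only (a max-based symmetric scale, made-up dimensions), not the LUT-GEMM kernel itself.

```python
import numpy as np

def rtn_quantize_groupwise(W, bits=3, g=128):
    """Round-to-nearest quantization with group size g (a sketch).
    Returns the dequantized weights and the number of stored scales."""
    out_f, in_f = W.shape
    assert in_f % g == 0
    Wg = W.reshape(out_f, in_f // g, g)
    # one scale per group, chosen so the group max maps to the top level
    scale = np.abs(Wg).max(axis=-1, keepdims=True) / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(Wg / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return (q * scale).reshape(out_f, in_f), scale.size

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 256)).astype(np.float32)
for g in (256, 128, 32):
    Wq, n_scales = rtn_quantize_groupwise(W, bits=3, g=g)
    print(f"g={g:3d}: {n_scales} scales, mean |error| = "
          f"{np.abs(W - Wq).mean():.4f}")
```

Shrinking `g` raises `n_scales` (more metadata to fetch per GEMM, hence the marginal latency cost) while the per-group scales fit local weight ranges more tightly.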
$\mathrm{ts}_{G}(\tau)=\operatorname{dist}^{\mathrm{nh}}_{G^{\prime\prime}}(x^{\prime\prime})$
Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
As computing the rank of a divisor is NP-hard in general, it is natural to ask whether it can be approximated within any reasonable factor. Our main contribution is establishing a connection between computing the rank and finding a so-called minimum target set in an undirected graph, a central problem in combinatorial ...
The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert...
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Kráľ, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided...
C
N-Caltech 101 The N-Caltech 101 [43] dataset is also converted from the original version of Caltech 101 [44], with a slight change in object classes to avoid confusion. N-Caltech 101 consists of 100 object classes plus one background class. We apply the same 9:1 train-test split as for CIFAR10-DVS.
For the DVS128 dataset, we utilize the same network structure and hyper-parameters as in [30] and add the TCJA module before the last two pooling layers. The dropout (DP) [51] rate is set to 0.5 in accordance with the original network. We add a 1-D average pooling voting layer as the last layer, which yields a 10-dimens...
DVS128 Gesture The DVS128 Gesture [45] dataset is an event-stream dataset composed of 11 kinds of hand gestures from 29 subjects under three different illumination conditions, directly captured with the DVS128 camera. In this paper, we employ all 11 gesture categories for the purpose of classification.
To make the attention mechanism easier to understand, we finally visualize the output of the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset, as shown in Fig. 11. Changes in attention weights are primarily accumulated among channels, further verifying the substantial role performed by th...
Figure 11: Attention distribution between time step and channel. The top row is the weight from the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset. We select sparse and dense attention frames in both temporal-wise ($T=3,6$) and channel-wise ($C=33,77$) ...
B
$\langle\gamma\llbracket u-\Pi_{h}u\rrbracket,\llbracket u-\Pi_{h}u\rrbracket\rangle_{\mathcal{E}_{h}^{\circ}},$
$\lVert u-\Pi_{h}u\rVert_{K}^{2}\lesssim h_{K}^{2(s+\mu_{\min})}\lVert u\rVert_{s+1,1-\mu,K}^{2}.$
$\langle\gamma(u-\Pi_{h}u),(u-\Pi_{h}u)\rangle_{\partial K}\lesssim h_{K}^{2s}\lVert u\rVert_{s+1,1-\mu,K}^{2}.$
$\langle\gamma\llbracket u-\Pi_{h}u\rrbracket,\llbracket u-\Pi_{h}u\rrbracket\rangle_{e}\le 2\bigl(\langle\gamma(u-\Pi_{h}u),(u-\Pi_{h}u)\rangle_{e^{+}}+\langle\gamma(u-\Pi_{h}u),(u-\Pi_{h}u)\rangle_{e^{-}}\bigr),$
$\langle\gamma\llbracket u-\Pi_{h}u\rrbracket,\llbracket u-\Pi_{h}u\rrbracket\rangle_{\mathcal{E}_{h}^{\circ}},$
B
Algorithm 1 gives the overall synthesis procedure of FlashSyn. FlashSyn first collects initial data points to approximate the actions in $\mathbf{Act}$ (line 3), where FlashSyn uses the state $\mathsf{q}$ as a starting blockchain state. Then, using the sub-procedure Approximate, FlashSyn g...
FlashSyn then iterates over the generated actions vectors and uses some heuristics implemented in the sub-procedure IsFeasible to prune actions vectors (line 8). For instance, an actions vector containing two adjacent actions invoking the same method can be pruned to an actions vector where the two adjacent actions are...
Algorithm 1 gives the overall synthesis procedure of FlashSyn. FlashSyn first collects initial data points to approximate the actions in $\mathbf{Act}$ (line 3), where FlashSyn uses the state $\mathsf{q}$ as a starting blockchain state. Then, using the sub-procedure Approximate, FlashSyn g...
For 10 benchmarks, FlashSyn successfully discovers new profitable symbolic actions vectors that are different from the ground truths. These vectors either exploit the same vulnerability but in a different order of actions, or represent arbitrage opportunities that were not exploited by the original attackers. Fo...
the sub-procedure Construct constructs the optimization framework $\mathcal{P}$ for the actions vector (line 9). Then, FlashSyn uses the optimization sub-procedure Optimize (line 10) to find the optimal concrete values to pass as input parameters to the methods in the actions vector that satisfy the cons...
A
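The pruning heuristic described above (collapsing adjacent actions that invoke the same method) can be sketched as a single pass over an actions vector. Representing an action as a `(contract, method)` pair is our simplification; the real IsFeasible sub-procedure applies further feasibility checks.

```python
def prune_adjacent(actions):
    """Sketch of one IsFeasible-style pruning rule: adjacent actions
    invoking the same method are collapsed into a single action."""
    pruned = []
    for a in actions:
        if pruned and pruned[-1] == a:
            continue  # adjacent duplicate call: keep only one
        pruned.append(a)
    return pruned

vec = [("pool", "swap"), ("pool", "swap"),
       ("lender", "borrow"), ("pool", "swap")]
print(prune_adjacent(vec))
# → [('pool', 'swap'), ('lender', 'borrow'), ('pool', 'swap')]
```

Note that non-adjacent repeats survive: the final `swap` is separated by a `borrow`, so it may still be part of a profitable sequence.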
Bayesian RL algorithms such as [12; 10] sample MDPs from the true posterior, which requires knowing the true prior. In their comprehensive study, Simchowitz et al. [32] analyse a family of posterior-sampling based algorithms, termed $n$-Monte Carlo methods, under mis-specified priors, and bound the correspondin...
To the best of our knowledge, the only theoretical investigation of meta RL with finite training tasks in a similar setting to ours is the model free approach in [33], which we compare against in our work. Our theoretical analysis builds on ideas from the study of density estimation [29; 16] and dimensionality reductio...
We propose a model-based scheme for meta-RL with a finite sample of training tasks, where we first estimate the prior distribution of tasks, and train a Bayes optimal policy on the estimated prior. Using KDE for density estimation, we obtain state-of-the-art PAC bounds. Further, our approach can exploit low dimensional...
The sample complexity in the general bound of Theorem 6 grows exponentially with the dimension of the parameter space $\Theta$. In many practical cases, however, such as the HalfCircle domain of Example 3, there may be a low dimensional representation that encodes most of the important information in the tasks, e...
To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. [39]. In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural networ...
A
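The first stage of the scheme above, estimating the task prior from a finite sample of task parameters with kernel density estimation, can be sketched directly. The Gaussian-kernel KDE below is a minimal hand-rolled version; the fixed `bandwidth` is an illustrative choice (in the paper's analysis it would be dictated by the PAC bound as a function of the number of tasks).

```python
import numpy as np

def kde_prior(samples, bandwidth):
    """Gaussian KDE over sampled task parameters (a sketch).
    samples: (N, d) array of task parameters; returns a density function."""
    samples = np.atleast_2d(samples)
    N, d = samples.shape
    def density(theta):
        diff = samples - np.asarray(theta)                  # (N, d)
        k = np.exp(-0.5 * (diff / bandwidth) ** 2).prod(axis=1)
        return k.sum() / (N * (bandwidth * np.sqrt(2 * np.pi)) ** d)
    return density

# tasks drawn from a 1-D standard normal "true prior"
rng = np.random.default_rng(1)
tasks = rng.standard_normal((2000, 1))
p_hat = kde_prior(tasks, bandwidth=0.3)
true_p0 = 1 / np.sqrt(2 * np.pi)  # true density at 0
print(p_hat([0.0]), true_p0)
```

A Bayes-optimal policy would then be trained against tasks resampled from `p_hat` rather than from the (unknown) true prior.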
Influence of the Similarity Threshold. Fig. 5a and Fig. 5b show the influence of the similarity threshold $\delta$ on the precision, recall, and $F_{1}$. Both query directions (EN→ZH and ZH→EN) had consistent trends of three ...
To benchmark the performance of the dynamic threshold mechanism, we compared it with two alternative query methods: $k$-NN and single threshold. For $k$-NN, we searched for the best-performing $k$ in a range from 5 to 50 with a step size of five, and for single threshold, we searched...
Moreover, when we set $\delta$ to 0.6 (where $F_{1}$ performs the best), the correct ratio (calculated by dividing the number of relevant items by the number of retrieved items) for EN→ZH and ZH→EN were ...
Influence of the Similarity Threshold. Fig. 5a and Fig. 5b show the influence of the similarity threshold $\delta$ on the precision, recall, and $F_{1}$. Both query directions (EN→ZH and ZH→EN) had consistent trends of three ...
There are two limitations to our proposed method. The first one is that the technique we adopt for handling multiword expressions may not be effective for infrequent expressions. We employ phrase identification [27] to capture the frequent term co-occurrences as multiword expressions, such as “loose bowel movements” a...
B
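The single-threshold baseline compared above can be sketched as cosine-similarity retrieval: return every indexed item whose similarity with the query is at least $\delta$, ranked by similarity. This is the fixed-threshold variant only (the paper's dynamic mechanism adapts the cutoff per query); the function name and toy vectors are ours.

```python
import numpy as np

def retrieve(query_vec, index_vecs, delta=0.6):
    """Single-threshold retrieval (a sketch): keep all items with
    cosine similarity >= delta, ranked by similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    X = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = X @ q
    hits = np.where(sims >= delta)[0]
    return hits[np.argsort(-sims[hits])]

# toy embeddings with known cosines 1.0, 0.8, 0.0 to the query
index = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
print(retrieve(np.array([1.0, 0.0]), index, delta=0.6))  # → [0 1]
```

Raising `delta` trades recall for precision, which is exactly the trend the threshold sweep in Fig. 5 traces out.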
The data that support the findings of this study are openly available. The AG’s news corpus dataset was obtained from Ref. AGw, and the CelebA dataset from Ref. cel, in accordance with the Terms of Service of the respective web resources. Data generation details for the molecular dynamics of alanine dipeptide are provid...
To establish that TERP indeed takes both the input data and the black-box model into account when generating explanations, we subject our protocol to the sanity tests developed by Adebayo et al. (2018). We achieve this by taking the fine-tuned ViT model and randomizing the model parameters in a top-to-bo...
Python implementation of TERP is available at github.com/tiwarylab/TERP, and the three black-box models trained on specific systems studied in this work are available at figshare.com/articles/dataset/Black-box_models_for_TERP_interpretation/24475003.
The data that support the findings of this study are openly available. The AG’s news corpus dataset was obtained from Ref. AGw, and the CelebA dataset from Ref. cel, in accordance with the Terms of Service of the respective web resources. Data generation details for the molecular dynamics of alanine dipeptide are provid...
In this work, we employed the Python implementation of Att-BLSTM (Zhou et al., 2016) obtained from https://github.com/Renovamen/Text-Classification with pre-trained GloVe word embeddings. The Att-BLSTM model was trained on Antonio Gulli’s (AG’s) news corpus (Gulli, 2005b) for 10 epochs, finally reaching a validation accuracy ...
B
When the seed generation by fuzzing is finished, FuSeBMC executes the BMC engine for each goal label in the Goal Queue. To minimize the execution time, it is run with "lighter" settings: all implicit checks (i.e., memory safety, arithmetic overflows) and assertion checks are disabled, and the bound for loop unwinding ...
All new seeds produced by the fuzzer and the BMC engine are deemed smart due to their powerful effect on code coverage. Conceptually, bounded model checkers use SMT solvers to produce test-cases that resolve complex branch conditions (i.e., guards). Such guards (for example, lines 5 and 12 in Figure 3(a)) pose a challe...
In this paper, we presented FuSeBMC v4, a test generator that relies on smart seed generation to improve the state-of-the-art in hybrid fuzzing and achieve high coverage for C programs. First, FuSeBMC analyses and injects goal labels into the given C program. Then, it ranks these goal labels according to the given stra...
One of the weaknesses of pure fuzzing approaches is their inability to find test-cases that explore program code beyond complex guards, as they essentially work by randomly mutating seeds and, therefore, struggle to find inputs that satisfy the guards.
The main disadvantage of blackbox fuzzers is that due to the random manner in which they generate inputs, they are often unable to explore program paths with complex guards. Whitebox fuzzers, on the other hand, are very good at using program information to circumvent guards but are often slow and resource-intensive to...
A
The sum space method is only applicable to equations of rational order $\mu=p/q$ and requires the direct sum of $2q$ different weighted orthogonal polynomial bases ($q$ bases for the domain of the FDE/FIE and $q$ bases for the range); however, the...
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [...
We have demonstrated that the sum space method for the FIE (49) performs poorly compared to the JFP method as $\lambda$ increases. However, as illustrated in [25], the sum space method converges exponentially fast in linear complexity (because it yields banded or almost-banded systems) and in double precision...
We have not attempted to make a rigorous classification of the types of problems for which the JFP method is superior to the sum space method and vice versa. However, Example 3 suggests that the sum space method performs poorly for problems in which the largest monomial coefficient of the solution becomes large (e.g.,...
which is a singularly perturbed problem with an increasingly oscillatory solution as $\epsilon\to 0$. For this problem the sum space method achieves high accuracy (around $10^{-10}$ for $\epsilon=10^{-4}$)...
C
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. Entities that do not hold the feature at all should all receive the same, lowest value. If ...
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. Entities that do not hold the feature at all should all receive the same, lowest value. If ...
Let us first consider the best index value ranking in the unsupervised approach (Fig. 1c presented in the main text and Fig. S20), in which the lowest index value of $L_{1}$ is greater than the highest index value of $L_{2}$...
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr...
Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind...
D
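The common-neighbor index used as the running example above satisfies both requirements stated for a well-designed index: it grows monotonically with the extent of the feature, and every pair with no common neighbor gets the same lowest value, 0. A minimal sketch (the adjacency-set representation and toy graph are ours):

```python
def common_neighbors(adj, u, v):
    """Common-neighbor index for the node pair (u, v): the number of
    shared neighbors. Pairs holding none of the feature all score 0."""
    return len(adj[u] & adj[v])

# toy graph as adjacency sets
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(common_neighbors(adj, 0, 1))  # → 1 (node 2 is shared)
print(common_neighbors(adj, 1, 3))  # → 0 (no shared neighbor: lowest value)
```

An index that instead assigned distinct values to feature-free pairs (e.g. by mixing in node degree) would violate the "same and lowest value" requirement discussed above.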
We verify whether the update scheme search based on contribution analysis is effective. We collect several data points during the search process (the update scheme and the search criterion, i.e., the sum of $\Delta\text{acc}$). We train the model with each update scheme to get the average accuracy on the d...
We compare the performance of our searched sparse update schemes with two baseline methods: fine-tuning only the biases of the last $k$ layers; fine-tuning the weights and biases of the last $k$ layers (including fine-tuning the full model, when $k$ equals the total number of layers). For each configurati...
Figure 11: (a) The weight and activation memory cost of updating each layer of MCUNet (analytic). We find that the activation cost is high for the starting layers; the weight cost is high for the later layers; the overall memory cost is low for the middle layers. (b) Dissecting the sparse update scheme: we update the b...
We visualize the update schedule of the MCUNet [47] model searched under 100KB extra memory (analytic) in Figure 11 (lower subfigure (b), with 10 classes). It updates the biases of the last 22 layers, and sparsely updates the weights of 6 layers (some are sub-tensor update). The initial 20 layers are frozen and run for...
We update the last two blocks of the MCUNet [47] model and only 1/4 of the weights for each layer to compare the accuracy of different channel selection methods (larger magnitude, smaller magnitude, and random). The results are quite similar (within 0.2% accuracy difference). Channel selection is not very important for...
D
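The contribution-analysis search described above can be sketched as a budgeted selection problem: each candidate layer update has a measured memory cost and an accuracy contribution $\Delta\text{acc}$, and the search keeps the combination with the largest summed contribution that fits the extra-memory budget. The greedy ratio heuristic, the layer names, and all numbers below are made up for illustration; the paper's search uses its own procedure.

```python
def search_update_scheme(layers, budget_kb):
    """Sketch of a contribution-analysis search: greedily keep the
    candidates with the best delta-acc per KB that fit the budget.
    layers: iterable of (name, cost_kb, delta_acc)."""
    chosen, used, score = [], 0, 0.0
    for name, cost, dacc in sorted(layers, key=lambda t: t[2] / t[1],
                                   reverse=True):
        if used + cost <= budget_kb:
            chosen.append(name)
            used += cost
            score += dacc
    return chosen, used, score

candidates = [("b1.conv", 60, 0.4), ("b2.conv", 25, 0.5),
              ("b3.conv", 40, 0.3), ("bias", 5, 0.2)]
print(search_update_scheme(candidates, budget_kb=100))
```

The resulting scheme mirrors the pattern in Figure 11: cheap, high-contribution updates (biases, selected middle layers) are kept while expensive ones are frozen.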
$\le I+|A^{-1}BD|+|A^{-1}BD|^{2}+\cdots$
$|(I-A^{-1}BD)^{-1}|=|I+(A^{-1}BD)+(A^{-1}BD)^{2}+\cdots|$
$\|(I-A^{-1}BD)^{-1}\|\le\||(I-A^{-1}BD)^{-1}|\|\le\|(I-|A^{-1}B|)^{-1}\|,$
$\le I+|A^{-1}B|+|A^{-1}B|^{2}+\cdots$
$\le I+|A^{-1}BD|+|A^{-1}BD|^{2}+\cdots$
C
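The chain of inequalities above can be spot-checked numerically: with $|D|\le I$ entrywise and $\rho(|A^{-1}B|)<1$, the Neumann series argument gives $\|(I-A^{-1}BD)^{-1}\|\le\|(I-|A^{-1}B|)^{-1}\|$. The snippet below is a sanity test on one random instance (our choice of $A=I$, a small $B$, and a diagonal $D$), not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = np.eye(n)
B = 0.1 * rng.standard_normal((n, n))  # small, so rho(|A^{-1}B|) < 1
D = np.diag(rng.uniform(-1, 1, n))     # |D| <= I entrywise

M = np.abs(np.linalg.inv(A) @ B)
# Neumann series for (I - |A^{-1}B|)^{-1} converges
assert np.max(np.abs(np.linalg.eigvals(M))) < 1

lhs = np.linalg.norm(np.linalg.inv(np.eye(n) - np.linalg.inv(A) @ B @ D), 2)
rhs = np.linalg.norm(np.linalg.inv(np.eye(n) - M), 2)
print(lhs, rhs)
assert lhs <= rhs + 1e-12
```

The two entrywise facts doing the work are $|A^{-1}BD|\le|A^{-1}B|$ (since $|D|\le I$) and the monotonicity of the spectral norm on entrywise-dominated nonnegative matrices.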
As a first step in this direction, we conducted a mathematical runtime analysis on the permutation-based LeadingOnes and Jump function classes, PLeadingOnes and PJump. While the PLeadingOnes analyses provided no greater difficulties and the results, $\Theta(n^{3})$...
From a broader perspective, this work confirms what is known from empirical and applied research, see, e.g., [ES15], namely that it is not immediately obvious how to transfer expertise in evolutionary computation for bit-string representations to permutation-based optimization. In this light, this work suggests as int...
In summary, our results on the LeadingOnes and Jump benchmarks show that several arguments and methods from the bit-string world can easily be extended to permutation search spaces; however, the combinatorially richer structure of the set of permutations also leads to new challenges and new research problems such as wha...
Such an established and generally accepted set of benchmarks is clearly missing for permutation-based EAs, which might be one of the reasons why this part of EA theory is less developed. To overcome this shortage, and to do this in a natural and systematic manner, ideally profiting to the maximum from the work done al...
We are optimistic, though, that also for the PLeadingOnes benchmark precise runtime results can be obtained for the four mutation operators discussed in this work. Most likely, the variant of the fitness-level method proposed in [DK21a], which allows a convenient treatment of free-riders, is a good tool here. There re...
A
If the left-hand side $\|R\partial f(y_{0})\|$ is too small at the current iterate $y_{0}$, then there is nothing to optimize on the coarse grid – as $\|\partial\psi\|$...
In order to apply two-level optimization to the regularized inverse problem (1.2), the coarse grid model function ψ𝜓\psiitalic_ψ given by (3.8) has to be computed. Similar to the evaluation of the objective function at both levels, we assume that the operator A𝐴Aitalic_A can be directly evaluated at both levels. For...
As motivated in Section 1, we assume a generic objective function f𝑓fitalic_f to be given that can be evaluated at different discretization levels. In this section, we consider two discretization levels called fine grid and coarse grid, respectively, and use the following notation.
the projection matrix A𝐴Aitalic_A corresponds to the length of the line segment of the i𝑖iitalic_i-th projection ray passing through the j𝑗jitalic_j-th pixel in the image domain (Figure 1.1). At every level the width of the detector-array was set to the grid size, so that at each scale every pixel intersects with at...
The core ingredient of two level optimization is a coarse grid model, that is a coarse grid representation of the fine grid problem in terms of the objective function f𝑓fitalic_f evaluated at the coarse grid, the current iterate y0subscript𝑦0y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and its restriction x0...
A
Namely f(x₁, …, x_r) = 𝟏(w₁x₁ + ⋯ + w_r x_r ≥ T)...
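The indicator form above can be made concrete with a short sketch; `threshold_fn` is a hypothetical helper name for illustration, not code from the paper.

```python
def threshold_fn(x, w, T):
    """Linear threshold function: 1(w_1*x_1 + ... + w_r*x_r >= T)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= T else 0

# Majority vote on 3 bits: unit weights, threshold 2.
print(threshold_fn([1, 1, 0], [1, 1, 1], 2))  # -> 1
print(threshold_fn([1, 0, 0], [1, 1, 1], 2))  # -> 0
```

With unit weights and threshold ⌈r/2⌉ this recovers the majority function, a standard special case of threshold functions.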
The function h we use is the harmonic extension of a graph invariant function, introduced by Éva Tardos in [43]. The Tardos function and its properties build upon the seminal works of Razborov [35], and Alon and Boppana [1]. The mentioned works constitute a highly influential line of work about the limitatio...
By Theorem 15, the monotone circuit complexity of a threshold function with positive coefficients, computed by a circuit with only AND and OR gates (a De Morgan circuit), is polynomial. Therefore, we claim that the existence of C_N entails the existence o...
Now we show that given a Boolean circuit to compute a given function f: {0,1}^d → ℕ, we can construct a general (not necessarily monotone) threshold network of comparable size to approximate f...
It follows from Theorem 15 that if there was a monotone network (with threshold gates) of a given size approximating a harmonic extension f̂, we could replace each gate with a polynomially sized monotone De Morgan circuit, entailing a polynomial blowup to the size of the network...
D
‖∂_y^ν u_h(·, y)‖_{V_h} ≤ |ν|! b^ν (C_Poin^{V_h} / α) ‖f‖_{L²(D)}.
Notably, the convergence order for SIPG can be increased by one using a duality argument as in [11, Sect. 4.2.4] if we have elliptic regularity. Importantly, the FE error in the QMC setting will be integrated with respect to y. That is why we additionally assume that |u(·, y)|_{H^{k+1}(D)}...
This is an immediate consequence of [37, Lem. 9.1] (in fact, it already appears in [9]), but since the proof is short, we present it for completeness. The proof is carried out by induction with respect to the order of the multi-indices ν ∈ ℱ. In the affine setti...
The remainder of the argument is completely analogous to the derivation presented for the continuous setting in [23]: the weights γ_𝔲 enter the expression for the upper bound in the same manner as in the continuous settin...
Let ν ∈ ℱ and suppose that the claim has already been proved for all multi-indices with order less than |ν|. Then it is a consequence of Theorem 5.7 and the fact that ∂^m η(y) = 0...
B
We use the arm publishing procedure to ensure that the means of remaining arms belong to the next class (i.e., μ^A ∈ S^A_{ℓ+1})...
In order for the induction to proceed, we need to make sure that if we publish all heavy arms, the probability of the best arm being published is small, since otherwise the problem would already be solved and the round elimination process cannot continue. This is easy to do with non-adaptive algorithms, because the who...
In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first...
To handle this challenge, we choose to explicitly analyze for each heavy arm its probability of being the best arm after the first round of pulls, and then show that the sum of these probabilities is small. This analysis is much more complicated than that for the non-adaptive algorithms. We try to illustrate the main i...
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most of the information about half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. On the other hand, the ti...
A
While the literature often assumes convexity for the nonsmooth term g, our proposed method, as indicated in Assumption 1, allows this term to be nonconvex. This enables the algorithm to effectively handle a wide range of nonconvex constraints, including rank constraints and ℓ₀...
Motivated by the aforementioned advancements and recognizing the existing limitations in the literature, the proposed method addresses the optimization of regularized nonsmooth nonconvex cost functions, allowing the gradients of differentiable functions in the finite sum to be non-Lipschitz. To the best of our knowledg...
In order to achieve superlinear convergence, we assume z⋆ to be a strong local minimum of the cost φ, and that the envelope is twice (strictly) differentiable at this point. The subsequent lemma examines the second-order propert...
We propose SPIRAL with convergence guarantees for a wide class of finite sum problems. Not only are both the nonsmooth regularizer g and the finite sum terms f_i all allowed to be nonconvex, but also f_i...
Despite an upsurge in developing optimization methods to address such a problem, the potential of low-memory quasi-Newton methods has largely been neglected, which can be partially attributed to the absence of theoretical foundations for handling nonsmooth settings. In the smooth strongly convex settings, competitive co...
A
Tensor decompositions are efficient tools for multi-way data processing (analysis). In particular, they can be used to reduce and compress data tensors without destroying their intrinsic multidimensional structure. In the past few decades, several types of tensor decompositions have been intr...
This decomposition has found many applications, including deep learning [25, 26], tensor completion [27, 28], numerical analysis [29, 30, 31, 32], and image reconstruction [33]. There are several randomized algorithms [34, 35, 36] to decompose a tensor into the t-SVD format, but all of them need an estimation ...
In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a...
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37], which is a modification and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of 𝐐̲...
As discussed earlier, the randomized algorithms proposed in [34, 35, 36, 50] require an estimation of the tubal rank, which may be a difficult task. To overcome this limitation, we propose a new randomized fixed-precision, or adaptive, algorithm which, for a given approximation error bound, can find an optimal tubal rank...
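As a rough illustration of the fixed-precision idea in the simpler matrix setting (this is not the paper's t-SVD algorithm; the function and parameter names are ours), one can grow a sketch blockwise until the residual meets the prescribed error bound, which yields the rank automatically:

```python
import numpy as np

def adaptive_range_finder(A, tol, block=2, max_rank=None):
    """Grow an orthonormal basis Q blockwise until ||A - Q Q^T A||_F <= tol.

    Matrix analogue of a fixed-precision randomized range finder; the
    t-SVD variant applies the same idea in the transform domain."""
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    rng = np.random.default_rng(0)
    Q = np.zeros((m, 0))
    E = A.copy()                       # current residual
    while np.linalg.norm(E) > tol and Q.shape[1] < max_rank:
        Y = E @ rng.standard_normal((n, block))   # sample the residual's range
        Y -= Q @ (Q.T @ Y)                        # re-orthogonalize against Q
        Qb, _ = np.linalg.qr(Y)
        Q = np.hstack([Q, Qb])
        E = A - Q @ (Q.T @ A)
    return Q

A = np.outer(np.arange(1, 5.0), np.arange(1, 6.0))  # rank-1 test matrix
Q = adaptive_range_finder(A, tol=1e-8)
print(np.linalg.norm(A - Q @ (Q.T @ A)) < 1e-8)  # -> True
```

The number of columns of Q at termination plays the role of the discovered rank; no rank estimate is supplied up front, only the tolerance.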
A
As for the SSL HMLC task, the existing work in the literature is relatively limited. In [40] the authors extend the RAkEL system, initially developed for (supervised) MLC, to the SSL HMLC task, leading to three new methods, called HMC-SSBR, HMC-SSLP and HMC-SSRAkEL.
RAkEL is an ensemble-based wrapper method for solving MLC tasks using existing algorithms for multi-class classification. The idea is to build the ensemble by providing a small random subset of k labels (organized as a label powerset) to each base model, learned by a multi-class classifier. This a...
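The subset-drawing step of that scheme can be sketched as follows (a minimal illustration with names of our choosing, not the implementation from [40]):

```python
import random

def rakel_subsets(labels, k, m, seed=0):
    """Draw m random k-label subsets; each subset becomes a label-powerset
    multi-class task for one base model, as in the RAkEL scheme."""
    rng = random.Random(seed)
    return [tuple(sorted(rng.sample(labels, k))) for _ in range(m)]

subsets = rakel_subsets(labels=list(range(6)), k=3, m=4)
print(subsets)  # four 3-label subsets of the 6 labels
```

Each base classifier then predicts a combination of its k labels, and the per-label votes are aggregated across the ensemble.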
Moreover, contrary to existing approaches, the approach we propose builds models by exploiting clustering. This allows us to take into account the smoothness assumption, both for the descriptive space and for the output space. Finally, the mentioned existing approaches cannot be directly used to impose limitations on t...
To build an ensemble model for predicting structured output, an appropriate type of PCTs is utilized as a base model. For example, to build an ensemble for the HMLC task, PCTs for HMLC are used as base models. An ensemble predicts a new example by considering predictions of all the ensemble’s base models. For regressi...
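The prediction-combining step described above can be sketched generically (our illustration, not the authors' code): majority voting for classification and per-target averaging for regression.

```python
from collections import Counter

def ensemble_predict(preds, task="classification"):
    """Combine base-model outputs: majority vote for classification,
    per-target mean for (multi-target) regression."""
    if task == "regression":
        return [sum(col) / len(col) for col in zip(*preds)]
    return Counter(preds).most_common(1)[0][0]

print(ensemble_predict(["a", "b", "a"]))                         # -> a
print(ensemble_predict([[1.0, 2.0], [3.0, 4.0]], "regression"))  # -> [2.0, 3.0]
```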
A
To address those challenges, some works have been conducted with a focus on simulation-to-real adaptation. Although the models still suffer from performance losses due to sensor inaccuracies and diverse conditions in real environments, these approaches are said to be promising for future autonomous driving [8][9]. On ...
Different from our previous work [1], the model is modified to improve its performance and deal with real-world implementation issues. First, as the input to the perception module, we consider a wider ROI of H×W = 512×1024 at the center of the RGBD image...
Figure 2: The architecture of DeepIPC. Blue blocks are parts of the perception module, while green blocks are parts of the controller module. Light-colored blocks are not trainable, while the darker ones are trainable. In the BEV semantic map, waypoints are denoted with white dots, while route points are denoted with ...
In imitation learning and behavior cloning, a considerable amount of expert driving records is needed for training and validation (train-val) [52][53][54][55]. To create the dataset, we drive the vehicle at a speed of 1.25 m/s in a certain area inside Toyohashi University of Technology, Japan. As shown in Fig. 3, the ...
Concisely, DeepIPC is a model that can be forced to learn how to compensate for noise and inaccuracy of sensor measurement implicitly by mimicking expert behavior to achieve human-like autonomous driving [15][16]. DeepIPC processes multi-modal data that contain several quantities needed to perceive the environment and ...
D
In this section we show that, for every fixed integer k ≥ 3, deciding if two given vertices of a graph can be separated by removing a set of vertices that induces a graph with independence number at most k is NP-complete.
For a graph G and any two nontrivial additive hereditary graph properties 𝒫 and 𝒬, we say that G is (𝒫, 𝒬)-colorable if V(G) can be partitioned into tw...
A graph property is nontrivial if there is at least one graph having the property and there is at least one graph that does not have the property. We say that a graph property is additive hereditary if it is closed under taking vertex-disjoint unions and induced subgraphs.
In particular, we recall that Maximum Independent Set is NP-hard on graphs with each edge subdivided twice, but such graphs admit a tree decomposition where one bag is a large independent set, and the induced subgraphs of the other bags are isomorphic to 4-vertex paths. It follows that if the width-measure of a bag ...
Let 𝒫 be the property of being K₂-free and 𝒬 the property of being K₃-free. Note that both 𝒫 and 𝒬...
B
Next, to derive Eq. 12, we leverage the Lipschitz continuity of ℓ, the condition ρ ≥ L, the lower bound of ℒ_VIM, and the fact that the quadratic loss term in ℒ_ADMM...
To solve the above challenges, in this work, we propose an efficient VFL optimization framework with multiple heads (VIM), where each head corresponds to one local client. VIM takes the individual contribution of clients into consideration and facilitates a thorough decomposition of the VFL optimization problem into mu...
Specifically, we follow Hong et al. hong2016convergence to assume convexity, Lipschitz smoothness, and the bounded loss for convergence analysis of VIMADMM. Furthermore, we acknowledge that analyzing the local model can be challenging, given the complexity of DNNs, so we introduce an additional assumption that bounds ...
In this section, we provide the convergence guarantee for VIMADMM, which is non-trivial due to the complexity of the alternative optimization between four sets of parameters {W_k}, {θ_k}, {z_j}, {λ_j}...
Remark. Theorem 1 (A) shows that VIMADMM converges, measured by the monotonically decreasing and convergent loss, and (B) establishes that any limit point is a stationary solution to the problem III-B. Note that we make several assumptions in Theorem 1 to derive the above guarantees, as often made in ADMM analysis hong...
D
Figure 2: The overall architecture of our Ada-DyGNN method. When a new link between v₁ and v₂ is added, Ada-DyGNN performs robust knowledge propagation by the following steps: 1) The intera...
Then, the agent takes actions based on current states and a learned policy network, which can determine whether to update or retain the embedding for each node. After that, new embeddings of the influenced nodes can be obtained based on the intermediate state and an MLP.
The basic idea of reinforcement learning is to train an agent for decision making by interacting with the environment [67, 68, 69, 70, 71]. There are mainly two lines of methods in reinforcement learning [72, 73]: policy-based methods and value-based methods. Value-based methods, such as DQN [74] and SARSA [75], aim t...
Moreover, the process of sampling which neighbors to update is discrete, posing a challenge for direct optimization through stochastic gradient descent-based methods [42]. In light of these, we attempt to address this problem via reinforcement learning, which excels in optimizing discrete sampling problems and can capt...
In addition, Actor-Critic methods [78, 79, 80, 81] are a hybrid of these two kinds of methods, which makes decisions according to a policy network and estimates the reward by a value function. In our method, we attempt to explore reinforcement learning to effectively capture temporal evolution for dynamic graph learnin...
B
Besides, the Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution instead of a point. Particularly, mapping each sequence as Gaussian distributions allows the network to increase the gradient weight of easy-to-learn cross-view sequences and prevent the n...
Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution in the feature spaces instead of a point, which can relax the constraints for the model. The identity-branch accepts features output from the backbone and learns the mean feature, which is used to re...
PFE [34] first proposes to map each face image as a Gaussian distribution, regarding the sequence feature as the mean, and adding another branch to learn the confidence for the sequence feature. The mean of the distribution can be regarded as the most likely feature of the sequence mapped in the latent space, and the v...
Furthermore, by re-sampling new embeddings from the Gaussian distributions, we can gain more knowledge from the sequence that can be used to classify each identity. Therefore, mapping sequences as Gaussian distributions can further make the feature extracted from the cross-view sub-dataset have more effects in model tr...
D
DPAV reduces the overestimation bias in the target of the training loss, so it is less biased. This improves the accuracy of the dialogue action values of the DPAV DQN dialogue policy, which accordingly issues more accurate dialogue actions and improves dialogue performance.
All baselines and DPAV DQN use various estimators or estimation tricks to approximate the ground truth maximal action value Q*(s_{t+1}). Given the state s_{t+1}...
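The overestimation issue these estimators target is easy to reproduce: under zero-mean estimation noise, E[max_a Q̂(a)] exceeds max_a Q(a). The simulation below is purely illustrative (it is not the DPAV estimator; all names are ours):

```python
import random

def max_estimate_bias(true_q, noise, trials=10_000, seed=1):
    """E[max_a Qhat(a)] - max_a Q(a): the upward bias of the plain max
    operator when each Q-value is observed with zero-mean noise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        qhat = [q + rng.uniform(-noise, noise) for q in true_q]
        total += max(qhat)
    return total / trials - max(true_q)

bias = max_estimate_bias([0.0, 0.0, 0.0, 0.0], noise=1.0)
print(bias > 0)  # -> True: max of noisy estimates overestimates the true max
```

With four equal-valued actions and uniform noise on (-1, 1), the expected bias is 0.6, which is why less biased targets can noticeably change the learned policy.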
Combining the results of Figure 3 and Table 2, DPAV DQN achieves better or comparable performances with a lower time complexity. Although the time complexity of ensemble models can be reduced by parallel computing, that increases the space complexity. So, the overall computational complexity is still high and resou...
Additionally, this method has a lower computational complexity compared to those of ensemble models. Even if the latter could trade time complexity for space complexity by parallel computing, they still have high computational complexity in general, as shown in Table 2. This method achieves better or comparable...
This paper is the first to investigate the negative effects of the overestimation problem in task-completion dialogue systems. We propose the DPAV estimator to mitigate this problem of Q-learning. We also theoretically prove convergence and derive the upper and lower bounds of the estimation bias compared with those of...
C
To appropriately anticipate the future, it is necessary to understand in detail the observed actions. Human Action Recognition (HAR) from video is itself a large computer vision research field, with increasing interest over egocentric view datasets [7, 13]. Ego4D [13] is the most extensive daily-life egocentric video d...
We evaluate the performance of our standalone I-CVAE model based on the ground-truth action and intention labels provided by the Ego4D dataset. We report Edit Distance (ED) for the time horizon Z = 20 (ED@Z=20) as our evaluation metric for both nouns an...
Metrics. Following the proposed evaluation protocols in Ego4D LTA benchmark [13], we report the Edit Distance (ED) metric, as shown in Equation 1, computed as the Damerau-Levenshtein distance [8, 18] over sequences of predictions of verbs, nouns and actions. This metric accounts for small variations in the action sequ...
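A compact reference implementation of that distance (the optimal string alignment variant of Damerau-Levenshtein; written here for illustration, not taken from the benchmark code) over sequences of predicted labels:

```python
def edit_distance(pred, gt):
    """Damerau-Levenshtein distance (optimal string alignment variant)
    between a predicted label sequence and the ground truth."""
    m, n = len(pred), len(gt)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gt[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and pred[i - 1] == gt[j - 2]
                    and pred[i - 2] == gt[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)  # transposition
    return d[m][n]

print(edit_distance(["take", "cut", "put"], ["take", "put", "cut"]))  # -> 1
```

Counting a transposition as a single edit is what makes the metric tolerant to small variations in the predicted action order.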
Finally, we investigate the performance of our whole framework based on the end-to-end evaluation. First, H3M classifies the actions and the intention from the observed clips. Then, based on these predictions, our I-CVAE model anticipates the Z = 20 actions in the future. In Table 4 we evaluate the L...
We report our results for the LTA task in Table 1 based on the test set of the Ego4D LTA dataset. In this experiment, our framework predicted the N = 6 observed actions and the overall intention from the past, to anticipate Z = 20 future actions by generating K = 5 seq...
B
ℓ(𝒔) = (1/2) e^{−ζ} μ² + (1/2) ζ.
Therefore, the core idea is to impose a prior (i.e., Gaussian) distribution 𝒩_𝒔(μ, σ²) on the single distance value d_𝒔 = ‖...
Therefore, to address the anomaly contamination problem, we can give a relatively mild penalty to predictions that are with high model uncertainty, thus masking anomaly contamination in a soft manner. On the other hand, to ensure effective optimization of hard normal samples, the one-class classification model should a...
We first consider the design of our new one-class loss function. Given a prior distribution of the one-class distance, we need to maximize the probability of the distance being zero, instead of simply minimizing a single distance value. Based on the probability density function of the Gaussian distribution, the learnin...
We then address how to extend the single output of a distance value to a Gaussian distribution. One direct solution is to employ the ensemble method to obtain a group of predictions, thus estimating the mean and the variance of the distribution. However, the GPU memory consumption and the computational effort might be ...
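For concreteness, the displayed loss ℓ(𝒔) = ½e^{−ζ}μ² + ½ζ can be computed directly; our reading (an assumption, since the surrounding text is truncated) is that ζ is the log-variance of the predicted distance distribution:

```python
import math

def uncertainty_one_class_loss(mu, zeta):
    """One-class loss 0.5*exp(-zeta)*mu**2 + 0.5*zeta, i.e. the NLL of
    distance 0 under N(mu, sigma^2) with zeta = log(sigma^2), up to an
    additive constant."""
    return 0.5 * math.exp(-zeta) * mu ** 2 + 0.5 * zeta

# Higher predicted uncertainty (larger zeta) softens the penalty on the
# same large distance, while the 0.5*zeta term discourages inflating it.
confident = uncertainty_one_class_loss(mu=2.0, zeta=0.0)
uncertain = uncertainty_one_class_loss(mu=2.0, zeta=2.0)
print(confident > uncertain)  # -> True
```

This is exactly the "mild penalty under high uncertainty" behavior described above: contaminated samples can be absorbed by a large ζ instead of dragging the one-class center.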
D
Juraska et al. (Juraska et al., 2018) propose SLUG, an ensemble of three neural encoders – two LSTMs and one CNN (LeCun et al., 1998) – individually trained for MR-to-text generation. The authors note that selecting tokens with the maximum log-probability by averaging over different encoders at each time step results in...
Following the successful applications of knowledge-grounded language models (Ahn et al., 2016; Logan et al., 2019), Konstas et al. (Konstas et al., 2017) propose a domain-specific pretraining strategy inspired by Sennrich et al. (Sennrich et al., 2016) to combat the challenges in data sparsity, wherein self-training ...
Graph-to-text translation is central not only to D2T; its application also carries over to numerous NLG fields such as question answering (He et al., 2017; Duan et al., 2017), summarization (Fan et al., 2019), and dialogue generation (Liu et al., 2018a; Moon et al., 2019). Further, the D2T frameworks for graph-to-text bo...
Böhm et al. (Böhm et al., 2019) note that while modern frameworks for text generation compete with higher scores on automated word-overlap metrics, the quality of the generation leaves a lot to be desired. As such, the adaptation of metrics based on continuous representations shifts the focus from surface-form matching ...
Pretrained language models (PLMs) (Devlin et al., 2019; Radford et al., 2019) have been successful in numerous text generation tasks (See et al., 2019; Zhang et al., 2020b). The extensive pretraining grants these models certain worldly knowledge (Petroni et al., 2019) such that, at times, the models refuse to generat...
A
Figure 6: The illustration of NSC. Dashed lines indicate the distances between the noisy sample and other samples in a clean subset with girl-chair. wKNN replaces the noisy predicate in with a soft label, assigning a score of 0.25 to the new label sitting on and a score of 0.75 to the old label in.
Given all the detected noisy positive samples from Pos-NSD, the NSC module aims to correct and assign more robust soft labels to these noisy positive predicate labels. There are two non-zero (positive) categories in the newly assigned soft label r^s...
Highlights. In the original NSC of NICE-v1 [33], we directly assign hard labels (i.e., raw predicate labels) with full probability to noisy samples. If the newly assigned labels were unreasonable, new noise might be introduced. In the new NSC of this paper, we assign soft labels to noisy samples, which takes into acco...
Calculation of Predicate Score. However, the raw predicate labels are not always “perfect”. For example, in Fig. 2, ⟨boy-near-pizza⟩ is mistakenly changed to ⟨boy-eating-pizza⟩. To ensure the robustness of training, we propose a scoring function that considers the distances of tripl...
Figure 3: The pipeline of NICE (taking an image from VG as an example). (a) Neg-NSD: Given all negative triplets (blue dash arrows), the OOD detection model detects missing annotated triplets (𝒯⁻_noisy...
A
To verify the theoretical results, we simulate the RER of an attracted miner with a mining power of 0.2, using a Monte Carlo method over 10⁹ rounds, with an upper bound of 10⁻⁴ error. ...
Section 5 provides a detailed mechanism design to convince rational miners that working on the attacker’s private branch is a profitable endeavor. It includes a thorough examination of various aspects, such as the proof of block possession and a profit protection mechanism that ensures that rational miners receive the ...
The workflow of PSM is shown in Figure 3. In PSM, when the attacker finds a block, it keeps the block private instead of immediately releasing it. In the meantime, the attacker releases the partial block with proof of block possession. With these released data, rational miners could mine on the private branch. To assur...
In this section, we describe in detail the mechanism to prove that it is profitable for rational miners to work on the attacker’s private branch. To assure that the attacker will follow the rules announced, the attacker needs to provide proof of block possession and a profit protection mechanism that assures the rational min...
To make PSM practical and colluding strategy successful, we must have mechanisms to convince rational miners that it is profitable to mine in the attacker’s private branch. First, the attacker needs to ensure that it indeed has the secret, i.e., a complete and valid new block. Otherwise, all the mining power that ratio...
A
Observe that the preconditioned sharpness rises until plateauing at the dashed line. We also plot the evolution of the “raw” sharpness λ₁(H_t) in Figure 3(c).
On the local quadratic Taylor approximation, “frozen Adam” does evolve as a linear recurrence, and is unstable whenever the maximum eigenvalue of the preconditioned Hessian (the preconditioned sharpness) exceeds the stability threshold of EMA-style heavy ball momentum, which is (2 + 2β₁)/(η(1 − β₁))...
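Reading the garbled expression as (2 + 2β₁)/(η(1 − β₁)) (our reconstruction from the extraction residue), the threshold is a one-liner; with the common β₁ = 0.9 it reduces to 38/η:

```python
def preconditioned_stability_threshold(lr, beta1):
    """Stability threshold for EMA-style heavy-ball momentum: the
    preconditioned sharpness above which the frozen-preconditioner
    linear recurrence becomes unstable, (2 + 2*beta1)/(lr*(1 - beta1))."""
    return (2 + 2 * beta1) / (lr * (1 - beta1))

# With the default beta1 = 0.9, the threshold is 38 / lr:
print(preconditioned_stability_threshold(lr=1e-3, beta1=0.9))  # -> 38000.0
```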
Adam is a more complex dynamical system than “frozen Adam,” because its preconditioner evolves in response to recent gradients; consequently, its dynamics on the local quadratic Taylor approximation do not reduce to a linear recurrence. Throughout physics and engineering, a time-tested way to understand complex system...
In this paper, we demonstrate that the EoS phenomenon does, in fact, carry over to the adaptive setting. Our key empirical finding is that throughout training, the short-term stability behavior of, say, Adam is well-approximated by that of “frozen Adam” — a version of Adam in which the preconditioner is frozen at its c...
However, it has not been clear whether these findings have relevance for adaptive gradient methods. Because of adaptive preconditioning, adaptive gradient methods do not evolve as linear recurrences on the local quadratic Taylor approximation, and thus it is not clear why their local stability would be well-modeled by ...
B
In our simulations, we do not observe a misfolded state of CLN025 shown to be highly populated in several studies [78, 79] employing different force fields (Amber99 [80] and Amber99-SB [58], respectively) compared to CHARMM27 here [74]. This misfolded state is also not observed in the long unbiased simulation from Ref. 76 tha...
In Fig. 4, we can see the lower-lying free-energy basins in the reweighted stochastic embeddings are captured by both mrse and stke. We can also notice a slight difference between the metastable states lying higher in free energy. Specifically, mrse captures more states below a threshold of 25 kJ/mol in comparison to t...
Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol. We can compare our results to the unbiased simulation data from...
Overall, both the separation of the CLN025 metastable states and the free-energy landscapes calculated for the low-dimensional embeddings suggest that the proposed framework can be used to find slow CVs and physically valid free-energy estimates. The presented results (Fig. 4) clearly show that using our approach, we c...
We can observe that the free-energy landscape in the low-dimensional manifold calculated by mrse is highly heterogeneous, with multiple partially unfolded intermediate states and many possible reaction pathways, as shown in Fig. 4(a). Such a complex free-energy landscape shows that the dynamics of CLN025 is...
B
Further, data such as those discussed here are rarely available. However, even without access to the data it may be desirable to subdivide the ventricle for perfusion studies. Future research will focus on the development of mappings from a given left ventricle to the one discussed here that will allow a ventricle without...
Briefly, using spatial data describing the coronary arterial vasculature from a single porcine heart obtained from fluorescence cryomicrotome images (Goyal et al., 2012) and image processing techniques, we have developed algorithms to organise and search the data in order to build subtrees from the data. These subtrees...
With the analysis described above, it is possible to generate coronary arterial networks that adhere to given conditions. These networks have terminal segments which should reasonably give rise to a small vessel tree. Such small vessel trees can be modeled in a variety of ways. It is natural to ask which region of the ...
Vascular bed image collection and vessel segmentation are well documented; for example, Nordsletten et al. (Nordsletten et al., 2006) and Chambers et al. (Chambers et al., 2020) describe methods for segmenting micro-CT images of the murine renal and pulmonary vasculature, respectively. Schuster et al. (Schuster et al., 2010) ca...
The goals of the study were to develop algorithms that robustly and reliably allow a user to generate arterial trees from a large graph representing the left coronary vascular tree, and to determine the region(s) of the ventricle that are perfused from the terminal arteries of a generated subtree. Both goals have been...
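The subtree-building step described above can be sketched as a breadth-first traversal over a parent-to-child segment map. The `children` mapping, the segment ids, and the treatment of terminal segments below are illustrative stand-ins for the paper's actual data structures, not its implementation.

```python
from collections import deque

def extract_subtree(children, root):
    """Collect all segments reachable from `root` by following
    parent-to-child links in a directed arterial tree."""
    subtree = {root}
    queue = deque([root])
    while queue:
        seg = queue.popleft()
        for child in children.get(seg, []):
            if child not in subtree:   # guard against malformed input
                subtree.add(child)
                queue.append(child)
    return subtree

# Toy tree: segment 0 branches into 1 and 2; 1 branches into 3 and 4.
children = {0: [1, 2], 1: [3, 4]}
subtree = extract_subtree(children, 1)
# Terminal segments of the subtree have no daughters of their own.
terminals = sorted(s for s in subtree if s not in children)
```

Terminal segments of such a subtree are then the candidates that give rise to the small vessel trees considered in the perfusion analysis.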
A
More recent related works focus on learning tuned graph representations of routing networks. [18] propose RouteNet, regarded as the first work to introduce message passing into deep-learning-based network modeling, using a bipartite graph formulation. [19] and [20] extend RouteNet to be adaptive to heterogeneou...
Finally, [5] present their work on topology-size generalization for latency estimation of Origin-Destination (OD) pairs, the same problem we focus on here. By improving RouteNet to include the queue occupancy state per link, besides the path and link nodes, they formulate a tripartite message-passing scheme, which intro...
Our study is among the first approaches that attempt to model the routing network state snapshot by learning the given topology structure for the task of estimating the network latency using open-world input. Our proposed solution transforms the task of estimating the latency between source and destination nodes, to a ...
We consider the pair-wise similarity between the role of a node and its neighbors to be crucial, since an OD pair's source or destination exerts its influence on the close neighbours of the nodes of interest, and we formulate their embedding as an "augmented source." GAT is reported to be a well-suited solution to capture suc...
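As context for why GAT suits this role-similarity argument, here is a minimal single-head graph attention layer in NumPy. The LeakyReLU slope of 0.2 follows the original GAT formulation; the weight shapes and loop-based scoring are purely illustrative, not the cited models' implementation.

```python
import numpy as np

def gat_layer(H, A, W, a):
    """Single-head graph attention: each node aggregates its neighbours,
    weighted by learned attention over pairwise feature combinations.
    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, F_out) projection; a: (2 * F_out,) attention vector."""
    Z = H @ W
    N = Z.shape[0]
    e = np.array([[np.concatenate([Z[i], Z[j]]) @ a for j in range(N)]
                  for i in range(N)])
    e = np.where(e > 0, e, 0.2 * e)            # LeakyReLU
    e = np.where(A > 0, e, -np.inf)            # attend to neighbours only
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)  # row-wise softmax
    return alpha @ Z
```

Masking non-neighbours with `-inf` before the softmax is what lets a source or destination node shape only the embeddings of its close neighbours.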
More recent related works focus on learning the tuned graph representation of routing networks. [18] propose RouteNet, which is regarded as a first work that introduces message passing in deep-learning-based network modeling, in a bipartite graph formulation. [19] and [20] extend RouteNet to be adaptive to heterogeneou...
A
We remark that we adopt the architecture of SPR as an empirical simplification to our proposed contrastive objective, which does not require explicit negative sampling and the corresponding parameter tuning (Schwarzer et al., 2021). This leads to better computational efficiency and avoidance of defining an improper neg...
(1) SPR considers multi-step consistency in addition to the one-step prediction of our proposed contrastive objective; namely, SPR incorporates the information of multiple steps ahead of $(s_h, a_h)$ ...
In particular, we adopt the same hyper-parameters as those of SPR (Schwarzer et al., 2021). Meanwhile, we adopt the last layer of the Q-network as our learned representation $\widehat{\phi}$, which is linear in the estimated Q-function.
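A minimal sketch of this representation choice (layer sizes and names are ours, not SPR's): the last hidden activations serve as $\widehat{\phi}$, and the estimated Q-values are an affine function of them.

```python
import numpy as np

def q_network(s, W1, b1, W2, b2):
    """Tiny MLP Q-network. `phi` (the last hidden layer) is the learned
    representation; the Q-values are linear (affine) in phi."""
    phi = np.maximum(s @ W1 + b1, 0.0)  # ReLU hidden layer -> phi_hat(s)
    q = phi @ W2 + b2                   # Q(s, a) for each action a
    return q, phi
```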
In contrast, as a special case of the low-rank model, linear MDPs have a similar form of structure but with an extra assumption that the linear representation is known a priori (Du et al., 2019b; Yang & Wang, 2019; Jin et al., 2020; Xie et al., 2020; Ayoub et al., 2020; Cai et al., 2020; Yang & Wang, 2020; Chen et al....
We remark that we adopt the architecture of SPR as an empirical simplification to our proposed contrastive objective, which does not require explicit negative sampling and the corresponding parameter tuning (Schwarzer et al., 2021). This leads to better computational efficiency and avoidance of defining an improper neg...
B
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha...
Importance sampling for MV-SDEs has been studied in (dos Reis et al., 2023; Ben Rached et al., 2023). The decoupling approach developed by (dos Reis et al., 2023) defines a modified, decoupled MV-SDE with coefficients computed using a realization of the MV-SDE law estimated beforehand using a stochastic particle syste...
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a...
The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE and apply a change of measure to it. We decouple the computation o...
The decoupled MV-SDE (8) for the given empirical law $\{\mu^{P}_{t}: t\in[0,T]\}$ is a standard SDE, making it po...
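Since the decoupled equation is a standard SDE once the empirical law is fixed, it can be simulated with any standard scheme. Below is a scalar Euler-Maruyama sketch; the drift and diffusion callables stand in for the decoupled coefficients, and are not the paper's specific model.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, T, n_steps, rng):
    """Simulate dX_t = b(t, X_t) dt + s(t, X_t) dW_t on [0, T]."""
    dt = T / n_steps
    x = float(x0)
    for k in range(n_steps):
        t = k * dt
        x += drift(t, x) * dt + diffusion(t, x) * rng.normal(scale=np.sqrt(dt))
    return x
```

With the diffusion switched off, the scheme reduces to explicit Euler for the ODE $x' = b(t, x)$, which gives a quick sanity check.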
C
By “far-approach” we mean the period of the mission when the spacecraft transitions from heliocentric to relative navigation about the small body, which corresponds to phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as...
A mission could have different profiles depending on the mission’s goals and the availability of prior knowledge about the asteroid’s environment and properties. We consider that after the preliminary environment assessment at the end of the far-approach phase, the spacecraft could opt between different profiles, depe...
Due to the asteroid's low gravity, many different forces appreciably affect a spacecraft operating in its vicinity. The importance of including a particular force varies from case to case and should be weighed according to the objective of the analysis, the distance to the asteroid, the mass, and...
If the autonomy is restricted to the operation around the asteroid, that is when the transition from the ground to the autonomous operation takes place. In this case, the spacecraft would rely on the ground up to the moment when the asteroid is found as a point source in its optical cameras. After that, a hybrid approa...
By “far-approach” we mean the period of the mission when the spacecraft transitions from heliocentric to relative navigation about the small body, which corresponds to phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as...
C
For a detailed analysis of the algorithmic aspects of this problem and Nikolov's approach, see also (Ebrahimi et al., 2017). An interesting direction for future work is the question of whether a suitable approximation algorithm for the parameter $c$ can be found, which, in turn, may provide insight into explic...
In this paper, we introduced a novel fixed-point approach for computing Brascamp–Lieb constants, which is grounded in nonlinear Perron–Frobenius theory. In contrast to much of the prior literature, which has analyzed the problem through a Riemannian lens, our approach utilizes a Finslerian geometry on the manifold of ...
Our plan is as follows: We begin by showing that $G$ has a fixed point in some compact and convex set. We then establish the non-expansivity of $G$, which implies that its "average map" $G_t$ is non-expansive and has the same fixed ...
This paper studies a specific Picard iteration, generated by the map $G$, for computing Brascamp–Lieb constants, and analyzes its convergence via the Thompson part metric. We note that neither the choice of the Picard iteration, nor the specific Finslerian lens employed in the analysis is unique. Our particul...
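The role of the averaged map can be illustrated on a scalar toy example (cos is 1-Lipschitz, hence non-expansive on the reals; the map and step size here are ours, not the paper's operator $G$):

```python
import math

def mann_iterate(G, x0, t=0.5, n_iter=200):
    """Krasnoselskii-Mann iteration for the averaged map
    G_t = (1 - t) * Id + t * G: non-expansive whenever G is,
    with the same fixed points."""
    x = x0
    for _ in range(n_iter):
        x = (1 - t) * x + t * G(x)
    return x
```

For `G = math.cos`, the iteration converges to the unique fixed point of cos (approximately 0.739), even though plain Picard iteration of a merely non-expansive map need not converge in general.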
Our analysis leverages the Thompson part metric on the manifold of positive definite matrices to model convergence of the fixed-point iteration. To our knowledge, this is the first work that analyzes the computation of Brascamp–Lieb constants via Thompson geometry. We note that a similar Finslerian lens can be employed...
C
$$\frac{1}{R'(p)}\,C_p\bigl(\mathcal{J}_{n}(\Lambda),\,\mathcal{J}_{n}(\Lambda')\bigr)\leq\|w-w'\|_{\infty}.$$
The following corollaries leverage the combinatorial stability theorem [stability, p. 123] to reformulate the previously established bounds in terms of the constant $R(p)$ and properties of the filtration functions. We omit the proofs for these corollaries as they directly apply the referenc...
We examine how our proposed centrality measures behave with respect to perturbations of the point cloud in Figure 3(a), introduced by replacing each point $(x,y)$ with $(x+\kappa_1,\,y+\kappa_2)$ ...
To evaluate the effectiveness of the bounds established in Corollary 3.10, we perform the following analysis. We consider the 1-centrality distances (refer to Example 3.6) calculated for various perturbation levels. From the bounds provided by Corollary 3.10, we subtract the actual 1-centrality distances. These differe...
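The perturbation scheme used in this experiment can be sketched as follows; the perturbation level is the free parameter that enters the sup-norm bound.

```python
import numpy as np

def perturb(points, level, rng):
    """Replace each point (x, y) by (x + k1, y + k2) with k1, k2 drawn
    i.i.d. uniformly from [-level, level]."""
    return points + rng.uniform(-level, level, size=points.shape)
```

By construction, the sup-norm distance between the clean and perturbed inputs is at most the perturbation level, which is what feeds into the stability bounds.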
We conclude this section by examining a different point cloud, sampled around the Barnsley Fern fractal (Figure 11(a)). Here, we employ the same bootstrapping approach used previously. No signals were identified in any of the bootstrapped samples for the Barnsley Fern. The Spearman rank correlation between the maximum...
C
In this paper, we mainly resort to obtaining accurate pseudo-labels so as to enhance the model's discrimination and generalization in the unseen domain. We first analyze the generalization error on a domain using the theory of multi-domain learning (Ben-David et al., 2010). Based on the upper bound of the generalization error...
Inspired by the theory of multi-domain learning, we extend FixMatch (Sohn et al., 2020), an excellent baseline in SSDG (as will be validated in the experimental section), to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui...
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ...
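The confidence-thresholded pseudo-labelling underlying FixMatch-style training (and hence each local task here) can be sketched as follows; the threshold value is illustrative.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.95):
    """Keep a predicted label for an unlabeled sample only if the
    classifier's max class probability exceeds the threshold.
    Returns (hard labels, boolean mask of retained samples)."""
    labels = probs.argmax(axis=1)
    mask = probs.max(axis=1) >= threshold
    return labels, mask
```

Training each domain as its own task means this filtering runs per domain, so low-confidence predictions in one domain do not pollute the pseudo-labels of another.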
In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples in each domain are available, while most samples lack label information; thus a key task is how to generate accurate pseudo-labels. Different from the conventional semi-supervis...
In this paper, we aim to tackle the semi-supervised domain generalization (SSDG) task. Different from the typical semi-supervised task, the challenge of SSDG is that there exist multiple different domains with latent distribution discrepancy. To address this issue, we first explore the theory of multi-domain learning ...
A
Apart from the receptive field mismatch, the spatial size of feature maps in ConvNets also significantly affects the transferability of adaptation. Earlier attempts to use adapters to transfer ConvNets usually downsample the feature's spatial size for memory and parameter efficiency.
For simplicity, we adopt the same activation function used in the backbone as the non-linearity at the middle of the bottleneck. The effective receptive field of the modulated feature maps produced by Conv-Adapter is thus similar to that of the adapted blocks in the backbone.
Apart from the receptive field mismatch, the spatial size of feature maps in ConvNets also significantly affects the transferability of adaptation. Earlier attempts to use adapters to transfer ConvNets usually downsample the feature's spatial size for memory and parameter efficiency.
In summary, it is crucial to design the architecture and adapting scheme of the PET module computing $\Delta\mathbf{h}$ for ConvNets to have the same spatial size of feature maps and the same receptive field of convolutions for transferability.
They can all process the sequential features globally with long-range dependencies, as the computing blocks in Transformers do. Although it is possible to apply linear layers, or equivalently $1\times 1$ convolutional layers [48], to adapt the feature maps of ConvNets, it is intuitive that this might produce in...
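One way to satisfy both requirements is a depthwise convolution with 'same' zero padding, whose kernel size can be chosen to match the adapted block. The naive NumPy sketch below illustrates the idea only; it is not the Conv-Adapter implementation.

```python
import numpy as np

def depthwise_conv2d(x, kernels):
    """Depthwise convolution with 'same' zero padding: each channel is
    filtered independently, so the spatial size is preserved and the
    receptive field is set by the kernel size.
    x: (C, H, W); kernels: (C, k, k) with odd k."""
    c, h, w = x.shape
    k = kernels.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x, dtype=float)
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                out[ch, i, j] = np.sum(xp[ch, i:i + k, j:j + k] * kernels[ch])
    return out
```

Unlike a $1\times 1$ layer, the $k\times k$ depthwise filter gives the adapter a spatial receptive field while keeping the output the same size as the input.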
C
Existing PINN methods face challenges in managing abrupt variations or discontinuities in dynamical systems. Such changes often signal shifts in system dynamics or the influence of external factors: for example, detecting leakages in pipelines using limited sensor data [18]; traffic flow management by predicting conge...
Existing PINN methods face challenges in managing abrupt variations or discontinuities in dynamical systems. Such changes often signal shifts in system dynamics or the influence of external factors: for example, detecting leakages in pipelines using limited sensor data [18]; traffic flow management by predicting conge...
Deep learning and machine learning methods are widely studied and used in academia and industry. They perform successfully in tasks such as dimensionality reduction [1], computer vision [2, 3], multimodal learning [4, 5], and time series analysis [6]. Recent advancements have further expanded the applicability of deep ...
While changepoint detection methods have shown promise in identifying significant shifts in data characteristics across various fields—from high-dimensional time series data [31, 32, 33], computer vision [34, 35], speech recognition [36, 37], real-time medical monitoring [38, 39], to disturbance localization in power...
We introduce a novel method for identifying changepoints in dynamic systems governed by general PDEs dynamics. Our approach works with piecewise-constant time-changing parameters and leverages total variation regularization on the first-order differences of parameters. We also propose an online learning strategy that ...
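The core of this formulation can be sketched in a few lines: a data-misfit term plus a total-variation penalty on the first-order differences of a piecewise-constant parameter sequence, with changepoints read off from the fitted jumps. The names and the least-squares misfit below are illustrative, not the paper's exact objective.

```python
import numpy as np

def tv_objective(theta, residuals, lam):
    """Data misfit + lambda * total variation of the parameter sequence
    (sum of absolute first-order differences)."""
    return np.sum(residuals ** 2) + lam * np.sum(np.abs(np.diff(theta)))

def changepoints(theta, tol=1e-6):
    """Indices where the fitted piecewise-constant parameter jumps."""
    return np.nonzero(np.abs(np.diff(theta)) > tol)[0] + 1
```

The TV penalty drives most consecutive differences to exactly zero, so the surviving jumps directly mark the changepoints.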
C
By a leg of $f$, we mean a maximal interval $[i,j]\subseteq[1,m]$ such that, for $h$ in the range $i\leq h<j$, the difference $d=f(h+1)-f(h)$ ...
A number $h$ which forms the boundary between two consecutive legs will be called a waypoint. We count the numbers $1$ and $m$ as waypoints by courtesy, and refer to them as terminal waypoints; all other waypoints are internal. Thus, a walk consists of a sequence of legs from one waypoint to another.
If $h$ is an internal waypoint where the change is from an increasing to a decreasing leg, we call $h$ a peak; if the change is from a decreasing to an increasing leg, we call it a trough. Not all waypoints need be peaks or troughs, because some legs may be flat; however, it is these waypoints that will...
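These definitions translate directly into code; here is a sketch on 0-indexed Python lists (the text's interval $[1,m]$ becomes indices $0$ to $m-1$).

```python
def legs(f):
    """Maximal index intervals (i, j) on which the step f[h+1] - f[h]
    is constant, i.e. the legs of the walk f."""
    out, i = [], 0
    for h in range(1, len(f) - 1):
        if f[h + 1] - f[h] != f[h] - f[h - 1]:
            out.append((i, h))
            i = h
    out.append((i, len(f) - 1))
    return out

def peaks_and_troughs(f):
    """Internal waypoints joining an increasing leg to a decreasing one
    (peaks) or the reverse (troughs); flat legs give neither."""
    peaks, troughs = [], []
    ls = legs(f)
    for (i1, j1), (i2, j2) in zip(ls, ls[1:]):
        d1, d2 = f[i1 + 1] - f[i1], f[i2 + 1] - f[i2]
        if d1 > 0 and d2 < 0:
            peaks.append(j1)
        elif d1 < 0 and d2 > 0:
            troughs.append(j1)
    return peaks, troughs
```

For the walk `[0, 1, 2, 1, 0, 1]`, the legs are up, down, up; the shared boundary of the first two is a peak and of the last two a trough.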
$f$ be a walk such that $w=u^{f}$. Since $|u|<|w|$, $f$ has at least two legs. If $f$ has any flat legs, then $w$ contains a repeated letter and so is of the f...
that $w'=u^{f'}=v^{g'}$ ...
A
To define the information density in a way that respects intuitive notions: in particular, it should satisfy that adding more measurements can only increase the information, never decrease it; that information is inversely proportional to measurement uncertainties; and that, with Gaussian noise and linear models, measuring...
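For the linear Gaussian case alluded to above, the Fisher information matrix satisfies all three requirements. The sketch below, including the model $d = Am + e$ with independent noise and all names, is our illustration, not the paper's construction.

```python
import numpy as np

def fisher_information(A, noise_var):
    """Fisher information of the linear Gaussian model d = A m + e,
    with e_i ~ N(0, noise_var[i]) independent: I = A^T Sigma^{-1} A.
    Each measurement row a_i adds the PSD term a_i a_i^T / noise_var[i],
    so extra measurements can only increase I (in the Loewner order),
    and I scales inversely with the measurement variances."""
    A = np.asarray(A, dtype=float)
    w = 1.0 / np.asarray(noise_var, dtype=float)
    return (A.T * w) @ A
```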
As a first example of where we believe that information densities could be used, we consider the regularization of inverse problems. In a large number of practical applications, one regularizes inverse problems by adding a penalty term to the misfit function for the purpose of penalizing undesirable aspects of the rec...
Herein, let us first provide some perspectives in Section 2 on how one formulates inverse problems, and how the different philosophical approaches inform our approach. We then address the goals mentioned in the previous paragraph by first considering a finite-dimensional, linear model problem in Section 3 that we use ...
The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient. Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ...
accuracy of measurements on the one hand, and the uncertainty in the recovered parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us...
B
 Bernard et al. (2018) introduced visual interactive labeling (VIAL) as a method for combining active learning with visualization systems for the exploration and selection of data points for labeling. Visual encodings that expose the internal state of the learning model can help in the labeling process (Liu et al., 201...
After the data processing, we aggregate the embeddings by the manuscripts to which they belong. The inter-manuscript relations are then visualized in a graph. We use a point cloud that visualizes each image based on its two-dimensional embedding to present and filter images. Then, we enable the inspection and selection of a spec...
 Bernard et al. (2018) introduced visual interactive labeling (VIAL) as a method for combining active learning with visualization systems for the exploration and selection of data points for labeling. Visual encodings that expose the internal state of the learning model can help in the labeling process (Liu et al., 201...
Our approach shares similarities with works on visual-interactive labeling. We do not include active learning strategies, but our exploratory approach can be extended to encompass recommendations based on active learning. Currently, the identification of labeling candidates is a choice made by expert users. They have ...
We designed visualizations as part of our research thinking process to combine two (partially-)annotated image datasets representing the same genre, but originating in two different research initiatives exhibiting differences and inconsistencies in their vocabulary. The visualizations were used for labeling purposes a...
C
Table VIII shows the similarities between the source task (QNLI) and target tasks across different sampling numbers. Note that BERT-base is used in this study. It can be seen that the similarities across different sampling numbers change only slightly. There are relatively larger changes when we only sample 50 instance...
Additionally, we report the statistical analysis results of our predicted similarities across different sampling numbers in Table IX. Note that “std” means the standard deviation (average score of all target tasks), and “| ours-avg |” denotes the absolute difference (average score) between the predicted similarities i...
Table IX: Standard deviation of predicted similarities across different sampling numbers (i.e., 50, 100, 200, 300 in Table VIII). “| ours-avg |” denotes the absolute difference (average score) between the predicted similarities used in this paper and the average similarities of multiple sampling numbers. BERT-base is ...
Additionally, we report the statistical analysis results of our predicted similarities across different sampling numbers in Table IX. Note that “std” means the standard deviation (average score of all target tasks), and “| ours-avg |” denotes the absolute difference (average score) between the predicted similarities i...
Table IX: Standard deviation of predicted similarities across different sampling numbers (i.e., 50, 100, 200, 300 in Table VIII). “| ours-avg |” denotes the absolute difference (average score) between the predicted similarities used in this paper and the average similarities of multiple sampling numbers. BERT-base is ...
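The two statistics in Table IX can be computed as follows; the array names are ours, with `similarities` holding one row per target task and one column per sampling number.

```python
import numpy as np

def stability_stats(similarities, ours):
    """Return (mean per-task std across sampling numbers,
    mean |ours - average over sampling numbers|)."""
    sims = np.asarray(similarities, dtype=float)  # (n_tasks, n_samplings)
    std = sims.std(axis=1).mean()
    gap = np.abs(np.asarray(ours, dtype=float) - sims.mean(axis=1)).mean()
    return std, gap
```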
A
Hack Forums Discussions. There was an immediate surge of posts on Hack Forums mentioning the two countries after the invasion, from near zero to over 120 per day; see Figure 5. Kruskal-Wallis tests confirm the significance ($H(2)=72.98$, $p<.0001$)...
This posting volume is tiny when set against the 62M-post size of Hack Forums, showing the trivial contribution of the Russia-Ukraine discussions to the overall landscape (as with the previous evidence seen from defacement and DDoS attacks). These posts are centralised: 97.22% belong to the top 5 popular subforums. Rank...
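The Kruskal-Wallis statistic used above compares mean ranks across groups; a tie-free NumPy sketch follows (a real analysis should also apply a tie correction, e.g. via `scipy.stats.kruskal`).

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all
    observations jointly, then compare mean ranks across groups."""
    data = np.concatenate(groups)
    order = data.argsort()
    ranks = np.empty(len(data), dtype=float)
    ranks[order] = np.arange(1, len(data) + 1)
    n = len(data)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n + 1) / 2) ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h
```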
The Global Scale. We again see concentrations in DDoS attacks, with the top 10 countries accounting for 70.49% of all victims. The US still dominates (24.68%), followed by Brazil (11.99%) and Bangladesh (8.10%). Ukraine took 1.57%, while Russia lies 8th at 3.61%. Our DDoS and defacement datasets show some correlations...
Underground Forum Discussions. Online forums are structured around subforums containing threads with multiple posts. To assess changes in discussion topics within the hacking community, we use a snapshot of the most popular hacking forum, Hack Forums from the CrimeBB dataset (Pastrana et al., 2018b). The forum is a pla...
As with defacements, the evidence above suggests a genuine increase in DDoS attacks targeting Russia and Ukraine as the conflict began, standing out significantly from most top countries. Russia was still the first to be hit at scale, followed by Ukraine shortly after. The outbreak of both defacement and DDoS attacks ...
A
The top $k$ samples of $\mathscr{D}^{*}$ are denoted as the set $S_k(\mathscr{D}^{*})$, where settin...
To evaluate our ranking methods, we use transferability at $k$ as defined in (2). Note that transferability at $k$ can also be viewed as the attack success rate on $f$ for the top-$k$ recommended samples. We remind the reader that ranking is performed without access to $f$ o...
Identifying the top $k$ samples is not only useful for the attacker, but also for the defender. This is because a defender can evaluate his or her model's robustness to attacks given the attacker's best efforts (attacks using the top $k$ samples). We call this performance measure the transferability at $k$...
Finally, in contrast to prior works, we suggest a more grounded approach to evaluating model security in transfer attacks. We recommend that the community evaluate their models against the top $k$ most transferable samples from a blackbox perspective, and not by taking the average success of all samples in wh...
To find the best $x_i$ or $\delta_j$ using the surrogate $f'$, the attacker must rank potential adversarial e...
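The ranking-and-evaluation loop can be sketched as follows; the surrogate's score here is any scalar proxy for transfer likelihood, and the names are ours.

```python
import numpy as np

def transferability_at_k(surrogate_scores, target_success, k):
    """Rank candidates by the surrogate's score (descending) and return
    the attack success rate on the target model for the top-k samples."""
    top_k = np.argsort(surrogate_scores)[::-1][:k]
    return float(np.mean(target_success[top_k]))
```

In the black-box setting, only `surrogate_scores` is available to the attacker; `target_success` is what the defender measures afterwards.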
B
Consequently, the fidelity kernel should only be used for datasets where the data-embedded states are ‘not too distant’ in the Hilbert space. Finally, our study of noise suggests that polynomial-depth data embeddings in noisy hardware suffer from exponential concentration, thus presenting a serious barrier to achieving a...
In addition, we show that training parametrized quantum kernels using kernel target alignment suffers from an exponentially flat training landscape under similar conditions to those leading to barren plateaus in QNNs. That is, when constructing the parametrized part of the data embeddings, one should avoid features tha...
$\mathrm{Var}_{\boldsymbol{\theta}}[\kappa_{\boldsymbol{\theta}}(\boldsymbol{x}_i,\boldsymbol{x}_j)]$ and find that the same ones that lead to BP...
Here we argue that quantum kernel methods experience a similar barrier to barren plateaus. Crucially, the trainability guarantees enjoyed by kernel methods only become meaningful when the values of the kernel can be efficiently estimated to a sufficient precision such that the statistical estimates contain information...
There are a number of causes that can lead to barren plateaus, including using variational ansatze that are too expressive [20, 33, 34] or too entangling [23, 35]. However, barren plateaus can even arise for inexpressive and low-entangling QNNs if the cost function relies on measuring global properties of the system [2...
A
Most current methods [25, 1, 31, 17, 5, 4] use physical variables, e.g., driving speed, acceleration, time gap, heading angle, yaw angle, distances, etc., for lane change recognition. Nevertheless, physical variables cannot represent the type of target objects, as they do not contain enough semantic information, where...
: The second method builds on the first one. It uses the same 3D action recognition networks as the first method. Bounding box information is embedded into each frame of the RGB video data to improve classification and prediction accuracy. This method assumes that a separate vehicle prediction method has been used...
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition. Specifically, our method takes advantage of recent powerful action recognition models and mimics human drivers' behaviour by utilizing only the visual information. The first approach (RGB+3DN) ...
: The first method utilises only the visual information collected by the front-facing cameras, which is the same kind of information and approach that human drivers would use to predict manoeuvres. We test this approach with seven 3D action recognition networks involving I3D networks, SlowFast networks, X3D networks an...
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per...
B
representation and semantics: If we have $p_a=p_b$, it directly follows that $p_a\equiv p_b$ ...
$p_a\equiv p_b$ but $p_a\neq p_b$ ...
be changed and API functions can be substituted. Given a program $p_a$, there typically exist many $p_b\in P$, such that
$k$-anonymity. Let $p_a$ and $p_b$ be two programs written by developers $a$ and $b$ with $p_a\neq p_b$ ...
representation and semantics: If we have $p_a=p_b$, it directly follows that $p_a\equiv p_b$ ...
B
CoOp-CA vs CoOp-CS. It is evident that CoOp-CA surpasses CoOp-CS in terms of average accuracy on all datasets. Given that CoOp-CA employs fewer parameters, it exhibits greater data efficiency. This is corroborated by its higher scores with fewer training samples. Nevertheless, as the amount of training data increases, ...
SoftCPT-NATA vs CoOp-CA. Compared to CoOp-CA, SoftCPT-NATA improves the average scores by 0.73%, 5.09%, 3.63% and 2.80% on General-10, Plant-6, RS-8 and Fashion-20, respectively. While comparing the scores with different shots, SoftCPT-NATA wins in almost all cases. This demonstrates the effectiveness of the proposed ...
Fashion-20 is a specialized dataset for fashion classification (a key technique for product data governance on e-commerce platforms), which we collected; it has about 24K images in 20 tasks. All the images were obtained by web search using pre-defined keywords. Before human labeling, data cleaning was perform...
For this aim, we proposed soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features with a pre-trained language model and transforming the task features into the neede...
Comparison to State-of-the-art Methods. CoCoOp, ProGrad, KgCoOp, and PLOT are recent advancements that build upon the foundation of CoOp. When comparing SoftCPT-NATA to CoCoOp, our method demonstrates impressive average accuracy improvements of 3.19%, 6.35%, 10.93%, and 3.40% on the General-10, Plant-6, RS-8, and Fashi...
A