| context | A | B | C | D | label |
|---|---|---|---|---|---|
”Like if I read a news about an app or something, I would send community message probably to all, I’d say I’m concerned about an app. I will be probably cautioning everyone not to use that app because it’s using something fishy accessing private information or something like that. That’s how everyone could also take pa... |
Although most participants disapproved of the feature that allowed them to hide apps from one another, some (37%, N=7 Parents and 47%, N=9 Teens) mentioned a positive aspect of this feature. They identified that this feature enabled users to have personal privacy on their app usage and a sense of independence. For exa... |
The feature that garnered the most discussion among parents and teens was the ability to hide or show apps to one another. Overall, parents and teens were both concerned about this feature because it promoted secrecy and negated some of the purpose behind the app. One important thing to be noted here is that when we a... | Overall, we found that most parents and teens made few considerations toward their own online safety or privacy when installing new apps or granting permissions to the apps they installed (RQ1). Meanwhile, parents often manually monitored the apps their teens installed but gave little thought to the permissions granted... |
Almost half of the participants (42%, N=8 Parents and 53%, N=10 Teens) pointed out that this feature may affect the transparent relationship in their families. They primarily believed in a bi-directional transparency-based relationship and therefore also expected their teens/parents not to hide any apps from them. Many... | B |
$\widehat{RDAD}(x; X_{1}, \ldots, X_{N}, D, N, k_{\mathrm{den}}, k_{\mathrm{DTM}})$... | Consider, again, the additive noise-corrupted “Antman” two-square dataset on the right of Figure 5. The persistence diagrams for the distance-to-measure filtration and the RDAD filtration are shown in Figure 8 with different confidence bands. Note that in both figures the bands constructed by oracle bootstrapping and b... | Two squares are clearly visible in the scatter plot in the right subplot of Figure 1. However, the blue point corresponding to the smaller square in the persistence diagram of the distance filtration is very close to the diagonal (it is at the tip of the cluster of red diamonds near the origin). On the other hand, for ... | For the two-square example above, the points are sampled from a piecewise constant density supported on the two square annuli, which we may take to be $\Omega_{1}$ and $\Omega_{2}$.
| We illustrate the scale invariance property with the “Antman" example in Figure 4. The same number of points are sampled randomly from two square annuli, which are scaled versions of each other. Thus, the two square holes
give two nearby (overlapping) blue circular points in the persistence diagrams in Figure 4. | D |
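The distance-to-measure filtration mentioned in this excerpt is typically built on the empirical DTM estimator; a minimal numerical sketch (the k-nearest-neighbor form below is the standard estimator, assumed rather than quoted from the excerpt, and the RDAD filtration further combines it with a density estimate):

```python
import numpy as np

def dtm(query, points, k):
    """Empirical distance-to-measure (DTM): root mean of squared distances
    from `query` to its k nearest sample points. Standard estimator; the
    excerpt's RDAD filtration builds on quantities like this one."""
    sq_dists = np.sum((np.asarray(points) - np.asarray(query)) ** 2, axis=1)
    return float(np.sqrt(np.mean(np.sort(sq_dists)[:k])))

# Both sample points lie at distance 5 from the origin, so DTM = 5.
value = dtm([0.0, 0.0], [[3.0, 4.0], [0.0, 5.0]], k=2)
```

Unlike the plain distance filtration, the DTM averages over k neighbors, which is what makes it robust to the additive noise in the "Antman" example.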
Recent works have made progress in capturing high-level AU semantic relations in an implicit way (Corneanu, Madadi, and Escalera 2018; Niu et al. 2019) by exploiting correlations between AUs via probabilistic graphic models or in an explicit way (Li et al. 2019; Shao et al. 2020) by constructing an AU semantic graph ac... | This is known as the subject variation problem, which makes it challenging for AU recognition models to generalize across subjects.
Although previous works have noticed that the subject variation problem exists in the facial action unit recognition task, as far as we know, there have been few works focusing on answering the whys a...
As for the subject variation problem, works such as (Chen et al. 2013) provide a solution for enhancing the generalizability of the AU recognition model by training personalized AU classifiers for each subject, and works such as (Zen et al. 2016; Wang and Wang 2018) make attempts to relieve the subject-related prediction bias t... | Although these works have realized that the data distribution of training subjects differs from that of unseen subjects, they are still based on the assumption that the data distributions of the source and target domains share some similarities.
In contrast, we formulate the causalities among variables in AU recognition ta... | This paper focuses on explaining the whys and wherefores of the subject variation problem in AU recognition with the help of causal inference theory and on providing a solution for subject-invariant facial action unit recognition by deconfounding variable $S$ in the causal diagram via causal intervention. Unlike previ... | A |
TPS Limited by Node: While BBP holds the potential for nearly constant block propagation time, its achievable TPS is mainly constrained by the BBP block processing time at each node, i.e., the EVM limitation. In the ideal case, BBP can finish the network consensus as soon as a new block is successfully processed by the... |
BBP accelerates block propagation by generating consistent PPB among different nodes, proving advantageous in a network without malicious nodes, as demonstrated in Section 5.3.1. However, in the practical blockchain network, three types of potential attacks may interfere with the PPB generation to offset the benefit: ... | Countermeasure for Attacks II and III: Attacks II and III prolong the block propagation delay of BBP by violating TSO algorithms to generate invalid PPBs, e.g., selecting transactions with low GAS prices rather than high GAS prices. Fortunately, malicious nodes with attacks II and III can be simply identified by a sco... |
The experimental results are depicted in Fig. 7: (a) the non-synchronized PPBs and (b) 90% Block Propagation Time. From Fig. 7, it is evident that the average proportion of non-synchronized PPBs, in the absence of malicious nodes, is approximately 3.5%. Since 90% node propagation is good enough, this 96.5% proportion ... | To validate the BBP robustness in the network with these attacks, we measure the proportion of non-synchronized PPBs for BBP and compare 90% block propagation times for BBP and BHP under the testbed network with various malicious nodes. Note that 90% block propagation time for BHP is only counted under the network with... | A |
In this paper, we study linear function approximation in POMDPs to address the statistical challenges amplified by infinite observation and state spaces. In particular, our contribution is fourfold. First, we define a class of POMDPs with a linear structure and identify an ill-conditioning measure for sample-efficient ... | Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2... | Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate... | In the context of reinforcement learning with function approximation, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ... | B |
The following keywords were used to search all the databases: speech, language, disorder, impairment, assessment, therapy, rehabilitation, treatment, AI, artificial intelligence, automated, automatic. Boolean operators were used to combine the terms as: | We presented the language distribution of the papers based on the language addressed by the AI-based automated speech therapy tools as reported in the studies (see Figure 8). The most addressed languages were English (10 studies) and Spanish (4 studies). Furthermore, two studies addressed the Cantonese language, and th... |
We conducted this systematic literature review based on a sample of 24 out of 678 research papers drawn from the Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy to... | We further report the geographical distribution of the included studies based on the
location of the study indicated in the paper (see Figure 7). We looked at the author’s affiliation and funding agency when required. Most papers reported on studies which |
There were 91 unique authors identified from the included studies. The VOSviewer software was used to calculate the most impactful authors, generate co-authorship clusters, and perform keyword co-occurrence analysis (Van Eck, n.d.). All the authors were counted irrespective of the authorship orde... | D |
In a dataset with a community structure, we show the compression ratio for intra-community pairs is higher than that of inter-community pairs even in settings where the pre-PCA inter-community and intra-community distances are very similar. We demonstrate (through a random vector mixture model) that this ratio gap ref... |
This gives us initial theoretical evidence that in the random-mixture model with outliers, our simple outlier detection method can detect outliers when a non-negligible fraction of the points are outliers. Next, we use simulations of our model to test the efficacy of our outlier detection method and its impact on the ... | Finally, we test the relevance of compression ratio as a metric and the outlier detection method in real-world data. We focus on single-cell data, as it is both high dimensional (20,000–40,000 dimensions)
and noisy [KAH19], using datasets from a popular benchmark database [DRS20... |
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt... | As a motivating byproduct, we show that this metric can be used to design an outlier detection method that can detect points deviating from a community structure. Furthermore, we show that this method can improve the accuracy of clustering algorithms in real-world high-dimensional datasets.
| D |
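The compression-ratio idea in this excerpt can be sketched numerically. The pre-/post-PCA distance ratio below is an illustrative definition (the excerpt's exact formula may differ), and the synthetic two-community data are ours:

```python
import numpy as np

def compression_ratio(X, i, j, k=2):
    """Distance between points i and j before PCA divided by their distance
    after projecting onto the top-k principal components. Illustrative
    definition; orthogonal projection never expands, so the ratio is >= 1."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions
    Z = Xc @ Vt[:k].T                                  # k-dim PCA embedding
    return np.linalg.norm(X[i] - X[j]) / np.linalg.norm(Z[i] - Z[j])

# Two well-separated Gaussian communities in 50 dimensions: an
# intra-community pair (pure noise difference) is compressed much more
# than an inter-community pair (difference along a top component).
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 50)); A[:, 0] += 5
B = rng.normal(size=(100, 50)); B[:, 0] -= 5
X = np.vstack([A, B])
intra = compression_ratio(X, 0, 1)     # both points in community A
inter = compression_ratio(X, 0, 150)   # one point from each community
```

This reproduces the qualitative gap the excerpt describes: the intra-community ratio exceeds the inter-community one even though raw distances are comparable.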
Note that the ground truth question-answer pairs from the dataset annotations are not evenly distributed, meaning some of the images do not have corresponding questions and answers, therefore, the 100 candidates are formed from two sources. For the images with GT QA pairs, we include the GT pairs as part of the candida... | We conduct experiments on the Scene Graph Generation (SGG) task to test the feasibility of the task setting with missing visual input and to demonstrate the effectiveness of our proposed method.
The SGG task aims to generate a graphical representation of the scene from given images. | We observe that PredCls does not fluctuate much in the case of missing vision compared to the other two metrics, SGCls and SGDet. This is consistent with the previous findings in [2], where the authors find that object labels are highly predictive of relation labels but not vice versa. In contrast, SGCls and SGDet drop ...
We train the entire pipeline following the widely adopted stepwise training mechanism as in previous studies [1, 2, 19, 25, 3]. We firstly train the object detector on the image input with missingness. For the second stage, we freeze the parameters in the objector detector and attach the proposed SI-Dial to the pipeli... | We evaluate our generated scene graphs using the three evaluation metrics: (1) Predicate Classification (PredCls): predict the predicates (relations) given the sets of ground truth bounding boxes and object labels. (2) Scene Graph Classification (SGCls): predict the predicate as well as the object labels given the sets... | D |
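The PredCls/SGCls/SGDet metrics described above are all reported as triplet Recall@K; a simplified sketch (real SGG evaluators additionally match predicted boxes to ground truth by IoU, which is omitted here):

```python
def triplet_recall_at_k(pred_triplets, gt_triplets, k):
    """Triplet Recall@K: fraction of ground-truth (subject, predicate,
    object) triplets recovered among the top-k ranked predictions.
    Simplified sketch without the usual bounding-box IoU matching."""
    top_k = set(pred_triplets[:k])
    return sum(t in top_k for t in gt_triplets) / len(gt_triplets)

# Hypothetical ranked predictions and ground truth for one image.
preds = [("man", "riding", "horse"),
         ("horse", "on", "grass"),
         ("dog", "near", "horse")]
gt = [("man", "riding", "horse"), ("dog", "near", "horse")]
r1 = triplet_recall_at_k(preds, gt, k=1)  # only the first hit counts
r3 = triplet_recall_at_k(preds, gt, k=3)  # both GT triplets recovered
```

The three settings differ only in what is given (GT boxes and labels, GT boxes only, or nothing), not in this recall computation itself.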
We use the agent position profiles $\mathbf{x}_{1}, \mathbf{x}_{2}$ and $\mathbf{x}_{3}$ as that in the proof of Theorem 10 for agents 1... | We use the same entrance fee function as that in the proof of Theorem 5 except that we set the entrance fee at the location of agent 0 to 0.
Then to get a total cost approximation ratio less than 2 in all three agent position profiles, one of the two facilities must be put at the position of agent 0 with probabil... |
We use the same agent position profiles $\mathbf{x}_{1}$, $\mathbf{x}_{2}$ and $\mathbf{x}_{3}$ as in the proof of Theorem 5 for agents 1 a... | We use the agent position profiles $\mathbf{x}_{1}, \mathbf{x}_{2}$ and $\mathbf{x}_{3}$ as that in the proof of Theorem 10 for agents 1... | We use the same entrance fee function as that in the proof of Theorem 10 except that we set the entrance fee at the location of agent 0 to 0.
Then in order to get an approximation ratio less than or equal to 3, one of the two facilities must always be located at the position of the new agent 0 with probability 1. T... | D |
Every chain alternates edge link triangles and vertex link triangles; if we start from the variable triangle side, the first triangle is an edge link triangle. The number of each type of triangle is $2k_{2}$ (see Figure 8 for $k_{2}=1$... |
Every chain has exactly $2k_{1}$ vertex link triangles (see Figure 7 for $k_{1}=1$). It is clear that every pair of consecutive vertices in the chain are odd-degree vertices in some induc... | This technique does not work for our class under consideration since this operation adds 3 new vertices of degree 2 and they are not part of triangles. Instead of this operation we propose other alternatives to avoid induced cycles of size at most $k$ for any $k$. But all of them come with some cost. ... | In any case, the contact vertex is colored white in any valid bi-coloring by Theorem 2 and its neighbor in the edge link triangle should be black, again by Theorem 2. In consequence, the vertices of the edge link triangle in the chain should have different colors. All other edges of the chain form part of some induced ... | It is clear that $S(G(F))$ admits a DIM if and only if the resulting graph $Q$, after the replacement of variable-clause edges by the chains described above, has a DIM. Furthermore, every variable-clause edge of $S(G(F))$... | C |
Capitalizing on the methodology proposed in 17, we take an optimization-based approach to develop analytical tools fulfilling such quest for theoretical guarantees of ReLU-based approximations of traditional stabilizing controllers for polytopic systems. We develop a purely offline method based on the systematic const... |
Capitalizing on the methodology proposed in 17, we take an optimization-based approach to develop analytical tools fulfilling such quest for theoretical guarantees of ReLU-based approximations of traditional stabilizing controllers for polytopic systems. We develop a purely offline method based on the systematic const... | In §5 we will then show how to compute the worst-case approximation error of $e(\cdot)$ exactly, thus providing a condition sufficient to certify the stability and performance of a ReLU-based approximation of $\Phi(\cdot)$ as defined by any of (3)–(5). A discussion on the complexit... | We show how to guarantee the (uniform, in a set) ultimate boundedness property 6 of a discrete-time polytopic system when the ReLU approximation replaces a traditional stabilizing controller. Specifically, by focusing on the approximation error between NN-based and traditional controller-based state-to-input mappings, ... | We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes...
Loss:
We can estimate the unknown physical parameters for a given video based on a rendering loss which penalizes the discrepancy between the input video frames and the rendered video. All estimated parameters and network weights are shown in green text in the figure. | To render the video frames, we draw inspiration from the recent advances in neural implicit representations.
To this end, we combine one representation for the static background with a representation for appearance and shape of dynamic foreground objects. By composing the learned background with the dynamic foreground ... |
Our main goal is the estimation of physical parameters from a single video. We focus on the setting of a static camera, a static background, and rigid objects that are moving according to some physical phenomenon and exhibit a motion that can be restricted to a plane. | We present diverse experiments in which the ODE parametrizes a rigid-body transformation of the foreground objects. We emphasize that conceptually our model is not limited to rigid-body motions, and that it can directly be extended to other cases, for example to nonlinear transformations for modelling soft-body dynamic... | We model the dynamics of the objects using an ordinary differential equation (ODE) and use implicit neural representations to model the appearance, where the static background and the planar dynamics allow us to model the appearance in 2D. Our objective is to estimate the unknown physical parameters, and the initial co... | B |
The above definition is appropriate since it means that $p(c_{k}\mid|y_{i}\rangle)=p(c_{k}\mid|y_{j}\rangle)$... |
As discussed earlier, the QSC framework ensures minimality of quantum communication resources by extracting and compressing the semantic representations of the data, unlike existing semantic-agnostic QCNs. Moreover, to assess the accuracy of the QSC performance within the quantum semantics’ extraction, transmission, r... |
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi... | In general, there is a tradeoff between minimality and accuracy in the QSC framework as a smaller number of quantum communication resources leads to smaller quantum semantic fidelity. Thus, we formulate the QSC minimality-accuracy tradeoff problem as follows:
|
As shown in Fig. 1, the constructed quantum semantic representations in the form of $d_{2}$-dimensional quantum states must be transmitted through quantum channels to the receiving quantum node. The quantum communication process must preserve the accuracy of ... | C |
From Definition 1, a fault is said to be diagnosable if it can be detected within a finite number of observable events after its occurrence. In order to detect the fault accurately, one should ensure that the fault can be detected for any (long enough) execution after its occurrence. | Necessity: The existence of a secret state in $G_{d}$ implies that the occurrence of the secret event is revealed following a sequence of observations, which leads to that state. Furthermore, by the diagnoser construction, if there exists a secret stat... | When considering a secret event, once it gets revealed under some execution after its occurrence, we say that the privacy of the system has been compromised (since there is a possibility for the secrecy of the event to be compromised).
Thus, there is clearly an inverse relationship between event diagnosis and event con... | We are interested in hiding from an external observer (a curious eavesdropper) confidential information of the system that is represented as the occurrences of events from $E_{S}$, which is called the secret event set.
Accordingly, the privacy of the s... | Recall that a defensive function is said to be $C$-enforcing if, regardless of what event is generated by a given system, it can manipulate observations and output a sequence that does not reveal the occurrences of secret events.
When the defensive function is unconstrained (i.e., each event $t$ in $E_{o}$... | B |
Authors in [pateromichelakis2018context] select routing paths based on the mobility and traffic conditions, then they perform a link-resource time sharing according to flow occupations.
A particle swarm optimization (PSO) algorithm is described in [tafintsev2020aerial] to properly manage unmanned aerial vehicles (UAVs)... | Risk-sensitive RL is adopted in [vu2018ultra] to control transmitter beamwidth and power so as to maximize data rate.
In [zhang2021resource], the authors propose a resource allocation framework based on advantage actor critic and column generation to maximize the throughput of static mmWave IAB networks. | In [guo2020joint], the authors jointly handle handover and power allocation to improve throughput and reduce handover frequency, by developing an MARL algorithm based on proximal policy optimization (PPO).
These works demonstrate good performance of MARL algorithms in making decisions in sophisticated systems, however,... | The work in [lei2020deep] defines a spectrum allocation for IAB networks that maximizes the sum of log-rates through double DQN and actor critic techniques.
Authors in [vu2018path] resort to regret RL and successive convex approximation to perform route selection and rate allocation, respectively. | Authors in [hu2017relay] deal with the relay selection and link scheduling problem to maximize the end-to-end throughput, using 3D models of buildings as primary blockage sources.
In [yao2017outage], heuristic algorithms for user scheduling and power allocation are proposed to reduce outage occurrences. | C |
Is this a good statistical protocol? The answer depends on how much money the pharmaceutical company will make, among other things. In particular, depending on the total profit the company earns when they are approved, even companies with ineffective drugs may be incentivized to run a trial. |
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs ($\theta=0$) and companies with effective drugs ($\theta=1$). F... | Case 2: large profit. Suppose that companies who receive approval make $1 billion in profit, 100 times their investment. In this case, agents of type $\theta=0$ would choose to run trials: their expected profit from seeking approval is $40 million. On average, 5% of such agents would receive approval, s... |
Conversely, the statistical protocol changes the incentives of the agents. Consider again the large profit case above, where agents receive 100 times their initial investment if they receive approval. Now, however, suppose the principal changes to a stricter protocol such that the probability of approval is only 0.005... | Case 1: small profit. Suppose that companies who receive approval make $100 million in profit, 10 times their investment. In this case, agents of type $\theta=0$ would choose not to run trials, since their expected profit for running a trial is -$5 million. Hence, all approved drugs would be effective d... | D |
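The expected-profit arithmetic in the two cases above can be reproduced directly; the $10M trial cost and 5% approval probability for an ineffective drug are read off the excerpt's cases, while the function name is ours:

```python
def expected_profit(profit_if_approved, trial_cost=10e6, p_approve=0.05):
    """Expected profit for a company with an ineffective drug (theta = 0):
    probability of approval times the profit, minus the cost of the trial."""
    return p_approve * profit_if_approved - trial_cost

small = expected_profit(100e6)  # Case 1: -$5M, so type theta = 0 stays out
large = expected_profit(1e9)    # Case 2: +$40M, so type theta = 0 runs a trial
```

The sign of this expectation is exactly what flips between the small-profit and large-profit cases, which is why the protocol's approval probability shapes the agents' incentives.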
We recognize specific limitations associated with the use of FHE in our PPIR framework, which limited the effective use of this technique besides the optimization of the SSD loss. FHE’s computational cost during homomorphic operations poses challenges, limiting the scalability and real-time applicability of PPIR(FHE).... | We test two different techniques: (i) Uniformly Random Selection (URS), proposed by [45, 30], in which a random subset of dimension $l\leq d$ of spatial coordinates is sampled at every iteration with uniform probabilities, $p(\bm{x})=\frac{1}{d}$... |
Fig. A5: Qualitative results for affine registration with SSD between 2D medical images. The red frame is the transformed moving image using Clear+URS registration. Green and Yellow frames are the transformed images using respectively PPIR(MPC)+URS and PPIR(FHE)v1+URS. | In contrast, registering with PPIR(FHE) is not feasible when considering entire images due to computational complexity.
Nevertheless, Supplementary Figure A5 shows that neither MPC nor FHE decreases the overall quality of the affine registered images. A comprehensive assessment of the registration results is available ... |
For the SSD loss function, we provide comparison experiments with both URS and GMS [45, 30, 39] for the sake of completeness and compatibility with subsampling approaches in IR. We recognized that URS doesn’t bring substantial improvements with respect to GMS, and the latter method should be preferred in the considered a... |
Knowledge Distillation (KD). Hinton et al. [19] propose an original teacher-student architecture that uses the logits of the teacher model as the knowledge. Since then, some KD methods regard knowledge as final responses to input samples [3, 31, 58], some regard knowledge as features extracted from different layers of ... | Figure 2:
The overall framework of MEKD. Lower left: two architectures of GAN-based KD. Upper right: the process of deprivatization. A GAN is used to synthesize high-response images to the teacher model within the distribution of data in edge devices. Lower right: the process of distillation with the frozen generator. Th... | One can only guess the mapping process by using the responses to the input samples of different network layers or the relations between features and treat them as knowledge to guide the training of the student model [57].
However, in the black-box KD problem, the internal responses or relations between layers of the te... | Regardless of the method used, the essence of KD is to learn the mapping function of the teacher model from input to output, i.e., $f_{T}$.
However, it is hard to deduce the mapping function from the existing parameters of the teacher model. | The first two aim to drive the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model.
Feature-based and relation-based methods [24, 57], d... | D |
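The response-based KD that the excerpt attributes to Hinton et al. can be sketched as a temperature-softened KL objective; this is a generic sketch of that classic loss, not the MEKD method discussed above:

```python
import numpy as np

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Response-based distillation loss in the style of Hinton et al.:
    KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 to keep gradient magnitudes comparable."""
    def soft(z):
        z = np.asarray(z, dtype=float) / T
        z -= z.max(axis=-1, keepdims=True)   # numerically stable softmax
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p_t, p_s = soft(teacher_logits), soft(student_logits)
    return float(np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T ** 2)
```

Note that this loss only needs the teacher's output logits, which is why response-based KD is the only one of the three families that survives in the black-box setting the excerpt describes.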
$\sigma(f(x)) \equiv \sum_{i} c_{i}\phi_{i},\qquad E_{a} = \frac{\left\|\sum_{|i|>N} c_{i}\phi_{i}\right\|_{2}}{\left\|\sigma(f(x))\right\|_{2}}.$ |
Suppose our function $f(x)$ is (exactly) represented as a Fourier series with $|k|<N$ terms. We can equivalently store values of the function on the uniform grid with $2N+1$ points. However, when we apply the activation function $\sigma(x)$... |
Aliasing error defined in Equation 1 measures the norm of harmonics we cannot possibly resolve on the given grid relative to the norm of the function, transformed by activation. The following result gives the aliasing error for the rectifier and two extreme basis functions. | Figure 1: (a) the output of neural network $N(x)$ computed on coarse and fine grids. On each subgrid, loss and gradients are zero, so the network provides the best (alas, pathological) approximation to $f(x)=2x$ on the interval $[-1,1]$... |
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. The dec... | B |
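The aliasing error $E_a$ discussed above can be estimated numerically with the FFT; a sketch under the assumption that a fine uniform periodic grid resolves $\sigma(f)$ well enough (the paper's exact estimator may differ):

```python
import numpy as np

def aliasing_error(f_vals, N):
    """Relative L2 norm of the harmonics with |k| > N of sigma(f), with
    sigma = ReLU, estimated on a fine uniform periodic grid. By Parseval,
    coefficient norms equal the corresponding function norms."""
    g = np.maximum(f_vals, 0.0)           # sigma(f(x)) with sigma = ReLU
    c = np.fft.fft(g) / len(g)            # Fourier coefficients c_k
    k = np.fft.fftfreq(len(g)) * len(g)   # integer wavenumbers
    tail = np.abs(k) > N                  # harmonics beyond the cutoff
    return float(np.linalg.norm(c[tail]) / np.linalg.norm(c))

x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
# ReLU(sin 3x) has slowly decaying even harmonics, so some energy
# always falls beyond any fixed cutoff N.
err = aliasing_error(np.sin(3.0 * x), N=8)
```

Training on a finer grid shrinks this tail, which is the mitigation the excerpt mentions before introducing the decoupled interpolation of Section 4.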
That means even matching two layers of teacher and student with similar semantics, the student may not be able to catch up with the teacher in the pixel-to-pixel manner due to semantic mismatch.
In contrast, our method leverages a target-aware transformer to address the semantic mismatch in a more efficient manner. | Results on ImageNet.
Since Cifar-100 only contains 50,000 training images, we further evaluate our approach on a more challenging dataset. Here, we choose ResNet34 and ResNet18 as teacher and student model respectively. We show the Top-1 accuracy of the student and teacher model in Table 2. Our method outperforms the s... | Table 3: Comparing the semantic segmentation results (in mIoU%) of different methods on Pascal VOC.
We can observe that our method surpasses all previous baselines by a significant margin. Specifically, on the popular compact architecture MobileNetV2, our method improves the student by 5.39% compared to the stand-alon... | As the feature map size is fairly small when performing distillation on image classification, we plan to further investigate the generalization ability of our method on semantic segmentation, where the feature size is drastically larger. As in Section 3.2, we adapt our TaT method with the patch-group and anchor-point s... | We evaluate the effectiveness of our method on two popular computer vision tasks, image classification and semantic segmentation.
On the ImageNet classification dataset, the tiny ResNet18 student can be boosted from 70.04% to 72.41% in terms of the top-1 accuracy, and surpasses the state-of-the-art knowledge distillati... | B |
To show the clusters in the embedded dataset, we use the cylinders and the weights. We partition the total data into three groups based on the number of cylinders as follows: in the first group we place all the cars with 4 or fewer cylinders, in the second group are the cars with 5 or 6 cylinders, and in the third group we put... | Figure 8 shows that ENS-t-SNE was able to find an embedding of the dataset in 3D separating the data into several clusters. The first perspective groups together datapoints with the same colors, i.e., cars with similar numbers of cylinders are grouped together; see the second subfigure of Figure 8. The second perspecti... | We use separate visual channels to encode the different types of clusters. Specifically, to show the original clusters for the first perspective, we use colors (blue and orange), for the second perspective, we use the shape (circles and squares), and for the third perspective, we use texture, filled and not filled; see...
In Figure 8 we observe that although in the two perspectives cars are clustered according to corresponding dimensions (number of cylinders and weight), there are some exceptions. For example, the blue outliers in the second (and also third) subfigure correspond to two exceptional cars which have low weights but higher...
Deep reinforcement learning demonstrates significant empirical successes in Markov decision processes (MDPs) with large state spaces (Mnih et al., 2013, 2015; Silver et al., 2016, 2017). Such empirical successes are attributed to the integration of representation learning into reinforcement learning. In other words, ma... |
In contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain significantly more challenging. Due to a lack of the Markov property, the low-dimensional feature of the observation at each step is insufficient for the prediction and control of the future (Sondik, 1971;... | Related Work. Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational a...
In the case that maintaining a belief or conducting the prediction is intractable, previous approaches establish predictive states (Hefny et al., 2015; Sun et al., 2016), i.e., embeddings that are sufficient for inferring the density of future observations given the interaction history. Such approaches typically r...
Our work is closely related to the bodies of literature on (i) reinforcement learning in POMDPs, (ii) offline reinforcement learning (in MDPs), and (iii) OPE via causal inference.
For a comparison, we summarize and contrast with the most related existing works in Table 1. | Among these, the confounding bias is caused by the fact that the latent state $S_h$ simultaneously affects both the action variable (i.e., $A_h$) and the outcome variables (i.e., $O_h$... | Table 1: We compare with the most related representative works in closely related lines of research. The first line of research studies offline RL in standard MDPs without any partial observability. The second line of research studies online RL in POMDPs where the actions are specified by history-dependent policies.
Thus, ... | Compared to the literature, our work simultaneously involves partial observability, confounded data, and offline policy optimization, and thus faces the challenges of (i)–(iii).
In the sequel, we discuss the related works in detail.
In this paper, we answer this question by complementing the global convergence guarantees and establishing the local asymptotic properties of existing StoSQP methods. Specifically, we focus on an Adaptive Inexact StoSQP scheme, referred to as AI-StoSQP. By adaptive we mean that the scheme inherits the critical merit of... | Instead of solving the QP (3.1) exactly, we solve it inexactly with an iterative sketching solver. This approach proves more efficient than deterministic solvers, especially when equipped with suitable sketching matrices (Strohmer and Vershynin, 2008; Gower and Richtárik, 2015; Pilanci and Wainwright, 2017).
In particular, we generate a random... | First, they studied unconstrained regression problems with objectives of the form $F(\bm{x}^{T}\xi)$, resulting in objective Hessians with rank-one updates that cannot be employed for our general problem ... | Solving Newton systems is considered the most computationally expensive step of second-order methods, and randomized solvers offer advantages over deterministic solvers by requiring fewer flops and less memory when equipped with proper sketching matrices (e.g., sparse sketches). Notably, we perform a constant number of sketc... | To our knowledge, this is the first work that performs online inference by taking into account not only the randomness of samples but also the randomness of computation (i.e., sketching and stepsize); the latter is particularly important for making second-order methods computationally promising.
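The row-action idea behind such randomized iterative solvers can be illustrated with the classical randomized Kaczmarz method, one well-known instance of the sketching solvers cited above (an illustrative sketch for a consistent overdetermined linear system, not the paper's AI-StoSQP solver):

```python
import numpy as np

# Randomized Kaczmarz: at each step, project the iterate onto the hyperplane
# of one randomly chosen row, sampled with probability proportional to its
# squared norm. Converges linearly in expectation for consistent systems.
def randomized_kaczmarz(a, b, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(a.shape[1])
    row_norms = (a ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()
    for _ in range(iters):
        i = rng.choice(a.shape[0], p=probs)
        x += (b[i] - a[i] @ x) / row_norms[i] * a[i]   # projection step
    return x

rng = np.random.default_rng(1)
a = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
x = randomized_kaczmarz(a, a @ x_true)
```

Each iteration touches a single row, which is why such solvers need far less memory and fewer flops per step than a direct factorization.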
| C |
This condition was discussed by Bercovier and Pironneau in [2] and turned out to enable (3); see [15]. We will refer to this inf-sup condition as the discrete BP condition.
In [2] the proof was given for $k=2$ and for meshes made of rectangles for $d=2$ and bricks for $d=3$... | Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_k$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case...
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family $P_k$–$P_{k-1}$ on triangular/tetrahedral meshes with cont... | The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for the case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case $|\Gamma_N|>0$...
The rest of the paper is organized as follows. In Section 2 the technique of $T$-coercivity is discussed, which provides important auxiliary results for Section 3, the main section of the paper, which contains the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous...
The versatility of WaveMix is such that it can be directly used for semantic segmentation by replacing the output layer with two transposed convolution layers and a per-pixel softmax layer to generate the segmentation maps. On the other hand, architectural changes – such as encoder-decoder and skip connections [72] – ... | For semantic segmentation, we used the Cityscapes [54] dataset (under MIT Licenses). The official training dataset itself was split into training and validation sets. Results of the other models compared were directly taken from their original papers as cited in Table 2. Since ConvMixer [17] was never used for semantic... |
Table 4 shows the performance of WaveMix on image classification using supervised learning on ImageNet-1K on a single GPU with limited epochs. WaveMix models outperform CNN-based and transformer-based models, as well as token-mixers. The use of non-learnable fixed weights and a shallower network structure also makes inference using ...
The lower mIoU (75.78) obtained by replacing the classification head of ConvMixer [17] with a segmentation head (similar to WaveMix) shows that other token-mixing architectures, which work well for classification, may not be able to translate that performance to segmentation without significant architectural modification...
$$\nabla_{\Sigma}:=\left|\frac{\partial P_{i}}{\partial x_{j}^{\alpha_{i}+\beta_{j}}}\right|,$$ | $$x_{j}^{(\alpha_{\sigma^{-1}(j)}+\beta_{j})}=f_{j}(x),$$
where $(\alpha_i)_i$ and $(\beta_j)_j$ define... | the terms $\partial P_{i}/\partial x_{j}^{(a_{i,j})}$ ... | $$\sum_{j=1}^{n}\bigl(\alpha_{\sigma^{-1}(j)}+\beta_{j}\bigr)=\sum_{i=1}^{n}\alpha_{i}+\beta_{i}=\mathcal{O}_{P}$$
Communicating quantitative science occurs through the medium of mathematical text, which contains expressions, formulae, and equations, most of which require accompanying descriptions. Formulae and their explanations interweave with non-mathematical language to form cohesive discourse.
Approaches that consider mathem... | Need for developing robust interactive Natural Language theorem provers. Efficient guided exploration of large mathematical state spaces has no direct equivalent for mathematical natural language, and even pure mathematical reasoning without language is benefiting from approximate methods without using formal theorem p...
We have described the path to the state of the art for five representative areas considering the relationship between natural and mathematical language, either through the necessity of the task or the efficacy of the approach. We describe the details, limitations, and successes within each area and find that informal methods strug... | While transformers (Vaswani et al., 2017) have seen widespread success in many areas of language, it is only recently that they have demonstrated mathematical (Rabe et al., 2020) and logical (Clark et al., 2020) capabilities, and they have since redefined state-of-the-art benchmarks in formula retrieval (Peng et al., 2021) and solving m...
As a side effect, by modeling these networks together, provided that the networks have common connectivity patterns, we can use the information of certain networks to recover noisy information from other networks by improving the prediction of missing links (Clauset et al., 2008). Hence colSBM ...
The interest of our colSBM model is twofold. The first is to find a common connectivity pattern which explains the structure of the different networks in the collection and to assess via model selection whether these structures are a rea...
Most contributions about collections of networks rely on some node correspondence between the networks. Recently, motivated by the analysis of fMRI data, a few works extend the SBM to model populations of networks (Paul and Chen, 2018; Pavlović et al., 2020). Le et al. (2018) make the assumption that the networks of ... | The first one is to allow the distribution of the block memberships to vary between networks and even to allow some networks not to populate certain blocks. This makes it possible to model a collection of networks where the structure of certain networks is encompassed in the structure of other networks.
The second relaxation all... |
Since the SBM is a very flexible model, it has already been adapted to multilayer networks. To name a few, Matias and Miele (2017) model a collection of networks along a time gradient; the connectivity structure varies over time, but they integrate a sparsity parameter, which is similar to our density paramete...
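The generative side of such models can be sketched as follows: a plain SBM sampler with a shared connectivity pattern and a per-network density parameter (an illustrative, directed-adjacency sketch with made-up parameter names, not the full colSBM estimator):

```python
import numpy as np

# Sample one network from an SBM with block proportions pi, connectivity
# matrix alpha, and a scalar density multiplier; a collection of networks
# sharing (pi, alpha) but with different densities mimics the setting above.
def sample_sbm(n_nodes, pi, alpha, density=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    z = rng.choice(len(pi), size=n_nodes, p=pi)           # latent block labels
    probs = np.clip(density * alpha[z][:, z], 0.0, 1.0)   # pairwise edge probs
    adj = (rng.random((n_nodes, n_nodes)) < probs).astype(int)
    np.fill_diagonal(adj, 0)                              # no self-loops
    return z, adj

pi = np.array([0.5, 0.5])
alpha = np.array([[0.9, 0.1], [0.1, 0.9]])                # assortative pattern
rng = np.random.default_rng(0)
collection = [sample_sbm(60, pi, alpha, density=d, rng=rng)[1] for d in (1.0, 0.5)]
```

Inference would then estimate the shared `(pi, alpha)` jointly from the whole collection, which is where the pooling of information across networks described above comes from.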
Even so, popular classification models trained with ImageNet have experienced a 40–45% performance drop when tested on ObjectNet, a bias-controlled dataset [4] that produces thousands of images with 600 combinations of parameters, by intervening on only three mechanisms in the photo generation ... | In order to answer the first question, it should be made clear what we mean by the knowledge of a mechanism.
As human beings, for example, if we have learned the knowledge of 2D rotation, it means that for any image (with a proper tool), (a) we can rotate the image at will, and (b) we are able to determine whether (an...
If another hypothesis (e.g. of just three circles) and a corresponding verification (by making a notch in each... | To illustrate this, if we look at Fig. 1(a) [24], at least two interpretations can be made (Fig. 1(b) and 1(c)), based on the same observation.
This simple example illustrates a typical process of image perception, in which causal inference (in the anti-causal direction) is made by utilizing the mechanisms of either oc... |
Figure 1: What is in image (a)? There are at least two ways to interpret it, i.e., (b) three black circles partly covered by a white triangle, or (c) three black circles with a notch on each of them. (The former one may have a stronger tendency in perception, according to the Gestalt principles [19].) | D |
Contrastive training is done using the LARS optimizer (You et al., 2019), with the temperature set to 0.5. We used a batch size of 2048 for the CIFAR10 experiments and 1024 for the MNIST experiments. We used an initial learning rate of 0.01 with cosine annealing for 300 epochs on PU CIFAR10 and 200 epochs for PU MNIST. | As discussed before, our contrastive PU learning approach involves training the encoder $g_B(\cdot)$ using a contrastive loss, followed by training a linear layer on top using a standard positive-unlabeled loss (e.g., nnPU (Kiryo et al., 2017)).
...
After initializing the learner with the trained encoder parameters, we explore two popular transfer methods to transfer to the downstream positive-unlabeled task: fine-tuning (updating both $\mathbf{v}$ and $B$) and linear probing (updating $\mathbf{v}$ while freezing the lower layers) using ...
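The nnPU objective referenced above can be sketched in a few lines (an illustrative numpy version of the non-negative risk estimator of Kiryo et al. (2017), assuming a sigmoid surrogate loss; variable names are ours, not the paper's code):

```python
import numpy as np

# Non-negative PU risk: pi * R_p^+ + max(0, R_u^- - pi * R_p^-), where the
# max(0, .) clamp prevents the unlabeled term from going negative and
# causing overfitting.
def sigmoid_loss(z):
    return 1.0 / (1.0 + np.exp(z))            # surrogate loss(z) = sigmoid(-z)

def nnpu_risk(scores_p, scores_u, prior):
    r_p_pos = sigmoid_loss(scores_p).mean()   # positives scored as positive
    r_p_neg = sigmoid_loss(-scores_p).mean()  # positives scored as negative
    r_u_neg = sigmoid_loss(-scores_u).mean()  # unlabeled scored as negative
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

scores_p = np.full(10, 5.0)                   # well-separated positives
scores_u = np.concatenate([np.full(5, 5.0), np.full(5, -5.0)])
risk = nnpu_risk(scores_p, scores_u, prior=0.5)
```

In the pipeline above, this risk would be minimized over the linear head (and optionally the encoder) on top of the contrastively pretrained representation.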
It is important to note that the theory underlying the likelihood ratio test, Wilks’ theorem (Wilks, 1938), necessarily depends on (i) the maximum likelihood being reached and (ii) the model being identifiable. These are conditions we cannot guarantee in our problem context. Moreover, we propose this method for determi... |
To address both of these issues, we also utilize the split-LRT, developed by Wasserman et al. (2020), which requires no regularity conditions. The split-LRT, however, still requires that the estimation of the nested model corresponds to the global maximum of the log-likelihood. Algorithm 1 only guarantees convergence ... |
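The split-LRT construction can be illustrated on a toy problem (universal inference in the style of Wasserman et al. (2020), here for a Gaussian mean with known unit variance; this is an illustrative sketch, not the paper's multilayer-network models):

```python
import numpy as np

# Split-LRT: fit the alternative on one half of the data, evaluate the
# likelihood ratio against the null on the other half, and reject H0 when
# the ratio exceeds 1/alpha. Validity needs no regularity conditions.
def normal_loglik(x, mu):
    return -0.5 * np.sum((x - mu) ** 2) - 0.5 * len(x) * np.log(2 * np.pi)

def split_lrt_reject(x, mu0=0.0, alpha=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    mu_hat = d1.mean()                         # estimate on the first half
    log_t = normal_loglik(d0, mu_hat) - normal_loglik(d0, mu0)
    return log_t >= np.log(1.0 / alpha)        # reject iff T >= 1/alpha

rng = np.random.default_rng(1)
assert split_lrt_reject(rng.normal(2.0, 1.0, 400), rng=rng)   # clear signal
```

Note that the fitted half only needs to produce *some* estimate; the test stays valid even when the global maximum of the likelihood is not reached, which is exactly the property exploited in the text.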
Analyzing the third factor matrix is a significant focus of our work, and we propose three methods for interpreting it to quantify layer interdependence based on the structure of that factor matrix. Furthermore, we propose definitions of layer interdependence based on likelihood ratio tests (LRTs) between different mo... | Note that for the cross-validation tasks in the following subsections, we report the average test AUC across 50 different random initializations for each combination of $K$ and $C$. We vary $K$ from 2 to 12 in the Krackhardt multilayer network, and from 2 to 20 in the Malari...
Table 3: The standard and split-LRT determinations for all datasets explored in the following sections. For the standard LRT, the $p$-values for each test are also reported. The Village 0 support system network is determined to be layer redundant, the Malaria network is determined to have layer independence, ...
Each question $q\in Q\cup\hat{Q}$ is expressed in natural language and can be answered by entities in $E$.
Also, in line with existing work [9, 20, 24, 31], a topic entity is assumed to be annotated. (Footnote 1: If topic entities are not annotated, they ca... | The model consists of a question encoder and a graph encoder that compute semantic representations (embeddings) of the question and subgraph entities, respectively, through several layers of encoding.
We select answers from subgraph entities according to their distances to the question in the output embedding space and... | It can be observed that initially (i.e., in Layer-1) the correct answer is not at all close to the question in the embedding space.
However, after more layers of encoding, more information consistent with the given question is passed to Regensburg by the graph encoder, and the question embedding is accordingly transfor... | In this case, the representation of the entity would contain information that is semantically consistent with the given question and can be aligned with the question in the embedding space.
The GCN directly generates semantic representations of the question and KG entities in an end-to-end fashion. | For this question, three GCN layers are used in the graph encoder to encode entities in the subgraph that covers all paths of length one, two, and three starting from the topic entity Isabella of Portugal.
The question is accordingly encoded by the question encoder with three layers of LSTMs. | A |
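The multi-hop propagation described above can be sketched with a minimal mean-aggregation graph-convolution layer (an illustrative toy, not the paper's exact encoder): each layer mixes a node's features with its neighbors', so after L layers an entity's embedding reflects entities up to L hops away.

```python
import numpy as np

# One GCN-style layer: mean-aggregate neighbor features (self-loops included
# in adj), apply a linear map, then a ReLU nonlinearity.
def gcn_layer(adj, h, w):
    deg = adj.sum(axis=1, keepdims=True)
    agg = (adj @ h) / np.maximum(deg, 1.0)   # mean over neighbors
    return np.maximum(agg @ w, 0.0)          # ReLU

adj = np.array([[1., 1., 0.], [1., 1., 1.], [0., 1., 1.]])  # path graph 0-1-2 + self-loops
h0 = np.eye(3)                                              # one-hot entity features
h1 = gcn_layer(adj, h0, np.eye(3))
h2 = gcn_layer(adj, h1, np.eye(3))
# after one layer node 0 carries no signal from node 2 (two hops away);
# after two layers it does
assert h1[0, 2] == 0.0 and h2[0, 2] > 0.0
```

This is why the text uses three graph-encoder layers for a question whose answer lies three hops from the topic entity.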
Building off of an $n$-qubit feature map for $n$-dimensional word vectors, the same QSVM classification process was followed for densely encoded feature maps (Alexander and Widdows, 2022). In this case, vector representations were encoded into fewer qubits in the feature map circuit, using $\log_2(n)$... | The percentage of samples correctly classified peaked when using 4 qubits, where average accuracy was 57% with the ZZFeatureMap and 62% using the densely encoded feature map. The QSVM experiments were on par with classical SVM on average, and classified some sample batches with perfect accuracy. This, in contrast with ... | As observed in the results below, the classical embeddings preprocessing step from Section 3.2 enabled
the quantum circuit to achieve relatively high accuracy with fewer qubits. Further improvements to space using densely encoded feature maps achieved similar results. | and results accuracy can be considered in light of what problems users are trying to solve. For example, the work by Alexander and Widdows (2022) investigates solely the effects of decreasing space in the QSVM using a densely encoded feature map. Improved accuracy from 90% to 100% in fewer qubits on ... | When running the densely encoded version of the word embeddings classifier from Section 3.3, 100% accuracy was achieved using 16 embedding dimensions and only 4 qubits (Alexander and Widdows, 2022). This model achieves perfect accuracy for the lambeq set and in the fewest qubits of all the methods covered.
| B |
where $V_Y$ is a collection of nodes whose labels are available, $\mathds{1}$ is an indicator function, $y_i$ is the label of node $i$, and $\ell$ is ... | This observation shows that the eigenvectors corresponding to the leading eigenvalues do not always align well with the node labels. Particularly, in a heterophilic graph, two adjacent nodes are unlikely to have the same label, which is in contradiction with the smoothness properties of leading eigenvectors. However, w... | On the one hand, in classical SC, clustering is normally performed based solely on graph structure, ignoring node features. Following this line, Tremblay et al. (2016) consider only random Gaussian signals when approximating SC under Proposition 2.1. On the other hand, Bianchi, Grattarola, and Alippi (2020) adopt nod... | Solving the above objective requires iterating over all possible combinations of eigenvectors, which is infeasible in general. It also requires performing expensive eigendecomposition with $O(N^{3})$ time complexity. In addition, class...
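The observation that leading eigenvalues can miss heterophilic structure is easy to reproduce on a toy two-block graph (an illustrative sketch, not the paper's method): in an assortative graph the label signal sits in a large positive adjacency eigenvalue, while in a disassortative graph it moves to the most negative eigenvalue.

```python
import numpy as np

# Two-block random graph: within-block edge prob p_in, between-block p_out.
def block_graph(p_in, p_out, n=40, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.array([0] * n + [1] * n)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    a = np.triu((rng.random((2 * n, 2 * n)) < probs).astype(float), 1)
    return a + a.T, labels

def accuracy(vec, labels):
    pred = (vec > 0).astype(int)                      # sign-threshold the eigenvector
    return max((pred == labels).mean(), (pred != labels).mean())

homo, labels = block_graph(0.8, 0.1)                  # homophilic case
vals, vecs = np.linalg.eigh(homo)                     # eigenvalues ascending
assert accuracy(vecs[:, -2], labels) > 0.85           # 2nd-largest eigvec is informative

hetero, labels = block_graph(0.1, 0.8)                # heterophilic case
vals, vecs = np.linalg.eigh(hetero)
assert accuracy(vecs[:, 0], labels) > 0.85            # most NEGATIVE eigvec is informative
```

Selecting eigenvectors only by eigenvalue magnitude from the top of the spectrum would therefore discard the informative component in the heterophilic case, which is the failure mode the text describes.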
Let $\bm{f}_i^{\mathcal{Z}}$ be the representation of a node $i$ obtained from an arbitrary set $\mathcal{Z}\subseteq[N]$ ...
Consider subpopulations with risk functions minimized at the same value $\theta^{*}$. If learners use full risk minimization, the setting lacks isolated minima because the total risk is uniform across all allocations $\alpha$. Assuming that risk-... | Our results lay the groundwork for an investigation of the stochastic dynamics that occur for finite sample approximations to the risk or participation driven by decisions of individuals.
Such behaviors are risk reducing in expectation, so we expect the noisy trajectories to converge with high probability to sets aroun... | Second, it is a technically useful connection that will enable us to characterize and classify the stable equilibria for dynamics which are risk minimizing in the limit.
We remark that Theorem 4.3 leaves open the question of stability for equilibria which are non-isolated minima of the total risk function. | For allocation dynamics like multiplicative weights, such configurations are clearly equilibria for any parameter choice $\Theta$ on the part of the learners.
We thus consider the set of possible segmented equilibria and characterize which are asymptotically stable. | Guaranteeing the stability of such balanced equilibria requires further information about the dynamics, and it is not possible to make a general statement.
Examples in Appendix D.2 demonstrate that such balanced equilibria may be asymptotically stable, stable, or unstable. | C |
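A toy instance of the multiplicative-weights allocation dynamics mentioned above (illustrative only; the reweighting rule and parameters are ours, not the paper's): allocations are reweighted in proportion to an exponential of the negative risk, so mass concentrates on the lower-risk option.

```python
import numpy as np

# One multiplicative-weights step: reweight each allocation component by
# exp(-eta * risk) and renormalize to the simplex.
def mw_step(alloc, risks, eta=0.5):
    w = alloc * np.exp(-eta * risks)
    return w / w.sum()

alloc = np.array([0.5, 0.5])
risks = np.array([0.2, 1.0])          # fixed risks: option 0 is strictly better
for _ in range(50):
    alloc = mw_step(alloc, risks)
assert alloc[0] > 0.99                # mass concentrates on the lower-risk option
```

When the risks are equal across options (the balanced, non-isolated case discussed in the text), every allocation is a fixed point of this update, which is why stability there cannot be decided without further information about the dynamics.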
$$\boldsymbol{\alpha}^{(t+1)}=\boldsymbol{\alpha}^{(t)}+\mu\mathbf{e},$$ | Given the first-order Taylor approximations above, we form the LP approximation of Eq. (29) around an iterate $\mathbf{x}^{(t)}=\{\tilde{\boldsymbol{\alpha}}^{y},\tilde{\mathbf{H}}^{y},\tilde{\mathbf{c}}^{y}\}_{y=1}^{k}$ ... | $$(\boldsymbol{\alpha}^{(t+1)},\mathbf{c}^{(t+1)})=\text{LP solution of an approximation of Eq.~(16) around }(\boldsymbol{\alpha}^{(t)},\mathbf{c}^{(t)}).$$
This problem is equivalent to Eq. (15), but has no maximum operation. However, we now have non-linear constraints, which violates the definition of an LP. To correct this, we use a local approximation of $\eta$ (around an iterate $\boldsymbol{\alpha}^{(t)}$...
ECG Heartbeats: This data set was used throughout this paper. It contains two top motif sets, namely calibration and heartbeats (Figure 9). We discuss only the top-1 motif, and our webpage shows the full results (k-Motiflets Source Code and Raw Results, 2022). Learn-l took 1.5 s, and learn-k took 0.5 s...
Semi-Synthetic Data Sets with Gold Standard Labels: To measure the precision of the different MD methods, we generated a semi-synthetic 25-dataset benchmark from (Dau et al., 2019) with implanted motif sets. For each method, we used the gold standard parameters as inputs, i.e., the size $k$ for $k$... | Given silver standard parameters, all competitor methods find the activation phase, e.g., the found motif set overlaps with the actual motifs, but with up to 100% larger extent. $k$-Motiflets and VALMOD are the only ones to identify both the activation phase as the top-1 motif and the recovery phase as the t... | The top-1 motif set found by the approximate $k$-Motiflets algorithm corresponds to the activation phase and the top-2 motif to the recovery phase. All methods find the activation phase, but with up to 100% larger extent. Valmod and LM found the recovery phase, again with up to 100... | Figure 14.
Quality as a ratio of the extents of the top-1 motif sets of the approximate to the exact algorithm. Left: Boxplot over the ratio as a function of $k\in[2,\dots,9]$ with $n=10000$. Right: Boxplot over fractions of the full length $n$, $n'\in[1/8\%,1/7\%,\dots,1$...
Consider a fixed digraph $\mathcal{G}=\{\mathcal{V}=\{1,2\},\ \mathcal{A}_{\mathcal{G}}=[w_{ij}]_{2\times 2}\}$... | Here, in Corollary 1, we show that this is actually not needed. Therefore, the sample path spatio-temporal persistence of excitation condition in this paper is more general than the stochastic spatio-temporal persistence of excitation condition in [38].
| Historically, Guo [41] first proposed the stochastic persistence of excitation condition for analyzing the centralized Kalman filtering algorithm, which was then refined in [42]. Thereafter, the cooperative information condition on the conditional expectations of the regression matrices over the deterministic connected... | To overcome the difficulties mentioned above, we develop a nonnegative supermartingale inequality for the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information from the regression matrices, the graphs, and the algorithm gains, under which sufficient c... | Therefore, the sample path spatio-temporal persistence of excitation condition weakens the stochastic spatio-temporal persistence of excitation condition in [38]. To the best of our knowledge, this is the most general persistence of excitation condition obtained to date.
| D |
Paul Laiu thanks Dr. Victor DeCaria for insightful discussions.
This work was supported in part by the Office of Science of the Department of Energy, by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administratio... | The experiments compare the performance of GMRES with restart parameter equal to 50, Alternating AA, Subselected Alternating AA and Randomized Alternating AA in solving linear systems with the matrices described in Table 1. The exact solution of each linear system is chosen as a random vector where all the entries foll... | {biography}
M. Paul Laiu. Paul Laiu is a Staff Mathematician in the Multiscale Methods and Dynamics Group at Oak Ridge National Laboratory. His research interests include numerical optimization, surrogate modeling, and numerical schemes for various partial differential equations in kinetic theory. His work focuses on t... | {biography}
Massimiliano Lupo Pasini. Massimiliano (Max) Lupo Pasini obtained his Bachelor of Science and Master of Science in Mathematical Engineering at the Politecnico di Milano in Milan, Italy. The focus of his undergraduate and master's studies was statistics, discretization techniques, and reduced-order models ...
The Vox dataset is also used to extract the topic vector representations for the STAS measure. We use the tf-idf vectorizer provided by the Scikit-learn library [31] to extract a vector representation for each document in the corpus. Then, all the representations of the same topic are averaged to extract the final vec... | For the tagging-based method, all the words of the input document are lemmatized to their roots using NLTK [32]. Then, we tag the words common to the lemmatized tokens and the representative words for the desired topic, based on the top-$N$=100 most representative terms for each topic.
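The topic-vector step can be sketched in a few lines (a minimal stdlib stand-in for the scikit-learn `TfidfVectorizer` step described above; the toy corpus and names are illustrative):

```python
import math
from collections import Counter

# tf-idf per document (smooth-free variant: tf * (log(n/df) + 1)), then
# average the rows belonging to the same topic to get one vector per topic.
def tfidf_matrix(docs):
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    rows = []
    for d in docs:
        tf = Counter(d.split())
        rows.append([tf[w] * (math.log(n / df[w]) + 1.0) for w in vocab])
    return vocab, rows

def topic_vectors(docs, topics):
    vocab, rows = tfidf_matrix(docs)
    out = {}
    for t in set(topics):
        members = [r for r, tt in zip(rows, topics) if tt == t]
        out[t] = [sum(col) / len(members) for col in zip(*members)]
    return vocab, out

docs = ["stocks market rally", "market crash stocks", "football match goal"]
topics = ["finance", "finance", "sports"]
vocab, vecs = topic_vectors(docs, topics)
```

The resulting topic vectors play the role of the averaged tf-idf representations used by the STAS measure; a production version would simply swap this for `TfidfVectorizer` as in the text.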
|
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and... | Given the set of representative words for each topic, a document, and the desired topic, the tagging mechanism works as follows. All the words of the input document are lemmatized to their roots. Then, we identify the common words between the existing lemmatized tokens and the representative words for the desired topic... | In contrast to the embedding-based models, all the methods that use control tokens can directly handle unknown topics. More specifically, for the prepending method we simply prepend the unknown topic to the document while for the tagging method we tag the most representative words for the unknown topic, assuming the ex... | A |
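The tagging mechanism described above (lemmatize the document, intersect with a topic's representative terms, wrap matches in a special control token) can be sketched in pure Python. This is a hedged illustration: the `<topic>` token and the toy suffix-stripping lemmatizer are stand-ins for the actual control token and the NLTK lemmatizer used in the text.

```python
# Minimal sketch of tagging-based topic control, assuming a made-up control
# token and a naive lemmatizer (the text uses NLTK and top-N=100 term lists).

TAG = "<topic>"

def naive_lemma(word: str) -> str:
    # Stand-in for a real lemmatizer: lowercase and strip a plural "s".
    w = word.lower()
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def tag_document(tokens, representative_terms):
    """Wrap tokens whose lemma matches a representative term of the topic."""
    rep = {naive_lemma(t) for t in representative_terms}
    return [f"{TAG} {tok} {TAG}" if naive_lemma(tok) in rep else tok
            for tok in tokens]

doc = "Senators debated the new healthcare bills today".split()
tagged = tag_document(doc, ["bill", "senator", "healthcare"])
```

In a real system the tagged sequence would then be fed to the summarizer so the model learns to focus on the marked spans.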
We conclude that tiling standard cells allows for a faster and improved understanding of the layout of the compiled circuit without the processing time involved in compilation and routing. It is a valuable tool in estimating the resources required for compiling a given quantum circuit to hardware, and especially in cre... | Neutral-atom computers, particularly those employing AODs and SLMs, enable architectures with designated zones for specific functions [24]. These zones include memory (storing unused qubits), execution (performing gate operations), and measurement. Zone-based architectures have demonstrably facilitated pipelining in ne... | Critically, the fact that quantum circuits are often formed by repeating patterns of sub-circuits inspires an opportunity to use this information for speeding up the compilation and the routing of the qubits. For example, this is the case for many arithmetic circuits which were imported from classical computing (e.g. a... |
Within the framework of specialized zones, standard cells represent the execution zones. Our approach automates the pipelining of circuit execution across sequences of zones (tiles), as detailed in Section 2.1 and Figure 4. Furthermore, our cell design allows for the pre-computation of optimal shuttling routes, leadin... |
There exist multiple applications of standard cells and tiling. First, tiling quantum circuits can inform the co-design of computing architectures, where the qubit layout, for example, is developed in parallel to the circuits to execute. Such 2D and 3D architectural co-design can be implemented with neutral atoms [10]... | D |
Our encoder consists of 8 3×3 convolutional layers with a stride of 2 for downsampling, each of which is followed by a batch normalization layer [15] and a Leaky-ReLU [16] activation function with a slope of 0.2. Our decoder consists of 7 3×3 transposed convolutional layers... | In our work, we propose a coarse-to-fine fully convolutional network for MR image re-parameterization mainly for Repetition Time (TR) and Echo Time (TE) parameters. As the model is coarse-to-fine, we use image features extracted from an image reconstruction auto-encoder as input instead of directly using the raw image.... |
Our method consists of two technical modules: 1) an image reconstruction auto-encoder [14] to extract image features of the input image; and 2) a coarse-to-fine fully convolutional network which utilizes the image features from the auto-encoder and enforces the construction of output image with desired parameters. In ... |
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p... |
After training the autoencoder, the outputs of the 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low-dimensional representations of the i... | D
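The stated encoder shape can be sanity-checked with the standard convolution output-size formula. The input resolution of 256 and padding of 1 below are assumptions for illustration, not values given in the text; they show how 8 stride-2 layers collapse the spatial dimension to 1.

```python
def conv_out(size, kernel=3, stride=2, pad=1):
    # Standard conv output-size formula; pad=1 is an assumed value
    # (the text does not state the padding).
    return (size + 2 * pad - kernel) // stride + 1

size = 256  # hypothetical input resolution
sizes = [size]
for _ in range(8):  # 8 encoding layers, each 3x3 with stride 2
    sizes.append(conv_out(sizes[-1]))
```

Each layer halves the spatial size, so a 256-pixel axis reaches 1 after exactly 8 layers, matching the "low-dimensional vector space" description.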
$\mathsf{M}(\bm{\Theta}) = \int \nabla_{\bm{\Theta}}\varphi_{h} \otimes \nabla_{\bm{\Theta}}\varphi_{h}\,\mathrm{d}\nu_{\bm{\Theta}}(\bm{x})$, |
The rest of the paper is organized as follows. Section 2 reviews the EnVarA and some existing neural network-based numerical approaches for solving PDEs. Section 3 of the paper is devoted to the development of the proposed EVNN schemes for $L^{2}$-gradi... |
From a numerical standpoint, the energy-dissipation law (1) also serves as a valuable guideline for developing structure-preserving numerical schemes for these variational systems, as many straightforward PDE-based discretizations may fail to preserve the continuous variational structures, as well as the physical cons... | It’s important to note that, with the exception of DRM, all existing neural-network-based methods are developed based on either the strong or weak forms of PDEs. While DRM utilizes the free energy functional, it is only suitable for solving static problems, i.e., finding the equilibrium of the system. Additionally, mos... | A key component of these aforementioned approaches is to represent solutions of PDEs via NNs. All of these approaches determine the optimal parameters of the NN by minimizing a loss function, which is often derived from either the strong or weak form of the PDE (see section 2.1 for more details).
By employing NN approx... | C |
We review classical constructions of sheaf theory, such as integral transforms and kernel compositions. We recall the definition of the convolution distance between (derived) sheaves of $\mathbf{k}$-vector spaces on a finite-dimensional real vector space, as developed by Kashiwara-Schapira [17] and provide pr... | The interplay between sheaves on a real vector space and persistence theory necessitates the use of a topology on a vector space introduced by Kashiwara and Schapira [16], called the $\gamma$-topology. In this section, we first recall the basic definitions associated to the $\gamma$-topology. There is... | One of the challenges of multi-parameter persistence is to provide a meaningful notion of distance between persistence modules which can be computed in a reasonable time complexity. Indeed, it has been shown that the usual interleaving distance between persistence modules is NP-hard to compute in the multi-parameter ca... | The matchings between graded barcodes of $\gamma$-sheaves are defined in the same way as between barcodes of persistence modules. Therefore, one can compute the bottleneck distance between barcodes of $\gamma$-sheaves using already existing software [29, 30]. It is nevertheless far from being true whe... | We review the notion of $\gamma$-sheaves, and recall the precise relationship between this type of sheaves and persistence modules [3]. We then strengthen one of our previous results, asserting that the interleaving distance between persistence modules equals the convolution distance between their associated $\gamma$... | D
This work has focused on the effect of variable ordering on DAG structure but supplementary results with simple-hill-climbing demonstrate that the log-likelihood score of the learnt graph and skeleton are affected too, suggesting that probability distribution inference would also be impacted. Quantifying this effect, ... | The effect of variable ordering on structure learning might be well-known for some in the research community, but we argue that its importance is largely underestimated. This is because whilst it is typical for structure learning algorithms to be assessed across different objective functions, varied sample sizes, and d... | Figure 5 also shows that the constraint-based algorithms are sensitive to variable ordering, with a mean change in F1 of 0.036, 0.104 and 0.021 for PC-Stable, GS and Inter-IAMB respectively. Although these sensitivities are small relative to those observed in other algorithms, they are still larger than the sensitivity... | We start by investigating the bnlearn implementation of HC which is widely used in the literature [32, 35] and find that these arbitrary edge orientations are made on the basis of arbitrary variable ordering in the dataset, which therefore has an impact on the accuracy of the learnt CPDAG. For HC, variable ordering is ... | Figure 5 shows the sensitivity to variable ordering for the algorithms described in section 2 and compares it with their sensitivity to other selected factors (details are given in the figure caption). TABU is a variant of HC and is, as one might expect, sensitive to variable ordering, but the mean F1 change of 0.278 i... | A |
Table 1 shows latency measurements for the first layer of the Feed-Forward Network (FFN) in the OPT-175B model (Zhang et al., 2022).
The measured kernels include cuBLAS (for FP-FP or INT-INT), OPTQ (Frantar et al., 2022), AWQ (Lin et al., 2023) (for FP-INT), and LUT-GEMM (for FP-INT or FP-BCQ). | 3) For LLMs, we demonstrate that LUT-GEMM, which utilizes quantized weights without a dequantization process, can considerably accelerate matrix multiplications with small quantization bits while power consumption is greatly saved by reducing the number of GPUs.
4) Assuming a 3-bit BCQ format for weights of OPT-175B se... | In this paper, we introduce LUT-GEMM, a highly efficient matrix multiplication kernel designed to operate directly on quantized weights, thereby eliminating the need for an additional dequantization step.
Leveraging an extended BCQ format, LUT-GEMM exhibits the capability to process both uniformly and non-uniformly qua... | Note that OPTQ and AWQ kernels involve dequantization followed by GEMM, while LUT-GEMM accepts quantized weights directly, eliminating the need for dequantization.
We can observe that the latency of the INT8-INT8 (with cuBLAS) implementation only slightly improves latency over FP-FP since cuBLAS is not well-optimized f... | Table 1 shows latency measurements for the first layer of the Feed-Forward Network (FFN) in the OPT-175B model (Zhang et al., 2022).
The measured kernels include cuBLAS (for FP-FP or INT-INT), OPTQ (Frantar et al., 2022), AWQ (Lin et al., 2023) (for FP-INT), and LUT-GEMM (for FP-INT or FP-BCQ). | C |
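The lookup-table idea behind LUT-GEMM, operating directly on binary-coded quantized weights without dequantization, can be illustrated with a toy pure-Python sketch. The group size μ=4 and the single-bit BCQ row (one scale α and ±1 signs) are illustrative simplifications; the actual GPU kernel builds each table once per input vector and reuses it across thousands of weight rows.

```python
# Hedged sketch: for a group of MU activations, precompute sum(s_i * x_i)
# for every sign pattern s in {-1,+1}^MU, so each binary weight group costs
# one table lookup instead of MU multiply-adds.
from itertools import product

MU = 4  # group size (illustrative choice)

def build_lut(x_group):
    """Table of partial sums for all 2^MU sign patterns of one group."""
    return {signs: sum(s * x for s, x in zip(signs, x_group))
            for signs in product((-1, 1), repeat=MU)}

def lut_dot(x, signs, alpha):
    """Dot product of x with a 1-bit quantized weight row alpha * signs."""
    acc = 0.0
    for g in range(0, len(x), MU):
        lut = build_lut(x[g:g + MU])  # in practice shared across weight rows
        acc += lut[tuple(signs[g:g + MU])]
    return alpha * acc

x = [0.5, -1.0, 2.0, 1.5, 0.25, 0.75, -0.5, 1.0]
signs = [1, -1, -1, 1, 1, 1, -1, 1]
alpha = 0.1
```

The table amortizes: with many output rows sharing one input vector, the per-row cost drops from μ multiply-adds per group to a single lookup-and-add.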
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee... | The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert... | Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia... | Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee... |
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided... | D |
A Cross Convolutional Fusion (CCF) operation with a cross-receptive field is proposed to make use of the associated information. It not only uses the benefit of convolution to minimize parameters but also integrates features from both temporal and channel dimensions in an efficient fashion. |
In recent years, the direct application of various ANNs algorithms for training deep SNNs, including gradient-descent-based methods, has gained traction. However, the non-differentiability of spikes poses a significant challenge. The Heaviside function, commonly used to trigger spikes, has a derivative that is zero ev... | During the forward pass, the Heaviside function is retained, while a surrogate function replaces it during the backward pass. One simple choice for the surrogate function is the Spike-Operator [29], which exhibits a gradient resembling a shifted ReLU function. In our work, we go beyond the conventional surrogate gradie... | Despite significant progress, SNNs have yet to fully exploit the superior representational capability of deep learning, primarily due to their unique training mode, which struggles to model complex channel-temporal relationships effectively. To address this limitation, Zheng et al. [11] introduced a batch normalization... |
We train and test our method on a workstation equipped with two Tesla P4 and two Tesla P10 GPUs. Owing to memory consumption, we use the Tesla P10 to train and test on the CIFAR10-DVS, N-Caltech 101, and DVS128 Gesture datasets, and use the Tesla P4 to train and test on the Fashion-MNIST and CIFAR10/100 datasets. I... | A
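The surrogate-gradient trick described above (keep the Heaviside function in the forward pass, substitute a smooth derivative in the backward pass) can be sketched in a few lines. The steep-sigmoid surrogate and the slope parameter `k` are illustrative choices, not the exact surrogate used in the text.

```python
import math

def heaviside(v):
    # Forward pass: non-differentiable spike trigger.
    return 1.0 if v >= 0.0 else 0.0

def surrogate_grad(v, k=2.0):
    # Backward pass stand-in: derivative of a steep sigmoid, used in place
    # of the Heaviside derivative (which is zero almost everywhere).
    # The slope k is an assumed hyperparameter.
    s = 1.0 / (1.0 + math.exp(-k * v))
    return k * s * (1.0 - s)
```

In an autodiff framework this pair would be registered as a custom function: `heaviside` defines the output, `surrogate_grad` supplies the gradient so membrane potentials near the threshold receive a useful learning signal.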
Hardy’s inequality gives
$H^{1}(\Omega)\subset V^{1}_{\epsilon}(\Omega)$ for all | $\epsilon>0$, and $\mu\leq 1$ implies
$V^{1}_{\epsilon}(\Omega)\subset V^{1}_{2-2\mu+\epsilon}(\Omega)$ | $\nabla\times u$ are both in
$V^{1}_{2-2\mu+\epsilon}(\Omega)$. Furthermore, since | $\in V^{3}_{2-2\mu+\epsilon}(\Omega),$
$\lVert\psi\rVert_{3,2-2\mu+\epsilon}$ | $\in\bigl[V^{2}_{2-2\mu+\epsilon}(\Omega)\bigr]^{2},$ | A
FlashSyn does not require prior knowledge of a vulnerable location or contract. Given a set of DeFi lego user interface contracts, action candidates and their special parameters such as strings are given by the users or automatically extracted from transaction history using FlashFind. FlashSyn utilizes these action can... | for actions. Note that 4 benchmarks are partially closed-source (cs), and 5 benchmarks are too complicated (cx); thus we are not able to extract mathematically precise summaries for them. For the others, we list the profit generated using the manually extracted mathematical expressions in the synthesizer and optimizer.... | Threats to Validity:
The internal threat to validity mainly lies in human mistakes in the study. Specifically, we may misinterpret the results of FlashSyn, or make mistakes in the implementation of FlashSyn. All authors have extensive smart contract security analysis experience and software engineering expertise ... | drive the approximated formula for this action. We set a timeout of 3 hours for
FlashSyn-poly and 4 hours for FlashSyn-inter. FlashSyn does not know a priori whether a benchmark has an attack vector with a positive profit, and it does not set any bounds on the profit. It tries iteratively to synthesize an attack ... | Specifically, we manually inspected all benchmarks whose relevant smart contracts are all open-source, and for each benchmark we allocated more than 4 manual analysis hours
to extract the precise mathematical summaries. The baseline synthesizer then | D |
For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ... | For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ... | While significant empirical progress in meta RL has been made, the theoretical understanding of the problem is still limited. A central question, which we focus on in this work, is the probably approximately correct (PAC) analysis of meta RL, namely, how many training tasks are required to guarantee performance that is... |
Meta RL has seen extensive empirical study [4; 39; 7; 26], and the connection to Bayesian RL has been made in a series of recent works [20; 14; 25; 39]. Most meta RL studies assumed infinite tasks during training (effectively drawing different random MDPs from the prior at each training iteration), with few exceptions... | To complement our theoretical results, we demonstrate the practical potential of our approach by incorporating it in the VariBAD meta-RL method of Zintgraf et al. [39]. In VariBAD, a variational autoencoder (VAE) is used to learn a low-dimensional latent representation of the task distribution, and a deep neural networ... | B |
Case 1: Non-anchor words close to the anchors. Even though only the anchors are involved in determining $L$, we could expect to find more synonyms after transformation due to the similar representations encoded in neighbors of anchors. For example, given diarrhea and nausea as queries, we not only can find their corres... | Case 1: Non-anchor words close to the anchors. Even though only the anchors are involved in determining $L$, we could expect to find more synonyms after transformation due to the similar representations encoded in neighbors of anchors. For example, given diarrhea and nausea as queries, we not only can find their corres... | With the contextual information of words determined from the word semantic space learning module, the word spaces were determined independently, resulting in unaligned word dimensions between languages. We aligned consumer health vocabularies of different languages in the space alignment module. To facilitate the align... | This study aims at proposing a cross-lingual ATR framework that helps extend the English CHV into other languages. The framework starts with collecting HCGC corpora of two different languages. With the corpora, the framework adopts word2vec techniques [27] on each monolingual corpus to learn the word associations used ... | Case 2: Non-anchor words distributed similarly across languages. As we transform the whole source space into the target space, the words with similar meanings will match up by $L$ due to the assumption of topological similarity between spaces. For example, even if we do not provide insomnia or its neighborhood... | D
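The anchor-based alignment described here, learning a linear map $L$ from anchor (translation-pair) vectors and then applying it to non-anchor words, can be sketched with a toy 2-D example. The vectors are made up, and the exact 2x2 solve stands in for the least-squares (or orthogonal Procrustes) problem that real systems solve over hundreds of anchors.

```python
# Hedged sketch: fit L so that L @ x_i = y_i for two 2-D anchor pairs,
# then map a non-anchor source vector into the target space.

def solve_map(X, Y):
    """Solve L for L @ X = Y, where X, Y are 2x2 matrices of anchor columns."""
    (a, b), (c, d) = X
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # X^{-1}
    return [[sum(Y[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply_map(L, v):
    return [sum(L[i][j] * v[j] for j in range(2)) for i in range(2)]

# Toy anchors: source vectors are the standard basis, targets are y1, y2.
X = [[1.0, 0.0], [0.0, 1.0]]
Y = [[2.0, 1.0], [0.0, 3.0]]
L = solve_map(X, Y)
```

Once $L$ is fitted on anchors only, any non-anchor source vector can be mapped with `apply_map(L, v)` and matched to its nearest target-space neighbors, which is exactly why nearby non-anchor synonyms surface "for free".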
Performing predictions based on observed data is a general problem of interest in a wide range of scientific disciplines. Traditionally, scientists have tackled this problem by developing mathematical models that connect observations with predictions using their knowledge of the underlying physical processes. However, ... |
The widespread adoption of AI-based black-box models has become a standard practice across various fields due to their ability to be deployed without requiring an in-depth understanding of the underlying processes. However, this advantage also poses challenges regarding trustworthiness and the explanation of AI models... | One of the primary motivations behind our work is the recognition that model complexity can be an insufficient descriptor of human interpretability as shown in Fig. 1. In this case, if model complexity is used as a proxy for human interpretability, then both linear models shown in Fig. 1(a,b) will be assigned the sam... | Performing predictions based on observed data is a general problem of interest in a wide range of scientific disciplines. Traditionally, scientists have tackled this problem by developing mathematical models that connect observations with predictions using their knowledge of the underlying physical processes. However, ... |
Recently there has been significant progress in addressing this issue and the proposed approaches can be classified into two categories: (a) AI models that are inherently explainable, or (b) post-hoc explanation schemes for AI models that are not inherently explainable (XAI) (Rudin, 2019). Since most of the existing bl... | D
Model checking (Clarke et al., 1999) is a general verification technique that aims to determine whether a mathematical abstraction (in the form of a finite state transition system) of the underlying system satisfies the given specification (a set of properties formalized using temporal logic). A model-checking algorit... |
One of the primary features of FuSeBMC is the linking of a greybox fuzzer with a bounded model checker. A bounded model checker works by treating a program as a state transition system and then checking whether there exists a path in this system of length less than some bound $k$ that violates the property to... |
In software verification (Cordeiro et al., 2012; Clarke et al., 2004) bounded model checking is typically accompanied by a symbolic execution of the given program up to the user-defined positive bound $k$. The obtained bounded symbolic traces are then automatically translated into a first-order logic formula ... | Bounded model checking (Biere, 2009) solves a similar problem but for the bounded executions of the system to be verified. In other words, given a positive $k$, a bounded model checking algorithm tries finding a counterexample of maximum length $k$. In practice, if no such counterexample can be found,...
Model checking (Clarke et al., 1999) is a general verification technique that aims to determine whether a mathematical abstraction (in the form of a finite state transition system) of the underlying system satisfies the given specification (a set of properties formalized using temporal logic). A model-checking algorit... | C |
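The bounded-reachability question at the core of bounded model checking can be shown with a toy explicit-state search: enumerate all executions of a finite transition system up to bound k and return a counterexample path reaching a "bad" state, if one exists. The counter system is made up, and real BMC tools encode this question as a SAT/SMT formula rather than searching explicitly.

```python
# Hedged toy sketch of bounded model checking via explicit breadth-first
# unrolling up to depth k (real tools use SAT/SMT encodings instead).

def bmc(init, step, bad, k):
    """Return a path init -> ... -> bad of at most k transitions, or None."""
    frontier = [[init]]
    for _ in range(k):
        nxt = []
        for path in frontier:
            for s in step(path[-1]):
                new = path + [s]
                if bad(s):
                    return new  # counterexample of length <= k
                nxt.append(new)
        frontier = nxt
    return None  # property holds up to bound k (no guarantee beyond it)

def step(s):
    # Two nondeterministic transitions of a toy counter mod 8.
    return [(s + 1) % 8, (s + 3) % 8]

path = bmc(0, step, lambda s: s == 5, 3)
```

Returning `None` only certifies the property up to the bound, which is exactly the caveat noted above: absence of a counterexample within k steps is not a full proof.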
We consider our method to be the successor of that of Bhrawy and Zaky [7]. They applied a change of variables to classical Jacobi polynomials such that the algebraic singularities of the resulting basis, the JFP basis (which is called thus for reasons we explain in Section 3), conform to those of the solution (footnote 3: The met...) | We have illustrated the application of the JFP method to a variety of FIEs (including FDEs and a fractional PDE reformulated as FIEs) in which exponentially fast convergence to the solution is achieved. The JFP method converges much faster and with a lower overall complexity than the sparse sum space method in [25] for... |
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [... |
Our pseudo-stabilization technique discussed in Appendix A is essential to the scalability of the JFP method and thus also for its application to practical computational problems. We emphasize that high-precision computations are required only for the computation of fractional integration matrices and not for the solu... | The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in... | C |
Supplementary Figure S23: The maximum capability of the topological features measured by AUC-mROC in the supervised approach.
The imbalanced positive and negative samples are generated by sample2. We use 21 indexes from four families and measure the performance of the supervised prediction by these indexes in all 550 n... | Currently, there is no unified rule on what value should be the lowest. In the main text, we consider the lowest value to be zero. Indeed, all 21 indexes in this study assign the value 0 to entities that do not hold the feature they are designed to quantify. There could be exceptions. For example, one can design an ind... |
In the main text, taking the common neighbor feature as an example, we show the theoretical finding can help us to determine the feature and index selection. To further validate the preceding analysis, we here conduct a second analysis using the path feature (Table S2). When using the index SPL to make unsupervised pr... |
An index for a topological feature is designed to quantify the expression of this feature. Hence, the logic behind the index is that its value increases monotonically with the extent to which the feature is expressed. For entities that do not hold the feature at all, they should have the same and the lowest value. If ... | $p_{1}$ is the percentage of samples in $L^{P}$ that hold the topological feature, and $p_{2}$ is the percentage of samples in... | C
Our sparse update scheme achieves higher downstream accuracy at a much lower memory cost: compared to updating the last $k$ layers, sparse update reaches higher downstream accuracy with a smaller memory footprint.
We also measure the highest accuracy achievable by updating the last $k$ layers (includ... | We pre-trained the models on ImageNet [22] and perform post-training quantization [34]. The quantized models are fine-tuned on downstream datasets to evaluate the transfer learning capacity.
We perform the training and memory/latency measurement on a microcontroller STM32F746 (320KB SRAM, 1MB Flash) using a single batc... |
Remarkably, the downstream accuracy of our on-device training has matched or even surpassed the accuracy of cloud-trained results on tinyML application VWW [20]. Our framework uses 206KB measured SRAM while achieving 89.1% top-1 accuracy for on-device training (we used gradient accumulation for the VWW dataset; see th... |
Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB SRAM and 1MB Flash without auxiliary memory. (1) Our solution enables weight update not only for the classifier but also for the backbone, which provides a high transfer learning accuracy (Figure 9). For ... | In this paper, we aim to bridge the gap and enable tiny on-device training with algorithm-system co-design. We investigate tiny on-device training and find two unique challenges: (1) the model is quantized on edge devices. A real quantized graph is difficult to optimize due to low-precision tensors and the lack of Batc... | B |
$2=\|2I\|_{2}=\|I+A^{-1}+I-A^{-1}\|_{2}$ | $\|I+A^{-1}\|_{2}+\|I-A^{-1}\|_{2}$ | $\cdots\|_{2}+\|A-I\|_{2})$. $1+\|A^{-1}\|_{2}\leq\|I+A^{-1}\|_{2}+\|I-A^{-1}\|_{2}$ | $\|A^{-1}(I+A)\|_{2}+\|A^{-1}(I-A)\|_{2}$ | $\|A^{-1}\|_{2}(\|A+I\|_{2}+\|A-I\|_{2})$ | A
To see to what extent our asymptotic results are meaningful already for moderate problem sizes, and also to see differences inside the same asymptotic runtime class, we also conducted a small experimental evaluation of the permutation-based $(1+1)$ EA with four different mutation operators (swap and scra... | We report in Figure 2 the averages over 30 runs. We again do not display standard deviations in the figure, but we note here that in all experiments the standard deviation was between 75% and 122% of the expectation. This fits our intuition that the typical optimization process on a ju... |
To see if the insights stemming from our asymptotic analysis are visible already for realistic problem sizes, we conduct a small empirical analysis as well. We defer the details to Section 9 and note here only that the different rates of void mutations (mutations that create an offspring equal to the parent) of the d... |
In Figure 1, we report the runtime (number of function evaluations, averaged over 50 runs) of the $(1+1)$ EA with the four different mutation operators on the permutation-based LeadingOnes benchmark, both when counting all fitness evaluations and when ignoring easy-to-detect void mutations. To keep the p... | For all experiments, we report the runtime in terms of the number of fitness evaluations until the optimum is found. From the definitions of the different mutation operators, it is clear that they have different probabilities to create an offspring identical to the parent. Since this will have an influence on the perfo... | D
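A minimal version of the permutation-based $(1+1)$ EA with swap mutation can be sketched as follows. The sorted-prefix fitness is an illustrative permutation analogue of LeadingOnes (the benchmark's exact definition and the other three mutation operators are not reproduced here), and the mutation and termination settings are assumptions.

```python
# Hedged sketch of a (1+1) EA on permutations with swap mutation and a
# LeadingOnes-style fitness (length of the correctly sorted prefix).
import random

def leading_ones(perm):
    # Number of leading positions i with perm[i] == i.
    n = 0
    for i, v in enumerate(perm):
        if v != i:
            break
        n += 1
    return n

def swap_mutation(perm, rng):
    child = perm[:]
    i, j = rng.sample(range(len(perm)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def one_plus_one_ea(n, rng, max_evals=100_000):
    parent = list(range(n))
    rng.shuffle(parent)
    evals = 0
    while leading_ones(parent) < n and evals < max_evals:
        child = swap_mutation(parent, rng)
        evals += 1
        if leading_ones(child) >= leading_ones(parent):  # elitist acceptance
            parent = child
    return parent, evals

perm, evals = one_plus_one_ea(6, random.Random(0))
```

Counting `evals` here counts every offspring evaluation; ignoring void mutations, as discussed above, would mean not charging an evaluation when the child equals the parent.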
$\bar{\eta}_{\Omega} = \dfrac{\prod_{i\in I}\bigl(\frac{\eta_{i}}{1-\eta_{i}}\bigr)\cdots}{\cdots}$ |
where $BI(v)$ is the standard bilinear interpolation operator. We used PyLops (https://github.com/PyLops/pylops), the linear operator library for Python, for performing bilinear interpolation and its transpose operation. Consequently, the grid transfer operators (4.31), (4.32) ... | where $BI^{\top}(v)$ denotes the transposed bilinear interpolation operator. Note that in case of non-uniform weights, the operation $BI(v)$ can still be represented by a correspo...
For an efficient implementation of the geometric grid transfer operators we used vectorized versions. For the experiments in this work we use uniform weights $\omega_{i}$, therefore the geometric mean, defined in Eq. (4.26), can be reformulated as
In order to apply two-level optimization to the regularized inverse problem (1.2), the coarse grid model function $\psi$ given by (3.8) has to be computed. Similar to the evaluation of the objective function at both levels, we assume that the operator $A$ can be directly evaluated at both levels. For... | A
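The excerpt uses PyLops' bilinear interpolation for the 2D grid transfer operators. As a self-contained 1D analogue of the same idea (prolongation by interpolation, restriction as its transpose), one could sketch the following; the linear stencil and grid sizes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def prolongation_1d(nc):
    """Linear-interpolation prolongation from a coarse grid with nc points
    to a fine grid with 2*nc - 1 points (standard multigrid stencil)."""
    nf = 2 * nc - 1
    P = np.zeros((nf, nc))
    for i in range(nf):
        if i % 2 == 0:          # fine point coincides with a coarse point
            P[i, i // 2] = 1.0
        else:                   # fine point midway between two coarse points
            P[i, i // 2] = 0.5
            P[i, i // 2 + 1] = 0.5
    return P

P = prolongation_1d(5)   # coarse grid: 5 points, fine grid: 9 points
R = P.T                  # restriction as the transposed operator
vc = np.linspace(0.0, 1.0, 5)
vf = P @ vc              # a linear function is reproduced exactly
```

In 2D, bilinear interpolation plays the role of `P`, and its transpose (as provided by a linear-operator library) gives the matching restriction.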
$|N(x)-h(x)|<\varepsilon,\text{ for every }x\in[0,1]^{d}.$
| The Tardos function, introduced in [43], will satisfy these conditions. The Tardos function builds upon the seminal work of Razborov, [35], who studied the hardness of monotone computation for perfect matching. The function is constructed as a graph invariant and is always sandwiched between the clique and chromatic nu... | It follows from Theorem 15 that if there was a monotone network (with threshold gates) of a given size approximating a harmonic extension $\hat{f}$, we could replace each gate with a polynomially sized monotone De Morgan circuit entailing a polynomial blowup to the size of the network... | By Theorem 15 the monotone circuit complexity of a circuit with only AND and OR gates (De Morgan circuit) computing a threshold function with positive coefficients is polynomial. Therefore, we claim that the existence of $C_{N}$ entails the existence o... | The function $h$ we use is the harmonic extension of a graph invariant function, introduced by Éva Tardos in [43]. The Tardos function and its properties build upon the seminal works of Razborov [35], and Alon and Boppana [1]. The mentioned works constitute a highly influential line of work, about the limitatio... | D
$\ldots\|\partial_{\boldsymbol{y}}^{\boldsymbol{\nu}-\boldsymbol{m}}u_{h}(\cdot,\boldsymbol{y})\|_{V_{h}}.$
In Figures 1 and 2 we recognize that the error plots of all stable methods are almost identical. Thus we conclude that the cubature error dominates the discretization error and hence the particular choice of the discretization appears to be almost irrelevant. | The error estimate does not significantly deviate from standard DG error estimates. Thus, we keep it short and refer to the already mentioned references [11, 34] for the details. We start introducing the broken $H^{2}$ space
| Figure 2 deals with the lognormal case. The left picture uses linear approximations of SIPG, NIPG, and the conforming finite element method, while the right picture shows the results for second order SIPG and NIPG. In the left picture, we see that, again, all three methods work fine, and similarly well. Their convergen... |
for some appropriately chosen norm $\|\cdot\|$. We focus on the cubature error, and discuss the spatial discretization error briefly in Section 5.3. We remark that the order of the last two error contributions—the finite element error and cubature error, respectively—can be flipped when the diffusion coefficient... | B
In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first... | In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first... | The lower bound proof using generalized round elimination in (TZZ19) was carried out on non-adaptive algorithms in the homogeneous CL setting. For adaptive algorithms, only in the case when $n\leq K$ (that is, the number of arms is no more than the number of agents), we can show that adaptive ...
A Lower Bound for Adaptive CL Algorithms. Our second technical contribution is to prove the lower bound for adaptive CL algorithms directly, instead of via a reduction from a lower bound for non-adaptive CL algorithms. The details can be found in Section 3.3. |
Depending on whether the agents have real-time computing and policy-updating ability, the CL algorithms are divided into two categories: adaptive and non-adaptive. In the adaptive case, agents can change their pull policies at each time step based on new observations, whereas in the non-adaptive case, policy updates can...
Figure 3: Performance of different algorithms for the NN-PCA problem of (5.4). MNIST (left) with $N=60000$, $n=784$, covtype (left center) with $N=581012$, $n=54$, a9a (right center) with $N=32561$, ... | As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a).
It is impor... | Despite an upsurge in developing optimization methods to address such a problem, the potential of low-memory quasi-Newton methods has largely been neglected which can be partially attributed to the absence of theoretical foundations for handling nonsmooth settings.
In the smooth strongly convex settings, competitive co... |
As depicted in Figure 3, the quasi-Newton updates in SPIRAL significantly enhance the convergence rate compared to (low-memory) Finito/MISO, which lacks such updates. Although proxSARAH exhibits faster convergence for this problem, it performs slower in the Lasso problem and is unable to handle non-Lipschitz different... | For the nonconvex nonnegative principal component analysis problem, we compare against [40], which addresses Finito/MISO in the general nonsmooth nonconvex case.
Additionally, we compare SPIRAL against SMD and the Bregman Finito/MISO method [39] for the phase retrieval problem, where the cost function lacks a Lipschitz... | C |
$\underline{\mathbf{Y}}=(\underline{\mathbf{X}}*\underline{\mathbf{X}}^{T})^{q}*\underline{\mathbf{X}}$ and this should be performed sequentially using the subspace... | This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. Several randomized algorithms ([34, 35, 36]) exist to decompose a tensor into the t-SVD format, but all of them need an estimation ...
The sampling approach can also be used for low tubal rank approximation besides the random projection. Indeed, a randomized slice sampling algorithm was proposed in [35] in which horizontal and lateral slices are selected and a low tubal rank approximation is computed based on them, see Figure 3 for a graphical illust... |
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37] which is a modification and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of $\underline{\mathbf{Q}}$... | In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a... | B
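The fixed-precision idea of gradually growing the sketch until a prescribed accuracy is reached can be illustrated in the plain matrix setting. The following is a generic blocked rangefinder sketch, not the authors' algorithm; the t-SVD version applies the analogous procedure tube-wise in the Fourier domain:

```python
import numpy as np

def adaptive_range_finder(A, tol, block=8, rng=None):
    """Grow an orthonormal basis Q block by block until the residual
    ||A - Q Q^T A||_F drops below tol (fixed-precision stopping rule)."""
    if rng is None:
        rng = np.random.default_rng(0)
    m, n = A.shape
    Q = np.zeros((m, 0))
    E = A.copy()                                  # current residual A - Q Q^T A
    while np.linalg.norm(E) > tol and Q.shape[1] < min(m, n):
        Y = E @ rng.standard_normal((n, block))   # sketch the residual
        Qb, _ = np.linalg.qr(Y)
        Qb -= Q @ (Q.T @ Qb)                      # re-orthogonalize against Q
        Qb, _ = np.linalg.qr(Qb)
        Q = np.hstack([Q, Qb])
        E = A - Q @ (Q.T @ A)
    return Q

# low-rank test matrix of exact rank 10
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10)) @ rng.standard_normal((10, 80))
Q = adaptive_range_finder(A, tol=1e-8)
```

The basis size found by the loop plays the role of the automatically determined rank; a truncated SVD of `Q.T @ A` then yields the low-rank factors.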
In [51] the authors address the task of multi-label learning with incomplete labels, by combining the label imputation function and multi-label prediction function in a mutually beneficial manner. Specifically, the proposed method conducts automatic label imputation within a low-rank and sparse matrix recovery framewo... | The semi-supervised multi-label learning task has also been investigated in the context of graph-structured data by incorporating the idea of label embedding to capture both network topology and higher-order multi-label correlations [43]. In this work, the label embedding is generated along with the node embedding base... |
One of the explanations of the inner workings of the proposed methods is related to the interaction of semi-supervised learning with the label dependency of MLC and HMLC tasks. More specifically, we investigate whether the smoothness assumption (and, indirectly, since they are not independent, the low-density and the ... |
In [51] the authors address the task of multi-label learning with incomplete labels, by combining the label imputation function and multi-label prediction function in a mutually beneficial manner. Specifically, the proposed method conducts automatic label imputation within a low-rank and sparse matrix recovery framewo... | is proposed to exploit both the feature distribution and the label relation between examples. Therefore, the optimization simultaneously takes into account instance-level relations across labeled and unlabeled samples in feature space and the relations across labels. This approach has been only applied in the image mul... | A |
The best performance is defined by the lowest total metric score as formulated with (10). $IoU_{SEG}$: IoU score of semantic segmentation. $MAE_{DE}$: ...
In imitation learning and behavior cloning, a considerable amount of expert driving records is needed for training and validation (train-val) [52][53][54][55]. To create the dataset, we drive the vehicle at a speed of 1.25 m/s in a certain area inside Toyohashi University of Technology, Japan. As shown in Fig. 3, the ... |
The purpose of the online test is to evaluate the model’s drivability in driving the vehicle. The model must drive the vehicle safely by following a set of route points while avoiding obstacles (e.g., a vehicle stopped on the left side of the road). The experiment is conducted three times for each condition and on dif... |
The offline test is used to evaluate the model’s performance in handling multiple perception and control tasks simultaneously. All models are deployed to predict driving records and evaluated with multi-task and task-wise scoring. The test dataset is recorded three times in a completely different area from the train-v... | DeepIPC is evaluated under two conditions with varying cloud intensity with two different tests namely offline and online tests. For each condition, the final score is obtained by averaging the scores from three experimental results. In the offline test, the model is deployed to predict driving records. Then, its perfo... | C |
The original motivation for Dallard et al. [14] stems from structural graph theory.
In 2020, Dallard, Milanič, and Štorgel [13, 15] initiated a systematic study of $(\mathrm{tw},\omega)$-bounded graph classes, that is, hereditary graph classes in which the treewidth can only be large due ... | While $(\mathrm{tw},\omega)$-bounded graph classes are known to possess some good algorithmic properties related to clique and coloring problems (see [10, 9, 13, 15, 14]), the extent to which this property has useful algorithmic implications for problems related to independent sets is an ... | Dallard, Milanič, and Štorgel [16] conjecture the converse, namely, that every $(\mathrm{tw},\omega)$-bounded graph class has bounded tree-independence number.
A related research direction in structural graph theory is the study of induced obstructions to bounded treewidth and tree-indepe... | However, the weak spot in all these algorithmic approaches is the requirement that a tree decomposition with bounded independence number is given with the input.
Theorem 1.1 fills this gap by constructing a decomposition of bounded independence number | The original motivation for Dallard et al. [14] stems from structural graph theory.
In 2020, Dallard, Milanič, and Štorgel [13, 15] initiated a systematic study of $(\mathrm{tw},\omega)$-bounded graph classes, that is, hereditary graph classes in which the treewidth can only be large due ... | A
In ADMM-based methods, we introduce one hyper-parameter, the penalty factor $\rho$.
Here we study the test accuracy of VIMADMM with different penalty factors $\rho$. The results in Figure 5 of Section -D show that VIMADMM is not sensitive to $\rho$ on four datasets, and we suggest that the practi... | Due to the privacy protection requirement of VFL, each client $k$ does not share the raw local feature set $X_{k}$ with other clients or the server. Instead, VFL consists of two steps: (1) local processing step: each client learns a local model th... | Long-tail datasets are characterized by a significant imbalance, where minority classes have far fewer samples than majority ones. This horizontal imbalance is distinct from the challenges addressed by VFL, where the same sample (whether it belongs to a majority or minority class) is vertically split across multiple cl... | For fair comparisons, we use the same local models for all methods. Under the w/ model splitting setting, owing to the strong feature extraction power of local DNN models, we utilize the linear model as the server model by default.
Additionally, we evaluate all methods with the non-linear server model, as detailed in Section V... | (i) In the model splitting setting, each client trains a feature extractor as the local model that outputs local embeddings, and the server owns a model which predicts the final results based on the aggregated embeddings.
(ii) In the VFL without model splitting setting, the clients host the entire model that outputs th... | B |
We conduct the ablation study on the three datasets in both transductive and inductive settings. Table VI and Table VII show the experimental results.
Ada-DyGNN performs better than Ada-DyGNN-agg-w.o.-time and Ada-DyGNN-pro-w.o.-time, indicating that time-related information can boost the performance of our method. |
In this paper, we proposed a robust knowledge propagation method for dynamic graph neural networks. We devised a reinforcement learning based strategy to dynamically determine whether the embedding of a node should be updated. In this way, we can effectively propagate knowledge to other nodes and learn robust node rep... | Our goal is to design a robust knowledge propagation mechanism to obtain the updated embeddings $\mathbf{X}(t)$ of the nodes in the dynamic graph.
Note that when deleting an edge, robust knowledge adaptation can be resolved similarly to edge addition. Here, we mainly introduce the method when... | In addition, Ada-DyGNN outperforms the methods using three neighbor selection variants, which demonstrates our robust knowledge adaptation mechanism can effectively determine when to propagate knowledge to other nodes, enabling to learn robust node representations.
| We put forward Ada-DyGNN: a robust knowledge adaptation framework to capture temporal evolution for dynamic graph neural networks.
To the best of our knowledge, our approach constitutes the first attempt to study how to adaptively select the nodes to be updated in dynamic graphs. | C |
Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution in the feature spaces instead of a point, which can relax the constraints for the model.
The identity-branch accepts features output from the backbone and learns the mean feature, which is used to re... | SGD optimizer is used with a momentum of 0.9, other paramters include the learning rate, milestones, total iteration and weight decay for each experiment can be seen in Table V.
We first train the framework only with the identity-branch, then we load the pre-trained model and train the whole framework with the addition... | The uncertainty-branch is used to generate the variance feature, representing with what uncertainty a feature can represent this sequence, and this branch will be abandoned during inference.
In this branch, the original feature $f$ output from the backbone will first pass through a Head module, which is a lig...
The mean of the distribution can be regarded as the most likely feature of the sequence mapped in the latent space, and the v... | Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution in the feature spaces instead of a point, which can relax the constraints for the model.
The identity-branch accepts features output from the backbone and learns the mean feature, which is used to re... | B |
The policy training based on these inaccurate Q values will be negatively affected. Only using the Maximum Estimator (ME) will cause overestimation bias and even lead to worse policy quality, as can be observed from the curves of DQN and Duel DQN in Figure 4. Averaged DQN and Maxmin DQN use ME in their single unit, so the ...
Additionally, this method has a lower computational complexity compared to those of ensemble models. Even if the latter could trade time complexity for space complexity by parallel computing, they still have high computational complexity in general, as shown in Table 2. Moreover, this method achieves better or comparable... | This paper is the first to investigate the negative effects of the overestimation problem in task-completion dialogue systems. We propose the DPAV estimator to mitigate this problem of Q-learning. We also theoretically prove convergence and derive the upper and lower bounds of the estimation bias compared with those of...
Overall (the project resources are available at https://github.com/changtianluckyforever/version_one), our main contributions are as follows: (i) This is the first work to investigate and handle the overestimation problem of the reinforcement learning framework for task-completion dialogue systems. (ii) We propose ... | In this work, we propose dynamic partial average (DPAV), a novel approach to mitigate the overestimation problem specifically for the task-completion dialogue policy. DPAV utilizes the partial average between the predicted maximal action value and the predicted minimal action value to estimate the ground truth maximum ... | B
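The core of the DPAV estimator described above, a partial average between the predicted maximal and minimal action values, can be sketched as follows. The schedule that makes the weight "dynamic" is not reproduced here; `beta` and the Q-values are illustrative:

```python
import numpy as np

def dpav_estimate(q_values, beta):
    """Partial average of the predicted maximal and minimal action values;
    beta = 1 recovers the plain Maximum Estimator, while smaller beta
    pulls the target down to counteract overestimation bias."""
    return beta * float(np.max(q_values)) + (1.0 - beta) * float(np.min(q_values))

q = np.array([1.0, 2.5, 0.5, 2.0])
print(dpav_estimate(q, beta=1.0))  # 2.5 (plain max estimator)
print(dpav_estimate(q, beta=0.8))  # 0.8*2.5 + 0.2*0.5 = 2.1
```

Used as the bootstrap target in Q-learning, this interpolation sits between the overestimating max and the underestimating min, consistent with the bias bounds the excerpt mentions.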
To demonstrate the effectiveness of our approach, we make use of the most diverse dataset of human videos currently available, Ego4D [13], and in particular we evaluate our results in the LTA benchmark. Ego4D provides first-person videos of humans experiencing everyday activities around the world. In the case of the L... |
To demonstrate the effectiveness of our approach, we make use of the most diverse dataset of human videos currently available, Ego4D [13], and in particular we evaluate our results in the LTA benchmark. Ego4D provides first-person videos of humans experiencing everyday activities around the world. In the case of the L... | Finally, we report quantitative results that lead our approach to win both CVPR@2022 and ECCV@2022 Ego4D Long-Term Action Anticipation (LTA) challenges. We extend the results with a detailed discussion based on our ablation study. To sum up, the contributions can be summarized as follows:
| We report our results for the LTA task in Table 1 based on the test set of the Ego4D LTA dataset. In this experiment, our framework predicted the $N=6$ observed actions and the overall intention from the past, to anticipate $Z=20$ future actions by generating $K=5$ seq...
Due to our limited computational resources, training was performed independently for each module. Next we will describe the quantitative evaluation first for the H3M module and then for I-CVAE as stand-alone models. As LTA-Ego4D Forecasting benchmark is private, the ground truth from the testing set is not provided. T... | B |
We consider a local contextual window of each observation to model their temporal dependence. Specifically, a sliding window with length $l$ and stride $r$ is used to transform the training set into a collection of sub-sequences $\mathcal{S}=\{\bm{s}_{1},\bm{s}_{1+r},\cdots\}$ | These competing methods include both traditional and deep approaches. Also, these competing methods employ different learning strategies (prediction, reconstruction, and discriminative one-class/self-supervised learning) and various network structures (MLP, LSTM, GRU, TCN, Transformer, convolutional net, and graph neur... | A number of studies focus on capturing more comprehensive temporal and inter-variate dependencies by using graph neural networks [2, 9], convolutional kernels [26, 6], and variational Autoencoders [10, 7].
Besides, adaptive memory network [27] and hierarchical structure-based multi-resolution learning [26, 28, 29] are ... | A temporal modeling network $\phi$ is used to model time-axis dependency and inter-variate interactions.
We opt for Temporal Convolutional Network (TCN) [40] as the temporal modeling network $\phi$ in COUTA. TCN is more time-efficient than traditional RNN-based structures, and it can bette... | In COUTA, the temporal modeling network $\phi$ has one hidden layer, and the kernel size is 2. The projection head $\psi$ and the classification head $\psi_{c}$ are two-layer multi-layer perceptron networks with LeakyReLU ... | C
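The sliding-window preprocessing described in this row (length $l$, stride $r$) can be sketched as follows; the array sizes are illustrative assumptions:

```python
import numpy as np

def sliding_windows(x, length, stride):
    """Split a multivariate series x of shape (T, D) into overlapping
    sub-sequences of the given length, taken every `stride` steps."""
    T = x.shape[0]
    starts = range(0, T - length + 1, stride)
    return np.stack([x[s:s + length] for s in starts])

x = np.arange(20, dtype=float).reshape(10, 2)  # T=10 time steps, D=2 variates
S = sliding_windows(x, length=4, stride=2)
print(S.shape)  # (4, 4, 2): windows starting at t = 0, 2, 4, 6
```

Each resulting window is one training sub-sequence $\bm{s}_i$ fed to the temporal modeling network.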
Dataset Consistency: Often, the greatest challenges for D2T systems, namely hallucination and omission (see §3.2), can be traced back to the datasets. Datasets facing divergence (as outlined in §2.3), wherein the narratives are not consistent with the data instances or vice-versa, often lead to models that hallucinate... |
Human-crafted References: Often, to replicate human linguistics, datasets in NLP/NLG contain human annotations (narratives), considered as gold references. Reiter (Reiter, 2017) notes that D2T datasets, unintentionally, may contain machine-generated annotations, such as those for WeatherGov, and urges the community to... |
With the abundance of paired datasets where each data instance is accompanied by a human generated reference text, often referred to as the gold standard, the NLG community has sought after quick, cheap, and effective metrics for evaluation of D2T systems. The adoption of automated metrics such as BLEU, NIST, and ROUG... |
The last half-decade has seen the ML community place significant emphasis on the reproducibility of academic results (Pineau et al., 2021; Sinha et al., 2020). However, the focus of these reproducibility efforts is placed on automated metrics (§6.1, §6.2.1, §6.2.2) with the reproducibility of human evaluation results... | From a review of 284 correlations reported in 34 papers, Reiter (Reiter, 2018) notes that the correlations between BLEU and human evaluations are inconsistent - even in similar tasks. While automated metrics can aid in the diagnostic evaluation of MT systems, the author showcases the weakness of BLEU in the evaluation ... | A
Results. The results on the GQA dataset are summarized in TABLE V. From the table, we have the following observations: 1) The incorporation of NICE and NICEST can significantly improve the mR@K scores of the two strong baselines (e.g., 12.6%
and 12.6% absolute gains on metric mR@100 over Motifs and VCTree, respectively... |
Results. From the results in TABLE IV, we can observe that: 1) Compared with the two common baselines (i.e., Motifs and VCTree), the mR@K of our NICE has been significantly improved in all three tasks (e.g., 4.4% $\sim$ 17.0% and 4.7% $\sim$ 17.0% absolute gains on metric mR@100 over Motifs and VCT... | Results. The results on the GQA dataset are summarized in TABLE V. From the table, we have the following observations: 1) The incorporation of NICE and NICEST can significantly improve the mR@K scores of the two strong baselines (e.g., 12.6%
and 12.6% absolute gains on metric mR@100 over Motifs and VCTree, respectively... | Results. From the results in TABLE III and TABLE VII, we have the following observations: 1) Compared to the two strong baselines (i.e., Motifs and VCTree), our NICE can consistently improve model performance on metric mR@K over all three tasks (e.g., 5.9% $\sim$ 14.3% and 3.7% $\sim$ 14.7% absolute... | As shown in TABLE VI, it is evident that both NICE and NICEST achieve the highest performance in terms of the Mean metric (e.g., 44.3% and 45.3% under PredCls over Motifs, respectively). This far exceeds the performance of using only label correction or label smoothing methods (e.g., 3.9% $\sim$ 4.7% absolute... | D
In the first case, when miners find a new block on the public branch, the attacker releases its private branch and starts a 0-lead racing. In this scenario, the attacker and the attracted miners will mine on the previously private branch, and public miners will choose to mine on either branch. As defined earlier, $\gamma$...
Specifically, if a public miner finds a new block, two possible cases may happen: if the lead of the private branch is 2, the attacker will release all the partial blocks, and the system goes back to the single branch state. If the private branch’s lead is more than 2, then the attacker and rational miners will contin... | The workflow of the PSM-DoS attack is shown in Figure 8. First, the attacker distributes the partial block data to all the miners and attracts attracted miners to join the attacker’s private branch. In the meantime, the attacker leaves the private branch and puts all the mining power back into the public branch. Then, ... | In the third case, when the attracted miners find a new block in the private branch, the attacker will immediately release the whole private branch to get the revenue of all the blocks on the private branch. Similar to the first case, the blockchain returns to the single-branch state.
| In the first case, when miners find a new block on the public branch, the attacker releases its private branch and starts a 0-lead racing. In this scenario, the attacker and the attracted miners will mine on the previously private branch, and public miners will choose to mine on either branch. As defined earlier, γ𝛾\g... | C |
It remains unclear how gradient descent is able to safely train at the EoS without diverging [2, 32], though [32] has suggested that “subquadratic growth” of the training objective may play a role.
Note that these results only apply to full-batch gradient descent. In the more general case of SGD, similar effects seem t... |
Our paper gives evidence for a different implicit bias: adaptive gradient methods are liable to find higher-curvature solutions than non-adaptive algorithms, since whereas non-adaptive algorithms are blocked from high-curvature regions, adaptive optimizers can evade this restriction. | We will see that this behavior sometimes differs substantially from that of non-adaptive optimizers.
In particular, whereas non-adaptive optimizers at the EoS are blocked from entering high-curvature regions of parameter space, adaptive gradient methods at the AEoS can and do enter high-curvature regions via their abil... | The fact that non-adaptive gradient descent is blocked from entering sharp (as quantified by maximum Hessian eigenvalue) regions of the loss landscape constitutes one implicit bias [35] of non-adaptive gradient descent.
It is plausible that this implicit bias could impact generalization (e.g. see [33, 19]). |
However, even though adaptive gradient methods train at the “Adaptive Edge of Stability” (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods in the non-adaptive EoS regime: whereas non-adaptive optimizers in the non-adaptive EoS regime are blocked from accessing high-cu... | C |
Note that the simulation of CLN025 performed in Ref. 76 is ${\sim}100$ μs long compared to our 1-μs simulation. This clearly illustrates the great benefit of combining manifold learning with the ability to learn from biased data sets. | Overall, both the separation of the CLN025 metastable states and the free-energy landscapes calculated for the low-dimensional embeddings suggest that the proposed framework can be used to find slow CVs and physically valid free-energy estimates. The presented results (Fig. 4) clearly show that using our approach, we c... | Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol. We can compare our results to the unbiased simulation data from... | Figure 4: Reweighted stochastic embeddings calculated for chignolin in the TIP3P solvent at 340 K simulated using the CHARMM27 force field. Low-dimensional manifolds are colored according to their free energy. (a) Representative conformations from the metastable states estimated by the reweighted embedding m...
We can observe that the free-energy landscape in the low-dimensional manifold calculated by mrse is highly heterogeneous, with multiple partially unfolded intermediate states and many possible reaction pathways, as shown in Fig. 4(a). Such a complex free-energy landscape shows that the dynamics of CLN025 is... | A
The collection of segments can be hierarchically ordered according to the generation to which they belong. Generation affiliation can be used as a filter for segments. In order to apply such a filter, it is necessary to sort the nodes into generations. The 0-th generation contains only the segment that contains the inl... |
The 3910 segments discussed here can be sorted into 42 generations. A projection of the first four generations is shown in Fig. 3a. It is surprising that there appear to be three segments that arise directly from the 0-th generation, but these segments belong to the 1-st and 2-nd generations; this indicates that there ... |
To remove a pseudo-trifurcation, either the trifurcation point can be moved up (the initial bifurcation point is chosen as the trifurcation point) or moved down (the second bifurcation point is chosen). The choice does not impact the branching structure of the final tree but does impact how it occupies space, i.e. bra... | In Fig. 6c there is an instance in which a long segment appears to give rise to exactly one other vessel – it can be seen in the lower right of the panel. However, this cannot be the case as these were previously removed. Upon closer inspection of the tree, we see that these are bifurcations in which one branch is much... | Figure 3: Coronal projection of the segments are sorted into generations. Line weight and colour signify generation affiliation. (a) All segments are sorted into generations without modification. (b) 30-fold magnification of the initial junction in the tree shown in 3a; spatial nodes are highlighted with circles and th... | A |
(1) Table IV reports the overall performance of all the test set and we note their difference in MAPE through Delta;
(2) Fig. 4 reports model performance on each interval of size samples from the test set; | (1) Table IV reports the overall performance of all the test set and we note their difference in MAPE through Delta;
(2) Fig. 4 reports model performance on each interval of size samples from the test set; | (3) Table IV reports mean inference speed on a selection of certain sized samples from test set. Here we are only concerned with the time interval of inference, without any pre/post-processing time, as they can be parallelized in an actual application. We do not include results from Scalable-RouteNet due to the use of ... | In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem to a transferred formulation of graph,
which is considered to be learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal... | This work is one of the most related approaches to the one presented here. Contrary to the aforementioned contributions, which are based on transductive settings, our work focuses instead on an inductive setting. In our model, we are incorporating path-to-node attention, but passively between the path’s structurally si... | B |
Specifically, in the competitive setting, our algorithm constructs upper and lower confidence bounds (ULCB) of the value functions based on the representations learned via contrastive learning.
We prove that the proposed approach achieves an $\widetilde{\mathcal{O}}(1/\varepsilon^{2})$... | Second, we propose the first provably efficient exploration strategy incorporated with contrastive self-supervised learning. Our proposed UCB-based method is readily adapted to existing representation learning methods for RL, which then demonstrates improvements over previous empirical results as shown in our experimen... | As for theoretical results, we prove that the proposed algorithm provably recovers the true representations under the low-rank MDP setting. Moreover, we show that our algorithm achieves a $\widetilde{\mathcal{O}}(1/\varepsilon^{2})$... |
We study contrastive-learning empowered RL for MDPs and MGs with low-rank transitions. We propose novel online RL algorithms that incorporate such a contrastive loss with temporal information for MDPs or MGs. We further theoretically prove that our algorithms recover the true representations and simultaneously achieve... | This section provides the analysis of the transition kernel recovery via contrastive learning and the proofs of the main results for single-agent MDPs and zero-sum MGs. Our theoretical analysis integrates contrastive self-supervised learning for transition recovery and low-rank MDPs in a unified manner. Part of our ana... | A |
The decoupled MV-SDE (8) for the given empirical law $\left\{\mu^{P}_{t}:t\in[0,T]\right\}$ is a standard SDE, making it po... | The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE and apply a change of measure to it. We decouple the computation o... | This section applies stochastic optimal control theory to obtain the optimal change of measure for the decoupled MV-SDE (8). Then, we incorporate the above importance sampling scheme to the DLMC Algorithm 1, and formulate the DLMC estimator with importance sampling.
|
Note that $\zeta$ is the optimal importance sampling control for the decoupled MV-SDE (8) and not the particle system (2). With this scheme, we reduce the variance of the inner expectation in (4). Consequently, we assess the variance reduction in the MC estimator of the inner expectation in the first experime... |
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a... | B |
This conservative approach is indeed justified. Current mission profiles rely on a ground-in-the-loop for the spacecraft operation. The data are downlinked in telemetry to the Earth, meticulously analyzed by the ground team, and sent back as telecommands to be executed by the spacecraft, as in any other space mission.... | This work presents a paradigm shift in autonomous asteroid exploration studies by proposing a departure from the conventional approach of extensively reducing uncertainties before close-proximity operations. Instead, the focus is on leveraging autonomous robust guidance and control to handle uncertainties effectively. ... |
Although these are essential advancements, no attempt has been made yet to take full advantage of an autonomous mission while considering the navigational difficulties encountered in a small-body scenario. For instance, most works concerned with the control and guidance laws make minor considerations about the navigat... | Full autonomy in asteroid missions has gained particular attention in the last few years. Scheres & McMahon [41] assess different mission architectures for enabling autonomous exploration of small bodies, showing that a spacecraft can orbit and deploy objects on the surface of a small body with minimal instrumentation.... |
By “far-approach”, we consider the period of the mission when the spacecraft changes from heliocentric to relative navigation about the small-body, which is the same as phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as... | C |
Other avenues for future investigation include practical applications of our fixed-point approach. Due to the strong connections of Brascamp–Lieb inequalities to important questions in machine learning and information theory, this approach could be of wider interest.
|
In this paper, we introduced a novel fixed-point approach for computing Brascamp–Lieb constants, which is grounded in nonlinear Perron–Frobenius theory. In contrast to much of the prior literature, which has analyzed the problem through a Riemannian lens, our approach utilizes a Finslerian geometry on the manifold of ... |
More generally, we hope that the Finslerian lens on fixed point iterations provides a new perspective on the problem of computing Brascamp–Lieb constants. We believe that the tools developed in this work can be applied to a wider class of Picard iterations that arise in the context of Brascamp–Lieb constants (such as ... | Our analysis leverages the Thompson part metric on the manifold of positive definite matrices to model convergence of the fixed-point iteration. To our knowledge, this is the first work that analyzes the computation of Brascamp–Lieb constants via Thompson geometry.
We note that a similar Finslerian lens can be employed... | The paper is structured as follows: In section 2 we introduce basic background and notation, including Thompson geometry on the space of positive definite matrices and the class of Brascamp–Lieb inequalities. In section 3, we provide an overview of the paper’s main results and state the main theorems. In section 4, we ... | B |
By modding out the cycle space by the boundary space, we effectively exclude cycles that are simply the boundaries of higher-dimensional simplices. This ensures that we focus on nontrivial cycles that capture the true topological features of the complex. Lastly, homologous cycles represent $k$-dimensional loo... |
By modding out the cycle space by the boundary space, we effectively exclude cycles that are simply the boundaries of higher-dimensional simplices. This ensures that we focus on nontrivial cycles that capture the true topological features of the complex. Lastly, homologous cycles represent $k$-dimensional loo... | The generators of the homology group play a crucial role. They represent the distinct topological cycles embedded within our combinatorial model of the graph. Collectively, these cycles characterize the overall topology of the graph. Intriguingly, it is these generators and their corresponding homology classes that we ... |
Intuitively, these chains in the cycle space represent closed loops ($k$-cycles) within the complex that cannot be continuously deformed into a single point. Meanwhile, the chains in the boundary space represent the edges or borders of higher-dimensional simplices. |
The latter remark can be further visualized as follows. Consider two 1-cycles colored red and blue in the simplicial complex shown in Figure 2. These cycles may appear different, but if their difference can be expressed as the boundary of a 2-simplex in the complex, they are considered homologous. This implies t... | D
In this paper, we mainly resort to obtaining accurate pseudo-labels so as to enhance the model’s discrimination and generalization in the unseen domain. We first analyze the generalization error on a domain using the theory of multi-domain learning (ben2010theory, ). Based on the upper bound of the generalization error... |
In this paper, we aim to tackle the semi-supervised domain generalization (SSDG) task. Different from the typical semi-supervised task, the challenge of SSDG is that there exist multiple different domains with latent distribution discrepancy. To address this issue, we first explore the theory of multi-domain learning ... |
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ... | In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples on each domain are available, while most of the samples lack the label information, thus a key task is how to generate the accurate pseudo-label. Different from the conventional semi-supervis... |
Inspired by the theory of multi-domain learning, we extend FixMatch (DBLP:conf/nips/SohnBCZZRCKL20) [footnote: FixMatch is an excellent baseline in SSDG, which will be validated in the experimental section.] to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui... | D
Bahng et al. [1] also explore visual prompts in input pixel space for adapting CLIP models [45] and make a connection with [15].
While showing promising results on Transformers, visual prompts on ConvNets present much worse transfer results [25, 1], possibly due to the limited capacity of input space visual prompts. ... | We show the performance of Conv-Adapter on the VTAB-1k validation set in Fig. 5, of using different kernel sizes for the depth-wise convolution to verify our argument of the loss of locality. One can observe that, for both ResNet50 and ConvNext-B, using a smaller kernel size results in inferior performance. When setting the ke... | As a 1×1 convolution layer can only transfer channel-wise information, we thus design the adapting of functional convolutions, i.e., intermediate K×K convolutions, to keep locality sensitive.
On the contrary, adapting the whole residual block considers the transferring of p... | Similarly, TinyTL introduces extra residual blocks to MobileNet [23, 6] for memory-efficient on-device learning.
Guo et al. [17] propose re-composing a ResNet with depth-wise and point-wise convolutions, and re-training only the depth-wise part during fine-tuning. | where $\otimes$ and $\hat{\otimes}$ denote point-wise and depth-wise convolution, respectively.
To allow the modulation $\Delta\mathbf{h}$ to be more flexibly composed into $\mathbf{h}$, we set $\boldsymbol{\alpha}$ in Eq. ... | C
Estimate jointly the PDEs solution $u_{NN}$ and the target parameter $\lambda^{(k-1)}(t)$ by minimizing (9) using data fr... | Note that in the definition of Regret, $\hat{\mathbf{\Theta}}^{(k)}$ and $\hat{\lambda}(t)^{(k)}$... | Figure 3 shows the performance of CP-PINNs in discovering changepoints and solving (16). Specifically, the leftmost panel illustrates the precise solution across a uniform temporal scale. Identifying the locations of changepoints remains challenging even when the solutions are known. In the second panel, the identical ... |
Figure 1: The dataset is partitioned based on spatial information, with each batch encompassing the full temporal information. In the online learning approach, the network is trained using the previous distribution of loss weights and updated based on the data from the subsequent batch. | The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs [footnote: Standard PINNs assume a constant coefficient over time, which performs much worse for changepoint scenarios. To ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient.] This highl... | C
By a leg of $f$, we mean a maximal interval $[i,j]\subseteq[1,m]$ such that, for $h$ in the range $i\leq h<j$, the difference $d=f(h+1)-f(h)$... | If $h$ is an internal waypoint where the change is from an increasing to a decreasing leg, we call $h$ a peak; if the change is from a decreasing to an increasing leg, we call it a trough. Not all waypoints need be peaks or troughs, because some legs may be flat; however, it is these waypoints that will... | A number $h$ which forms the boundary between two consecutive legs will be called a waypoint. We count the numbers $1$ and $m$ as waypoints by courtesy, and refer to them as terminal waypoints; all other waypoints are internal.
Thus, a walk consists of a sequence of legs from one waypoint to another. | $f$ be a walk such that $w=u^{f}$. Since $|u|<|w|$, $f$ has at least two legs.
If $f$ has any flat legs, then $w$ contains a repeated letter and so is of the f... | that $w^{\prime}=u^{f^{\prime}}=v^{g^{\prime}}$... | B
Herein, let us first provide some perspectives in Section 2 on how one formulates inverse problems, and how the different philosophical approaches inform our approach. We then address the goals mentioned in the previous paragraph by first considering a finite-dimensional, linear model problem in Section 3 that we use ... | Parameter estimation problems – of which inverse problems are a particular kind – seek information about parameters in the model using measurements (“observations”) of the system’s state. Depending on what kind of, and how much, information one seeks, parameter estimation problems can be formulated in different ways. F... | The methods in the middle of the spectrum mentioned above and in the figure use deterministic methods to make the solution of statistical formulations more efficient.
Yet, relatively little has been done to bring information from the (expensive) Bayesian perspective to selectively enrich the deterministic solution, at ... | One could similarly analyze papers from many other disciplines that use inverse problems. They may be using different words, but a common feature of the many definitions of resolution, adjoints, sensitivity, and identifiability that can be found in the literature, is that most of these notions originate in, and were de... |
Historically, parameter estimation problems were usually formulated as seeking that set of parameters $q^{\ast}$ for which model predictions fit measurements best. This approach is often called the deterministic approach to parameter estimation. Oftenti... | A
The Image Point Cloud (Figure 4) is an entry point to the labeling process by showing two-dimensional representations of the images of the selected manuscripts so that similar images are presented close to each other in the space. In it, each circle represents an image. One can select and combine embeddings based on the... | The Image Point Cloud (Figure 4) is an entry point to the labeling process by showing two-dimensional representations of the images of the selected manuscripts so that similar images are presented close to each other in the space. In it, each circle represents an image. One can select and combine embeddings based on the... | Each selected manuscript has a different color, which is displayed in a legend together with the manuscript name. To highlight the positions of the images of a specific manuscript, the convex hull of the points is drawn as a contour. It can be toggled by clicking on the manuscript in the legend, and it is possible to z... |
On mouseover, the image is shown. A lasso selection can be used at the points to select a set of images for the labeling process. The selected points are increased in size to better highlight them. The re-annotation space is accessed from a button in the left-hand drawer. The current state of the graph and the point c... | The domain expert can select one or multiple similarity metrics for the graph, such as image similarity, label similarity, or description similarity. When two or more similarities are combined, the edge value corresponds to the average of the values. To avoid visual clutter, it is possible to select the maximum number ... | B |
Table XV: Results (%) of cross-task prompt transfer on BERT-small. The red-colored row shows the results of full-tuning BERT-small model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target task... |
Table XVI: Results (%) of cross-task prompt transfer on BERT-tiny. The red-colored row shows the results of full-tuning BERT-tiny model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target tasks... | Table XVI: Results (%) of cross-task prompt transfer on BERT-tiny. The red-colored row shows the results of full-tuning BERT-tiny model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target tasks.... | Table XVI: Results (%) of cross-task prompt transfer on BERT-tiny. The red-colored row shows the results of full-tuning BERT-tiny model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target tasks.... | Table XVI: Results (%) of cross-task prompt transfer on BERT-tiny. The red-colored row shows the results of full-tuning BERT-tiny model, while orange-colored ones denote prompt tuning without any prompt transfer. Notably, positive transfers are in green and “Avg.” denotes the average performance of all target tasks.... | B |
The group first provided instructions about tools and guidance to carry out attacks against Russian payment systems on 9 March (two weeks after the invasion), attracting high levels of engagement: 240k views, 2.6k emojis, 1.2k forwards, and 421 replies from 197 users. The next was on 1 April: while the number of replies... |
Targets Promoted by the IT Army of Ukraine. Many announcements and targeted domains were posted in the first two weeks after the invasion, beginning on 26 February, peaking on 27 February with 40 announcements and 45 domains promoted (IP addresses were not regularly included until later), see Figure 6. Yet they quickl... | While the number of announcements dropped, the number of targets steadily increased, particularly in May and June 2022 with multiple-target posting. Activities were unstable at that time; targets got promoted less frequently and occasional days had no targets. Targets were mostly fresh in the first two weeks, but then ... | Target Selection. Targets were often themed, patterning around particular weekdays e.g., online news and propaganda, food delivery, and entertainment are often attacked at weekends to maximise impact as people spend more time online. Themes were also occasionally set with re-promoted old targets, leading to wide variat... |
Crossover with Observed Attacks. The IT Army of Ukraine maintains a dashboard of targets’ status, claiming many are down due to their actions. To find whether the attacks involved reflected DDoS or defacement, we correlate our attack records with promoted targets since the Telegram group started. We consider a defacem... | C |
The top $k$ samples of $\mathscr{D}^{*}$ are denoted as the set $S_{k}(\mathscr{D}^{*})$ where settin... |
Identifying the top $k$ samples is not only useful for the attacker, but also the defender. This is because a defender can evaluate his or her model’s robustness to attacks given the attacker’s best efforts (attacks using the top $k$ samples). We call this performance measure the transferability at $k$... |
Finally, in contrast to prior works, we suggest a more grounded approach to evaluating model security in transfer attacks. We recommend that the community evaluate their models against the top $k$ most transferable samples from a blackbox perspective, and not by taking the average success of all samples in wh... | To evaluate our ranking methods, we use transferability at $k$ as defined in (2). Note that transferability at $k$ can also be viewed as the attack success rate on $f$ for the top-$k$ recommended samples. We remind the reader that ranking is performed without access to $f$ o... |
To find the best $x_{i}$ or $\delta_{j}$ using surrogate $f^{\prime}$, the attacker must rank potential adversarial e... | A
We remark that the assumptions underlying Proposition 3 can be relevant in practice. For example, consider classifying whether or not a given point $\boldsymbol{x}$ in $n$-dimensional space (with each component bounded between $[-\pi,\pi]$) stays in... | For this task, an individual data point in the training dataset is generated by uniformly drawing each vector component from the range $[-\pi,\pi]$. Since here data points are obtained via uniformly sampling each component independently, the above assumptions are satisfied.
| Figure 3: Effect of exponential concentration on training and generalization performance.
We consider a tensor product encoding for an engineered data set where each component is uniformly drawn from $[0,2\pi]$ and the true label is $y_{\mathrm{true}}(\boldsymbol{x})=\sum_{i=1}^{N_{s}}w_{i}\kappa_{FQ}(\boldsymbol{x_{i}},\boldsymbol{x})$... |
We remark that the assumptions underlying Proposition 3 can be relevant in practice. For example, consider classifying whether or not a given point $\boldsymbol{x}$ in $n$-dimensional space (with each component bounded between $[-\pi,\pi]$) stays in... | Figure 7: Global-measurement concentration of quantum kernels. We plot the variance of the fidelity kernel as a function of $n$ using different data-embeddings, namely a single layer of one qubit rotations ($R_{x}$, $R_{y}$... | A
For lane change prediction, the accuracy of the lane keeping class is significantly higher than that of the other two classes. This may be due to there being fewer frames given to these events; some left lane and right lane changes also resemble lane keeping events.
Therefore, left lane and right lane changes can sometimes be mis-pred... | : The second method is designed over the first one. It uses the same 3D action recognition networks as the first method. Bounding box information is embedded into each frame of the RGB video data to improve classification and prediction accuracy. This method assumes that a separate vehicle prediction method has been used... |
Models can anticipate 1 second and 2 seconds ahead on TTE-10 and TTE-20 data respectively. The prediction accuracy of all the models decreases more significantly than their classification accuracy. This can be explained as TTE-10 and TTE-20 data provide less information than TTE-00 data. As can be seen in Table 4.2, th... |
Table 4.2 illustrates the classification and prediction results on video combined with bounding box data. As can be observed, regardless of the classification or prediction results, the performance of each method does not vary much. The best accuracy is only 3.00% higher than the lowest one, which is very different fr... | D |
$\mathcal{A}_{\delta}(p)=\begin{cases}(1,0)&\text{if $\delta$ is in $p$},\\
(0,1)&\text{otherwise}.\end{cases}$ | describe the difference between the anonymized programs as
$\delta=\mathcal{Y}(p_{a})\,\backslash\,\mathcal{Y}(p_{b})$... | $\mathcal{Y}(p_{a})\neq\mathcal{Y}(p_{b})$. As a result, a method
$\mathcal{Y}$ creating universal $k$-a... | semantically equivalent. Then, as long as $\mathcal{Y}(p_{a})$ and
$\mathcal{Y}(p_{b})$ are not identical, there always exists an | the method $\mathcal{Y}$ is forced to normalize the programs to the same
representation, such that we have $\mathcal{Y}(p_{a})=\mathcal{Y}(p_{b})$... | D
Existing methods in modeling vision and language signals can be roughly categorized as one-stream [21, 22, 23] and two-stream methods [2, 3, 1]. The recent contrastive learning based two-stream methods like CLIP [2] have garnered significant attention. With CLIP, strong zero-shot and few-shot performance is achieved, ... |
Linear Probe vs Prompt Tuning. It is noteworthy that the simple linear probe (LP-CLIP) exhibits impressive performance on three specialized datasets, even surpassing CoOp, thus highlighting its robustness in resisting domain shifts. Nevertheless, it’s crucial to acknowledge that linear probe utilizes the validation se... |
CoOp proposed by Zhou et al. [5] is the first method that successfully introduces prompt tuning to VLMs. Following this line, several prompt tuning methods were proposed to further improve the effectiveness and versatility for the few-shot recognition task. CoCoOp [44] was proposed to address the class shift problem by in... | For this aim, we proposed the soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features by a pre-trained language model and transforming the task features to neede... | With the rapid development of pre-training techniques, lots of pre-trained models are available, which poses a new challenge – how to fine-tune the models in a parameter-efficient way on new tasks. Traditional fine-tuning methods [24, 25, 26, 27] add task-specific heads and tune all parameters. Although it is simple a...