| context (string, 250–3.79k chars) | A (string, 250–4.99k chars) | B (string, 250–8.2k chars) | C (string, 250–4.17k chars) | D (string, 250–3.69k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
A few parents (16%, N=3) were also surprised to see some financial apps on their teens' phones. For instance, parent P16 found Cash App, a mobile payment service app, on the teen's phone and thought this app was pre-installed on his daughter's new phone that they had purchased recently. |
“I usually don’t monitor what he does, like on the apps, because I kind of trust him to know his boundary and also because he is already 16, I know how I have raised him. So I usually don’t feel the need to have to monitor him what he does on his phones and his apps.” -P5, Mother of T5 (Male, 16 years) |
The rest of the parents (37%, N=7) identified some potentially concerning apps that they already knew their teens had been using. Through their interaction with CO-oPS, they explicitly brought up the app names that could be a concern and discussed how these apps may harm their teens. They mostly were concerned ab... |
In reviewing the apps installed and the permissions granted on one another’s phones, parents were found to be more concerned about their teens’ app usage but teens generally found more concerning privacy permissions on parents’ phones. 74% of the parents (N=14) found apps on their teen’s phones that could cause concer... |
Only 11% (N=2) of the teens identified some of the apps that their parents had been using, but the teens knew that these apps had some privacy issues. For example, T6 expressed his concern about the Facebook app that his mother had on her phone and his mother P6 agreed with him. | D
In this subsection, we attempt to recover the Voronoi cells with the proposed filtration from a sample of points near edges of a planar Voronoi diagram with cells of different sizes and different densities on their edges. This is the same dataset as the one on the left of Figure 1, and it is motivated by the cosmologic... |
In this section, we present certain desirable properties of the proposed filtration and substantiate our claims in the introduction. In Section 4.1, we discuss how the proposed filtration prolongs persistences of homology classes of high-density regions. Then we discuss, in Section 4.2, the proposed filtration’s scale... | Two squares are clearly visible in the scatter plot in the right subplot of Figure 1. However, the blue point corresponding to the smaller square in the persistence diagram of the distance filtration is very close to the diagonal (it is at the tip of the cluster of red diamonds near the origin). On the other hand, for ... | In this subsection, we attempt to recover the Voronoi cells with the proposed filtration from a sample of points near edges of a planar Voronoi diagram with cells of different sizes and different densities on their edges. This is the same dataset as the one on the left of Figure 1, and it is motivated by the cosmologic... | We experiment with Voronoi diagrams in which the cells in the center of the diagram tend to be smaller. A point is sampled by first choosing a random cell and then choosing a uniform point on its boundary. This results in a higher sampling density on boundaries of smaller cells. We further inject additive noise. Furthe... | D |
Recent works have made progress in capturing high-level AU semantic relations in an implicit way (Corneanu, Madadi, and Escalera 2018; Niu et al. 2019) by exploiting correlations between AUs via probabilistic graphic models or in an explicit way (Li et al. 2019; Shao et al. 2020) by constructing an AU semantic graph ac... | This paper focuses on explaining the whys and wherefores of the subject variation problem in AU recognition with the help of causal inference theory and providing a solution for subject-invariant facial action unit recognition by deconfounding the variable $S$ in the causal diagram via causal intervention. Unlike previ... | This is known as the subject variation problem, which makes it challenging for AU recognition models to generalize across subjects.
Although previous works have noticed that subject variation problem exists in facial action unit recognition task, as far as we know, there have been few works focusing on answering the whys a... |
As for the subject variation problem, works such as (Chen et al. 2013) provide a solution for enhancing the generalizability of AU recognition models by training personalized AU classifiers for each subject, and works such as (Zen et al. 2016; Wang and Wang 2018) attempt to relieve the subject-related prediction bias t... | Although these works have realized that the data distribution of training subjects differs from that of unseen subjects, they are still based on the assumption that the data distributions of the source and target domains share some similarities.
In contrast, we formulate the causalities among variables in AU recognition ta... | B |
PPB-Validation Procedure: With the pre-validation information, a node can quickly verify the received block header. First, it compares the local BodyHash with the Txs-Hash... | This part presents experimental results to validate the efficiency of BBP by comparing its basic performance with LBP, BHP, and CBP under the testbed without any malicious nodes. Specifically, we examine block processing time, block traffic load, memory costs, and block propagation delays of these propagation protocols... | In this section, we implement and evaluate our BBP scheme over a test network. For comparison, we also implement three typical block propagation protocols: the Legacy Block Propagation (LBP) and Compact Block Propagation (CBP) of Bitcoin, and the Block-Hash Propagation (BHP) of Ethereum.
| We implemented BBP and conducted experiments over a large-scale blockchain network with many nodes to evaluate its performance. We compare BBP with other block propagation schemes based on the experimental results. The experiment results show that BBP has the least block propagation time. Compared with the current prot... | CBP is the current block propagation protocol adopted in Bitcoin to further reduce the network overload. When a node receives a new block, it validates the block and generates a compact block version. Then it announces this compact block by sending an Inv message to its neighbor nodes. If a neighbor node does not recei... | B |
Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2... | Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate... |
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ... | In the context of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo... | Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2... | B
SSD result from an underlying motor/neurological, structural, or sensory/perceptual
cause, there is no known cause for functional SSD (ASHA, \APACyear\bibnodate) (see Figure 1). The prevalence of SSD varies significantly according to different studies; however, these studies reflect the magnitude of the problem (cite t... | Furthermore, we found that articulation disorder was the most frequent disorder addressed by the included studies, with three studies dedicated to it. This may be due to the fact that articulation disorders are commonly found in persons with other SSD (Flipsen Jr, \APACyear2015). The results show that most studies aim... | a shortage in the workforce (Duval \BOthers., \APACyear2018; V. Robles-Bykbaev \BOthers., \APACyear2017). Furthermore, according to the United Nations Children’s Fund (UNICEF), there are not adequate speech-language therapy services for children with communication disorders and disabilities (Lansdown \BOthers., \APACyear20... |
The effectiveness of AI-based automated speech therapy tools depends on their performance compared to the conventional mode of speech therapy provided by SLPs. Moreover, automated speech therapy tools providing wrong feedback can be disastrous to children’s speech improvement. Few studies (4 out of 24) compared the re... | SSD. Personalized speech therapy and practice monitored by SLPs can improve the
acquisition of speech skills (Duval \BOthers., \APACyear2018). However, the accessibility of SLPs is crucial for such intervention. A report suggests that up to 70% of SLPs have waiting lists, which indicates | D
For each of the datasets, we do the following. Let $k$ be the number of communities. We first apply a $(k-1)$-dimensional PCA and then run a standard implementation of K-Means with $k$ centers on the post-PCA data, and record the normalized mutual information (NMI) value of the o... | LOF, KNN-dist, Isolation forest, and ECOD.
We have two settings. In the first case, we remove 5% of the points according to the outlier score, and in the next one, we remove 10% of the points. Once the points are removed, we run PCA+K-Means on the rest of the dataset and obtain the new NM... | Next, we record the NMI improvement in PCA+K-Means due to removing 10% of the points in Figure 5.
As observed from the plot, our method gives the highest NMI improvement in 5 out of 9 datasets and is near-optimal in one more. Of the three other datasets, ECOD has the best performance in simkumar4hard a... | LOF has the best performance in the Zheng4eq and Zheng4uneq datasets. We also add the results for 5% removal in the Appendix, in Section E.3. We also add the improvements in the purity index for the 5% and 10% point removal cases, which is another popular measure of clustering accuracy. As aggregate infor... |
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt... | A |
In addition, we also show again, as in the baseline situations, that the first two levels of visual missingness are innocuous for the SGG tasks. This provides empirical evidence and further insights for deliberately introducing obfuscations for privacy concerns as in [13].
| 2) We propose a model-agnostic dialog framework, SI-Dial, which can be jointly trained with various existing models and endows the AI systems with the interactive communication abilities.
3) We perform extensive experiments and analysis with insufficient visual input in three different levels of data missingness, and d... | As the primary information source for various computer vision tasks, the visual input data play a significant role in most existing works to achieve competitive and promising performance. It is reasonable to expect the performance drop under the task setting with incomplete visual input. To tackle the problem, we propo... | We conduct experiments on the Scene Graph Generation (SGG) task to test the feasibility of the task setting with missing visual input and to demonstrate the effectiveness of our proposed method.
SGG task aims to generate a graphical representation of the scene from given images. |
In this paper, we investigate the SGG task setting with missing input visions, and propose to supplement the missing visual information via interactive natural language dialog using the proposed SI-Dial framework. Extensive experiments on the benchmark dataset with various levels of missingness demonstrate the feasibi... | D |
Since then, the facility location game has become one of the main grounds for approximate mechanism design without money and has attracted numerous follow-up research [20, 21, 1, 14].
In particular, Fotakis and Tzamos [14] show that no deterministic strategyproof mechanism can achieve a bounded approximation ratio when... | The seminal work of Procaccia et al. [23] initiates the study of approximate mechanism design without money for the facility location game.
They study strategyproof mechanisms for one-facility and two-facility games through the lens of approximation ratio of the objective, which provides a way to quantify the fundamen... | Each facility, once located, has an entrance fee determined by its location. The cost of an agent is the sum of the travel fee (distance to the facility) and the entrance fee of the facility. Each agent will use one facility at a minimum cost.
In this paper, we make the assumption that facilities are homogeneous in ... | The recent survey by Chan et al. [8] depicts the state of the art. Here, we mention some of the models: obnoxious facility games where every agent wants to stay away from the facility [11, 13];
heterogeneous facility games where the acceptable set of facilities for each agent could be different [25, 17, 12, 16]; | In all the above models, the cost of an agent is measured by her distance to the closest facility. This cost can be considered as the travel fee. In many real-life scenarios, except for the travel fee, the agent may also need to pay the facility a service or entrance fee, such as tickets for swimming pools and museums.... | C |
$H_k$ is obtained by replacing the middle edge $(v,u)$ of $H$ by a path $P_{k+1}$ that connects $v$ and $u$. Exactly $k$... |
We wonder if there is some algorithmic relation between efficient and perfect edge domination. More specifically, we remark that there are graph classes which admit polynomial time solutions for solving the efficient edge domination problem while being hard for solving the perfect edge domination problem. However, we ... | There is some more bibliography to add to the already vast literature [3, 10, 23, 31, 33, 35] on dominating induced matchings. As we have seen before the papers on perfect edge domination are less frequent. There is a paper [16] where the authors describe ILP formulations for the PED problem, together with some experim... |
We say that $G=(V,E)$ is a neighborhood star-free graph, NSF graph for short, if for every vertex $v\in V$ with degree at least 2, $G[N[v]]$ is not a star. In other words, every ... |
Since connected NSF graphs do not have proper perfect dominating sets, the existence of a DIM is equivalent to asking whether there exists a perfect edge dominating set with at most $m-1$ edges, where $|E|=m$ (the trivial perfect edge dominating set is the set of all edges $E$). | B
Under standard assumptions the problem (5) will always be feasible given the definition of a CLF, and its optimal solution will be unique. Unlike (4), however, it is not immediately obvious whether the controller $\Phi(\cdot)$ defined by (5) will be PWA. We will prove in §4 that this remains the case.... |
We now investigate the stability of the linear uncertain system in (1) with piecewise-affine neural network (PWA-NN) controller $u(x)=\Phi_{\textrm{NN}}(x)$, defined as in (7), trained to approximate some ... | On the other hand, a well-known drawback of MI optimization is poor scalability with increasing problem size. While it has already been observed in [17] that the computation time of the worst-case error $\bar{e}_{\infty}$ is only weakly ... |
As in the case of (4), the optimization-based controller $\Phi(\cdot)$ defined in (5) is stabilizing but may require a prohibitive amount of computation for use in real-time applications with fast dynamics. The practical limitations of characterizing a controller $\Phi(\cdot)$ desig... |
We now characterize a stabilizing control law $\Phi(\cdot)$ from a geometrical perspective. While both of the vertex-based policies $\Phi(\cdot)$ defined in (3) or (4) are known to produce a controller with PWA structure, the structure underlying a selection-based controller $\Phi(\cdot)$... | C
Table 3 shows that we achieve very good reconstruction on previously unseen frames, which also confirms that the physical parameters have been well identified. While ground truth for most of the parameters is hard to acquire, the length of the pendulum and the angle of the inclined plane are quantities that can be ob... | An exception is the work by Song et al., who use the solution of an ODE as regularization of a motion network to create dynamic NeRFs [47]. In contrast to our work, this approach does not enforce the physics to be exact.
While the majority of works on implicit representations focuses on shape, [45] show the generality of... | We model the dynamics of the objects using an ordinary differential equation (ODE) and use implicit neural representations to model the appearance, where the static background and the planar dynamics allow us to model the appearance in 2D. Our objective is to estimate the unknown physical parameters, and the initial co... | All of these approaches require large amounts of data to train the complex encoder and decoder modules. In contrast, our approach does not rely on trainable encoder or decoder structures. Instead it combines neural implicit representations to model the scene appearance with the estimation of the parameters of a known, ... | In this work we presented a solution for identifying the parameters of a physical model from a video while also creating a photorealistic representation of the appearance of the scene objects. To this end, we proposed to combine neural implicit representations and neural ODEs in an analysis-by-synthesis fashion.
Unlike... | D |
The above definition is appropriate since it means that $p(c_k \mid \ket{y_i}) = p(c_k \mid \ket{y_j})$... |
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi... |
As discussed earlier, the QSC framework ensures minimality of quantum communication resources by extracting and compressing the semantic representations of the data, unlike existing semantic-agnostic QCNs. Moreover, to assess the accuracy of the QSC performance within the quantum semantics’ extraction, transmission, r... |
As shown in Fig. 1, the constructed quantum semantic representations in the form of $d_2$-dimensional quantum states must be transmitted through quantum channels to the receiving quantum node. The quantum communication process must preserve the accuracy of ... | In general, there is a tradeoff between minimality and accuracy in the QSC framework as a smaller number of quantum communication resources leads to smaller quantum semantic fidelity. Thus, we formulate the QSC minimality-accuracy tradeoff problem as follows:
| D |
From Definition 1, a fault is said to be diagnosable if it can be detected within a finite number of observable events after its occurrence. In order to detect the fault accurately, one should ensure that the fault can be detected for any (long enough) execution after its occurrence. | Recall that a defensive function is said to be $C$-enforcing if, regardless of what event is generated by a given system, it can manipulate observations and output a sequence that does not reveal the occurrences of secret events.
When the defensive function is unconstrained (i.e., each event $t$ in $E_o$... | Necessity: The existence of a secret state in $G_d$ implies that the occurrence of the secret event is revealed following a sequence of observations, which leads to that state. Furthermore, by the diagnoser construction, if there exists a secret stat... | We are interested in hiding from an external observer (a curious eavesdropper) confidential information of the system that is represented as the occurrences of events from $E_S$, which is called the secret event set.
Accordingly, the privacy of the s... | When considering a secret event, once it gets revealed under some execution after its occurrence, we say that the privacy of the system has been compromised (since there is a possibility for the secrecy of the event to be compromised).
Thus, there is clearly an inverse relationship between event diagnosis and event con... | D |
$T^{A}_{l}, T^{C}_{l}$ | In the policy execution phase, the agents independently apply the individual policies without any need to communicate with the central entity or consider other agents’ actions. An agent simply infers its optimal action from its local observation. Therefore, the computational complexity of the proposed MARL-based approa... | A basic training cycle consists of the following steps, as shown in Fig. 5(a). The agents collect experience data by performing network operations under the existing policies, and send these data together with some local policy information to the central entity. The central entity puts the received data in the replay b... | The message exchanges are unavoidably affected by non-negligible latencies, which can deteriorate the training process in the following two ways. First, the latencies can slow down the learning process. Second, different IAB-nodes can experience distinct latencies due to their different distances from the IAB-donor and... |
Cooperation is fundamental to the effective learning of the agents formulated above. Simply applying independent SARL algorithms to train individual agents interprets the other agents’ decisions as part of the environment, which would be, in turn, non-stationary as the other agents’ policies constantly change as well ... | D |
In particular, if they disregard the researcher’s profit motive, the regulator risks being exploitable.
That is, if it is profitable for a researcher to bluff—for a pharmaceutical company to submit an ineffective drug, a scientist to study a treatment with no effect, an engineer to submit an ineffective feature—then th... | We consider as a motivating example the Food and Drug Administration (FDA), which regulates trillions of dollars of activity in the United States by controlling approval for medical treatments based on clinical trials. Here, commercial ventures first carry out background research to develop a candidate treatment, and t... | To grant approval, the principal requires that the agent provide evidence for its product having sufficient quality; e.g., by conducting a randomized controlled trial (RCT) and testing a null hypothesis, $H_0:\theta\leq\theta_0$... | We report on the expected value of a placebo drug under the three protocols above in Table 1. We find that for typical drugs with $1-10B profit if approved, the standard protocol requiring two trials is incentive-aligned. For extremely profitable drugs earning $100B or more, the protocol ceases to be incentive-aligned.... |
In this section, we discuss the functioning of the FDA via the lens of a principal-agent problem, and the deterrence constraints this imposes on the regulator. We will analyze the current FDA protocol to see if the expected profit of a placebo drug is positive (i.e., not incentive-aligned), negative (i.e., incentive-a... | A |
$\boldsymbol{\theta}^{*}=\operatorname{argmin}_{\boldsymbol{\theta}}\, f\left(I(\boldsymbol{W}_{\boldsymbol{\theta}}(\boldsymbol{x})),J(\boldsymbol{x})\right).$... | We note that to calculate the update of the registration $\boldsymbol{\Delta\theta}$ of Equation (2), the only operation that requires the joint availability of information from both parties is the term
$R=\sum_{\boldsymbol{x}} S(\boldsymbol{x})\cdot J(\boldsymbol{x})$... | The loss $f$ can be any similarity measure, e.g., the Sum of Squared Differences (SSD), the negative Mutual Information (MI), or normalized cross correlation (CC).
Equation (1) can be typically optimized through gradient-based methods, where the parameters $\boldsymbol{\theta}$ are iteratively updated... | In a privacy-preserving scenario, to calculate the update of the registration $\boldsymbol{\Delta\theta}$ of Equation (7), two operations require the joint availability of information from both parties, which are the matrix
$P$ of Equation (8) and the matrix $P'$... |
is the second order term obtained from Equation (3) through linearization [35, 8]. The solution to this problem requires the joint availability of both images $I$ and $J$, as well as of the gradients of $I$ and of $\boldsymbol{W}_{\boldsymbol{\theta}}$... | B
The cloud server hosts a teacher model whose internal structure and composition, connections between layers, model parameters, and gradients used for back-propagation are all invisible and unavailable to edge devices, as shown in Fig. 1.
Due to resource limitations, the edge device can only host a lightweight student m... | (b) In some cases, for query samples, these APIs only provide indexes or semantic tags for the category with the highest probability (i.e., hard responses), rather than probability vectors for all possible classes (i.e., soft responses).
(c) Because users refuse to send sensitive data to cloud servers, the distribution... | Figure 2:
The overall framework of MEKD. Lower left: two architectures of GAN-based KD. Upper right: the process of deprivatization. A GAN is used to synthesize high-response images to the teacher model within the distribution of data in edge devices. Lower right: the process of distillation with the frozen generator. Th... | The synthetic privacy-free images are simultaneously sent to the cloud server for inference responses, which can be soft (probability vectors for all possible classes) or hard (indexes or semantic tags for the category with the highest probability).
We expect the synthetic images to match the high responses of the teac... | Figure 1:
Schematic process of cloud-to-edge model compression. A cumbersome black-box model is deployed on a cloud server, trained with millions of samples and tags. The cloud server only provides APIs to receive query data and return inference responses of either soft or hard type. The edge device needs to distill a... | A |
where $g_i(x)$ are either Chebyshev polynomials or complex exponentials. So, the neural network maps a finite number of input coefficients $\{d_i\}$ to... | Figure 1: (a) the output of the neural network $N(x)$ computed on coarse and fine grids. On each subgrid, loss and gradients are zero, so the network provides the best (alas, pathological) approximation to $f(x)=2x$ on the interval $[-1,1]$... |
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. The dec... |
First, high-resolution data is used as an input during the second stage. This is unusual, because in classical works on super-resolution several low-resolution images are combined to improve resolution [TM11] without the direct use of high-resolution data. Second, the presented procedure implies that resolution increas... | It is important to understand that Equation 3 implies that SNO only realizes a mapping between two functions given by truncated series. If one needs to compute these functions on a finer grid, Chebyshev or trigonometric interpolation should be used. In the same vein, Chebyshev/trigonometric smoothing (the chopping of the ser... | D
To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. In Figure 1 (c), our method distills the teacher’s features at each spatial location into all components of the student features through a parametric correlation, i.e., the distillation loss is a weighted summation of all stude... | In this section, we first briefly describe the fundamental elements of feature map knowledge distillation and then introduce the general formulation of our knowledge distillation via a target-aware transformer. As our method computes the point-wise correlation of the given feature maps, the computational complexity bec... | Therefore, we propose a hierarchical distillation approach to address this large feature map limitation.
It contains two steps: 1) patch-group distillation that splits the entire feature maps into smaller patches, so as to distill local information from the teacher to the student; 2) we further summarize the local patches...
1) instead of computing correlation of all spatial locations, we split the feature maps into several groups of patche... | D |
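The one-to-all weighted summation described above can be sketched in a few lines. This is a minimal NumPy illustration (the function name and the assumption that the weights `W` are precomputed are ours; in the actual method the correlation weights come from a learned, target-aware transformer):

```python
import numpy as np

def one_to_all_distill_loss(f_t, f_s, W):
    """Toy one-to-all spatial matching loss: every teacher location i is
    compared against every student location j, and the squared errors are
    combined with correlation weights W[i, j].

    f_t: teacher features, shape (N, C) -- N spatial locations, C channels
    f_s: student features, shape (N, C)
    W:   (N, N) correlation weights (assumed given here)
    """
    loss = 0.0
    for i in range(f_t.shape[0]):
        for j in range(f_s.shape[0]):
            loss += W[i, j] * np.sum((f_t[i] - f_s[j]) ** 2)
    return loss
```

With `W` set to the identity this degenerates to the plain one-to-one spatial matching of Figure 1 (a)/(b); a dense `W` realizes the one-to-all matching.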
The auto-mpg dataset from the UCI machine learning repository [10] provides data for 398 cars, each with the following 8 attributes: mpg, cylinders, displacement, horsepower, weight, acceleration, model year, origin. We apply the CLIQUE subspace clustering algorithm to the data. We select two “interesting" subspaces,... | We apply the ENS-t-SNE algorithm for the two subspaces
using perplexity value 30. The corresponding 3D embedding by ENS-t-SNE is demonstrated in Figure 8. In order to show the clusters in the obtained embedding, we use colors (red, blue, and orange) and shapes (diamond, triangle, square, and crosses). | We use separate visual channels to encode the different types of clusters. Specifically, to show the original clusters for the first perspective, we use colors (blue and orange), for the second perspective, we use the shape (circles and squares), and for the third perspective, we use texture, filled and not filled; see... | In the ENS-t-SNE embedding, each point belongs to two clusters; one for its species and one for its sex. In an interactive environment, one can follow a datapoint from one projection to the other. In other words, there is a transition between the two views in three dimensions that is missing when using small multiples.... | and the number of clusters per perspective is $NC_{1}=2$, $NC_{2}=3$ and $NC_{3}=4$ ... | A
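Methods like ENS-t-SNE consume one pairwise-distance matrix per attribute subspace ("perspective"). A minimal sketch of that preprocessing step, with hypothetical column-index lists standing in for the selected subspaces:

```python
import numpy as np

def subspace_distance_matrices(X, subspaces):
    """Compute one pairwise Euclidean distance matrix per attribute subspace.

    X: (n_samples, n_attrs) data matrix.
    subspaces: list of column-index lists, one per perspective.
    """
    mats = []
    for cols in subspaces:
        S = X[:, cols]
        diff = S[:, None, :] - S[None, :, :]   # pairwise differences
        mats.append(np.sqrt((diff ** 2).sum(-1)))
    return mats
```

Each returned matrix can then be fed to the per-perspective embedding objective; the subspace selection itself (e.g. via CLIQUE) is done beforehand.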
In contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain significantly more challenging. Due to the lack of the Markov property, the low-dimensional feature of the observation at each step is insufficient for the prediction and control of the future (Sondik, 1971;... | We analyze the sample efficiency of ETC under the future and past sufficiency assumptions. In particular, such assumptions ensure that the future and past observations are sufficient for identifying the belief state, which captures the information-theoretic difficulty of POMDPs. We prove that ETC attains an $O(1/\epsilon^{2})$... | To this end, we identify a class of POMDPs with a low-rank structure on the state transition kernel (but not on the observation emission kernel), which allows prediction and control in a sample-efficient manner. More specifically, the transition admits a low-rank factorization into two unknown features, whose dimension... | To learn a sufficient embedding for control, we utilize the low-rank transition of POMDPs. Our idea is motivated by the previous analysis of low-rank MDPs (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021). In particular, the state transition of a low... | In this paper, we propose Embed to Control (ETC) as a unified framework for embedding and control in POMDPs. In particular, by exploiting the low-rank transition and the future sufficiency condition, we decompose the embedding learning into the learning of Bellman operators across multiple steps. By assembling the Bell... | B
Moreover, both the observations and rewards depend on the state $S_{h}$, and such dependency is depicted in red.
We would like to highlight that $S_{h}$ affects both $A_{h}$... | We propose a novel policy optimization algorithm which leverages proximal causal inference for handling the confounding bias, and adopts pessimism to tackle the distributional shift.
The core of our algorithm is a coupled sequence of confidence regions constructed via proximal causal inference and minimax estimation, w... | More concretely, P3O involves two components — policy evaluation via minimax estimation and policy optimization via pessimism.
Specifically, to tackle the distributional shift, P3O returns the policy that maximizes pessimistic estimates of the values obtained by policy evaluation. | Specifically, based on the confounded dataset, we first construct a novel confidence region $\textnormal{CR}^{\pi}(\xi)$ for $\mathbf{b}^{\pi}$ based on leve... | Furthermore, in order to handle the distributional shift between behavior policy and target policies, we construct a sequence of confidence regions for $\{b_{h}^{\pi}\}_{h=1}^{H}$... | B
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal2006Numerical). This paper particularly considers solving constrained stochastic optimization problems vi... |
The asymptotics of second-order Newton’s methods for unconstrained problems have recently been investigated. Bercu2020Efficient designed an online Newton’s method for logistic regression, and Boyer2023asymptotic generalized that method to general regression problems. Compared to first-order methods that often consider... | In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi... | On the other hand, a growing body of literature leverages optimization procedures to facilitate online inference, starting with Robbins1951stochastic; Kiefer1952Stochastic and continuing through Robbins1971convergence; Fabian1973Asymptotically; Ermoliev1983Stochastic. To study the asymptotic distribution of stochastic ... | Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me... | C |
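The fully online, one-sample-per-step, second-order flavour described above can be illustrated with recursive least squares for linear regression (our choice of illustration, not the StoSQP method itself; the function name is ours). The inverse Gram matrix is maintained via a Sherman–Morrison rank-one update, so each step touches only a single sample:

```python
import numpy as np

def recursive_least_squares(stream, dim, lam=1.0):
    """Online second-order estimation sketch: recursive least squares.

    stream: iterable of (x, y) pairs, processed one sample per step.
    dim: dimension of the regression coefficients.
    lam: ridge-style prior precision used to initialize P = I / lam.
    Maintains P ~ (X^T X + lam I)^{-1} via Sherman-Morrison.
    """
    theta = np.zeros(dim)
    P = np.eye(dim) / lam
    for x, y in stream:
        Px = P @ x
        k = Px / (1.0 + x @ Px)            # gain vector
        theta = theta + k * (y - x @ theta)  # innovation update
        P = P - np.outer(k, Px)              # rank-one downdate of P
    return theta
```

Each iteration costs O(dim^2), with no batch ever stored, which is the "fully online" regime the text contrasts with offline batch methods.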
We are not aware of any valid proof of the discrete LBB condition for the Taylor-Hood elements for $d=3$ under Condition (4). However, the proof of the discrete LBB condition in [14] extends to $d=3$ under the stronger condition $F_{K}\in P_{1}(\hat{K})^{3}$... | The discrete LBB condition could also be shown for the isogeometric generalized Taylor-Hood family, see [6], [7].
The proof there relies on a continuously differentiable parametrization of the domain $\Omega$ on each of a fixed number of patches, which does not cover general quadrilateral/hexahedral meshes. |
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family $P_{k}$–$P_{k-1}$ on triangular/tetrahedral meshes with cont... | Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family $Q_{k}$–$Q_{k-1}$ on quadrilateral/hexahedral meshes focused on the case... | In this paper we focus on the related generalized Taylor-Hood family $Q_{k}$–$Q_{k-1}$ on quadrilateral/hexahedral meshes under the following assumptions.
| A |
Table 1 shows that WaveMix is the current SOTA for the Cityscapes dataset in terms of single-scale inference mIoU among models pre-trained using only the ImageNet-1k dataset. Higher mIoU values reported by other models [1] correspond to multi-scale inference. Performance of WaveMix on the Cityscapes validation set along with the reported res... |
The versatility of WaveMix is such that it can be directly used for semantic segmentation by replacing the output layer with two transposed convolution layers and a per-pixel softmax layer to generate the segmentation maps. On the other hand, architectural changes – such as encoder-decoder and skip connections [72] – ... | For semantic segmentation, we used the Cityscapes [54] dataset (under MIT Licenses). The official training dataset itself was split into training and validation sets. Results of the other models compared were directly taken from their original papers as cited in Table 2. Since ConvMixer [17] was never used for semantic... | The lower mIoU (75.78) obtained by replacing the classification head of ConvMixer [17] with segmentation head (similar to WaveMix) shows that other token-mixing architectures, which work well for classification, may not be able to translate that performance to segmentation without significant architectural modification... |
As shown in Figure 1 (a) and (b), the macro-level idea behind the proposed framework is to stack $N$ (a hyperparameter) similar WaveMix blocks that are fully convolutional (by design) in both spatial dimensions and maintain the spatial resolution of the feature maps across the blocks. While some CNNs are full... | A
ii) Let $R_{1}$ denote the rows of $B''$ and $C_{1}$ its columns. | returns $B''$ restricted to columns not in $C_{[\mathrm{K}]}$ and rows not in
$R_{[\mathrm{K}]}$... |
ii) Let $R_{1}$ denote the rows of $B''$ and $C_{1}$ its columns. | The matrix $A_{2}$ formed of rows $R_{2}$ not in $R_{1}$ and columns
$C_{2}$ not... | iii) The matrix $A_{2}$ formed of rows $R_{2}$ not in $R_{0}$ and
columns $C_{2}$... | C
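The row/column restriction step described above is mechanical; a minimal NumPy sketch (function name ours), mirroring "restrict $B''$ to rows not in $R_{[\mathrm{K}]}$ and columns not in $C_{[\mathrm{K}]}$":

```python
import numpy as np

def restrict(B, drop_rows, drop_cols):
    """Return B restricted to the rows not in drop_rows and the columns
    not in drop_cols, preserving the original ordering of the survivors."""
    keep_r = [i for i in range(B.shape[0]) if i not in set(drop_rows)]
    keep_c = [j for j in range(B.shape[1]) if j not in set(drop_cols)]
    return B[np.ix_(keep_r, keep_c)]
```

`np.ix_` builds the open mesh so that the kept rows and columns are selected jointly, which is exactly the submatrix operation the construction of $A_2$ relies on.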
While transformers vaswani2017attention have seen widespread success in many areas of language, it was not until recently that they demonstrated mathematical rabe2020mathematical and logical clark2020transformers capabilities, since then redefining state-of-the-art benchmarks in formula retrieval peng2021mathbert and solving m... |
Separate approaches for learning mathematics and language representations may lead to improved performance, as they have in other tasks. Research in neuroscience butterworth2002mathematics; amalric2016origins suggests the brain handles mathematics separately from natural language: approaches in premise selection ferreira20... |
We have described the path to the state-of-the-art for five representative areas considering the relationship between natural and mathematical language, either through necessity of the task or efficacy of approach. We describe the details, limitations and successes within each area and find that informal methods strug... | There is a clear evolution in mathematical text processing overall, from roots in explicit discourse representation zinn2003computational; cramer2009naproche to the present day, where graph-based and transformer-based models produce leading metrics in a few related tasks peng2021mathbert; ferreira2021star; liang2021mwp... |
Identifier-Definition Extraction. Leading work in premise selection ferreira2020premise; ferreira2021star and informal theorem proving welleck2021towards has explicitly highlighted the need for improved pairing of variables with descriptions. The varied tasks related to identifier-definition extraction lack communally... | C |
As a side effect, by modeling these networks together, provided that the networks have common connectivity patterns, we can use the information of certain networks to recover noisy information from other networks by improving the prediction of missing links (Clauset et al., 2008). Hence colSBM... |
Since the SBM is a very flexible model, it has already been adapted to multilayer networks. To name a few, Matias and Miele (2017) model a collection of networks along a time gradient; the connectivity structure varies from time to time, but they integrate a sparsity parameter, which is similar to our density paramete... | The first one is to allow the distribution of the block memberships to vary between networks and even to allow some networks to not populate certain blocks. This makes it possible to model a collection of networks where the structure of certain networks is encompassed in the structure of other networks.
The second relaxation all... |
The interest of our colSBM model is twofold. The first one is to find a common connectivity pattern which explains the structure of the different networks in the collection and to assess via model selection whether these structures are a rea... |
Most contributions about collections of networks rely on some node correspondence between the networks. Recently, motivated by the analysis of fMRI data, a few works extend the SBM to model populations of networks (Paul and Chen, 2018; Pavlović et al., 2020). Le et al. (2018) make the assumption that the networks of ... | A
Vanilla CNN performs relatively better in translation learning, because the position of $X_{t}$ (the original images in this case) is always in the center and independent of $U$.
However, while being able to estimate rotation angles accurately... | In this work, we conduct learning on four types of $f_{T}$, including the individual learning of rotation, scaling and translation, and the joint learning of all the above three.
For individual learning, only one of the three transformations is applied... | This is based on the observation of the learning curves of the three mechanisms (in Fig. 12 in Appendix A).
Fast learning on translation and scaling and a slow one on rotation can be noticed for all models, which indicates that CNN models have greater difficulty learning the mechanism of rotation. | The performance of FactorNets for 2D transformations learning is reported in Fig. 5 and Fig. 9 in Appendix A.
For the learning of individual mechanisms, it can be observed that most of the absolute percentage errors (APE) (e.g. the third quartile in the distributions) can be controlled to below 20% in ... | This can be attributed to the strong o.o.d. systematicity of knowledge learned with FactorNets, given that the data in the training and test sets are completely different in semantics.
More results on the performance of FactorNets for 2D transformation learning are shown in Appendix A. | B |
We further evaluate puNCE in the binary semi-supervised setting. Training data contains samples from both the classes and a set of unlabeled samples. In particular, we perform experiments when only 1%, 5% and 10% of the data is available (Figure 5.2).
It is important to note that, unlike PU Learning settings, here we p... |
Table 1: Linear Evaluation of different contrastive losses - infoNCE (Chen et al., 2020b), DCL (Chuang et al., 2020), SCL (Khosla et al., 2020) under the Positive Unlabeled setting. puNCE is particularly effective when available supervision is limited. As we increase supervision, SCL starts to improve and becomes com... | Table 4: Linear Evaluation of different contrastive losses under the Semi-Supervised (PNU) setting - with 1%, 5% and 10% labeled training data. puNCE proves to be superior to infoNCE (Chen et al., 2020b) and semi-supervised SCL (Assran et al., 2020), especially in the low-supervision regime.
|
Table 4: Linear Evaluation of different contrastive losses under the Semi-Supervised (PNU) setting - with 1%, 5% and 10% labeled training data. puNCE proves to be superior to infoNCE (Chen et al., 2020b) and semi-supervised SCL (Assran et al., 2020), especially in the low-supervision regime. | Table 4: Linear Evaluation of different contrastive losses under the Semi-Supervised (PNU) setting - with 1%, 5% and 10% labeled training data. puNCE proves to be superior to infoNCE (Chen et al., 2020b) and semi-supervised SCL (Assran et al., 2020), especially in the low-supervision regime.
| B |
$\bm{\mathcal{G}}\times_{1}\bm{U}\times_{2}\bm{V}=\bm{\mathcal{G}}^{\prime}\times_{1}\bm{U}\times_{2}\bm{V}\times_{3}\bm{Y}.$ |
In this section we use the cross-validation tools discussed in Section 5, the layer dependence tests developed in Section 4.1, and the $\bm{Y}$ interpretability heuristics from Section 4.2 to use the NNTuck in application. In Section 6.1 we generate a synthetic network example to exhibit the interpretab... | Figure 2: If one or more of the frontal slices of the core tensor are linear combinations of another, there is a deflation of the core tensor. In this example, we show how a layer independent NNTuck (left) can be equivalently written as a layer dependent NNTuck (right). This figure shows how layer dependence is stored ... |
When the core tensor can be deflated, the factor matrix $\bm{Y}$ in the NNTuck captures the interdependence between layers. We examine deflation and the $\bm{Y}$ factor matrix through the three example model instances below, respectively depicted in Figures 2, 3, and 4. | If the layer dependence test determines that an empirical multilayer network has dependent layers, it is useful to investigate how they are related. In the examples in Section 3.2 above, the frontal slices of the deflated core tensor correspond exactly to the affinity matrix of one or more of the layers. As an example,... | C
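The deflation identity $\bm{\mathcal{G}}\times_{1}\bm{U}\times_{2}\bm{V}=\bm{\mathcal{G}}'\times_{1}\bm{U}\times_{2}\bm{V}\times_{3}\bm{Y}$ can be checked numerically with a small mode-$n$ product helper (a generic sketch; all variable names and the toy dimensions are ours):

```python
import numpy as np

def mode_product(G, M, mode):
    """n-mode product of a 3-way tensor G with a matrix M along `mode` (0, 1, 2)."""
    spec = {0: 'abc,ia->ibc', 1: 'abc,jb->ajc', 2: 'abc,kc->abk'}[mode]
    return np.einsum(spec, G, M)
```

If the second frontal slice of the core is twice the first, the core deflates to a single slice $S$ and the dependence is absorbed into $\bm{Y}=(1,2)^{\top}$, exactly as in the layer-dependence discussion above.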
QAGCN achieved superior performance to existing SOTA methods on PQ-2hop and PQL-2hop with relative improvements of 2.3% and 10.5%, respectively.
Also, on MetaQA 1-hop and MetaQA 2-hop, QAGCN achieved the second-best performance, which is very close to the SOTA methods, with relative drops of only 0.2% and 0.1%. | Baselines We first evaluate the overall effectiveness of QAGCN in the multi-relation QA task.
Given that the main goal of this paper is to propose a simple method that is competitive with existing reasoning-based methods that rely on complex reasoning mechanisms, we mainly choose reasoning-based QA methods as baselines:... | This reflects that it is indeed challenging for the simple reasoning of QAGCN to answer complex 3-hop questions.
However, the performance of QAGCN is better than most reasoning-based methods, e.g., 29.8% and 1.6% higher than the best-performing RL-based method SRN on MetaQA 3-hop and PQ-3hop, respectively. | This demonstrates that, while maintaining the above-demonstrated effectiveness, QAGCN is easier to train than NSM.
The reason is two-fold: First, NSM relies on a teacher network to provide intermediate supervision for the multi-step reasoning of its student network. | However, please note that QAGCN also has a large margin of 5.8% in comparison with the third-best method NSM.
This demonstrates that, on complex questions, the simple single-step reasoning of QAGCN could perform better than the SOTA methods with complex multi-step label propagation. | D |
Building off of an $n$-qubit feature map for $n$-dimensional word vectors, the same QSVM classification process was followed for densely encoded feature maps (alexander2022quantum). In this case, vector representations were encoded into fewer qubits in the feature map circuit, using $\log_{2}(n)$... | and results accuracy can be considered in the light of what problems users are trying to solve. For example, the work by Alexander and Widdows alexander2022quantum investigates solely the effects of decreasing space in the QSVM using a densely encoded feature map. Improved accuracy from 90% to 100% in fewer qubits on ... | The percentage of samples correctly classified peaked when using 4 qubits, where average accuracy was 57% with the ZZFeatureMap and 62% using the densely encoded feature map. The QSVM experiments were on par with classical SVM on average, and classified some sample batches with perfect accuracy. This, in contrast with ... | When running the densely encoded version of the word embeddings classifier from Section 3.3, 100% accuracy was achieved using 16 embedding dimensions and only 4 qubits (alexander2022quantum). This model achieves perfect accuracy for the lambeq set and in the fewest qubits of all the methods covered.
| As observed in the results below, the classical embeddings preprocessing step from Section 3.2 enabled
the quantum circuit to achieve relatively high accuracy with fewer qubits. Further improvements to space using densely encoded feature maps achieved similar results. | D |
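The qubit saving from dense encoding is simple arithmetic: a $\log_{2}(n)$-qubit register has $n$ amplitudes, so a 16-dimensional embedding fits in 4 qubits, matching the figures quoted above. A one-line sketch (function name ours):

```python
import math

def qubits_for_dense_encoding(dim):
    """Dense (amplitude-style) encoding packs a dim-dimensional vector into
    ceil(log2(dim)) qubits, versus one qubit per dimension for feature maps
    such as the ZZFeatureMap."""
    return max(1, math.ceil(math.log2(dim)))
```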
For example, Figure 3(a) has the same $h_{\text{norm}}$ as 3(b) and 3(c). The absence of edge density in the homophilic metric brings undesirable results in restructuring, as the measurement always prefers a disconnected graph. Moreover, although $h_{\text{norm}}$... |
Figure 3: Examples of graphs with different label-topology relationships and comparison of different homophily measures. The node colour represents the node labels. The red edges connect nodes of different labels, while the green edges connect nodes of the same labels. Figures 3(a) - 3(c) show homophilic graphs of dif... | Propositions 1 and 2 define the limits of homophily and heterophily given a set of nodes and their labels. Propositions 3 and 4 define neutral graphs which are neither homophilic nor heterophilic. Proposition 3 states that a uniformly random graph, which has no label preference on edges, is neutral thus ... |
We run GCN and SGC on the synthetic dataset with homophily controlled over the range from 0 to 1. The model performance against homophily is plotted in Figure 4. As expected, a higher homophily level corresponds to better performance for both GCN and SGC. All models reach 100% accuracy where homophily is la... |
As homophily and performance are correlated, in the restructuring process the number of edges is chosen based on the homophily level on the validation set. As shown in Figure 5, we chose 48000 edges for Chameleon and 26000 edges for Squirrel, each corresponding to the first peak of homophily on... | B
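For reference, the basic edge-homophily ratio that the density-aware variants discussed above build on can be computed directly (our illustrative function; the adjusted $h_{\text{norm}}$-style measures additionally account for edge density, which this plain ratio does not):

```python
def edge_homophily(edges, labels):
    """Plain edge homophily: the fraction of edges whose two endpoints
    share a label. edges: iterable of (u, v) pairs; labels: node labels
    indexable by node id."""
    edges = list(edges)
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```

On a toy graph with two classes and half of the edges crossing classes, this returns 0.5, i.e. a neutral graph in the sense of the propositions above.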
Subpopulations and learners react to each other; updates in subpopulation allocations lead to updates in learners' parameters $\Theta^{t}=(\theta^{t}_{1},\dots,\theta^{t}_{m})$... | The second observation is that the basis for the updates is the average loss, i.e. risk. This motivates the following definition: given participation $\alpha$ and parameters $\Theta$,
the average risk experienced by each subpopulation $i$ and each | In the recommendation example,
$\bar{\mathcal{R}}^{\mathsf{subpop}}$ captures the dissatisfaction with content for a subpopulation and | We introduce a broad class of update dynamics by way of a canonical example.
Suppose that each subpopulation $i$ updates its allocation by increasing the participation proportional to the quality of various models; for example, by spending more time on recommendation platforms that suggest more engaging conten...
The new platform will then receive more data and improve its performance on young customers, while the old platform... | C |
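One toy instance of such update dynamics is a multiplicative-weights-style step in which participation shifts toward learners with lower experienced risk (the exponential form, the step size `eta`, and the function name are our illustrative choices, not the paper's definition):

```python
import numpy as np

def participation_step(alpha, risks, eta=1.0):
    """One toy allocation update: reweight participation by exp(-eta * risk)
    and renormalize, so subpopulations 'spend more time' where the
    experienced risk is lower."""
    w = np.asarray(alpha, float) * np.exp(-eta * np.asarray(risks, float))
    return w / w.sum()
```

Iterating this step against learners that retrain on their current share of data reproduces the feedback loop sketched above: the more accurate platform gains share, gets more data, and improves further.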
$\|\boldsymbol{\alpha}^{y}-\tilde{\boldsymbol{\alpha}}^{y}\|_{\infty}<\tau,\quad\|\mathbf{H}^{y}-\tilde{\mathbf{H}}^{y}\|_{\infty}<\tau,$ |
where $\|\cdot\|_{\infty}$ is the maximum norm. This condition is trivially incorporated in the box constraints of Eq. (32). Finally, given the LP approximation detailed above, the algorithm for solving Eq. (29) follows the same lines as Alg. 1. |
This problem is equivalent to Eq. (15), but has no maximum operation. However, now we have non-linear constraints, thus violating the definition of an LP. To correct this, we use a local approximation of $\eta$ (around an iterate $\boldsymbol{\alpha}^{(t)}$... | The algorithm for calculating $\texttt{mindisc}_{\beta}$ up to a given tolerance is listed as Alg. 2. Alg. 2 gets as input the label proportions Inputs, the value of $\beta$ that specifies the required minimization inst... | Eq. (32) is an LP problem which is solved at each iteration by an LP solver, for which we use the scipy.optimize library. In addition to the box constraints of $[0,1]$ for all variables, it includes $2\cdot k\cdot|\mathcal{G}|+k$ equality constrai... | A
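The LP pattern above (minimize a slack $\tau$ under box constraints, elementwise $\infty$-norm bounds, and equality constraints) can be reproduced on a toy instance with `scipy.optimize.linprog`. The numbers here are ours; the real Eq. (32) has $2\cdot k\cdot|\mathcal{G}|+k$ equality constraints rather than one:

```python
import numpy as np
from scipy.optimize import linprog

# Variables (x1, x2, tau); minimize tau subject to |x_i - a_i| <= tau,
# x in [0, 1]^2, and the single equality constraint x1 + x2 = 1.
a = np.array([0.2, 0.6])
c = [0.0, 0.0, 1.0]
A_ub = [[1, 0, -1], [0, 1, -1], [-1, 0, -1], [0, -1, -1]]
b_ub = [a[0], a[1], -a[0], -a[1]]
A_eq = [[1, 1, 0]]
b_eq = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1), (0, 1), (0, None)])
```

Since the components of `a` sum to 0.8 and the equality forces a total of 1, the optimal slack splits the missing 0.2 evenly, giving $\tau^{*}=0.1$.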
Often the first step in analyzing unlabelled TS is motif discovery, used to derive hypotheses from the data based on similar, frequent subsequences. However, existing tools for MD show a high variance in the discovered motifs depending on the given input parameters. If these parameters are set incorrectly, this leads to... |
In the previous section, we evaluated the quality of the approximate $k$-Motiflet algorithm compared to four state-of-the-art MD methods. We did, however, not yet consider the exact $k$-Motiflet algorithm, because (a) its runtime is exponential in the size of the motif set and thus probably infeasibl... | We perform extensive quantitative and qualitative evaluation of our new methods on six real-world and 25 semi-synthetic TS and compare them to four state-of-the-art competitors. We show that our approximate algorithm finds larger motif sets given the same distance threshold and motif sets with smaller pairwise distance... | By qualitative and quantitative evaluation on six real-world and 25 semi-synthetic use cases, we showed that the approximate algorithm produces better motifs than all its competitors at lower runtimes, and that its results come very close to the exact algorithm despite an exponentially lower runtime. Future work will c... | Our experimental evaluation is three-fold: Firstly, in Section 6.1 we compare our approximate algorithm against SotA in a quantitative analysis on six real-world sets. We evaluate methods by the similarity and cardinality of motif sets. Secondly, in Section 6.2, we compare our approximate $k$-Motiflet algorithm ... | C
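The exponential cost of the exact variant mentioned above is easy to see in a brute-force sketch: enumerate all size-$k$ sets of windows and keep the one whose maximum pairwise distance ("extent") is smallest. This is our simplified illustration (non-overlapping windows, raw Euclidean distance), not the paper's algorithm:

```python
import numpy as np
from itertools import combinations

def k_motiflet_brute(ts, w, k):
    """Brute-force k-motif-set search: among non-overlapping length-w windows,
    return the k window offsets minimizing the maximum pairwise Euclidean
    distance (the set's extent). Cost grows combinatorially in k."""
    starts = range(0, len(ts) - w + 1, w)
    subs = {s: np.asarray(ts[s:s + w], dtype=float) for s in starts}
    best, best_ext = None, float('inf')
    for combo in combinations(starts, k):
        ext = max(np.linalg.norm(subs[a] - subs[b])
                  for a, b in combinations(combo, 2))
        if ext < best_ext:
            best, best_ext = combo, ext
    return best, best_ext
```

The approximate algorithm's appeal is precisely that it avoids this exhaustive enumeration while, per the evaluation above, coming very close to the exact result.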
Liu et al. [25] studied the decentralized regularized gossip gradient descent algorithm for linear regression models, where the method is applicable for the case that only two nodes exchange information at each instant. In addition, they require that the graphs be strongly connected and the observation vectors and the ... | Historically, Guo [41] first proposed the stochastic persistence of excitation condition for analyzing the centralized Kalman filtering algorithm, which was then refined in [42]. Whereafter, the cooperative information condition on the conditional expectations of the regression matrices over the deterministic connected... | To overcome the difficulties mentioned above, we develop the nonnegative supermartingale inequality of the estimation error, and further establish the sample path spatio-temporal persistence of excitation condition by combining information of the regression matrices, graphs and algorithm gains, under which sufficient c... | At each time step, every node runs an online estimation algorithm consisting of an innovation term processing its own new measurement, a consensus term taking a weighted sum of estimations of its own and its neighbors with additive and multiplicative communication noises and a regularization term preventing over-fittin... | The stochastic persistence of excitation condition was first proposed in the analysis of the centralized Kalman filter algorithm in [41] and then refined in [42]. For the decentralized adaptive filtering algorithms in [32]-[33], the cooperative information condition on the conditional expectations of the regression mat... | B |
If $\lVert\mathbf{r}^{k}\rVert_{2}<\lVert\mathbf{r}^{(z-1)p}\rVert_{2}$... |
Guided by the theoretical bounds, we constructed a heuristic that dynamically adjusts the dimension of the projection subspace at each iteration. As the process approaches convergence, the backward error decreases, which allows for an increase of the inaccuracy of the least-squares calculations while still maintaining... |
The condition in step 2 aims to estimate the amount of inaccuracy that can be tolerated while performing the least-squares calculations at the current AA step. Since the theoretical results suggest that AAR can tolerate more inaccuracy in the least-squares calculations as converging to the solution, the condition in s... | By interpreting the accuracy reduction in AAR calculations as a perturbation to the original least-squares problem in the Anderson mixing computation, we assess how much accuracy can be sacrificed to limit the communication and computational burden without compromising convergence. Along the same lines as
previously pu... | A choice that uses 1 as the heuristic bound for $\sigma_{\min}(\hat{T}_{k})$ has been discussed in the context of GMRES [27], where it is mentioned t... | B
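For context, a minimal Anderson-mixing loop for a fixed point $x=g(x)$ looks as follows (a generic sketch of AA, with our own naming and a tiny regularizer; it is not the AAR variant with the adaptive projection-subspace heuristic described above):

```python
import numpy as np

def anderson(g, x0, m=3, iters=30):
    """Minimal Anderson mixing for x = g(x): combine the last m iterates with
    weights alpha minimizing the norm of the mixed residual, subject to
    sum(alpha) = 1 (solved via regularized normal equations)."""
    X, G = [np.asarray(x0, float)], [g(np.asarray(x0, float))]
    for _ in range(iters):
        R = np.array([Gi - Xi for Gi, Xi in zip(G, X)])   # residual history
        M = R @ R.T + 1e-12 * np.eye(len(R))              # Gram matrix
        alpha = np.linalg.solve(M, np.ones(len(R)))
        alpha = alpha / alpha.sum()                       # enforce sum = 1
        x_new = sum(a * Gi for a, Gi in zip(alpha, G))
        X.append(x_new)
        G.append(g(x_new))
        X, G = X[-m:], G[-m:]                             # keep window of m
    return X[-1]
```

The least-squares solve in the middle is exactly the computation whose accuracy the adaptive heuristic relaxes as the residual norm shrinks.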
In contrast to the embedding-based models, all the methods that use control tokens can directly handle unknown topics. More specifically, for the prepending method we simply prepend the unknown topic to the document while for the tagging method we tag the most representative words for the unknown topic, assuming the ex... | In contrast to the embedding-based models, all the methods that use control tokens can directly handle unknown topics. More specifically, for the prepending method we simply prepend the unknown topic to the document while for the tagging method we tag the most representative words for the unknown topic, assuming the ex... |
In terms of STAS, in both setups prepending leads to much better results compared to token embeddings and tagging, which have similar scores. The best results are again obtained when tagging is combined with prepending. The high STAS score of the combined BARTpre+tag model in the non-oracle (70.09%) setup shows that t... | The experimental results reported in Table 2 show that topic control methods perform significantly better compared to the corresponding baseline methods that do not take into account the topic requested by the user. Furthermore, the proposed BART-based formulation significantly outperforms the topic-oriented PG approac... | Even though the models have not seen the zero-shot topics during training, they can successfully generate topic-oriented summaries for these topics achieving similar results in terms of both ROUGE-1 score and STAS metric, with the BARTpre+tag method outperforming all the other methods. In addition, the results indicate... | D |
Our tiles are useful, for example, for Toffoli+H circuits. Toffoli gates cannot, in general, be executed natively, and are decomposed into sequences of Clifford+T gates. The latter are compatible with the NISQ device or the error-correcting code. The Hadamard gate is comparatively straightforward to execute on almost ... |
Figure 7: The Toffoli gate from a) can be implemented using: b) an AND gate and a controlled-S gate, c) an ancilla initialised in $\ket{0}$, two AND gates and a CNOT, d) an AND, an ancilla, and a CNOT; here the uncomputation is measurement-based. The wire labels denote [c]ontrol, [t]arget, ... Critically, the fact that quantum circuits are often formed by repeating patterns of sub-circuits inspires an opportunity to use this information for speeding up the compilation and the routing of the qubits. For example, this is the case for many arithmetic circuits which were imported from classical computing (e.g. a...
Figure 1: The standard cell for a 3D implementation of a Toffoli gate: a) Green vertices are the control qubits of the Toffoli gate, and the orange vertex is the target. In the Clifford+T decomposition of the Toffoli gate, the orange and green qubits are CNOT controls and the grey qubits are CNOT targets; b) Pink edge... | A |
The param-net is a coarse-to-fine model which uses a series of expansive layers to construct the output image from input image features along with the parameters. It consists of eight blocks where each block consists of a transposed convolutional layer for upsampling followed by three convolutional layers that act as ...
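As a rough illustration of the coarse-to-fine layout (the starting size and the x2 upsampling factor per transposed convolution are assumptions for this sketch; the exact strides and channel widths may differ), each block enlarges the spatial resolution while the following convolutions refine it:

```python
def coarse_to_fine_sizes(start=1, blocks=8, upsample=2):
    """Track the spatial size after each block, assuming every block's
    transposed convolution upsamples by a fixed factor (illustrative only)."""
    sizes, size = [], start
    for _ in range(blocks):
        size *= upsample   # transposed conv enlarges the feature map
        # ... three refining convolutions keep the size unchanged ...
        sizes.append(size)
    return sizes

print(coarse_to_fine_sizes())  # eight doublings: 2, 4, ..., 256
```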
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i...
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence lags behind in performance compared to default-to-p...
The parameters stacked between the layers are reshaped to match the size of the layer they are stacked onto. The reason we pass the parameters in each block is that if we only passed them in the first block, it would be difficult for the later blocks to retain them. This problem is somewhat similar to the degradation prob...
$$\frac{\delta\mathcal{D}}{\delta\bm{x}_{t}}=\frac{\delta\mathcal{A}}{\delta\bm{x}}.$$
The goal of this paper is to combine neural network-based spatial discretization with the framework of the discrete energetic variational approach [50], to develop efficient and robust numerical schemes, termed energetic variational neural network (EVNN) methods. In this paper, we develop structure-preserving EVNN schemes for simulating the $L^{2}$-gradient flows and the generalized diffusions by utilizing a neural network as a tool for spatial discretization, within the framework of the discrete energetic variat... In this subsection, we provide a detailed derivation of the underlying PDEs for $L^{2}$-gradient flows and generalized diffusions by the general framework of EnVarA. We refer interested readers to [24, 74] for a more comprehensive review of the EnVarA. In bo...
This procedure is known as the energetic variational approach (EnVarA) [24, 48, 74], developed based on the celebrated work of Onsager [57, 58] and Rayleigh [63] in non-equilibrium thermodynamics. During the past decades, the framework of EnVarA has been shown to be a powerful tool for developing thermodynamically consiste...
In other words, the fibered barcode of $F$ is the $\mathfrak{F}$-projected barcode of $F$ with $\mathfrak{F}=\{j_{\mathcal{L}_{h}}p_{\mathcal{L}_{h}}\tau_{-c}\,|\,(h,c)\in\Lambda\times\mathbb{V}\}$. In this section, we make use of the structure of $\gamma$-sheaves to construct metrics which are efficiently computable for sublevel sets persistence modules by relying on software dedicated to one-parameter persistence modules and recent advances on optimization of topological functionals [25]. One of these d...
In this section, we elaborate on our study of the pushforward operation and introduce in Definition 5.1 the notion of $\mathfrak{F}$-projected barcodes, associated to a family $\mathfrak{F}$ of subanalytic functions up to infinity from $\mathbb{V}$ to $\mathbb{R}$. We mo... In this section, we elaborate on our study of projected barcodes to introduce a family of pseudo-metrics on categories of sheaves inspired by integral probability metrics [22]. Here, the probability measures are replaced by sheaves and the integration of real-valued functions against the probability measure by the push... In this work, we provide a detailed study of the pushforward operation on sheaves and persistence modules, both from a theoretical and computational perspective. Following the same strategy of reducing the study of multi-parameter persistence modules to the study of families of one-dimensional persistence modules, we i...
Figure 3 illustrates how the F1 score for each variable ordering changes, as we vary the sample sizes for each of the 16 networks. For many networks, we see that variable ordering makes a large difference in learnt accuracy. In the most extreme cases, such as Asia, Formed, Property, and Hailfinder, there is a differen... | Figures 6 and 7 in Appendix A characterise incorrect edges in the learnt graph when optimal and worst variable ordering is used respectively. These figures are based on the same experiments used for Figure 3, and use the edge characterisations defined in Table 2. The number of edges is scaled by the number of edges in ... |
Figure 3 also provides a comprehensive view into how HC’s accuracy varies over a wide range of sample sizes and networks. The broad trend for most networks is that accuracy rises with sample size but then tends to reach a plateau. The height of this plateau varies with the variable ordering and with the network, as doe... Figure 4 presents a series of boxplots showing the distribution of changes to F1 accuracy, where one factor is changed at a time. For example, the leftmost, dark blue plot shows the change in accuracy when the sample size is increased tenfold. To compute this, the F1 from sample size 1,000 is compared with that of samp...
Accordingly, in recent years, several large-scale generative language models, including GPT-3 (175B) (Brown et al., 2020), HyperCLOVA (204B) (Kim et al., 2021a), Gopher (280B) (Rae et al., 2021), Chinchilla (70B) (Hoffmann et al., 2022), Megatron Turing NLG (530B) (Smith et al., 2022), PaLM (540B) (Chowdhery et al., 20... From our observations, we can conclude the following: 1) Reducing the group size ($g$) effectively decreases perplexity, even when employing a simple RTN quantization scheme, at the cost of a marginal increase in latency, 2) Increasing the number of GPUs (and, consequently, parallelism) does not significantly ... Note that due to the limited memory capacity of a single GPU, large LMs may need multiple GPUs, resulting in increased communication latency.
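The round-to-nearest (RTN) quantization with group size g mentioned above can be sketched as follows (pure Python, symmetric per-group scaling; the scaling convention is an illustrative assumption, not necessarily the exact scheme evaluated in the paper):

```python
def rtn_quantize(weights, g, bits):
    """Round-to-nearest quantization with per-group scales: each group of
    g consecutive weights shares one scale derived from its max magnitude."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit symmetric
    deq = []
    for i in range(0, len(weights), g):
        group = weights[i:i + g]
        scale = max(abs(w) for w in group) / qmax or 1.0  # avoid zero scale
        q = [round(w / scale) for w in group]             # integer codes
        deq.extend(x * scale for x in q)                  # dequantized values
    return deq

w = [0.11, -0.52, 0.33, 2.0, -1.0, 0.25]
# a smaller group size tracks local weight ranges better
print(rtn_quantize(w, g=3, bits=4))
```

Smaller groups keep the per-group scale close to the local weight magnitudes, which is consistent with the observation that reducing g decreases perplexity.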
GPUs are commonly adopted to accelerate inference because they contain many arithmetic units and support many concurrent threads, which is critical for speeding up matrix multiplications (Narayanan ... To address such a concern, researchers have proposed to use model parallelism, which distributes computations over multiple GPUs through GPU-to-GPU communication (Shoeybi et al., 2019; Narayanan et al., 2021).
Nevertheless, it is worth noting that model parallelism introduces additional overheads, stemming from the int... | However, for the LLaMA-65B model with FP16 weights, the model size exceeds the memory capacity of a single GPU (80GB for A100), necessitating model parallelism techniques.
Nevertheless, when the weights of the LLaMA-65B model are quantized to 3 or 4 bits, as demonstrated to be a viable solution in (Frantar et al., 2022... | C |
Motivated by the fact that a finite graph can be viewed as a discrete analogue of a Riemann surface, Baker and Norine [3] introduced a discrete analogue of the Riemann-Roch theory for graphs. They adapted the notions of a divisor, linear equivalence, and rank to the combinatorial setting, and studied the analogy betwee... |
From the positive side, Luo [16] introduced the notion of rank-determining sets of metric graphs, and verified the existence of finite rank-determining sets constructively. Hladký, Král, and Norine [13] confirmed a conjecture of Baker [2] relating the ranks of a divisor on a graph and on a tropical curve, and provided... The above reduction of Min-TSS to Dist-Nonhalt uses an auxiliary graph that contains parallel edges. Therefore, the stated complexity results hold for general (i.e. not necessarily simple) graphs. However, in [13, Corollary 22], Hladký, Kráľ, and Norine proved that if one subdivides each edge of a graph with a new vert... Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
Figure 11: Attention distribution between time step and channel. The top row is the weight from the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset. We select sparse and dense attention frames in both temporal-wise ($T=3,6$) and channel-wise ($C=33,77$) ...
As mentioned above, we contend that the frame at the current time step exhibits a significant correlation with its neighboring frames in both the channel and temporal dimensions. This correlation opens up the possibility of employing a mechanism to establish a connection between these two dimensions. Initially, we emp... |
To make the attention mechanism easier to understand, we finally visualize the output of the first TCJA module in TCJA-SNN working with the DVS128 Gesture dataset, which can be seen in Fig. 11. Changes in attention weights are primarily accumulated among channels, further verifying the substantial role played by th... We initially investigate the kernel size in the TCJA module. Intuitively, when the size of the kernel rises, the receptive field of the local attention mechanism will also expand, which may aid in enhancing the performance of TCJA-SNN. However, the experimental results in Fig. 10 overturn this conjecture. As the size o... To thoroughly examine the impact of the TLA and CLA modules, we conducted a series of ablation studies. The results, as presented in Tab. VII, indicate that the CLA module plays a crucial role in enhancing performance. This can be attributed to the fact that, in most SNN designs, the number of simulation time steps is ...
$$\lVert u-u_{h}\rVert_{\Omega}^{2}=a_{h}(u-u_{h},z)+\langle u_{h}\cdot n,\nabla\cdot z\rangle_{\partial\mathcal{T}_{h}}-\langle u_{h}\times n,\nabla\times z\rangle_{\partial\mathcal{T}_{h}}.$$
After learning special parameters, each action candidate is executed at a given block to verify its executability. An action candidate may not be executable due to various reasons: (1) the function is disabled by the owner or admin; (2) internal function calls to other contracts are disabled by the owners or admins of... |
For all verified smart contracts, their ABIs are made public to facilitate users to call functions and engage with the contracts. An ABI typically comprises (public or external) function names, argument names/types, function state mutability, and return types. During the process of selecting action candidates, certain... FlashFind automatically collects storage read/write information during the execution of these functions and infers the Read-After-Write (RAW) dependencies between different action candidates (this RAW dependency information is also employed in FlashSyn’s initial data collection to expand the range of data points). ... FlashSyn does not require prior knowledge of a vulnerable location or contract. Given a set of DeFi lego user interface contracts, action candidates and their special parameters such as strings are given by the users or automatically extracted from transaction history using FlashFind. FlashSyn utilizes these action can... Table 1 lists each action’s token flow, along with the number of data points collected initially (without counterexamples) and the total number of data points for polynomial and interpolation, respectively. The amounts of tokens transferred in/out for each action are calculated based on its contract’s member variables ...
We propose a model-based scheme for meta-RL with a finite sample of training tasks, where we first estimate the prior distribution of tasks, and train a Bayes optimal policy on the estimated prior. Using KDE for density estimation, we obtain state-of-the-art PAC bounds. Further, our approach can exploit low dimensional... | This insight provides a rule-of-thumb of when meta RL approaches based on task inference, such as VariBAD, are expected to work well. Indeed, recent empirical work by Mandi et al. [23] claimed that in benchmarks such as RLBench [15], where tasks are very diverse, simpler meta RL methods based on fine-tuning a policy tr... | For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ... | We propose a model-based scheme for meta-RL with a finite sample of training tasks, where we first estimate the prior distribution of tasks, and train a Bayes optimal policy on the estimated prior. Using KDE for density estimation, we obtain state-of-the-art PAC bounds. Further, our approach can exploit low dimensional... | While significant empirical progress in meta RL has been made, the theoretical understanding of the problem is still limited. A central question, which we focus on in this work, is the probably approximately correct (PAC) analysis of meta RL, namely, how many training tasks are required to guarantee performance that is... | A |
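The first step above, estimating the task prior with kernel density estimation, can be sketched with a Gaussian KDE (the 1-D scalar task parameterization and the bandwidth are assumptions for illustration, not the paper's construction):

```python
import math

def kde_density(x, samples, h=0.5):
    """Gaussian kernel density estimate of the task prior at point x."""
    n = len(samples)
    return sum(
        math.exp(-((x - s) / h) ** 2 / 2) / (h * math.sqrt(2 * math.pi))
        for s in samples
    ) / n

# tasks parameterized by a scalar; training tasks cluster around two modes
train_tasks = [-1.0, -0.9, -1.1, 1.0, 1.05, 0.95]
# the estimated prior is higher near a mode than between the modes
print(kde_density(0.0, train_tasks) < kde_density(1.0, train_tasks))  # True
```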
This study presents two implications for practitioners. First, the induced non-English CHV connects to the existing English CHV thanks to the bilingual word space. Such connections help the induced non-English medical terms conform to the existing medical terminology, such as concept unique identifiers (CUI) in UMLS. T... |
Influence of Similarity Threshold. Fig. 5a and Fig. 5b show the influence of the similarity threshold $\delta$ on the precision, recall, and $\mathrm{F}_{1}$. Both query directions (EN → ZH and ZH → EN) had consistent trends of three ...
Results. Table III reports the MRR performance for each model, in which EN → ZH means that we used English queries to find Chinese synonym candidates (i.e., Chinese translations), and ZH → EN was the reverse query direction. All three models outperformed the random baseline, indicating that a...
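For reference, MRR (mean reciprocal rank) averages the reciprocal rank of the first correct synonym in each ranked candidate list; a minimal sketch (the toy queries and gold sets are ours, purely for illustration):

```python
def mean_reciprocal_rank(ranked_lists, gold):
    """MRR: average over queries of 1/rank of the first correct candidate."""
    total = 0.0
    for query, candidates in ranked_lists.items():
        for rank, cand in enumerate(candidates, start=1):
            if cand in gold[query]:
                total += 1.0 / rank
                break  # only the first hit counts
    return total / len(ranked_lists)

ranked = {"belly": ["abdomen", "stomach"], "rash": ["itch", "eruption"]}
gold = {"belly": {"abdomen"}, "rash": {"eruption"}}
print(mean_reciprocal_rank(ranked, gold))  # (1/1 + 1/2) / 2 = 0.75
```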
There are two limitations to our proposed method. The first one is that the technique we adopt for handling multiword expressions may not be effective for infrequent expressions. We employ phrase identification [27] to capture the frequent term co-occurrences as multiword expressions, such as “loose bowel movements” a... |
Case 3: Non-anchor words that are biased toward a particular language space. As shown in Table I, the size of the corpus differs significantly between languages. Besides, some diseases, drugs, or other medical entities may differ between languages, which causes some areas to be monolingual in the resultant bilingual word space...
ViTs operate by segmenting input images into smaller patches, treating each patch as a token similar to words in NLP. These patches are then embedded (patch-embeddings) and passed to the transformer layers conducting self-attention and feedforward operations. Such a design allows ViTs to capture long-range spatial depe... | The data that support the findings of this study are openly available. The AG’s news corpus dataset was obtained from Ref. AGw, , and CelebA dataset from Ref. cel, in accordance with the Terms of Service of the respective web resources. Data generation details for the molecular dynamics of alanine dipeptide are provid... |
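The patch tokenization step described above can be sketched in pure Python over a nested-list "image" (the patch size, image size, and row-major scan order are illustrative assumptions):

```python
def image_to_patches(img, p):
    """Split an H x W image (nested lists) into flattened p x p patch tokens,
    scanned row-major, as a ViT-style tokenizer would before embedding."""
    h, w = len(img), len(img[0])
    patches = []
    for top in range(0, h, p):
        for left in range(0, w, p):
            patch = [img[top + i][left + j] for i in range(p) for j in range(p)]
            patches.append(patch)
    return patches

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
tokens = image_to_patches(img, 2)
print(len(tokens), tokens[0])  # 4 patches; first is [0, 1, 4, 5]
```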
To establish that TERP indeed takes both the input data and the black-box model into account when generating explanations we subject our protocol to the sanity tests developed by Adebayo et al. Adebayo et al. (2018). We achieve this by taking the fine-tuned ViT model and randomizing the model parameters in a top-to-bo... |
In this work, we employ a ViT pre-trained on the ImageNet-21k dataset from the authors Dosovitskiy et al. (2020); Steiner et al. (2021); Wightman (2019) and then fine-tune the model for predicting human facial attributes by training on the publicly available large-scale CelebFaces Attributes (CelebA)Liu et al. (2018) ... |
Large-scale CelebFaces Attributes (CelebA) Dataset Liu et al. (2018) contains 202,599 celebrity images, each annotated with 40 binary attributes. CelebA offers the dataset in two different formats: (a) actual raw images, (b) processed data with aligned facial images. In this work, we employ...
To maximize coverage and explore deeper paths, the tool leverages control- and data-flow features based on static and dynamic analysis to infer fundamental properties of the application. This enables a much faster generation of interesting inputs than an application-agnostic approach. Wang et al. (Wang et al., 2017) p... FairFuzz (Lemieux and Sen, 2018) is a grey-box fuzzer that utilizes guided-mutation. It uses coverage to achieve the guidance by employing a mutation mask for every pair of seeds and the rare branches to direct the fuzzing to reach each rare branch. SAFL (Wang et al., 2018) is an efficient fuzzer for C/C++ programs. It... Barton Miller (Barton et al., 1990) proposed fuzzing at the University of Wisconsin in the 1990s, and it became a popular software vulnerability detection technique (Sutton et al., 2007). One of the most common fuzzing tools is American fuzzy lop (AFL) (Böhme et al., 2017b; ame, 2021). AFL is a coverage-based fuzze... GTFuzz (Li et al., 2020) is a tool that prioritizes inputs based on extracting syntax tokens that guard the target place. The backward static analysis technique extracts these tokens. Also, GTFuzz benefits from this extraction by improving the mutation algorithm. Smart grey-box fuzzing (SGF) (Pham et al., 2019) is a fuz...
The combination of symbolic execution and BMC with fuzzing has been used recently to combine the strengths of both techniques. For example, VeriFuzz (Chowdhury et al., 2019) is a state-of-the-art tool we have previously compared to FuSeBMC. The authors describe it as a program-aware fuzz tester that combines feedback-... | C |
We have illustrated the application of the JFP method to a variety of FIEs (including FDEs and a fractional PDE reformulated as FIEs) in which exponentially fast convergence to the solution is achieved. The JFP method converges much faster and with a lower overall complexity than the sparse sum space method in [25] for... | We shall construct differentiation matrices for the JFP basis, which will be banded and could be used to solve FDEs and fractional PDEs without having to reformulate them as FIEs, as we have done in this paper. Adding Newton iteration in function space [8], the JFP method would also be applicable to nonlinear FIEs and ... | The following is an outline of the paper: We introduce the basic constituents of the JFP method in Sections 2 and 3 (matrix representations of operators on quasimatrices, high-precision floating-point numbers, the JFP basis, etc.). We then focus on the properties (Section 4) and computation (Section 5) of fractional in... |
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [... We consider our method to be the successor of that of Bhrawy and Zaky [7]. They applied a change of variables to classical Jacobi polynomials such that the algebraic singularities of the resulting basis, the JFP basis (which is called thus for reasons we explain in Section 3), conform to those of the solution.³ The met...
Supplementary Table S3: The interpretation of the performance by the precision measure. The precision gives different performance when choosing different $L_{\text{k}}$. The network G is “Water_Distribution_Network_EXNET”, and the network H is “Freshwater_stream_...
To make sure our results are not affected by different setups, we perform a robustness check by repeating the measurement in the main text using sample1 (Fig. S8 and Fig. S9) and sample2 (Fig. S10 and Fig. S11). The same capability applies to different setups, supporting the universality of the conclusion drawn. |
In the main text, we use the Random Forest classifier to analyze the capability of a topological feature. Here, we also consider Gradient Boosting and AdaBoost classifiers to run similar tests. The sampling method is the same as that of the main text. The results from Figs. S25 and S26 show that our quantitative f...
In the main text, we show the analysis for 20% random removal of links. As a robustness check, we here repeat the analysis for 10% random removal. Similar AUC results are obtained (Fig. S32 is similar to Fig. S2, and Fig. S31 is similar to Fig. S3). Analogously, similar precision results (Fig. S33 is simil...
To further test if a machine learning algorithm practically re-arranges the ranking, we show an example below. The Salton index is used. The distribution of index values in different sets is shown in Fig. S28a. After applying the Random Forest algorithm, the classifier finds the mapping function to transform the index... | C |
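The Salton index used above scores a candidate link (x, y) by the number of common neighbors normalized by the geometric mean of the two degrees; a minimal sketch on an adjacency-set graph (the toy graph is ours):

```python
import math

def salton_index(adj, x, y):
    """Salton similarity: |N(x) ∩ N(y)| / sqrt(k_x * k_y)."""
    common = len(adj[x] & adj[y])
    return common / math.sqrt(len(adj[x]) * len(adj[y]))

adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
print(salton_index(adj, "b", "d"))  # {a, c} shared -> 2 / sqrt(2*2) = 1.0
```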
Though QAS makes optimizing a quantized model possible,
updating the whole model (or even the last several blocks) requires a large amount of memory, which is not affordable for the tinyML setting. We propose to sparsely update the layers and the tensors. | In this paper, we aim to bridge the gap and enable tiny on-device training with algorithm-system co-design. We investigate tiny on-device training and find two unique challenges: (1) the model is quantized on edge devices. A real quantized graph is difficult to optimize due to low-precision tensors and the lack of Batc... | Pruning techniques prove to be quite successful for achieving sparsity and reducing model size [29, 30, 48, 31, 50, 49].
Instead of pruning weights for inference, we "prune" the gradient during backpropagation, and update the model sparsely. Given a tight memory budget, we skip the update of the less important paramete... | We visualize the update schedule of the MCUNet [47] model searched under 100KB extra memory (analytic) in Figure 11 (lower subfigure (b), with 10 classes). It updates the biases of the last 22 layers, and sparsely updates the weights of 6 layers (some are sub-tensor update).
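The sparse-update idea can be sketched as masking the gradient so that only the selected tensors or entries are updated (the mask choice and plain SGD step here are illustrative assumptions, not the paper's contribution-based selection):

```python
def sparse_sgd_step(weights, grads, mask, lr=0.1):
    """Update only the entries whose mask bit is 1; frozen entries keep
    their values and need no optimizer state, saving training memory."""
    return [
        w - lr * g if m else w
        for w, g, m in zip(weights, grads, mask)
    ]

w = [1.0, 2.0, 3.0, 4.0]
g = [0.5, 0.5, 0.5, 0.5]
mask = [0, 0, 1, 1]        # e.g. only the last two tensors are updated
print(sparse_sgd_step(w, g, mask))  # only the masked entries change
```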
The initial 20 layers are frozen and run for... | The memory usage of a computation graph is related to its implementation [6, 44, 47, 46]. We provide two settings for memory measurement: (1) analytic profiling: we count the size of extra tensors required for backward computation, including the saved intermediate activation, binary truncation task, and the updated wei... | B |
$$\lVert(A-BD)^{-1}\rVert_{2}=\lVert(B^{-1}A-D)^{-1}B^{-1}\rVert_{2}\leq\lVert(B^{-1}A-D)^{-1}\rVert_{2}\,\lVert B^{-1}\rVert_{2}.$$

$$\sigma_{\max}(A^{-1}B)\leq\sigma_{\max}(A^{-1})\,\sigma_{\max}(B)=\frac{\sigma_{\max}(B)}{\sigma_{\min}(A)}.$$

Noting that $\sigma_{\max}(A^{-1}B)<1$ is equivalent to $\lVert A^{-1}B\rVert_{2}<1$, let $B$ be nonsingular and $\sigma_{\max}(A^{-1}B)<1$ in (1.1). Then

$$\sigma_{\max}(A^{-1}B)=0.5<1,\qquad\sigma_{\min}(A)=2>\sigma_{\max}(B)=1.5.$$
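The singular-value bound discussed above can be checked numerically on a small diagonal example (the matrices below are our own, chosen so that sigma_min(A) > sigma_max(B); for diagonal matrices the singular values are the absolute diagonal entries):

```python
# For diagonal A and B, sigma_max(A^{-1}B) <= sigma_max(B) / sigma_min(A)
# is easy to verify entrywise.
A = [2.0, 3.0]            # sigma_min(A) = 2
B = [1.5, 1.0]            # sigma_max(B) = 1.5
AinvB = [b / a for a, b in zip(A, B)]

sigma_max = max(abs(v) for v in AinvB)
bound = max(B) / min(A)
print(sigma_max, bound)   # 0.75 0.75: the bound holds, and both are < 1
```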
For reasons of brevity, we shall concentrate on the most interesting result in [DLMN17], namely that a heavy-tailed choice of the mutation strength gives a significant speed-up for all jump functions. We note cursorily that heavy-tailed parameter choices found ample uses subsequently and often overcame in an elegant man... Since our analyses so far suggest that the scramble mutation operator is more natural than the one based on swaps, we shall only regard a heavy-tailed version of the former.
So we proceed by defining a heavy-tailed scramble mutation operator. We say that an integer random variable $X$ follows a power-law distr... Let $\beta>1$. The expected runtime of the $(1+1)$ EA with heavy-tailed scramble mutation with power-law exponent $\beta$ on the PLeadingOnes benchmark is $\Theta(n^{3})$.
|
Finally, we analyze the performance of a heavy-tailed variant of the scramble mutation operator. For bit-string representations, it was observed in [DLMN17] that heavy-tailed mutation operators, and more generally heavy-tailed parameter choices [ABD21], can greatly speed up the runtime of evolutionary algorithms. In p... |
Now we call heavy-tailed scramble mutation (with power-law exponent $\beta$) the mutation operator that first samples a number $k\sim\operatorname{pow}(\beta,n)$, then selects a random subset of $k$ elements from $[1..n]$ ...
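A sketch of this operator (power-law sampling implemented via weighted choice over {1, ..., n}; the normalization of pow(beta, n) is an assumption for illustration):

```python
import random

def sample_power_law(beta, n, rng):
    """Sample k in {1..n} with P(k) proportional to k^(-beta)."""
    weights = [k ** (-beta) for k in range(1, n + 1)]
    return rng.choices(range(1, n + 1), weights=weights)[0]

def heavy_tailed_scramble(perm, beta, rng):
    """Pick k ~ pow(beta, n) positions and scramble (shuffle) their entries."""
    n = len(perm)
    k = sample_power_law(beta, n, rng)
    idx = sorted(rng.sample(range(n), k))   # k random positions
    vals = [perm[i] for i in idx]
    rng.shuffle(vals)                       # scramble the selected entries
    out = perm[:]
    for i, v in zip(idx, vals):
        out[i] = v
    return out

rng = random.Random(0)
child = heavy_tailed_scramble(list(range(10)), beta=1.5, rng=rng)
print(sorted(child) == list(range(10)))  # True: still a permutation
```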
We introduced a novel approach to geometric multilevel optimization. The approach employs information geometry in order to devise all ingredients of the iterative multilevel scheme. Invoking coarse level representations for computing descent directions effectively accelerates convergence. Experiments conducted for a r... Although we consider the specific problem (1.1) in this paper, we believe that our approach generalizes to other constrained convex programs, analogous to the way open convex parameter sets of probability distributions are turned into statistical manifolds [Lau87, AN00].
| The derivation of the approach for boxed-constrained convex programs can be transferred to other convex programs with simply structured feasible sets, analogous to turning parameter spaces of probability distributions into Riemannian manifolds in information geometry. Simplices instead of boxes as feasible sets provide... | Results for non-convex optimization problems using a multilevel approach are not yet available in the Euclidean setting.
Recently, a geometric multilevel optimization approach has been proposed by [SV21] for the specific case of low-rank matrix manifolds. Our approach differs in that we focus on information geometry fo... |
We introduced a novel approach to geometric multilevel optimization. The approach employs information geometry in order to devise all ingredients of the iterative multilevel scheme. Invoking coarse level representations for computing descent directions effectively accelerates convergence. Experiments conducted for a r... | B |
σ(z) = 𝟏(z ≥ 0), which equals 1 if z ≥ 0 and zero otherwise. We slightly abuse the term and say that a network is monotone if every single neuron is a monotone function. Since both ReLU and threshold are monotone, this r...
Such a restriction on the weights can be seen as an inductive bias reflecting prior knowledge that the functions we wish to approximate are monotone. One advantage of having such a “positivity bias” is that it guarantees the monotonicity of the network. Ensuring that a machine learning model approximating a monotone f... |
While it is well known that neural networks with ReLU activation are universal approximators (they can approximate any continuous function on a bounded domain), perhaps surprisingly, the same is not true for monotone networks and monotone functions. Namely, there are monotone functions that cannot be approximated within an... | One aspect we did not consider here is learning neural networks with positive parameters using gradient descent. It would be interesting to examine the efficacy of gradient methods both empirically and theoretically. Such a study could lead to further insights regarding methods that ensure that a neural network approxi...
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was c... | A |
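As a small illustration of the claim in this row that positive weights guarantee monotonicity, the following sketch (our own, hypothetical names; not from the source) implements a single threshold neuron with σ(z) = 𝟏(z ≥ 0) and a brute-force monotonicity check on the Boolean cube.

```python
import itertools

def threshold(z):
    # sigma(z) = 1 if z >= 0, else 0
    return 1.0 if z >= 0 else 0.0

def neuron(x, w, b):
    # A single threshold neuron; nonnegative weights w make it monotone in each input.
    return threshold(sum(wi * xi for wi, xi in zip(w, x)) + b)

def is_monotone_on_cube(f, d):
    # Check f(x) <= f(y) whenever y is obtained from x by flipping a 0 to a 1.
    for x in itertools.product([0, 1], repeat=d):
        for i in range(d):
            if x[i] == 0:
                y = list(x)
                y[i] = 1
                if f(tuple(y)) < f(x):
                    return False
    return True
```

With nonnegative weights, e.g. w = [0.5, 1.0], b = -1.0, the check passes; with a negative weight, e.g. w = [-1.0], it fails, matching the intuition behind the positivity bias.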
A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ... |
In the case of conforming finite element discretizations for the elliptic PDE problem, it is enough to analyze the parametric regularity of the continuous problem. The parametric regularity results are inherited by the conforming FE solution. Below, we briefly recap the main parametric regularity results for the affin... | Thus it is sufficient to constrain our parametric regularity analysis of u(⋅, 𝒚) to the parameters 𝒚 ∈ U for which the parametric PDE problem is well-defined.
This paper tries to bridge this theoretical gap. It is structured as follows. Notations and preliminaries are introduced in Section 2. Section 3 describes randomly shifted lattice rules, and Section 4 gives a brief overview of the analysis of conforming FE methods. DG in the QMC framework is presented in Section 5 an... | A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ...
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most of the information in half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. On the other hand, the ti...
We would like to highlight a couple of points regarding Theorem 1. First, this is the first lower bound result that addresses the local agent adaptivity in the CL models. In particular, it shows that the capacity of each agent to utilize newly observed information within each round does not contribute to reducing the ... |
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most of the information in half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. On the other hand, the ti... | In order for the induction to proceed, we need to make sure that if we publish all heavy arms, the probability of the best arm being published is small, since otherwise the problem would already be solved and the round elimination process cannot continue. This is easy to do with non-adaptive algorithms, because the who... | Finally, we would like to mention that due to technical needs, in each step of our induction we have to “consume” multiple, but still O(1), levels out of the L levels of arms, but this will not change the asymptotic round bound.
| D |
As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a).
It is impor... | Figure 2 provides a comparison of different algorithms on six datasets. It is evident from the results that both SPIRAL and adaSPIRAL exhibit superior convergence performance compared to other algorithms, regardless of whether the datasets are synthetic or practical.
Also, the same speed-up by adaSPIRAL is evident for m... | Remarkably, adaSPIRAL-eucl performs well on cost functions without Lipschitz continuous gradients, as verified by this simulation.
Additionally, adaSPIRAL outperforms SPIRAL due to its ability t... | As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a).
It is impor... |
In this section, we evaluate the proposed algorithm, SPIRAL, for both convex and nonconvex problems, considering cost functions with and without Lipschitz continuous gradients. We examine two versions of SPIRAL: 1) SPIRAL, which follows Algorithm 1, and 2) adaSPIRAL, an adaptive version with additional steps as outlin... | B |
Tensor decompositions are efficient tools for multi-way data processing (analysis). In particular, they can be used to reduce and compress data tensors without destroying their intrinsic multidimensional structure. In the past few decades, several types of tensor decompositions have been intr...
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37], which is a modified and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of 𝐐 ... | In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a... | This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), and image reconstruction [33]. There are several randomized algorithms ([34, 35, 36]) to decompose a tensor into the t-SVD format, but all of them need an estimation ... | As discussed earlier, the randomized algorithms proposed in [34, 35, 36, 50] require an estimation of the tubal rank, which may be a difficult task. To address this limitation, we propose a new randomized fixed-precision or adaptive algorithm which, for a given approximation error bound, can find an optimal tubal rank...
The results of the statistical analysis (Table 4) show that SSL-PCTs and SSL-PCT-FR are statistically significantly better than the SL-PCTs for most of the different amounts of labeled data, considered for both structured output prediction tasks. More specifically, for the HMLC task, usually, at least 200 labeled exam... |
As discussed previously, semi-supervised random forests improve over supervised ones in fewer cases as compared to single trees. A statistically significant improvement over CLUS-RF is observed only for the MLC task with 200 labeled examples and the HMLC task with 350 labeled examples. However, in none of the cases, d... |
The feature-weighted semi-supervised method (SSL-PCT-FR) and the non-feature-weighted one (SSL-PCT) have similar trends in predictive performance. However, on some datasets, there are notable differences. Namely, on Birds and Scene datasets, feature weighting is beneficial for the predictive performance of SSL-PCTs, a... |
The results of the statistical analysis (Table 4) show that SSL-PCTs and SSL-PCT-FR are statistically significantly better than the SL-PCTs for most of the different amounts of labeled data, considered for both structured output prediction tasks. More specifically, for the HMLC task, usually, at least 200 labeled exam... | Considering the feature-weighted and non-feature-weighted semi-supervised methods (both single trees and ensembles), there is no statistically significant difference between them in most cases, except for the HMLC task with 200 labeled examples, where SSL-PCT-FR statistically significantly outperforms SSL-PCT.
| D |
The best drivability is defined by the lowest intervention count and intervention time. In the online test, there is no need to measure the inference speed as we limit the observation sampling to 4 Hz (the same configuration as the data gathering process used for training and validation) to perform a fair evaluation f... |
In the navigational controls estimation task, DeepIPC also has the best performance in line with the waypoints prediction result. The MLP agent can leverage useful features encoded from both RGB and BEV semantic maps. Therefore, the MLP agent can perform as well as the PID agent in estimating steering and throttle. Wi... |
In the waypoints prediction task, DeepIPC has a lower MAE compared to AIM-MT. Thanks to the BEV semantic features, DeepIPC can distinguish free and occupied areas easily from the top-view perspective. Thus, it can properly estimate the waypoints which are also laid in BEV space. Although AIM-MT predicts four waypoints... |
Furthermore, in a comparison of drivability in the evening, DeepIPC and AIM-MT perform worse than Huang et al.’s model. In line with the offline result, the model that mainly takes RGB images failed to perceive the environment in the evening as the provided image is not as clearly visible as when driving at noon. On t... | Based on the experimental results, we disclosed several findings as follows. First, in line with our previous work [1], the BEV semantic feature is proven to improve the model performance in predicting waypoints and navigational controls. With better perception, the model can leverage useful information which result...
In our opinion, the most interesting application of Theorem 1.1 lies in the area of graph algorithms for NP-hard optimization problems. Dallard et al. [14] and Yolov [44] have shown that certain NP-hard optimization problems like Maximum Independent Set, Graph Homomorphism, or Maximum Induced Matching problems can b... | They also obtained an algorithmic metatheorem for the problem of finding a maximum-weight sparse (bounded chromatic number) induced subgraph satisfying an arbitrary but fixed property expressible in counting monadic second-order logic (𝖢𝖬𝖲𝖮₂) ... | minor-matching hypertree-width, escapes this argument because it depends not only on the subgraphs induced by the bags, but also on the neighborhoods of the bags.
In particular, for 𝗍𝗋𝖾𝖾-μ the width of a bag X_t ... | First, one could ask what is the most general width-parameter defined by a min-max formula over the bags of a tree decomposition (see, e.g. [2, 36]) that allows us to solve problems like Maximum Independent Set in polynomial time when bounded?
For parameters where the width of a bag depends only on the induced subgraph... | The independence number of a tree decomposition is the maximum of the independence numbers (that is, the maximum size of an independent set) of the subgraphs induced by its bags.
The tree-independence number of a graph G, denoted by 𝗍𝗋𝖾𝖾-α(G) ...
We further compare the utility of various VFL methods under differential privacy.
Existing VFL frameworks (see Table I) focus on sample-level DP [chen2020vafl; hu2019fdml; hu2019learning; tran2023privacy; cohendifferentially; ranbaduge2022differentially], where neighboring datasets are defined as those differing by...
The ADMM-based linear VFL framework (abbreviated to Linear-ADMM) hu2019learning provides (ϵ,δ)italic-ϵ𝛿(\epsilon,\delta)( italic_ϵ , italic_δ )-DP guarantee for linear mod... | FDML hu2019fdml and Linear-ADMM hu2019learning add noise to local outputs. However, these methods lack exact privacy budget evaluations, providing only empirical utility under different levels of noise. Additionally, ranbaduge2022differentially perturbs local model weights to satisfy DP. However, it requires boundin... | For fair comparisons, we use the same local models for all methods. Under w/ model splitting setting, owing to the strong feature extraction power of local DNN models, we utilize the linear model as server model by default.
Additionally, we evaluate all methods with the non-linear server model, as detailed in Section V... | Additionally, the utility under client-level DP VFL is not directly comparable to sample-level DP in centralized ML abadi2016deep or client-level DP in standard (horizontal) FL mcmahan2018learning due to the unique properties of VFL.
For instances, (1) the dimension of DP-perturbed information in VFL can be smaller (... | B |
We search k from {50, 100, 200} and test the performance of our method on all three datasets in both the transductive and inductive settings. Tables XII, XIII, and XIV report the experimental results in terms of MRR, AP, and AUC, respectively.
Note that k = 0 means that we do not up... | We search k from {50, 100, 200} and test the performance of our method on all three datasets in both the transductive and inductive settings. Tables XII, XIII, and XIV report the experimental results in terms of MRR, AP, and AUC, respectively.
Note that k = 0 means that we do not up... | We observe that when k = 0, the performance of the model drops significantly. This illustrates that simply dropping all neighbor nodes to avoid noise propagation will lead to significant information loss.
When setting k to 50, 100, or 200, the performance of Ada-DyGNN is relatively stable. | In order to intuitively understand our reinforced neighbor selection module, we design a robustness visualization experiment by showing the actions output by the policy network under different levels of noise added to the UCI dataset. As shown in Fig. 4, the variance σ² ... | As shown in Table III, our Ada-DyGNN model consistently outperforms all the static and dynamic approaches in terms of MRR. This illustrates that designing a robust knowledge adaptation mechanism is beneficial to dynamic graph neural networks.
For the transductive setting, our Ada-DyGNN relatively improves the performance... | B |
To improve the robustness and interpretability of CNNs, data uncertainty learning has been gradually used in face recognition [34], person re-identification [35, 36, 37], and other computer vision fields [38, 39, 40, 41, 42] to help the network reject noisy input and avoid false recognition. | The SGD optimizer is used with a momentum of 0.9; the other parameters, including the learning rate, milestones, total iterations, and weight decay for each experiment, can be seen in Table V.
We first train the framework only with the identity-branch, then we load the pre-trained model and train the whole framework with the addition... | The uncertainty-branch is used to generate the variance feature, representing with what uncertainty a feature can represent this sequence, and this branch will be abandoned during inference.
In this branch, the original feature f𝑓fitalic_f outputs from the backbone will first pass through a Head module, which is a lig... | PFE [34] first proposes to map each face image as a Gaussian distribution, regarding the sequence feature as the mean, and adding another branch to learn the confidence for the sequence feature.
The mean of the distribution can be regarded as the most likely feature of the sequence mapped in the latent space, and the v... | Progressive Uncertainty maps each sequence as a Cross-View Gaussian distribution and a Cross-Cloth Gaussian distribution in the feature spaces instead of a point, which can relax the constraints for the model.
The identity-branch accepts features output from the backbone and learns the mean feature, which is used to re... | C |
The dialogue policy module makes a dialogue decision given the current state (Zhang et al., 2019). Early methods are rule-based (Chen et al., 2017). Since handcrafted rules are non-extensible and resource-consuming (Zhao et al., 2021), deep reinforcement learning (DRL) has become a mainstream method for training dialo... | Reinforcement learning (RL) algorithms, specifically Q-learning (Watkins and Dayan, 1992) based algorithms, have become a mainstream method for training the dialogue policy module (Peng et al., 2018; Zhang et al., 2020b). For each step, the policy agent updates its action value (this value is the expected return for ...
The value-based algorithm Q-learning, a common unit of the dialogue policy module, suffers from overestimation bias (Thrun and Schwartz, 1993; Hasselt, 2010). Prior studies addressed the problem in multiple ways, including (1) bias compensation with additive pseudo costs and (2) a variety of estimators. Bias-corrected ... | Overestimation bias is more problematic in the deep Q-learning network (DQN) algorithm (Fan et al., 2020) due to the function approximation errors of DRL. Polishing estimation tricks of a single model and using ensemble models are two mainstream solutions. Double Q-learning is subsequently adapted to a neural network a...
Our work is also inspired by both generative approaches [20, 21], but attempts to minimize the future uncertainty by adding human intention as a condition learned from the latent distribution. While experimental results [28] demonstrated the key role of recent actions for immediate future predictions, we claim that kn... | Following the division of our methodology, we define our framework as a two-step process. First, we propose a Hierarchical Multitask Multi-Layer Perceptron (MLP) Mixer (H3M) to classify each observed video to an action label, as well as to extract the overall intention of the human. The MLP Mixer-based architecture [32] has ...
Therefore, we develop a methodology that aims to constrain the variability of future actions based on the human intention estimated from past observations. We predict a hierarchical structure from a sequence of videos, each depicting a particular human action. From this given video clip sequence, we define two differe... |
More recent work such as [26] demonstrated the use of latent goals, obtained from the observed actions, as a feature representation used to anticipate the next action. However, this latent representation is not explainable as it does not consist of language-based labels. Moreover, [26] only attempted to select one nex... | To the best of our knowledge, our work is the first attempt to decompose the future into a two-level explainable hierarchical structure. This design allows us to deal with time uncertainty via top-down approaches: high-level intention is used for robust anticipation of the low-level actions.
| C |
Providing extra anomaly information is a direct solution to address the absence of knowledge about anomalies.
This idea was initially proposed in a work named outlier/anomaly exposure [37]. This study employs data from an auxiliary natural dataset as manually introduced out-of-distribution examples.
These calibration methods enable COUTA to learn data normality in a noise-tolerant, anomaly-i... | Similarly, COUTA employs tailored anomaly-aware transformations for time series data and trains neural networks to discriminate transformed data from original data. By harnessing pretext tasks, self-supervised learning can embed data semantics into representations. Likewise, COUTA also better learns temporal patterns a... | Our work is fundamentally different from [37]. We create dummy anomaly examples by performing data perturbation on original data instead of taking data samples from a supplementary nature dataset. A concurrent study [38] also works on perturbation learning for anomaly detection in images, which constructs a perturbator... | A few anomaly detection methods consider the anomaly contamination problem. The literature [14, 15, 16] filters possible anomalous samples via self-training. An additional Autoencoder is used in [12] to obtain a clean set of time series data before the training process.
A recent work [33] jointly infers binary labels t... | C |
In the above sections §4 and §5, we have extensively outlined the recent innovations in D2T both inside and outside of seq2seq modeling. However, looking forward, with the emergence of highly capable large language models (LLMs) such as ChatGPT (OpenAI, 2022), below we discuss the reconciliation of these emergent tech... |
Pretrained language models (PLMs) (Devlin et al., 2019; Radford et al., 2019) have been successful in numerous text generation tasks (See et al., 2019; Zhang et al., 2020b). The extensive pretraining grants these models certain worldly knowledge (Petroni et al., 2019) such that, at times, the models refuse to generat... | Supplementary Modules: Fu et al. (Fu et al., 2020a) propose the adaptation of the seq2seq framework for their partially-aligned dataset WITA using a supportiveness adaptor and a rebalanced beam search. The pre-trained adaptor calculates supportiveness scores for each word in the generated text with respect to the inpu... | Few-shot Learning: Data-to-text generation is a task that requires extrapolation beyond general linguistic understanding and commonsense reasoning, thus general LLM prompting strategies (Wei et al., 2022) may not be suited for this endeavor. Adding to that the data sparsity prevalent in D2T, extensions of prefix-tuning...
Chen et al. (Chen et al., 2019b) append knowledge-graphs representing external context to the table-text pairs and quantify its efficacy through their metric KBGain - the ratio of tokens unique to the external context to the total number of tokens in the narrative. Similarly, Ma et al. (Ma et al., 2019) augment the ...
We first count the number of all possible predicates for each subject-object category pair, and arrange the predicates based on their frequency in ascending order. Secondly, we add triplets whose total number is less than 20% of the overall subject-object category pairs in the test triplet list. For example, in Fig. 8,... | TABLE I: Comparison of VG and VG-OOD datasets. “KL” measures KL divergence differences in the predicate distributions of all samples between two splits, and “KL-mean” measures the mean of KL divergence differences in the predicate distributions between two splits over all subject-object category pairs.
| The statistics of the number of images, the number of triplets, and the difference in distribution (i.e., KL and KL-mean) between the VG and VG-OOD training and test sets are shown in TABLE I. Among them, KL divergence computes the disparity in predicate distributions between the training and test sets for all samples.... | Visual Genome [19], the most widely utilized SGG dataset at present, has a consistent predicate distribution for each subject-object category pair in both the training and test sets. Some studies [24] have found that good performance can be obtained just through the frequency prior bias of the commonest predicate categ... |
New Benchmark VG-OOD. In addition, for the widely-used SGG benchmarks (e.g., VG [19]), their predicate distributions of the training set and test set for each subject-object category pair are similar. As displayed in Fig. 8, the predicate distributions of woman-shirt in the original VG dataset are extremely similar (e... | B |
The workflow of the PSM-DoS attack is shown in Figure 8. First, the attacker distributes the partial block data to all the miners and attracts miners to join the attacker’s private branch. In the meantime, the attacker leaves the private branch and puts all the mining power back into the public branch. Then, ... | First, the attacker can address the concerns of attracted miners who discover the new block by carefully selecting the number of hidden bytes, ensuring that the calculation of these hidden bytes can be performed within an acceptable timeframe.
As we have demonstrated in Section 5.1, when the attacker shares the partial... | The workflow of the PSM-DoS attack is shown in Figure 8. First, the attacker distributes the partial block data to all the miners and attracts miners to join the attacker’s private branch. In the meantime, the attacker leaves the private branch and puts all the mining power back into the public branch. Then, ...
Specifically, to provide proof of block possession, the attacker writes some random information r to the coinbase transaction of the new block to provide sufficient randomness and uses r and the nonce of the block as the witness. Othe... | In PSM, an attacker follows the partial block sharing strategy and shares the partial block information with rational miners. The partial block has some data covered, e.g., the nonce and part of the arbitrary bytes in the coinbase transaction. Miners can mine on top of it to get a new block. The hidden data can be recovered by oth...
= (1-\beta_{1}^{t+1})\left[\operatorname{diag}\left(\sqrt{\frac{\boldsymbol{\nu}_{t+1}}{1-\beta_{2}^{t+1}}}\right)+\epsilon\,\mathbf{I}\right]
However, even though adaptive gradient methods train at the “Adaptive Edge of Stability” (AEoS), their behavior in this regime differs in a significant way from that of non-adaptive methods in the non-adaptive EoS regime: whereas non-adaptive optimizers in the non-adaptive EoS regime are blocked from accessing high-cu... | However, it has not been clear whether these findings have relevance for adaptive gradient methods.
Because of adaptive preconditioning, adaptive gradient methods do not evolve as linear recurrences on the local quadratic Taylor approximation, and thus it is not clear why their local stability would be well-modeled by ... | On quadratic objective functions, this behavior would lead to divergence.
However, neural network training objectives are not quadratic, and gradient descent typically does not diverge; instead, it enters a regime called the Edge of Stability (EoS) [8] in which the sharpness hovers just above, or oscillates around, the... | In contrast to gradient descent (and preconditioned gradient descent), adaptive gradient methods do not evolve as linear recurrence relations on quadratic functions.
Thus, it is a priori unclear whether their local stability can be modeled using an eigenvalue condition. | D |
Overall, both the separation of the CLN025 metastable states and the free-energy landscapes calculated for the low-dimensional embeddings suggest that the proposed framework can be used to find slow CVs and physically valid free-energy estimates. The presented results (Fig. 4) clearly show that using our approach, we c... |
In this work, we consider the problem of using manifold learning methods on data from enhanced sampling simulations. We provide a unified framework for manifold learning to construct CVs using biased simulation data, which we call reweighted manifold learning. To this aim, we derive a pairwise reweighting procedure in... |
We underline that diffusion reweighting makes learning CVs from high-dimensional samples possible regardless of which conformational variable is biased to generate the data set. This extends the applicability of manifold learning methods to atomistic trajectories of any type (unbiased and biased) and makes it possible... | Our framework makes it possible to generate biased data sets that, given the construction of enhanced sampling methods, sample a larger conformational space than standard atomistic simulations and use such data to learn low-dimensional embeddings. If a data set entails many infrequent events, the low-dimensional repres... | We can circumvent this issue by using learning data set from enhanced sampling simulations where transitions between metastable states are more frequently observed and are no longer rare events. However, in this case, the simulation data set is biased and does not correspond to the real system, as it is sampled from a ... | B |
After the use of the assignation algorithm has been exhausted, there may be nodes that are still not assigned to subdomains. This is likely because they are on or near a boundary between two subdomains. Assign any given unassigned node to the modal subdomain of its immediate neighbours, i.e. those with which it shares ... |
Nodes that are always assigned to the same subdomain, regardless of list order, are considered to belong to that subdomain. The correct subdomains are not known a priori, so there is no way to assess the absolute correctness of the grown subdomains. It can be expected that the certainty of the assignations decreases wi... | This method of subdomain growth has the flaw that the final subdomain assignation is impacted by the order in which the elements are considered. The robustness of the algorithm can be assessed by randomly permuting the list of elements before the subdomain assignation. In 200 samples, 93% of nodes are assigned to the s... | In order to compare the divisions, we must make the AHA regions. This is straightforward with the definitions given by Manuel et al. (Manuel et al., 2002). The 17 subregions can be seen in Fig. 13a in ℝ³ with the same view as is seen in... | Figure 10: A dorsal projection of the subdomain boundaries for B_6 with the nodes of the ventricular set V that are not always assigned to the same subdomain shown in red. These nodes can clearly be seen to cluster along the subdomain boundaries...
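The modal-neighbour assignment step described in this row can be sketched as follows. This is a hypothetical illustration (our own function names and graph encoding), not the paper's code: each unassigned node takes the most common subdomain label among its already-assigned immediate neighbours, repeating until no further assignments are possible.

```python
from collections import Counter

def assign_unassigned(assignment, neighbours):
    """Assign each unassigned node (value None) to the modal subdomain
    among its already-assigned immediate neighbours.

    assignment: dict mapping node -> subdomain label or None
    neighbours: dict mapping node -> list of adjacent nodes
    """
    out = dict(assignment)
    changed = True
    while changed:
        changed = False
        for node, sub in list(out.items()):
            if sub is not None:
                continue
            labels = [out[n] for n in neighbours[node] if out.get(n) is not None]
            if labels:
                # Modal (most common) label among assigned neighbours.
                out[node] = Counter(labels).most_common(1)[0][0]
                changed = True
    return out
```

Note that ties in the modal count are broken arbitrarily here, which mirrors the order-dependence that the row's permutation-based robustness check is designed to quantify.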
More recent related works focus on learning the tuned graph representation of routing networks. [18] propose RouteNet, which is regarded as a first work that introduces message passing in deep-learning-based network modeling, in a bipartite graph formulation.
[19] and [20] extend RouteNet to be adaptive to heterogeneou... | We consider the pair-wise similarity between the role of a node and its neighbors to be crucial, since an OD pair’s source or destination can bring its impact towards close neighbours of nodes of interest and formulate their embedding as an “augmented source.” GAT is reported to be a well-suited solution to capture suc... | Our study is among the first approaches that attempt to model the routing network state snapshot by learning the given topology structure for the task of estimating the network latency using open-world input.
Our proposed solution transforms the task of estimating the latency between source and destination nodes, to a ... | Finally, [5] present their work on topology size generalization for latency estimation of Origin-Destination (OD) pairs, the same problem we focus here. Through improving RouteNet by including queue occupancy state per link, beside the path and link nodes, they formulate a tripartite message passing scheme, which intro... | More recent related works focus on learning the tuned graph representation of routing networks. [18] propose RouteNet, which is regarded as a first work that introduces message passing in deep-learning-based network modeling, in a bipartite graph formulation.
[19] and [20] extend RouteNet to be adaptive to heterogeneou... | C |
The parameter $C^{-}_{\mathcal{S}}$ in Assumption 3.2 captures the fundamental difficulty of contrastive learning in RL by characterizing how large the function class (Definition 3.3) should...
To focus our analysis on the contrastive learning for the transition dynamics, we only consider the setting where the reward function $r_{h}(\cdot,\cdot)$ is known. One might further modify the proposed algorithm to the unknown reward... | One can further extend the setting of the finite function class to the infinite function class setting by utilizing the covering argument as in Van de Geer (2000); Uehara & Sun (2021) such that the terms depending on the cardinality of $\mathcal{F}$ would be replaced by terms related to the covering number... | This section provides the analysis of the transition kernel recovery via contrastive learning and the proofs of the main results for single-agent MDPs and zero-sum MGs. Our theoretical analysis integrates contrastive self-supervised learning for transition recovery and low-rank MDPs in a unified manner. Part of our ana... | the state-of-the-art RL approach with contrastive learning on the benchmark Atari 100K (Kaiser et al., 2020). SPR utilizes the temporal information and learns the representation via maximizing the similarity between the future state representations and the corresponding predicted next state representations based on the... | B
We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ... |
We extend the DLMC estimator introduced by (Ben Rached et al., 2023) to the multilevel setting and propose a multilevel DLMC estimator for the decoupling approach (dos Reis et al., 2023) for MV-SDEs. We include a detailed discussion on the bias and variance of the proposed estimator and devise a complexity theorem, di... | We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ... |
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha... |
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a... | A |
Once again, the selection of scenarios is intended to underscore the proposal’s robustness under unfavorable conditions, while also incorporating conservative assumptions. These scenarios serve to demonstrate that, from a GN&C perspective, an autonomous robotic spacecraft does not necessarily require extensive navigati... |
To illustrate this point, we conducted a Monte Carlo run with 100 samples for a more realistic scenario, the same depicted in Fig. 4. The spacecraft is inserted into a 100 km circular orbit around Eros. The results for this scenario are shown in Figure 9. As evident, the filter performance is excellent, and the estima... | For example, Fig. 4 illustrates a more realistic operational profile where the spacecraft is inserted into a 100 km orbit (NEAR-Shoemaker was inserted into a comparable orbit a month after rendezvous). In this case, the filter exhibits excellent performance, showcasing significant improvements in estimates. The errors ... |
In the case of Eros, the results of the Monte Carlo analysis, as illustrated in Figure 6, align with the discussion in Section 5.2. Once again, the spacecraft executes its operation successfully. It is effectively inserted into orbit and completes the transfer smoothly, as depicted in Figure 6a. The histogram in Figur... |
Due to the relatively short simulation time in the Monte Carlo runs, readers may question whether problematic behavior emerges after the initial 3 days. To address this, we extended the simulation to 10 days, considering a smaller set of 100 samples, to examine any signs of major issues beyond the 3-day mark. Figure 7... | A |
Our proof techniques build on ideas from nonlinear Perron-Frobenius theory (Lemmens and Nussbaum, 2012), and may be of independent interest for related problems.
Moreover, the simplicity of our approach and its ease of implementation make it attractive for applications. However, a key limitation of our approach is the ... |
In this paper, we introduced a novel fixed-point approach for computing Brascamp–Lieb constants, which is grounded in nonlinear Perron–Frobenius theory. In contrast to much of the prior literature, which has analyzed the problem through a Riemannian lens, our approach utilizes a Finslerian geometry on the manifold of ... | The paper is structured as follows: In section 2 we introduce basic background and notation, including Thompson geometry on the space of positive definite matrices and the class of Brascamp–Lieb inequalities. In section 3, we provide an overview of the paper’s main results and state the main theorems. In section 4, we ... | (Barthe, 1998) introduced a class of Reverse Brascamp–Lieb inequalities that generalize several important inequalities that cannot be encoded with the original Brascamp–Lieb framework. A key example is the Gaussian Brunn–Minkowski inequality (Barthe and Huet, 2008). Importantly, since Reverse Brascamp–Lieb inequalities... |
More generally, we hope that the Finslerian lens on fixed point iterations provides a new perspective on the problem of computing Brascamp–Lieb constants. We believe that the tools developed in this work can be applied to a wider class of Picard iterations that arise in the context of Brascamp–Lieb constants (such as ... | B |
In other words, the homology group is formed by taking the cycle space and "gluing" any cycles that differ only by a boundary. This captures the intrinsic topological features of the complex at dimension $k$, independent of the specific choices of representatives for these cycles. The Betti number provides a num... | In other words, the homology group is formed by taking the cycle space and "gluing" any cycles that differ only by a boundary. This captures the intrinsic topological features of the complex at dimension $k$, independent of the specific choices of representatives for these cycles. The Betti number provides a num...
Intuitively, these chains in the cycle space represent closed loops ($k$-cycles) within the complex that cannot be continuously deformed into a single point. Meanwhile, the chains in the boundary space represent the edges or borders of higher-dimensional simplices.
By modding out the cycle space by the boundary space, we effectively exclude cycles that are simply the boundaries of higher-dimensional simplices. This ensures that we focus on nontrivial cycles that capture the true topological features of the complex. Lastly, homologous cycles represent $k$-dimensional loo...
The latter remark can be further visualized as follows. Consider two 1-cycles colored red and blue in the simplicial complex shown in Figure 2. These cycles may appear different, but if their difference can be expressed as the boundary of a 2-simplex in the complex, they are considered homologous. This implies t...
In this paper, we mainly resort to obtaining accurate pseudo-labels so as to enhance the model’s discrimination and generalization in the unseen domain. We first analyze the generalization error on a domain using the theory of multi-domain learning (ben2010theory, ). Based on the upper bound of the generalization error... |
In this part, we will describe our multi-task learning framework. Since there are multiple different domains in the SSDG task, we consider training each domain as an independent task (i.e., the local task for each domain), which can effectively reduce the interference between different domains during pseudo-labeling. ... |
Inspired by the theory of multi-domain learning, we extend FixMatch (DBLP:conf/nips/SohnBCZZRCKL20) (FixMatch is an excellent baseline in SSDG, which will be validated in the experimental section) to a multi-task learning method, named MultiMatch, for semi-supervised domain generalization. Specifically, we bui... | In this paper, our goal is to address the semi-supervised domain generalization (SSDG) task. In this task, a few labeled samples on each domain are available, while most of the samples lack the label information, so a key task is how to generate accurate pseudo-labels. Different from the conventional semi-supervis...
In this paper, we aim to tackle the semi-supervised domain generalization (SSDG) task. Different from the typical semi-supervised task, the challenge of SSDG is that there exist multiple different domains with latent distribution discrepancy. To address this issue, we first explore the theory of multi-domain learning ... | B |
Intuitively, adapting the whole residual blocks has a larger capacity for modulating task-specific features than adapting only $K\times K$ convolution but may introduce more parameters.
Plugging Conv-Adapter stage-wisely is not considered as it is impractical to make the receptive field of Conv-... | To explore the effective adapting schemes of using Conv-Adapter to tune a ConvNet, we study it mainly from two perspectives, similar to
[18], 1) the location of adaptation in pre-trained ConvNets – which intermediate representation $\mathbf{h}$ to adapt, and 2) the insertion form of Conv-Adapter – how to set th... | RepNet [59] exploits a dedicatedly designed side network to re-program the intermediate features of pre-trained ConvNets.
Conv-Adapter differs from previous methods with a design that considers parameter efficiency and transferability from the internal architectures and adapting schemes. Besides, the proposed Conv-Adapte... | It needs a more sophisticated design on not only the Conv-Adapter architecture but also the adaptation location [59], and we empirically find that stage-wise adaptation produces inferior performance and requires much more parameters.
Conv-Adapter is flexible to be inserted into every residual block of the ConvNet backb... | Intuitively, adapting the whole residual blocks has a larger capacity for modulating task-specific features than adapting only $K\times K$ convolution but may introduce more parameters.
Plugging Conv-Adapter stage-wisely is not considered as it is impractical to make the receptive field of Conv-... | C |
$\gamma=\frac{1}{\eta}\left(1-\log\left(e^{-\eta L^{\mathfrak{s}}_{(k)}}+e^{-\eta V^{\hat{\lambda}}_{(k-1)}}\right)\right).$ | Note that in the definition of Regret, $\hat{\mathbf{\Theta}}^{(k)}$ and $\hat{\lambda}(t)^{(k)}$... | The left panel of Figure 2 presents the detected changepoints over the temporal axis. In comparison, the PINNs (Standard PINNs assume a constant coefficient over time, which performs much worse for changepoint scenarios. To ensure a fair comparison, we modify PINNs to allow for a time-dependent coefficient.) This highl... | The standard PINNs model assumes that the parameters of PDEs are constant values across the entire time domain. In order to accommodate Definition 2.1, we allow for changes in $\lambda(t)$ and introduce an additional regularization term in the form of a total variation penalty on the first ... | We are interested in the stability of the algorithm with adaptive weights. Therefore, we look at the difference between the loss function values with adaptive weights versus any fixed weights after $B$ batches (where $\Theta$ and $\lambda(t)$ are estimated on this batch). Thi... | D
For example, $babcd$ and $abcbcd$ are not primitive, because they are generated by $abcd$; but
$abcbda$... | change our position in $u$ by more than one letter at a time, and we visit every position of $u$ at least once.
If $w=u^{f}$ for $f$ a walk, we say that $u$ generates $w$. | for which such $u$, $v$, $f$ and $g$ exist. Observe that, since $u$ and $v$ are primitive, they feature
no immediately repeated letter. So suppose $w$ does, i.e. is of the form $w=xaay$ for some ... | (go to the $b$, then turn back to the $a$, then turn again and go to the end). For the converse, suppose that $w$ is not primitive,
let $u$ be a generator of $w$ with $|u|<|w|$, and let | Theorem 1 is relatively surprising: let $u$ and $v$ be primitive words. Now suppose we go for a walk on $u$ and, independently, go for a walk on $v$; recalling the stipulation that the two walks visit every position in the words they apply to, the theorem says that,
provided only tha... | D |
A practical question is how to create this subdivision. Oftentimes, the subdivision is chosen fine enough to resolve the features of interest but coarse enough to keep the computational cost in check. Regularization is frequently used to ensure that an overly fine mesh does not lead to unwanted oscillations in the reco... | Section 2.3 of [50] and the references therein), but,
as with regularization, no overarching scheme is available to guide this choice of the mesh. [17] is also an example where the mesh is made part of what needs to be estimated in a Bayesian inversion scheme, which in practice appears to lead to meshes that are more r... | accuracy of measurements on the one hand, and the uncertainty in the recovered
parameters of the inverse problem on the other. But, none of the studies mentioned go on to specifically identify the role of information in the spatially variable ability to recover parameters in inverse problems in a systematic way. Let us... |
The relevant question to ask is whether this mesh is better suited to the task than any other potential mesh. Answering this question is notoriously difficult in inverse problems because, in general, the exact solution of the problem is unknown if only finitely many measurements are available and if regularization is ... | However, [17] is concerned with choosing the mesh used for the solution of the state equation, not the discretization of the parameter we seek – although it seems reasonable to assume that the scheme could be adapted to the latter as well. At the same time, the scheme described in [17] requires the solution of a Bayesi... | A |
Continuing our long-standing interdisciplinary collaboration (Jänicke & Wrisley, 2017; Meinecke et al., 2021), we adopted a participatory design process (Jänicke et al., 2020) to address the above-mentioned issues. Our contributions can be summarized as follows:
|
We study the coherence of the visual tradition of the corpus using the tools of computer vision, and in doing so we aim to put together new building blocks rooted in medieval imagery for further research. We also believe that computer vision applications in corpora of medieval illumination have great promise, particul... |
All embeddings are added to a faiss (Johnson et al., 2019) index structure based on the Euclidean distance between them. For each image, the most similar images can be queried by their embedding. Pre-processing of image data in the form of object detection would be valuable; however, high-quality label hierarchies des... |
A label hierarchy for medieval illuminations produced by content specialists using the system. It can be straightforwardly applied to scenarios in which hierarchical classification or weakly supervised object detection (Inoue et al., 2018) is performed on specific historical sets of images with related themes. | Although the domain of medieval manuscripts might seem quite specialized, the situation of divergent common vocabularies and the desire to resolve and combine labels across knowledge bases is common to many research fields. Our system is designed to support subject specialists from different backgrounds in viewing thei... | C |
In addition to the above single source-target PoT scenarios, we also conduct experiments to verify the effectiveness of PanDa in the multiple-task scenarios. Specifically, taking the “MNLI, QNLI, QQP" as source tasks, we report the PoT results of BERT-base on 9 target tasks in Table IV-C. For references, we also repor... |
In addition to the above single source-target PoT scenarios, we also conduct experiments to verify the effectiveness of PanDa in the multiple-task scenarios. Specifically, taking the “MNLI, QNLI, QQP" as source tasks, we report the PoT results of BERT-base on 9 target tasks in Table IV-C. For references, we also repor... |
In addition to the above single source-target PoT scenarios, we also conduct experiments to verify the effectiveness of PanDa in the multiple-task scenarios. Specifically, taking the “MNLI, QNLI, QQP" as source tasks, we report the PoT results of BERT-base on 9 target tasks in Table IV-C. For references, we also repor... | As seen, multi-task PoT generally outperforms single-task PoT, and with this strategy, our PANDA can achieve even better results, demonstrating the complementarity of PANDA and multi-task learning. More encouragingly, compared to the powerful counterparts, PanDa with both fusion strategies can consistently achieve much... | As seen, multi-task PoT generally outperforms single-task PoT, and with this strategy, our PANDA can achieve even better results, demonstrating the complementarity of PANDA and multi-task learning. More encouragingly, compared to the powerful counterparts, PanDa with both fusion strategies can consistently achieve much... | C |
Researchers, politicians, and journalists have long been fascinated by ‘cyberwar’ – the spectre of armed conflict between nations spilling over into attacks conducted over the Internet (Rid, 2012). ‘Colder’ forms of inter-state conflict are characterised by espionage and intelligence gathering, which may facilitate the... | Some associations between kinetic warfare and ‘nationalistic’ cyberattacks have been reported. Ukrainian firms were hit by data wipers such as CaddyWiper and NotPetya (ZDNET, 2022; Bleeping Computer, 2022), DDoS attacks (State Sites of Ukraine, 2022; Forbes, 2022b) and phishing campaigns (The Hacker News, 2022); Ukrain... |
Russia and Ukraine have a long history of electronic information warfare (Margarita Jaitner, 2015) and are among the most active cybercrime hubs (Lusthaus et al., 2020). When Russia invaded Ukraine on 24 February 2022, war-related attacks on the two countries were regularly reported (New York Times, 2022). A popular n... |
The role of the low-level cybercrime actors studied in this paper amounts to essentially trivial acts of solidarity and opportunistic competition. Their primary impact is probably to disseminate political propaganda, with little measurable evidence to suggest these actors are making any persistent contribution to the ... | Researchers, politicians, and journalists have long been fascinated by ‘cyberwar’ – the spectre of armed conflict between nations spilling over into attacks conducted over the Internet (Rid, 2012). ‘Colder’ forms of inter-state conflict are characterised by espionage and intelligence gathering, which may facilitate the... | B |
We direct the reader’s attention to the results presented in Figures 4 and 5, which present the performance of our proposed ranking strategy, HET, across various datasets and model architecture pairings. Figure 4 details the outcomes for CIFAR10 and ImageNet, while Figure 5 delves into the X-Ray and Road Sign datasets...
The results garnered from our extensive experimentation with the HET ranking strategy offer compelling evidence of its effectiveness in enhancing the transferability of adversarial examples. Notably, the strategy demonstrates remarkable efficacy in the context of improving the transferability of a single specific samp... | Even as the value of $k$ increases, representing a broader selection of top-ranking perturbations, HET maintains its superior performance. It exhibits an enhancement of up to 60% in transferability over the lower bound for larger values of $k$. This improvement is noteworthy, demonstrating the robustn...
The results underscore the proficiency of HET in consistently pinpointing the most transferable perturbation for a given sample. The significance of this capability is highlighted by the comparison to the lower bound of transferability—averaging at or below 30% across the datasets—which HET substantially elevates to a... |
Our observations reveal a consistent trend across all datasets and architecture combinations: HET closely tracks the upper-bound line, which represents the theoretical maximum transferability. Particularly for small values of $k$, HET demonstrates a high likelihood of successful transferability, often achievin...
We remark that the assumptions underlying Proposition 3 can be relevant in practice. For example, consider classifying whether or not a given point $\boldsymbol{x}$ in $n$-dimensional space (with each component bounded between $[-\pi,\pi]$) stays in...
We remark that the assumptions underlying Proposition 3 can be relevant in practice. For example, consider classifying whether or not a given point $\boldsymbol{x}$ in $n$-dimensional space (with each component bounded between $[-\pi,\pi]$) stays in... | Figure 7: Global-measurement concentration of quantum kernels. We plot the variance of the fidelity kernel as a function of $n$ using different data-embeddings, namely a single layer of one qubit rotations ($R_{x}$, $R_{y}$... | For this task, an individual data point in the training dataset is generated by uniformly drawing each vector component from the range $[-\pi,\pi]$. Since here data points are obtained via uniformly sampling each component independently, the above assumptions are satisfied.
| Figure 3: Effect of exponential concentration on training and generalization performance.
We consider a tensor product encoding for an engineered data set where each component is uniformly drawn from $[0,2\pi]$ and the true label is $y_{\mathrm{true}}(\boldsymbol{x})=\sum_{i=1}^{N_{s}}w_{i}\kappa_{FQ}(\boldsymbol{x_{i}},\boldsymbol{x})$... | C
The 3D CNNs we employ in this work are initially designed for recognising general human behaviours and trained on human behaviours datasets such as Kinetics-400 and Kinetics-600. These datasets are formed by video clips with relatively high frame rates (25 fps) [3]. Therefore, in order to efficiently extract motion cl... | Human drivers predict lane change intentions mainly use visual clues rather than physical variables. However, existing works that utilize appearance features for lane change are surprisingly few. In [19], two appearance features, the state of brake indicators and the state of turn indicators are used for lane change re... |
In this work, we propose an end-to-end framework involving two approaches for lane change recognition classification and prediction of surrounding vehicles in highway scenarios. Seven state-of-the-art 3D action recognition models are investigated including one I3D model, two SlowFast models and four X3D models for bot... |
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per... |
To address this problem, we propose a novel end-to-end framework involving two approaches for lane change recognition. Specifically, our method takes advantage of the recent powerful action recognition models and mimics a human drivers’ behaviours by only utilizing the visual information. The first approach (RGB+3DN) ... | C |
$N_{t,k}=(c_{\pi[1]},c_{\pi[2]},\ldots,c_{\pi[k]})$...
$N_{t,k}=(c_{\pi[1]},c_{\pi[2]},\ldots,c_{\pi[k]})$... | 8: $R_{s}\leftarrow(c_{t}^{*}-c_{t}^{s})$... | $c_{t}=c_{i}=\ldots=c_{i+k-1}$ and $\operatorname{argmax}\mathcal{A}(p)$... | have $|c_{\pi[i]}-c_{t}|\leq|c_{\pi[j]}-c_{t}|$... | D
$\boldsymbol{S}^{\prime}_{t}=\boldsymbol{S}_{t}-\eta\sum\nolimits_{k=1}^{T}\boldsymbol{d}_{k}\boldsymbol{g}_{k}^{T}\boldsymbol{g}_{t}.$
Task Feature Extraction. To extract task features, we assign a concise text description as the task name for each task, ensuring that the text effectively captures the task’s essence. For instance, for the DTD dataset [50], the task name “texture classification” is more descriptive than “image classification”. Leverag... | For this aim, we proposed the soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features by pre-trained language model and transforming the task features to neede... |
From Eq. 9, the learned context for class names of a certain task is not only determined by its own data but also by data from the other tasks. Actually, tasks with more similar task features will contribute more to this task. By multi-task training, a task can borrow information from related tasks to regularize its c... | Learning Task Features without Text Encoder. In the proposed meta network, the task features are extracted by the text encoder. By removing the text encoder from meta network and using a learnable vector for each task, the task features can be learned from scratch. The results with and without the text encoder on three... | C |