| context | A | B | C | D | label |
|---|---|---|---|---|---|
Overall, parents would like to monitor their teens and other family members. Teens, in contrast, showed less interest in providing oversight to their parents or to their siblings. In the next section, we discuss the findings on how parents and teens reacted to the privacy and safety oversight they received from one ano... | Parents overall said they would listen to their teen’s input, but teens were less receptive to their parents’ feedback. Teens said they would need to verify the feedback first. N=17 (89%) of parents would uninstall an app immediately or change permissions if their teens warned them. Only N=2 (11%) of the parents would... |
Apart from the above concerns, N=5 (26%) parents and N=8 (42%) teens said they did not want to hide apps that they had installed on their devices. Most teens said their parents already knew what apps were installed on their phones. Similarly, parents said they would not use any app that they would not share with their... | Overall, we found that most parents and teens made few considerations toward their own online safety or privacy when installing new apps or granting permissions to the apps they installed (RQ1). Meanwhile, parents often manually monitored the apps their teens installed but gave little thought to the permissions granted... | Conversely, one third of the teens (32%, N=6) would take their parents’ feedback and immediately act accordingly. The rest of the teens (68%, N=13) would need to verify first by crosschecking the apps and permissions on their phones. Teens also mostly said that they would look up the app online that their parents’ woul... | A |
Figure 12: Sample points of the cellular tower dataset (grey dots) and significant loops under subsample bootstrapping for different filtrations (red hollow circles and blue crosses). The black rectangle, which contains two holes detected by RDAD, is blown up and shown on the right subplot. |
The two filtrations pick up completely different homology classes. The class picked up by the distance-to-measure filtration is near Steens Mountain Wilderness in Oregon. The three classes picked up by the RDAD filtration are Lake Michigan; Dallas, Texas; and the Texan region surrounded by Houston, Austin and San Antonio.... |
We also apply our method to real data. The distance-to-measure filtration and the RDAD filtration are applied to an open dataset HIFLD21_cellurlar_towers of cellular tower locations recorded by the Federal Communications Commission (FCC). The two filtrations reveal uninhabited regions in the United States and regions... | The homology class picked up by the distance-to-measure filtration is a large sparsely populated area with few cellular towers if any. Those picked up by the RDAD filtration are comparatively smaller regions with an abrupt drop in density. The distance-to-measure filtration fails to pick up the smaller homology classes... | The most commonly used filtration is the distance filtration. While it can identify clean global topological signals, it is less useful for small and noisy features. To overcome this, multiple alternatives have been suggested. In this subsection, after briefly discussing the distance filtration, we review Bell et al’s ... | B |
We visualize the estimated AU occurrence probabilities of several samples to show the differences between P(Y|X) and P(Y|do(X)). From Fig. 7 we can see that the probabilities ... |
To this end, we formulate the subject variation problem by constructing a causal diagram to analyze the causalities among facial images, subjects, latent AU semantic relations, and estimated AU occurrence probabilities. Our causal inference framework not only fundamentally explains how subject-specific AU semantic relatio... |
We formulate the subject variation problem in AU recognition using an AU causal diagram to explain the whys and wherefores. To the best of our knowledge, this is the first work to explain this problem with the help of causal inference theory and make an attempt to remove the effect caused by subject variation via causal interv... | To better elucidate the effectiveness of CIS, we evaluate the improvement brought by the CIS module inserted in models with different backbone networks, including ResNet18, ResNet34 and ResNet50 (He et al. 2016), in Table 3. By inserting CIS into models with different backbone networks, we can observe significant and consist... | This paper focuses on explaining the whys and wherefores of the subject variation problem in AU recognition with the help of causal inference theory and providing a solution for subject-invariant facial action unit recognition by deconfounding variable S in the causal diagram via causal intervention. Unlike previ... | D
Txilm ensures optimal transaction hash size by assessing collision probability and incorporating a “salt” during hash calculation to safeguard against potential attacks [50]. Since the improvement from simply compressing the blockbody is bounded, the Dino protocol alternatively transmits a block reconstruction rule to reduce the b... |
Compressing block sizes: One way to speed up block propagation is compressing the block size to shorten the block transmission time. Compact blocks were first adopted in Bitcoin [11], where the complete transactions in the blockbody are replaced with their hashes, and the receiver reconstructs the full block based on its t... |
Accordingly, we can accelerate block propagation by decreasing i) block transmission time and ii) block validation time. To lower block transmission time, Bitcoin Improvement Proposal (BIP) 152 proposed to propagate compact blocks in place of full blocks. In compact blocks, transaction hashes replace the transactions ... |
Following the idea of compressing the block size, several works introduce network coding to multicast the blocks [52, 53]. Velocity utilizes fountain codes to enable the nodes to receive a full block from multiple neighbor nodes [52]. The authors in [53] design a new compact block protocol with cut-through forwarding ... | CBP is the current block propagation protocol adopted in Bitcoin to further reduce the network overload. When a node receives a new block, it validates the block and generates a compact block version. Then it announces this compact block by sending an Inv message to its neighbor nodes. If a neighbor node does not recei... | C |
In this paper, we study linear function approximation in POMDPs to address the statistical challenges amplified by infinite observation and state spaces. In particular, our contribution is fourfold. First, we define a class of POMDPs with a linear structure and identify an ill-conditioning measure for sample-efficient ... | Our work is related to a line of recent work on the sample efficiency of reinforcement learning for POMDPs. In detail, Azizzadenesheli et al. (2016); Guo et al. (2016); Xiong et al. (2021) establish sample complexity guarantees for searching the optimal policy in POMDPs whose models are identifiable and can be estimate...
More specifically, partial observability poses both statistical and computational challenges. From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property. In particular, predicting the future often involves inferring the distribution of the ... | In the context of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinfo... | Partial observability poses significant challenges for reinforcement learning, especially when the observation and state spaces are infinite. Given full observability, reinforcement learning is well studied empirically (Mnih et al., 2015; Silver et al., 2016, 2017) and theoretically (Auer et al., 2008; Osband et al., 2... | A
The following keywords were used to search all the databases: speech, language, disorder, impairment, assessment, therapy, rehabilitation, treatment, AI, artificial intelligence, automated, automatic. Boolean operators were used to combine the terms as: | We presented the language distribution of the papers based on the language addressed by the AI-based automated speech therapy tools as reported in the studies (see Figure 8). The most addressed languages were English (10 studies) and Spanish (4 studies). Furthermore, two studies addressed the Cantonese language, and th... | We further report the geographical distribution of the included studies based on the
location of the study indicated in the paper (see Figure 7). We looked at the author’s affiliation and funding agency when required. Most papers reported on studies which |
There were 91 unique authors identified from the included studies. The VOSviewer software was used to calculate the most impactful authors, generate co-authorship clusters, and perform keyword co-occurrence analysis (Van Eck, n.d.). All the authors were counted irrespective of the authorship orde... |
We conducted this systematic literature review based on a sample of 24 out of 678 research papers deriving from Scopus, IEEEXplore, and ACM DL databases. Exciting insights and trends emerged from our analysis of these papers. In recent years, we observed an increasing interest in AI-based automated speech therapy to... | C
We consider the case that some points may have significantly higher noise perturbations than others. In this setting, we randomly select some points, and we generate some points with c·σ coordinate-wise variance, where c = 8 for our experiments (recall that the noise... |
This gives us initial theoretical evidence that in the random-mixture model with outliers, our simple outlier detection method can detect outliers when a non-negligible fraction of the points are outliers. Next, we use simulations of our model to test the efficacy of our outlier detection method and its impact on the ... | Outlier detection has been an active area of study in unsupervised learning, providing several influential algorithms. In a recent, comprehensive benchmarking of outlier detection algorithms, [HHH+22] compared the performance of several unsupervised learning algorithms on different datasets. They found that for unsuper... |
Next, we study the usefulness of our outlier detection method in these datasets. Unlike our simulations, there is no ground-truth labeling for outliers. Rather, we assume that in each community (as defined by the labels provided with the dataset), some points may behave more like an outlier, in that they may be a mixt... |
This shows that our outlier detection method is adept at detecting different kinds of outliers, outperforming popular outlier detection tools in some settings, and being competitive to them in others. We also observe that as the overall noise in the dataset increases, the performance of our method compared to the othe... | D |
We train the entire pipeline following the widely adopted stepwise training mechanism as in previous studies [1, 2, 19, 25, 3]. We firstly train the object detector on the image input with missingness. For the second stage, we freeze the parameters in the object detector and attach the proposed SI-Dial to the pipeli... |
In this paper, we investigate the SGG task setting with missing input visions, and propose to supplement the missing visual information via interactive natural language dialog using the proposed SI-Dial framework. Extensive experiments on the benchmark dataset with various levels of missingness demonstrate the feasibi... | We use the benchmark Visual Genome (VG) dataset [26] for experiments. The VG dataset contains in total 108,077 images and 1,445,322 question-answer pairs.
We firstly perform the vision data processing to obtain three levels of missingness: the obfuscations applied on the objects, the obfuscations applied on entire imag... | Specifically, we pre-process the visual data to provide three different levels of missingness: obfuscations on the objects (e.g., humans, cars), obfuscations on the entire images, and the semantically masked visual images.
The masked visual data has more severe missingness compared to the other two levels. | Note that the ground truth question-answer pairs from the dataset annotations are not evenly distributed, meaning some of the images do not have corresponding questions and answers, therefore, the 100 candidates are formed from two sources. For the images with GT QA pairs, we include the GT pairs as part of the candida... | B |
Contribution.
Our primary conceptual contribution lies in introducing the concept of an entrance fee function to facility location games. In our model, the entrance fee function is arbitrary, forming a dynamic part of the input. This captures a broader range of real-life scenarios where facilities incur location-depend... | However, the arbitrariness of the entrance fee function introduces new challenges in designing strategyproof mechanisms. Agent preferences may no longer adhere to single-peakedness [22, 5], and standard mechanisms for the classical model cannot be directly extended to our setting while preserving strategyproofness. To ... | (Group) strategyproof mechanisms may not be able to achieve the optimal value for one of the two objectives. Thus we use approximation ratio to evaluate the performance of the mechanism.
For an entrance fee function e and position profile 𝐱 ∈ ℝ^n ... | A notable open problem is to narrow the gaps between our bounds in Table 3. In the classical model, randomized mechanisms such as the left-right-middle and proportional mechanisms achieve better ratios than deterministic mechanisms. However, these do not extend to our models while remaining strategyproof. Designing imp... | This paper extends the classical facility location game on the real line by incorporating entrance fee functions, adding versatility to the model. The extension prompts a reevaluation of existing facility location games, like capacitated and heterogeneous facilities, opening avenues for broader applications.
Our arbitr... | A |
Connected planar NSF (K_4, gem, W_4, butterfly, K_{1,5}, H, snail, press, C_4, …, C_k... | e ∈ E is dominated exactly by one edge then E′ is called an efficient edge dominating set (EED). On the other hand, if we relax the definition, and let each of e ∈ E ∖ E′ ... |
We say that G = (V, E) is a neighborhood star-free graph, NSF graph for short, if for every vertex v ∈ V with degree at least 2, G[N[v]] is not a star. In other words, every ... | There is some more bibliography to add to the already vast literature [3, 10, 23, 31, 33, 35] on dominating induced matchings. As we have seen before, the papers on perfect edge domination are less frequent. There is a paper [16] where the authors describe ILP formulations for the PED problem, together with some experim... |
We wonder if there is some algorithmic relation between efficient and perfect edge domination. More specifically, we remark that there are graph classes which admit polynomial time solutions for solving the efficient edge domination problem while being hard for solving the perfect edge domination problem. However, we ... | D |
For all the examples in Tab. 1, we have n = 2 and m = 1, while k̄ = 2, which coincides with the dimension of the state-space. Thus, to reproduce any of our controllers Φ(⋅) exactly, the ReLU ne... | We have considered the design of ReLU-based approximations of traditional controllers for polytopic systems, enabling implementation even on very fast embedded control systems. We have shown that our reliability certificate requires one to construct and solve an MILP offline, whose associated optimal value characterizes... | We now discuss several practical aspects related to the design of a ReLU network to compute a controller approximation Φ_NN(⋅), also including further numerical simulations and discussing potential practical limitations of the... |
Capitalizing on the methodology proposed in [17], we take an optimization-based approach to develop analytical tools fulfilling this quest for theoretical guarantees of ReLU-based approximations of traditional stabilizing controllers for polytopic systems. We develop a purely offline method based on the systematic const... |
We use the examples in Tab. 1 to verify Theorem 5.1 numerically, also discussing potential limitations of the proposed methodology to design fast ReLU-based proxies for ultimate boundedness control of polytopic systems. Simulations are run in Matlab using Gurobi [24] as an MILP solver on a laptop with a Quad-Core Intel ... | D
To show the capabilities of our approach on real world data, we captured videos of three physical systems: A block sliding on an inclined plane, a thrown ball, see Figure 6, and a pendulum, see Figure 1. For the block, the initial position and velocity, the angle of the plane and the coefficient of friction are the unk... | To show the capabilities of our approach on real world data, we captured videos of three physical systems: A block sliding on an inclined plane, a thrown ball, see Figure 6, and a pendulum, see Figure 1. For the block, the initial position and velocity, the angle of the plane and the coefficient of friction are the unk... | Homography to correct for non-parallel planes.
Since T_t is an affine transformation, it can only model movements that are parallel to the (static) camera plane. However, in particular for the real world examples, the plane of the movements does ... |
The real world data is more challenging than the synthetic data, due to image noise and motion blur. We employ the homography to account for a plane of movement that is not parallel to the image plane. For training, we extract a subset of the frames and evaluate on the remaining frames. | Qualitative results for a single scene can be seen in Figure 5, Table 2 shows a quantitative evaluation over all sequences. For more results we refer to the appendix. We see that our model produces photorealistic renderings of the scene, even for the predicted frames. While both baselines yield similar results on the t... | C |
For our simulations, we consider a scenario in which the data traffic is modeled according to 3GPP TSG-RAN1 #48 R1-070674 [22]. This leads to a complex data traffic model suitable for 5G wireless services. Here, we particularly model a gaming data traffic, where the data dimension is 3, and packet size ... |
First, in Figure 2, we compare the quantum communication resources needed for QSC and semantic-agnostic frameworks. We observe that as the data traffic increases (represented by |𝒳|), the number of semantic concepts extracted will increase, which causes the monotonic increase ... |
As discussed earlier, the QSC framework ensures minimality of quantum communication resources by extracting and compressing the semantic representations of the data, unlike existing semantic-agnostic QCNs. Moreover, to assess the accuracy of the QSC performance within the quantum semantics’ extraction, transmission, r... |
Towards this goal, the main contribution of this letter is a novel resource-efficient QCN framework, dubbed the framework. This framework draws upon recent advancements in two key quantum information science areas. First, it utilizes high-dimensional quantum information and to extract underlying structures of classi... |
In Figure 3, we show the quantum semantic fidelity achieved against the amount of quantum communication resources used for |𝒳| = 500. At low noise, to achieve a quantum semantic fidelity of 0.7, QSC requires around 50% quantum communicatio... | A
The case that x_v ∈ X_{S_v} while x_D ∈ X_{S_v} ... | One can also check whether the sufficient condition in Theorem 2 is violated via the reduced E-verifier RV_E constructed by following Algorithm 2 (in this case, we already know that the sufficient condition will be violated ... | The first condition requires that,
as the interface of the system, the defensive function should be able to react to every observable sequence of events that can be generated by the system, such that defensive actions (deletions, insertions, or replacements) can be utilized. | In other words, in order to retain the privacy of a system, for each s-revealing sequence λ_s, the corresponding defensive actions should include a feasible one that does not disclose the occurrences of the secret events as describ... | If the condition holds, the reduced E-verifier can be used to take defensive actions in response to system activity.
In fact, any action that is feasible at the present state (or one of the present states) of the reduced E-verifier can be taken since there will always be a feasible action in all o... | D
Their collective goal is to maximize the throughput (namely, the number of bits per frame) delivered to UEs. This requires a proper management of the backhaul and access link transmissions during a frame.
Note that one time slot of the frame corresponds to one step in the RL interactions, and one episode in the RL inte... | The reward for the agent i, thanks to its successful backhaul transmission to an IAB-node n, is additionally scaled by the IAB-node n’s buffer length L_n after the transmission is completed. This additional scaling ... | From a different perspective, the action space of each agent only depends on the number of sectors of each antenna panel, which is constant in a specific environment, regardless of how many users exist in the system. Hence, the size of the action space, similar to the DNN’s size, does not depend on the number of UEs.
I... | How to achieve high UE throughput without significantly reducing fairness depends on how Tx antenna panels (agents) point their beams, how IAB-node buffers are refilled, and how data bits flow through the network, crossing IAB-nodes.
These aspects will be managed by the MARL agents trained based on the observation, act... | Through the cooperation among MARL agents, the developed resource allocation approach can coordinate link interference and data caching on IAB-nodes, and capture network dynamics. We have designed different MARL setups for FD and HD node operations. Moreover, we have provided a learning framework considering potential ... | C |
Is this a good statistical protocol? The answer depends on how much money the pharmaceutical company will make, among other things. In particular, depending on the total profit the company earns when they are approved, even companies with ineffective drugs may be incentivized to run a trial. | Case 2: large profit. Suppose that companies who receive approval make $1 billion in profit, 100 times their investment. In this case, agents of type θ=0 would choose to run trials: their expected profit from seeking approval is $40 million. On average, 5% of such agents would receive approval, s... |
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs (θ=0) and companies with effective drugs (θ=1). F... |
Conversely, the statistical protocol changes the incentives of the agents. Consider again the large profit case above, where agents receive 100 times their initial investment if they receive approval. Now, however, suppose the principal changes to a stricter protocol such that the probability of approval is only 0.005... | Case 1: small profit. Suppose that companies who receive approval make $100 million in profit, 10 times their investment. In this case, agents of type θ=0 would choose not to run trials, since their expected profit for running a trial is -$5 million. Hence, all approved drugs would be effective d... | D
Table 3: Non-Linear SSD registration test comparison between Clear, PPIR(MPC), PPIR(FHE)-v1 and PPIR(FHE)-v2. The registration metrics are reported as mean and standard deviation. Efficiency metrics are reported as averages across iterations. RMSE: root mean square error.
| Brain MRI and PET data.
The registration of brain gray matter density images was performed by non-linear registration based on SSD, without gradient approximation, based on a cubic spline model (one control point every five pixels along both dimensions), with multiresolution steps r_1... | We demonstrate and assess the different versions of PPIR illustrated in Section 3 on a variety of image registration problems, namely: (i) SSD for rigid transformation of point cloud data; (ii) SSD with linear and non-linear alignment of whole body positron emission tomography (PET) data; (iii) SSD and MI for mono- and ... | Abdomen MR and CT data. The multimodal dataset Abdomen-MR-CT [25] was used for experiments with ANTs registration based on CC. The data was compiled from public studies of the cancer imaging archive (TCIA) [16] that contains 8 paired scans of MRI and CT from the same patients. The data have an isotropic resolution of 2... | Brain MRI data and whole body PET data: non-linear registration (SSD).
Table 3, comparing Clear and PPIR(MPC), PPIR(FHE)-v1 and v2, showcases the metrics resulting from spline-based non-linear registration between grey matter density images without the application of gradient approximation. Additionally, the table incl... | B |
We use an effective teacher-student pair of ResNet56 - MobileNet for experiments.
The results show that B2KD methods are generally more robust than traditional KD methods for small data sizes, and they can maximally utilize the information in available samples for model compression in extreme cases. |
We conduct different teacher-student model pairs for distillation experiments, and use ResNet32 / ResNet56 / VGG13 / ResNet110 / ResNet50 / ResNeXt101 as teacher models and use ResNet8 / ResNet32 / VGG11 / MobileNet / ResNet34 / ResNeXt50 as student models. |
In all experiments, teacher and student models are trained for 350 epochs, except 12 epochs for MNIST. We use Nesterov SGD with momentum 0.9 and weight-decay 0.0005 for training and use a mini-batch size of 128 images on a single NVIDIA GeForce RTX 3090 GPU. | The initial learning rate is 0.1, except 0.01 for MNIST, and we conduct a multi-step learning rate schedule which decreases the learning rate by 0.1 at the 116th and 233rd... | We use ResNet [18], VGG [46] and MobileNet [20] as the backbone, and adopt standard data augmentation techniques (random crop and horizontal flip) and an SGD optimizer in all experiments.
We consistently train the teacher and student model for 350 epochs, except for 12 epochs for MNIST, and we adopt a mu... | B
σ(f(x)) ≡ ∑_i c_i φ_i,  E_a = ‖∑_{|i|>N} c_i φ_i‖_2 / ‖σ(f(x))‖_2. ... |
Our solution to the problem of aliasing errors is to decouple interpolation from the mapping between functional spaces. It is described in Section 4. However, it is possible to decrease aliasing error even when FNO is used for interpolation. One simply needs to train (or fine-tune) on a sufficiently fine grid. The dec... |
Suppose our function f(x) is (exactly) represented as a Fourier series with |k| < N terms. We can equivalently store values of the function on the uniform grid with 2N+1 points. However, when we apply activation function σ(x)... | Figure 1: (a) the output of neural network N(x) computed on coarse and fine grids. On each subgrid, loss and gradients are zero, so the network provides the best (alas, pathological) approximation to f(x) = 2x on the interval [−1,1] ...
Aliasing error defined in Equation 1 measures the norm of harmonics we cannot possibly resolve on the given grid relative to the norm of the function, transformed by activation. The following result gives the aliasing error for the rectifier and two extreme basis functions. | D
To compute the distillation loss of the aforementioned approach, one needs to select the source feature map from the teacher and the target feature map from the student, where these two feature maps must have the same spatial dimension. As shown in Figure 1 (b), the loss is computed in a one-to-one spatial matching fash... | To this end, we propose a one-to-all spatial matching knowledge distillation pipeline that allows each feature location of the teacher to teach the entire student features in a dynamic manner.
To make the whole student mimic a spatial component of the teacher, we propose the Target-aware Transformer (TaT) to pixel-... | To this end, we propose a novel one-to-all spatial matching knowledge distillation approach. In Figure 1 (c), our method distills the teacher’s features at each spatial location into all components of the student features through a parametric correlation, i.e., the distillation loss is a weighted summation of all stude... |
Figure 2: Illustration of our framework. (a) Target-aware Transformer. Conditioned on the teacher feature and the student feature, the transformation map Corr. is computed and then applied on the student feature to reconfigure itself, which is then asked to minimize the L_2 ... | To compute the distillation loss of the aforementioned approach, one needs to select the source feature map from the teacher and the target feature map from the student, where these two feature maps must have the same spatial dimension. As shown in Figure 1 (b), the loss is computed in a one-to-one spatial matching fash... |
Simultaneous Embedding. Some recent algorithms for simultaneous embedding/multiview embedding include
Multiview Stochastic Neighbor Embedding (m-SNE) [39, 40], based on a probabilistic framework that integrates heterogeneous features of the dataset into one combined embedding, and Multiview Spectral Embedding (MSE) [38]... | The related work section is organized as follows. First, we review several dimensionality reduction algorithms that are widely used in visualization. Second, we delve into the fundamentals of t-SNE, to provide the background information needed for its generalization, ENS-t-SNE. Third, we review algorithms for su... | Dimension Reduction.
A wide variety of dimension reduction techniques abound: Principal Component Analysis (PCA) [23], Multi-Dimensional Scaling (MDS) [32], Laplacian Eigenmaps [6], t-Distributed Stochastic Neighbor Embedding (t-SNE) [28], Uniform Manifold Approximation and Projection (UMAP) [29]. These techniques atte... | Multi-view Data Visualization via Manifold Learning [31], proposes extensions of t-SNE, LLE and ISOMAP, for dimensionality reduction and visualization of multiview data by computing and summing together the gradient descent for each data-view.
Multi-view clustering for multi-omics data using unified embedding [30] uses... |
In Fig. 4, we compare the MPSE embedding of Palmer’s Penguins dataset to ENS-t-SNE using the same variables. In the first view (Fig. 4(b)), blue and orange points are mixed, and in the second view (Fig. 4(c)), squared and circled shapes are mixed. ENS-t-SNE, h... | C
Related Work. Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational a... |
In contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain significantly more challenging. Due to a lack of the Markov property, the low-dimensional feature of the observation at each step is insufficient for the prediction and control of the future (Sondik, 1971;... | To learn a sufficient embedding for control, we utilize the low-rank transition of POMDPs. Our idea is motivated by the previous analysis of low-rank MDPs (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021). In particular, the state transition of a low... | Deep reinforcement learning demonstrates significant empirical successes in Markov decision processes (MDPs) with large state spaces (Mnih et al., 2013, 2015; Silver et al., 2016, 2017). Such empirical successes are attributed to the integration of representation learning into reinforcement learning. In other words, ma... | Related Work. Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational a... | B |
Such construction contrasts sharply with previous works on offline RL which build confidence regions via either least square regression or maximum likelihood estimation (Xie et al., 2021; Uehara and Sun, 2021; Liu et al., 2022).
Furthermore, we develop a novel theoretical analysis to show that any function in the confi... | From a theoretical perspective, the identification result and the backward induction property of the bridge functions provide a way of decomposing the suboptimality of the learned policy in terms of statistical errors of the bridge functions.
When combined with the pessimism and the fast statistical rates enjoyed by an... | To learn the optimal policy in the face of distributional shift, we adopt the pessimism principle which is shown to be effective in offline RL in MDPs (Liu et al., 2020b; Jin et al., 2021; Rashidinejad et al., 2021; Uehara and Sun, 2021; Xie et al., 2021; Yin and Wang, 2021; Zanette et al., 2021; Yin et al., 2022; Yan ... | Finally, leveraging the backwardly inductive nature of the bridge functions, our proposed confidence regions and analysis take the temporal structure into consideration, which might be of independent interest to the research on dynamic causal inference (Friston et al., 2003).
| Offline RL for POMDPs is known to be intractable in the worst case (Krishnamurthy et al., 2016). So we first identify a benign class of POMDPs where the causal structure involving latent states can be captured by only the observable variables available in the dataset 𝔻.
For such a class of POMD... | C |
where ℒ(x, λ) = f(x) + λᵀc(x) is the La... | Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive. It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those me... |
There are numerous methods for solving constrained optimization problems, such as projection-based methods, penalty methods, augmented Lagrangian methods, and sequential quadratic programming (SQP) methods (Nocedal2006Numerical). This paper particularly considers solving constrained stochastic optimization problems vi... | On the other hand, a growing body of literature leverages optimization procedures to facilitate online inference, starting with Robbins1951stochastic; Kiefer1952Stochastic and continuing through Robbins1971convergence; Fabian1973Asymptotically; Ermoliev1983Stochastic. To study the asymptotic distribution of stochastic ... | In particular, the StoSQP method computes a stochastic Newton direction in each iteration by solving a quadratic program, whose objective model is estimated using the new sample. Then, the method selects a proper stepsize to achieve a sufficient reduction on the merit function, which balances the optimality and feasibi... | A |
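The fragments above write the Lagrangian ℒ(x, λ) = f(x) + λᵀc(x) and describe SQP methods that solve a quadratic program each iteration. A bare-bones deterministic Newton-SQP step for equality constraints, a textbook KKT solve rather than the stochastic StoSQP variant discussed above; all names are illustrative:

```python
import numpy as np

def sqp_step(grad_f, hess_L, c, jac_c, x, lam):
    """One local SQP step for min f(x) s.t. c(x) = 0.

    Solves the KKT system
        [H  A^T] [dx  ]   [-(grad f + A^T lam)]
        [A   0 ] [dlam] = [-c(x)              ]
    where H approximates the Hessian of the Lagrangian and A = Jacobian of c.
    """
    H = hess_L(x, lam)
    A = np.atleast_2d(jac_c(x))
    g = grad_f(x) + A.T @ lam           # gradient of the Lagrangian in x
    m = A.shape[0]
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = -np.concatenate([g, np.atleast_1d(c(x))])
    sol = np.linalg.solve(K, rhs)
    return x + sol[: len(x)], lam + sol[len(x):]

# demo: min x1^2 + x2^2  s.t.  x1 + x2 = 1
x_new, lam_new = sqp_step(
    grad_f=lambda x: 2 * x,
    hess_L=lambda x, lam: 2 * np.eye(2),
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_c=lambda x: np.array([[1.0, 1.0]]),
    x=np.zeros(2), lam=np.zeros(1),
)
```

For a quadratic objective with a linear constraint the KKT system is exact, so a single step lands on the constrained minimizer (0.5, 0.5) with multiplier −1.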
This condition was discussed by Bercovier and Pironneau in [2], which turned out as an enabler of (3), see [15]. We will refer to this inf-sup condition as the discrete BP condition.
In [2] the proof was given for k = 2 and for meshes made of rectangles for d = 2 and bricks for d = 3... |
The rest of the paper is organized as follows. In Section 2 the technique of T-coercivity is discussed, which provides important auxiliary results for Section 3, which is the main section of the paper and contains the analysis of the discrete inf-sup conditions. In Appendix A known results on the continuous... | Previous work on the discrete LBB condition of the Stokes problem for the generalized Taylor-Hood family Q_k–Q_{k−1} on quadrilateral/hexahedral meshes focused on the case... | The LBB condition is essential for the well-posedness of the Stokes problem. Its verification for the case (a) is typically done by an indirect proof using a compactness argument. We are aware of only one reference, namely [1], which contains a proof for the case |Γ_N| > 0... |
A popular class of finite element methods for discretizing the Stokes problem is the generalized Taylor-Hood family P_k–P_{k−1} on triangular/tetrahedral meshes with cont... | A
We performed ablation studies using ImageNet-1k and CIFAR-10 datasets on WaveMix to understand the effect of each type of layer on performance by removing the 2D-DWT layer, replacing it with Fourier transform or random filters, as well as learnable wavelets. All of these led to a decrease in accuracy. Those methods tha... | The WaveMix block extracts learnable and space-invariant features using a convolutional layer, followed by spatial token-mixing and downsampling for scale-invariant feature extraction using multi-level 2D-DWT [52], followed by channel-mixing using a learnable MLP (1×\times×1 conv) layer, followed by restoring spatial r... |
Replacing the filters of 2D-DWT with random filters of the same size as the Haar wavelet resulted in a 6% fall in accuracy, confirming that the fixed kernel weights of 2D-DWT are already well-suited for computer vision based on previous studies. GPU RAM consumption increased by 8% and accuracy decreased by 5% when we replaced... | As shown in Figure 1 (c), the design of the WaveMix block is such that it does not collapse the spatial resolution of the feature maps, unlike CNN blocks that use pooling operations [9]. And yet, it reduces the number of computations required by reducing the spatial dimensions of the feature maps using 2D-DWT, which tr... | We performed ablation studies using ImageNet-1k and CIFAR-10 datasets on WaveMix to understand the effect of each type of layer on performance by removing the 2D-DWT layer, replacing it with Fourier transform or random filters, as well as learnable wavelets. All of these led to a decrease in accuracy. Those methods tha... | B
order 1 system by adding new variables x_{i,k} standing for x_i^{(k)}, for 0 ≤ k < r_i... | x_i^{(r_i)} = f_i(x),  1 ≤ i ≤ s, | δ := ∑_{i=1}^{m} ∑_{k∈ℕ} u_i^{(k+1)} ∂/∂u_i^{(k)}... | order 1 system by adding new variables x_{i,k} standing for x_i^{(k)}, for 0 ≤ k < r_i... | x'_{i,k} = x_{i,k+1}, for 0 ≤ k < r_i − 1... | D
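The row above describes the standard reduction of x_i^{(r_i)} = f_i(x) to an order-1 system via new variables x_{i,k} = x_i^{(k)} with x'_{i,k} = x_{i,k+1}. A sketch for a single scalar equation, with a small RK4 integrator included only to exercise the reduction; function names are illustrative:

```python
import numpy as np

def reduce_order(f, r):
    """Turn x^(r) = f(x, x', ..., x^(r-1)) into a first-order system.

    State y = (x_0, ..., x_{r-1}) with x_k standing for x^(k):
        x_k' = x_{k+1}  for k < r-1,   x_{r-1}' = f(y).
    """
    def rhs(y):
        return np.concatenate([y[1:], [f(y)]])
    return rhs

def rk4(rhs, y0, t_end, n_steps=1000):
    """Classical 4th-order Runge-Kutta integration of y' = rhs(y)."""
    y = np.asarray(y0, dtype=float)
    h = t_end / n_steps
    for _ in range(n_steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

# x'' = -x reduced to (x0' = x1, x1' = -x0); exact solution x(t) = sin t
y_final = rk4(reduce_order(lambda y: -y[0], r=2), [0.0, 1.0], np.pi)
```

At t = π the exact state is (sin π, cos π) = (0, −1), which the reduced system reproduces to integrator accuracy.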
Summary of different methodologies for addressing tasks related to informal mathematical text. The methods are categorised in terms of (i) The task description; (ii) Learning: Supervised (S), Self-supervised (SS), Unsupervised (UNS), Rule-based (R) (no learning); (iii) Approach description; (iv) Dataset used; (v) Metri... |
Identifier-Definition Extraction. Leading work in premise selection ferreira2020premise; ferreira2021star and informal theorem proving welleck2021towards has explicitly highlighted the need for improved pairing of variables with descriptions. The varied tasks related to identifier-definition extraction lack communally... |
A significant proportion of variables or identifiers in formulae or text are explicitly defined in the context wolska2010symbol. Descriptions are usually local to the first instance of the identifiers in the discourse. It is the broad goal of identifier-definition extraction and related tasks to pair up identifiers wi... |
There is a high variability in scoping definitions. The scope from which identifiers are linked to descriptions varies significantly, and is one of the reasons it is difficult to compare performance of methods even when tackling the same variant of the task schubotz2017evaluating; alexeeva2020mathalign. At the smalles... |
The task hasn’t converged to a canonical dataset. Despite the clarity of its overall aim, the task has materialised into different forms: kristianto2012extracting predict descriptions given expressions, pagael2014mathematical predict descriptions given identifiers through identifier-definition extraction, stathopoulos... | B |
This dissimilarity is a squared distance weighted by the block proportions between the connectivity matrices of the two networks. The parameters (block proportions and connectivity matrices) are computed separately on the two networks with the node grouping provided by the inference on the whole sub-collection. |
This dissimilarity is a squared distance weighted by the block proportions between the connectivity matrices of the two networks. The parameters (block proportions and connectivity matrices) are computed separately on the two networks with the node grouping provided by the inference on the whole sub-collection. |
Section 2 recalls the definition of the Stochastic Block Model on a single network. We motivate our new approach by inferring it independently on a collection of food webs. Then in Section 3, we present the various variants of the colSBM. The ... | We then use 2-medoids clustering to split the sub-collection of networks based on the dissimilarity measures. A split is validated if it increases the score of Equation (12). The mathematical definition of the dissimilarity measure and details on the recursive clustering algorithm are given in Appendix A.
| Figure 4: Above: Clustering and connectivity structures of a collection of 67 predation networks from the Mangal database into 5 sub-collections. The length of the dendrogram is given by the difference in BIC-L to the best model. Below: Contingency table of the clustering found by π-colSBM... | C
One of the common stopgaps for this problem is to continuously expand the size of datasets, in order to strengthen the learned invariance of the target objects, by getting rid of other mechanisms or factors of variation.
For example, ImageNet [10], which is a typical dataset for training classification and detection al... | Even so, popular classification models trained with ImageNet have experienced 40–45% performance drop when tested on ObjectNet, a bias-controlled dataset [4] that produces thousands of images with 600 combinations of parameters, by intervening only on three mechanisms in the photo generation ... | It is shown in the result that, by leveraging the learned knowledge of mechanisms, the estimator and the identifier as auxiliaries can improve the classification accuracy significantly with extra explainability.
When the two modules operate independently of the classifier, without accessing any data of hand-written dig... | This is based on the observation of the learning curves of the three mechanisms (in Fig. 12 in Appendix A).
Fast learning on translation and scaling and a slow one on rotation can be noticed for all models, which indicates that CNN models have greater difficulty learning the mechanism of rotation. | The first observation is that, in the case of the rotated test set, the basic classifier has experienced nearly a 40% performance drop.
However, the accuracy of CED has increased to 77% when k=5 (CED_5) and further to 82% when k=10 (C... | A
We further evaluate puNCE in the binary semi-supervised setting. Training data contains samples from both the classes and a set of unlabeled samples. In particular, we perform experiments when only 1%, 5% and 10% of the data is available (Figure 5.2). | We further evaluate puNCE in the binary semi-supervised setting. Training data contains samples from both the classes and a set of unlabeled samples. In particular, we perform experiments when only 1%, 5% and 10% of the data is available (Figure 5.2).
It is important to note that, unlike PU Learning settings, here we p... | We further evaluate puNCE in the binary semi-supervised setting. Training data contains samples from both the classes and a set of unlabeled samples. In particular, we perform experiments when only 1%, 5% and 10% of the data is available (Figure 5.2).
It is important to note that, unlike PU Learning settings, here we p... | It is important to note that, unlike PU Learning settings, here we perform downstream tuning only over the labeled data. We see similar trends as in PU Learning experiments - puNCE improves over infoNCE by 4.13% and over SCL by 1.39% when using only 1% data on PU CIFAR10. With only 10% data puNCE is able to achieve sim... | even with small amount of weak supervision,
puNCE can dramatically improve the quality of the resulting embedding compared to unsupervised contrastive training using infoNCE. For example, on PU CIFAR10 with ResNet-18 puNCE achieves over 3% improvement compared to infoNCE while using 20% labeled data, and over 2% improv... | C |
The (N × K) outgoing community membership matrix in an SBM for directed networks, where K is the number of communities generating the network. For an undirected network U = V (see below) and we simply call U... |
In contrast to these previous approaches to identify layer interdependence, we propose that the nonnegative Tucker decomposition does so by identifying which layers can be described by shared generative stochastic block models. As we will discuss in Section 3.2, the NNTuck is a generalization of the strata SBM model f... | De Domenico et al. (2015) and De Domenico and Biamonte (2016) develop information-theoretic tools to identify layer dependency and cluster similar layers. In Stanley et al. (2016), the authors study layer interdependence by categorizing layers into groups such that all layers were drawn from the same SBM. In the MULTIT... | Stochastic block models (SBMs) identify latent groups of nodes and the density of connections between nodes in these groups as a descriptive and/or generative tool for analyzing networks. Introduced by White et al. (1976) and expanded by Holland et al. (1983),
SBMs decompose a network into factors that aim to uncover g... |
Distinct from heterogeneous networks (e.g., Dong et al., 2020), wherein there are different categories of nodes and edges, multilayer networks have only one type of node and only distinguish between different types of edges. Similarly, multigraphs allow for multiple edges to exist between nodes and labels correspondin... | C |
In this scenario, there is also a significant performance drop (e.g., -11.0% on MetaQA 3-hop and -53.1% on PathQuestion 3-hop).
From the above three scenarios, we observe that both the re-ranking module and question/graph encoders are necessary, and that they are complementary to each other. | Compared to Re-ranking Removed, i.e., the original encoders, performance degradations can be observed on all datasets (e.g., -26.2% on MetaQA 3-hop and -45.5% on PathQuestion 3-hop).
In G-Encoder (Linear), we further replace the conventional GCNs with feedforward neural networks. | Complete Model reports the results of the complete QAGCN model. Re-ranking Removed shows the performance when candidate answers are not re-ranked in Answer Search.
The importance of the re-ranking is demonstrated by the performance degradation (e.g., -43.3% on MetaQA 3-hop and -44.2% on PathQuestion 3-hop). | In this scenario, there is also a significant performance drop (e.g., -11.0% on MetaQA 3-hop and -53.1% on PathQuestion 3-hop).
From the above three scenarios, we observe that both the re-ranking module and question/graph encoders are necessary, and that they are complementary to each other. | While for questions with small subgraphs, in which entities do not have sufficient contexts, improvements are observed (e.g., +11.5% on MetaQA 1-hop and +5.5% on PathQuestion 2-hop).
In Q&G-Encoders (Linear), based on G-Encoder (Linear), we further replace the LSTMs in the question encoder with feedforward neural netwo... | A |
When classifying the phrase “Do you want to play football?”, the word football is recognized and each of the football, topic
weights are connected to the sum qubit for the corresponding topic. Each of the topic qubits is measured, and the winner is the topic | and two-qubit gates, in particular the ‘controlled-X’ or ‘controlled-NOT’ (CNOT) gate that performs an X-rotation on the second qubit
if the first qubit is in the |1⟩ state. Example circuit diagram components for these are shown in Figure 2. | similar to a realistic use case for quantum NLP. The average number of words in each review is between 228 and 229, and classification experiments were performed in batches of 50 documents: the number of word tokens involved in each experiment was around 11000 (whereas the total size of
the lambeq training and test... |
Building off of an n-qubit feature map for n-dimensional word vectors, the same QSVM classification process was followed for densely encoded feature maps (alexander2022quantum). In this case, vector representations were encoded into fewer qubits in the feature map circuit, using log₂(n)... | The design above is clear but wasteful — a system that requires distinct bits for each word in the vocabulary would be fine in classical
but not yet in quantum computing. In machine learning terms, using a qubit for each word is an example of a ‘one-hot encoding’, | D |
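The excerpt above contrasts the one-hot, qubit-per-word encoding with dense encodings that fit a word index into log₂(n) qubits. A count-only sketch with no quantum library; the helper names are illustrative:

```python
import math

def qubits_one_hot(vocab_size: int) -> int:
    """One qubit per word: qubit count grows linearly with vocabulary size."""
    return vocab_size

def qubits_dense(vocab_size: int) -> int:
    """Binary (dense) encoding: a word index fits in ceil(log2(n)) qubits."""
    return max(1, math.ceil(math.log2(vocab_size)))

def dense_index_bits(word_index: int, vocab_size: int) -> str:
    """Basis state |b...b> labelling the word index in binary."""
    return format(word_index, f"0{qubits_dense(vocab_size)}b")
```

At the ~11000-token scale quoted above, one-hot needs ~11000 qubits while a dense basis-state encoding needs only 14, since 2^14 = 16384.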
For example, Figure 3(a) has the same h_norm as 3(b) and 3(c). The absence of edge density in the homophilic metric brings undesirable results in restructuring, as the measure always prefers a disconnected graph. Moreover, although h_norm... |
Figure 3: Examples of graphs with different label-topology relationships and comparison of different homophily measures. The node colour represents the node labels. The red edges connect nodes of different labels, while the green edges connect nodes of the same labels. Figure 3(a) - 3(c) shows homophilic graphs of dif... |
As homophily and performance are correlated, in the restructuring process the number of edges is chosen based on the homophily level on the validation set. As shown in Equation 5, we chose 48000 edges for Chameleon and 26000 edges for Squirrel, each corresponding to the first peak of homophily on... | Propositions 1 and 2 define the limits of homophily and heterophily given a set of nodes and their labels. Propositions 3 and 4 define neutral graphs which are neither homophilic nor heterophilic. Proposition 3 states that a uniformly random graph, which has no label preference on edges, is neutral thus ... |
We run GCN and SGC on the synthetic dataset with controlled homophily ranging from 0 to 1. The model performance with homophily is plotted in Equation 4. As expected, a higher homophily level corresponds to better performance for both GCN and SGC. All models reach 100% accuracy where homophily is la... | C
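The row above tunes the restructured edge count by homophily level. For concreteness, a sketch of the plain edge-homophily ratio, i.e. the standard fraction-of-same-label-edges definition (the normalized h_norm variant discussed above differs):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label.

    edges: iterable of (u, v) node pairs; labels: sequence or dict node -> label.
    """
    edges = list(edges)
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```

On a 4-cycle with two label classes split 2/2, half the edges join same-label nodes, so the ratio is 0.5.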
Recalling that the risk (i.e. average loss) quantifies quality, this manifests as a multiplicative weights update: α_{ij}^{t+1} ∝ α_{ij}^{t} · exp(−γ ℛ_i(θ_j)). | We further remark that minority groups can be under-served particularly when considering worst-case risk over subpopulations (Hashimoto et al., 2018).
Even at a social welfare maximizer (α⋆, Θ⋆)... |
compared with single-learner settings, as studied by (Hashimoto et al., 2018; Zhang et al., 2019). This resonates with recent work showing that monopolies have higher performative power and lead to lower individual utility (Hardt et al., 20... | It would also be interesting to consider extensions or alternative dynamics models for the learner and subpopulation decisions.
One could investigate competitive learners who explicitly strategize to capture subpopulations (Ben-Porat and Tennenholtz, 2019; Aridor et al., 2020); this setting is related to facility locat... | This is similar to the retention function studied by Hashimoto et al. (2018) and has connections to replicator dynamics, a foundational evolutionary dynamic that can be interpreted as a process of information diffusion and imitation (Sandholm, 2020).
| D |
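The row above gives the update α_{ij}^{t+1} ∝ α_{ij}^{t} · exp(−γ ℛ_i(θ_j)). A minimal sketch in which each subpopulation i reweights its allocation over learners j by their risks and renormalizes; the risk matrix is taken as given and the names are illustrative:

```python
import numpy as np

def mw_update(alpha, risks, gamma=1.0):
    """Multiplicative-weights step: alpha[i, j] <- alpha[i, j] * exp(-gamma * R_i(theta_j)),
    renormalized so each subpopulation's allocation row sums to 1."""
    new = alpha * np.exp(-gamma * risks)
    return new / new.sum(axis=1, keepdims=True)

# one subpopulation, two learners with risks 1.0 and 0.0
out = mw_update(np.full((1, 2), 0.5), np.array([[1.0, 0.0]]))
```

As expected, allocation mass shifts toward the lower-risk learner while each row remains a probability distribution.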
η₁(α, b) = 1 − b/α,  η₂(α, b) = 1 − (1 − b)/(1 − α).
|
This problem is equivalent to Eq. (15), but has no maximum operation. However, now we have non-linear constraints, thus violating the definition of an LP. To correct this, we use a local approximation of η (around an iterate α^(t)... |
A note on usage. By definition, the function η in Eq. (9) is continuous in the open section (0,1)², but at the boundaries, it is easy to see that η(0,0) = 0, while lim_{δ→0} η(δ, 0) = 1... | This split is an alternative formulation to the original definition of η in Eq. (9), only now it has no singularity point, and can be treated in the same way that we treat the maximization over all the terms involving η. That is, the relevant constraint satisfies:
|
To solve the problem we again use the sequential linear programming approach, as the structure of the problem again resembles a linear program. Given the t-th iterate x^(t) for all the variables defined in Eq. (28), ... | C
The top-1 motif set found by the approximate k-Motiflets alg. corresponds to the activation phase and the top-2 motif to the recovery phase. All methods find the activation phase, but with up to 100% larger extent. Valmod and LM found the recovery phase, again with up to 100%... |
ECG Heartbeats: This data set was used throughout this paper. It contains two top motif sets, namely calibration and heartbeats (Figure 9). We discuss only the top-1 motif, and our webpage shows the full results (k-Motiflets Source Code and Raw Results, 2022). Learn-l took 1.5s, and learn-k took 0.5s... | Given silver standard parameters, all competitor methods find the activation phase, e.g. the found motif set overlaps with the actual motifs, but with up to 100% larger extent. k-Motiflets and VALMOD are the only ones to identify both the activation phase as top-1 motif and recovery phase as t... | The top-1 motif set found by the approximate k-Motiflets alg. corresponds to the activation phase and the top-2 motif to the recovery phase. All methods find the activation phase, but with up to 100% larger extent. Valmod and LM found the recovery phase, again with up to 100%...
Muscle Activation: The two top motifs present in this dataset are the activation (top-1) and the recovery phase (top-2) of the Gluteus Maximus muscle and have 13 and 12 occurrences, respectively. Learn-l took 5.0s and learn-k took 3.9s. | D
An offline algorithm usually needs to acquire a finite amount of data generated by an unknown, stationary probability distribution in advance, which heavily relies on the memory of the system especially when the dataset is far too large, while an online algorithm processes infinite data streams that are continuously g... |
In reality, many learning tasks process very large datasets, and thus decentralized parallel processing of data by communicating and computing units in the network is necessary, see e.g. [23]-[24] and references therein. Besides, if the data contains sensitive private information (e.g. medical and social network data,... |
An offline algorithm usually needs to acquire a finite amount of data generated by an unknown, stationary probability distribution in advance, which heavily relies on the memory of the system especially when the dataset is far too large, while an online algorithm processes infinite data streams that are continuously g... | The conditions (i) and (iii) in Theorem 1 imply ∑_{k=0}^{∞} (a(k) + b(k)) = ∞, which means that the algorithm g... | Besides, we consider both additive and multiplicative communication noises in the process of the information exchange among nodes. All these challenges make it difficult to analyze the convergence and performance of the algorithm, and the methods in the existing literature are no longer applicable.
For example, the met... | A |
Since the theoretical bounds are defined in terms of quantities that are not always easily computable, we also propose computationally inexpensive heuristics while still maintaining a close relation to the rigorous theoretical estimates. The heuristics allow us to dynamically adjust the accuracy in the AAR calculations wi... | Since the theoretical bounds are defined in terms of quantities that are not always easily computable, we also propose computationally inexpensive heuristics while still maintaining a close relation to the rigorous theoretical estimates. The heuristics allow us to dynamically adjust the accuracy in the AAR calculations wi... | Our theoretical results allow for accuracy reduction in different calculations performed by AAR on linear fixed-point problems. When the fixed-point operator evaluations are the dominant computational cost of AAR, one may choose to approximate the evaluations of the fixed-point operator to reduce the computational cost...
In the numerical sections, we assess the effectiveness of our heuristics using two different approaches to inject inaccuracies into AAR. The first approach injects error in the evaluations of the fixed-point operator, and the heuristics are used to dynamically adjust the magnitude of the injected error. The second app... |
Guided by the theoretical bounds, we constructed a heuristic that dynamically adjusts the dimension of the projection subspace at each iteration. As the process approaches convergence, the backward error decreases, which allows for an increase of the inaccuracy of the least-squares calculations while still maintaining... | C |
We proposed STAS, a structured way to evaluate the generated summaries. In addition, we conducted a user study to validate and interpret the STAS score ranges. We also proposed topic-controllable methods that employ either topic embeddings or control tokens demonstrating that the latter can successfully influence the s... | There exist several controllable approaches that prepend information to the input source to influence the different aspects of the text such as the style [11] or the presence of a particular entity [12, 13]. Even though this technique can be readily combined with topic controllable summarization, this direction has not... | Controllable summarization belongs to the broader field of controllable text-to-text generation [15, 16]. Several approaches for controlling the model’s output exist, either using embedding-based approaches [5, 6], prepending information using special tokens [11, 12] or using decoder-only architectures [15]. Controllab... |
Future research could examine other controllable aspects, such as style [11], entities [13] or length [17, 13]. In addition, the tagging-based method could be further extended to working with any arbitrary topic, bypassing the requirement of having a labeled document collection of a topic to guide the summary towards ... |
Going one step further, we propose another method for controlling the output of a model based on tagging the most representative terms for each thematic category using a special control token. The proposed method assumes the existence of a set of the most representative terms for each topic. Then, given a document and... | C |
In this paper, we express the cost of the qubit routes through the count and depth of the nearest-neighbour SWAP circuits necessary to implement the routing. Nevertheless, in the context of neutral atom computers, the SWAP cost can be replaced by the cost of shuttling the qubits. | Tiling reduces significantly the time it takes to compute routes for qubit movement. Due to the regular structure of the tiles and the tiled circuit, we can formulate hardware-aware routing algorithms. When compared with automatic routing methods, such as the ones available in Google Cirq, our algorithms are significan... |
Fig. 5 compares the SWAP counts and SWAP depths of the circuits compiled with Cirq and our tiled versions. When compared with automatic routing methods, such as the ones available in Google Cirq, our tiled version of the multiplier improves significantly on both SWAP depth and count. |
Tiling is especially useful for highly regular, frequently repeated sub-circuits such as those in quantum arithmetic. We illustrated the capabilities of standard cells by the example of using 3D tiles which were specifically designed for cubic qubit lattices. We demonstrated the effectiveness of our method by using it... |
We illustrate the gate scheduling of the tiled multiplication circuit. The goal is to parallelise as many gates as possible: T gates, CNOTs and SWAPs. We analyse the cost of long range interactions in terms of SWAP gate counts. In the following, we present one of the algorithms used for extracting the schedules. In th... | A |
The param-net is a coarse-to-fine model which uses a series of expansive layers to construct the output image from input image features along with the parameters. It consists of eight blocks where each block consists of a transposed convolutional layer for upsampling followed by three convolutional layers that act as ...
After training the autoencoder, the outputs of 8 encoding layers are used as input image features for the later part of our network. The encoding layers transform the input image into a low-dimensional vector space. This compression removes redundancy in the input. Providing these low dimensional representations of the i...
The parameters stacked in between the layers are reshaped to the size of the layer they are stacked onto. The reason we pass the parameters in each block is that if we only passed them in the first block, it would be difficult for the later blocks to retain them. This problem is somewhat similar to the degradation prob... | The param-net is a coarse-to-fine model which uses a series of expansive layers to construct the output image from input image features along with the parameters. It consists of eight blocks where each block consists of a transposed convolutional layer for upsampling followed by three convolutional layers that act as ...
In this model, the parameters of the input image are not fixed. Hence, we also pass the input image parameters along with the output image parameters. This model is more generalizable than the previous one but also has a more complex non-linearity to learn and hence, lags behind in performance compared to default-to-p... | B |
The evolution of discrete free energy (evaluated on a $101\times 101$ uniform grid) for both methods is shown in Fig. 3(i). Clearly, the neural-network-based algorithm achieves energy stability and the result is consistent with the FDM. It is worth mentioning that the size of the optimization problem ... | Next, we consider an Allen-Cahn type equation, which is extensively utilized in phase-field modeling, and has become a versatile technique to solve interface problems arising from different disciplines [15].
In particular, we focus on the following Allen-Cahn equation on $\Omega=(-1,1)^{2}$ ...
Due to the complexity of the initial condition, we utilized a larger neural network compared to the one employed in the previous subsection. Our choice was a 1-block ResNet, which has two fully connected layers, each containing 20 nodes. The total number of parameters is 921. The nonlinear activation function... | Various numerical experiments are presented to demonstrate the accuracy and energy stability of the proposed numerical scheme. In our future work, we will explore the effects of different neural network architectures, sampling strategies, and optimization methods, followed by a detailed numerical analysis. Additionally... | Many problems in soft matter physics, material science, and machine learning can be modeled as $L^{2}$-gradient flows. Examples include the Allen–Cahn equation [15], Oseen–Frank and Landau–de Gennes models of liquid crystals [11, 45], phase field crystal mod... | A |
is an equivalence of categories. This implies that $\mathrm{R}j_{!}F\simeq 0$ in $\mathsf{D}^{\mathrm{b}}(\mathbb{P},\dot{T}^{\ast}\mathbb{P})$ ... |
$\mathrm{R}\Gamma_{c}(\mathbb{V},\mathbf{k}_{C})\simeq 0.$ | Since the morphism $\mathrm{R}\Gamma_{c}([0,1];\mathbf{k}_{[0,1]})\to\mathrm{R}\Gamma_{c}([0,1];\mathbf{k}_{\{1\}})$ ... | $\simeq\mathrm{R}\Gamma_{c}(u^{-1}(t);F|_{u^{-1}(t)})$ ...
$(\mathrm{R}j_{!}F)_{[x;0]}\simeq\mathrm{R}\Gamma_{c}(j^{-1}([x;0]),F|_{j^{-1}([x;0])})\quad\text{and}\quad j^{-1}([x;0])=\emptyset.$ | D |
We show that the impact on some commonly used and competitive approximate score-based algorithms which search in DAG-space is considerable, and noteworthy effects are also found in some hybrid and constraint-based algorithms. We recognise that this sensitivity is unlikely to arise in score-based algorithms which search... |
By examining the way a DAG develops iteration by iteration in the simple HC algorithm, we find that arbitrary decisions about edge modifications play an important role in determining the accuracy of the learnt graph and thus, in judging the structure learning capability of an algorithm. This is particularly so when HC... | Figure 2 thus provides an overview of the HC learning process and the proportion of edge modifications whose orientation is determined arbitrarily by the variable order. Based on these results, it is reasonable to conclude that variable ordering has a significant effect in the initial iterations and that part of this e... |
We can contrast this behaviour with that of Hailfinder in Figure 2(b). In that case, the first eight highest-scoring arc additions are all between completely separate pairs of nodes, so that the DAG at iteration eight consists of eight unconnected arcs. Thus, at that iteration, the proportion of arbitrary arcs remains... |
We first examine the individual changes - arc addition, reversal or removal - which the HC algorithm makes at each iteration as it learns the DAG structure. In particular, we note where changes are arbitrary; that is, where two neighbouring DAGs are Markov equivalent. Figure 2 shows the proportion of graphical modific... | A |
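A minimal sketch of how such arbitrary decisions arise: when two candidate arc modifications are score-equivalent (e.g. the two orientations of an arc between Markov-equivalent DAGs), a greedy implementation must break the tie somehow, and breaking it by variable order makes the learnt orientation order-dependent. The scores and tie-break rule below are illustrative, not taken from the benchmarked implementations.

```python
def hc_tiebreak(candidates, node_order):
    """Pick the highest-scoring candidate arc; break exact score ties by the
    (arbitrary) variable ordering, mimicking how a greedy hill-climbing
    implementation resolves Markov-equivalent choices such as A->B vs B->A."""
    def key(c):
        (u, v), score = c
        # higher score first; ties broken by position of source, then target node
        return (-score, node_order.index(u), node_order.index(v))
    return min(candidates, key=key)

# Two Markov-equivalent single-arc DAGs score identically:
cands = [(("A", "B"), 3.7), (("B", "A"), 3.7)]
best1 = hc_tiebreak(cands, ["A", "B"])   # -> (("A", "B"), 3.7)
best2 = hc_tiebreak(cands, ["B", "A"])   # -> (("B", "A"), 3.7)
```

Reordering the variables flips the learnt orientation even though both choices are equally well supported by the score, which is exactly the sensitivity discussed above.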
Additionally, the evaluation for OPT family models and LLaMA family models can be found in Appendix Table 6 and Table 7, respectively, providing a comprehensive overview.
Latency measurements are conducted within the FasterTransformer framework, exploring different GPU configurations to assess potential speed-up gains ... | The results clearly indicate that LUT-GEMM provides lower latency as $q$ decreases, although an excessively small $g$ may have a marginal adverse impact on latency.
All in all, by integrating LUT-GEMM and OPTQ at the expense of an acceptable increase in perplexity, it is possible to reduce the number ... | From our observations, we can conclude the following: 1) Reducing the group size ($g$) effectively decreases perplexity, even when employing a simple RTN quantization scheme, at the cost of a marginal increase in latency, 2) Increasing the number of GPUs (and, consequently, parallelism) does not significantly ... | To address such a concern, researchers have proposed to use model parallelism, which distributes computations over multiple GPUs through GPU-to-GPU communication (Shoeybi et al., 2019; Narayanan et al., 2021).
Nevertheless, it is worth noting that model parallelism introduces additional overheads, stemming from the int... | As evidenced by the increase in the latency ratio of communication, such reductions in utilization indicate that some GPUs can be temporarily idle until all GPUs are synchronized.
Accordingly, the speed-up that can be obtained by tensor parallelism is a lot smaller than the number of GPUs. | B |
Let $G=(V,E)$ be a graph and $f:V\to\mathbb{Z}$ be a divisor. If there is a chip-firing game starting from $f$ where each vertex fires at least once, then $f$ is a non-halting configuration. | Goal: Compute the distance $\operatorname{dist^{nh}_{G}}(f)$ of $f$ from a non-halting state.
| Now we turn to the proof of the main result of the paper, and show that approximating the rank of a graph divisor within reasonable bounds is hard. The high level idea of the proof is the following. First, we show that the Minimum Target Set Selection problem reduces to computing the distance of a divisor on an auxilia...
Let $G=(V,E)$ be a graph and $f:V\to\mathbb{Z}$ be a divisor. If there is a chip-firing game starting from $f$ where each vertex fires at least once, then $f$ is a non-halting configuration.
A divisor $f$ is called recurrent if there is a non-trivial chip-firing game starting from $f$ that leads back to $f$. Clearly, a recurrent divisor $f$ is also non-halting. Besides the distance of a divisor from a non-halting state, its distance from a recurrent state also plays a c... | D |
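The chip-firing dynamics described above can be sketched as follows; this greedy simulation (fire any vertex holding at least its degree in chips, where firing sends one chip along each incident edge) is only an illustration of the non-halting certificate, not the paper's algorithm.

```python
def fires_all(adj, f, max_steps=1000):
    """Greedily fire vertices holding at least deg(v) chips; a firing vertex
    sends one chip to each neighbour.  Returns True if every vertex fired at
    least once, i.e. the certificate for a non-halting configuration given
    in the text.  A toy sketch for small graphs."""
    chips = dict(f)
    fired = {v: False for v in adj}
    for _ in range(max_steps):
        ready = [v for v in adj if chips[v] >= len(adj[v])]
        if not ready:
            break
        v = ready[0]
        chips[v] -= len(adj[v])
        for w in adj[v]:
            chips[w] += 1
        fired[v] = True
        if all(fired.values()):
            return True
    return all(fired.values())

# Triangle graph: giving each vertex 2 chips lets every vertex fire once.
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(fires_all(triangle, {"a": 2, "b": 2, "c": 2}))  # True
print(fires_all(triangle, {"a": 0, "b": 0, "c": 0}))  # False
```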
A Cross Convolutional Fusion (CCF) operation with a cross-receptive field is proposed to make use of the associated information. It not only uses the benefit of convolution to minimize parameters but also integrates features from both temporal and channel dimensions in an efficient fashion. |
We train and test our method on a workstation equipped with two Tesla P4 and two Tesla P10 GPUs. Owing to memory consumption, we use the Tesla P10 to train and test the CIFAR10-DVS, N-Caltech 101, and DVS128 Gesture datasets, and the Tesla P4 to train and test the Fashion-MNIST and CIFAR10/100 datasets. I...
In recent years, the direct application of various ANNs algorithms for training deep SNNs, including gradient-descent-based methods, has gained traction. However, the non-differentiability of spikes poses a significant challenge. The Heaviside function, commonly used to trigger spikes, has a derivative that is zero ev... | During the forward pass, the Heaviside function is retained, while a surrogate function replaces it during the backward pass. One simple choice for the surrogate function is the Spike-Operator [29], which exhibits a gradient resembling a shifted ReLU function. In our work, we go beyond the conventional surrogate gradie... | Despite significant progress, SNNs have yet to fully exploit the superior representational capability of deep learning, primarily due to their unique training mode, which struggles to model complex channel-temporal relationships effectively. To address this limitation, Zheng et al. [11] introduced a batch normalization... | B |
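One common workaround, sketched below, pairs the exact Heaviside function in the forward pass with a smooth surrogate derivative in the backward pass; the fast-sigmoid-style surrogate and the parameter `alpha` are illustrative choices, not necessarily the ones used in the works cited above.

```python
def heaviside(x):
    # Forward pass: the non-differentiable spike trigger.
    return 1.0 if x >= 0.0 else 0.0

def surrogate_grad(x, alpha=2.0):
    """Backward pass: derivative of a fast-sigmoid-style surrogate,
    1 / (alpha*|x| + 1)^2, used in place of the Heaviside derivative,
    which is zero everywhere except at the origin."""
    return 1.0 / (alpha * abs(x) + 1.0) ** 2

spike = heaviside(0.3)          # 1.0: the neuron fires
g_near = surrogate_grad(0.1)    # large, usable gradient near the threshold
g_far = surrogate_grad(3.0)     # small gradient far from the threshold
```

The surrogate supplies a nonzero, smoothly decaying gradient around the firing threshold, so gradient descent can still assign credit to membrane potentials close to spiking.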
$v=v_{h}=w_{h}\in\mathring{V}_{h}$, we get
| $\llbracket w_{h}\cdot n\rrbracket_{e}$ and
$\llbracket w_{h}\times n\rrbracket_{e}$ ... | $\bigl\langle\nabla\cdot u,\llbracket w_{h}\cdot n\rrbracket\bigr\rangle_{\mathcal{E}_{h}^{\circ}}$
$=\bigl\langle\nabla\cdot u-P_{h}(\nabla\cdot u),\llbracket w_{h}\cdot n\rrbracket\bigr\rangle_{\mathcal{E}_{h}^{\circ}}$ ... | $a_{h}(u-u_{h},w_{h})$
$=\langle\nabla\cdot u,w_{h}\cdot n\rangle_{\partial\mathcal{T}_{h}}-\langle\nabla\times u,w_{h}\times n\rangle_{\partial\mathcal{T}_{h}}$ ... | $\sup_{0\neq w_{h}\in\mathring{V}_{h}}\dfrac{a_{h}(u-u_{h},w_{h})}{\|w_{h}\|_{h}}\lesssim h^{s}\bigl(\|\nabla\cdot u\|_{s,\mu-1}+\|\nabla\times u\|_{s,\mu-1}\bigr).$ | C |
FlashSyn does not require prior knowledge of a vulnerable location or contract. Given a set of DeFi lego user interface contracts, action candidates and their special parameters such as strings are given by the users or automatically extracted from transaction history using FlashFind. FlashSyn utilizes these action can... | for actions. Note that 4 benchmarks are partially closed-source (cs), and 5 benchmarks are too complicated (cx), thus we are not able to extract mathematically precise summaries for them. For others, we list the profit generated using the manually extracted mathematical expressions in the synthesizer and optimizer.... | Threats to Validity:
The internal threat to validity mainly lies in human mistakes in the study. Specifically, we may misinterpret results of FlashSyn, or make mistakes in the implementation of FlashSyn. All authors have extensive smart contract security analysis experience and software engineering expertise ... | derive the approximated formula for this action. We set a timeout of 3 hours for
FlashSyn-poly and 4 hours for FlashSyn-inter. FlashSyn does not know a priori whether a benchmark has an attack vector with a positive profit, and it does not set any bounds on the profit. It tries iteratively to synthesize an attack ... | Specifically, we manually inspected all benchmarks whose relevant smart contracts are all open-source, and for each benchmark we allocated more than 4 manual analysis hours
to extract the precise mathematical summaries. The baseline synthesizer then | D |
The parametric mapping is defined such that: $P(s^{\prime}|s,a)=\theta_{P}(s,a,s^{\prime})$ ... | For an agent to learn fast, additional structure of the problem is required. In Meta RL [4; 39; 7; 26], agents are allowed to train on a set of training tasks, sampled from the same task distribution as the task they will eventually be tested on. The hope is that similar structure between the tasks could be identified ... | The sample complexity in the general bound of Theorem 6 grows exponentially with the dimension of the parameter space $\Theta$.
In many practical cases, however, such as the HalfCircle domain of Example 3, there may be a low dimensional representation that encodes most of the important information in the tasks, e... | The tabular mapping in Example 2 does not assume any structure of the MDP space, and is common in the Bayesian RL literature, e.g., in the Bayes-adaptive MDP model of Duff [5].
The next example considers a structured MDP space, and is inspired by the domain in Figure 1. | We follow the Bayesian RL (BRL) formulation [9], and assume a prior distribution over the MDP parameter space $f\in\mathcal{P}(\Theta)$.
Our objective is the expected cumulative cost over a randomly sampled MDP from the prior: | C |
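Under a tabular mapping, the BRL setup above can be sketched by drawing each transition row of $\theta_P$ from a prior; the Dirichlet prior used here is a standard choice for tabular models but is an assumption, not something stated in the text.

```python
import random

def sample_mdp(states, actions, alpha=1.0, seed=0):
    """Sample a tabular MDP transition model theta_P from a prior:
    for every (s, a), draw P(. | s, a) ~ Dirichlet(alpha, ..., alpha),
    realized here via normalized Gamma draws.  A minimal sketch of
    sampling an MDP from the prior f over the parameter space."""
    rng = random.Random(seed)
    theta_P = {}
    for s in states:
        for a in actions:
            g = [rng.gammavariate(alpha, 1.0) for _ in states]
            total = sum(g)
            theta_P[(s, a)] = {s2: gi / total for s2, gi in zip(states, g)}
    return theta_P

theta = sample_mdp(range(3), range(2))
# Each row theta_P(s, a, .) is a probability distribution over next states.
```

Evaluating a policy on many such sampled MDPs approximates the expected cumulative cost under the prior.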
Parameter Settings. We determined the word vector space of each language by feeding all texts from the English and Chinese healthcare Q&A corpora. We employed the word2vec module from gensim (https://radimrehurek.com/gensim/models/word2vec.html), which implements the Skip-gram algorithm [27]. All parameters of the Skip-gram...
Test Set Preparation. We evaluated our proposed framework by testing the query performance of frequently used medical entities selected from our collected healthcare Q&A corpora. Only five divisions (community groups in MedHelp), including General-Health, Women-Health, Dermatology, Ear-Nose-Throat, and Neurology, were... |
Results. Table III reports the MRR performance for each model, in which EN → ZH means that we used English queries to find Chinese synonym candidates (i.e., Chinese translations), and ZH → EN was the reverse query direction. All three models outperformed the random baseline, indicating that a...
To test GPT-3.5-Turbo, we designed a prompt (see Fig. 3) which provides GPT with a query term and its list of translations. As we have annotated all 100 nearest translations for each query, we randomly shuffled all of them and asked GPT to sort the order of translations by their relevance to the query. This task was to tes... | Parameter Settings. We determined the word vector space of each language by feeding all texts from the English and Chinese healthcare Q&A corpora. We employed the word2vec module from gensim (https://radimrehurek.com/gensim/models/word2vec.html), which implements the Skip-gram algorithm [27]. All parameters of the Skip-gram... | A |
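The MRR metric reported above can be computed as follows; this is a generic implementation with made-up example data, not the authors' evaluation code.

```python
def mean_reciprocal_rank(ranked_lists, relevant):
    """MRR over a set of queries: for each query take 1/rank of the first
    correct candidate in the model's ranked list (0 if none appears),
    then average over all queries."""
    total = 0.0
    for query, candidates in ranked_lists.items():
        rr = 0.0
        for rank, cand in enumerate(candidates, start=1):
            if cand in relevant[query]:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(ranked_lists)

# Hypothetical ranked translation candidates and gold synonyms:
ranked = {"fever": ["fa1", "fa2", "fa3"], "rash": ["ra1", "ra2", "ra3"]}
gold = {"fever": {"fa2"}, "rash": {"ra1"}}
print(mean_reciprocal_rank(ranked, gold))  # (1/2 + 1/1) / 2 = 0.75
```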
6: Obtain the optimal explanation by computing characteristic $\theta^{j}$ of the models and identifying minimum $(\theta^{j+1}-\theta^{j})$ ...
A VAMPnets (Mardt et al., 2018) deep neural network was constructed from two identical artificial neural network lobes that take trajectory order parameters (OPs) at time steps $t$ and $t+\tau$, respectively, as inputs. The input data was passed through several layers of neurons and final... | In this work, we trained a VAMPnets model on a standard toy system: alanine dipeptide in vacuum. An 8-dimensional input space with sines and cosines of all the dihedral angles $\phi,\psi,\theta,\omega$ was constructed and passed to VAMPnets. VAMPnets was able ...
The variational approach for Markov processes (VAMPnets) is a popular technique for analyzing molecular dynamics (MD) trajectories (Mardt et al., 2018). VAMPnets can be used to featurize, transform inputs to a lower dimensional representation, and construct a Markov state model (Bowman, Pande, and Noé, 2013) in an automated m...
We call our approach Thermodynamics-inspired Explainable Representations of AI and other black-box Paradigms (TERP). Owing to its model-agnostic implementation, TERP can be used for explaining predictions from any AI classifier. We demonstrate this generality by explaining the following black-box models in this work: ... | C |
The Tracer subsystem determines the goals covered by test-cases produced by the bounded model checker and the fuzzer. Whenever a test-case is produced, Tracer compiles the instrumented program together with the newly generated test-cases and runs the resulting executable. Before the compilation, it performs additional ... | Another responsibility of Tracer is to handle the partial output of the bounded model checker when it reaches the timeout, outputting an incomplete counterexample. Tracer completes such counterexamples randomly and performs the coverage analysis and updates Seeds Store as described above.
|
Following seed generation, FuSeBMC begins the main coverage analysis phase (lines 7—30 of Algorithm 1). FuSeBMC incorporates three engines to carry out this analysis: two fuzzers (main fuzzer and selective fuzzer) and a bounded model checker. Here, the main fuzzer and the BMC engine are run with longer timeouts than d... | FuSeBMC begins by analyzing C code and then injecting goal labels into the given C program (based on the code coverage criteria that we introduce in Section 3.2.1) and ranking them according to one of the strategies described in Section 3.2.2 (i.e., depending on the goal’s origin or depth in the PUT). From then on, FuS... | The Tracer subsystem determines the goals covered by test-cases produced by the bounded model checker and the fuzzer. Whenever a test-case is produced, Tracer compiles the instrumented program together with the newly generated test-cases and runs the resulting executable. Before the compilation, it performs additional ... | A |
where it is assumed that $\mu_{1}>0$ and $\mu_{k}$, $2\leq k\leq N$, are positive rational multiples of $\mu_{1}$ ... | We have illustrated the application of the JFP method to a variety of FIEs (including FDEs and a fractional PDE reformulated as FIEs) in which exponentially fast convergence to the solution is achieved. The JFP method converges much faster and with a lower overall complexity than the sparse sum space method in [25] for...
We have demonstrated that the sum space method for the FIE (49) performs poorly compared to the JFP method as $\lambda$ increases. However, as illustrated in [25], the sum space method converges exponentially fast in linear complexity (because it yields banded or almost-banded systems) and in double precision...
We have not attempted to make a rigorous classification of the types of problems for which the JFP method is superior to the sum space method and vice versa. However, Example 3 suggests that the sum space method performs poorly for problems in which the largest monomial coefficient of the solution becomes large (e.g.,... |
Our work is strongly influenced by the method proposed by Hale and Olver [25], in which direct sums of appropriately weighted Jacobi polynomials are used as basis functions. We shall refer to this method as the sum space method and we note that the basis functions are related to the “generalized Jacobi functions” of [... | B |
We choose the PA to measure the performance of the unsupervised and supervised prediction in all 550 networks. (a) For different choices of $L_{\mathrm{k}}$ ($L_{\mathrm{k}}=|L^{P}|/4,\,|L^{P}|/2,\,3|L^{P}|/4,\,|L^{P}|$ ...
To deal with the imbalanced positive and negative samples, Muscoloni et al. propose the area under the magnified ROC (AUC-mROC) to measure the prediction performance of one index [52]. The mROC makes the performance of a random predictor always equal to 0.5. Denote $n_{1}=|L^{P}|$ ... | The variables applied here are the same as those in Fig. 1 of the main text: $p_{1}$ and $p_{2}$ represent the fraction of samples in the positive set $L^{P}$ ... | To make the assessment more general, we here evaluate two other alternative experiment setups. We call the first experiment setup sample1 in the following discussion. Sample1 uses balanced positive and negative samples. It differs from the experiment setup in the main text in the size of the training and testing s...
By using Eqs. (S25)-(S29), we can derive a topological feature’s maximum capability ($\text{AUC-mROC}_{\text{upper}}$) measured by AUC-mROC in the unsupervised approach. The above deduction suggests that under the best index value ran... | C |
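For reference, the plain (unmagnified) AUC underlying these measures is the probability that a randomly chosen positive sample outranks a randomly chosen negative one; the axis rescaling that defines AUC-mROC in [52] is not reproduced in this sketch.

```python
def auc(pos_scores, neg_scores):
    """Standard rank-based AUC: probability that a random positive sample is
    scored above a random negative one, with ties counting one half.
    Quadratic in the sample sizes; fine for illustration."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

assert auc([0.9, 0.8], [0.1, 0.2]) == 1.0   # perfect separation
assert auc([0.5], [0.5]) == 0.5             # random-level performance
```

The magnified variant rescales both ROC axes logarithmically so that a random predictor still scores 0.5 even under heavy class imbalance, which is the motivation given in the text.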
We update the last two blocks of the MCUNet [47] model and only 1/4 of the weights for each layer to compare the accuracy of different channel selection methods (larger magnitude, smaller magnitude, and random). The results are quite similar (within 0.2% accuracy difference). Channel selection is not very important for... | We visualize the update schedule of the MCUNet [47] model searched under 100KB extra memory (analytic) in Figure 11 (lower subfigure (b), with 10 classes). It updates the biases of the last 22 layers, and sparsely updates the weights of 6 layers (some are sub-tensor update).
The initial 20 layers are frozen and run for... |
We compare the performance of our searched sparse update schemes with two baseline methods: fine-tuning only the biases of the last $k$ layers; fine-tuning the weights and biases of the last $k$ layers (including fine-tuning the full model, when $k$ equals the total #layers). For each configurati... | The most straightforward way is to only update the classifier layer [15, 23, 26, 62], but the accuracy is low when the domain shift is large [12].
Later studies investigate other tuning methods including updating biases [12, 71], updating normalization layer parameters [53, 25], updating small parallel branches [12, 32... | Figure 9: Sparse update can achieve higher transfer learning accuracy using 4.5-7.5×\times× smaller extra memory (analytic) compared to updating the last k𝑘kitalic_k layers. For classifier-only update, the accuracy is low due to limited capacity. Bias-only update can achieve a higher accuracy but plateaus soon.
| C |
$=\dfrac{1-\|A^{-1}\|_{2}^{2}}{\|A^{-1}\|_{2}}.$
$(\sigma_{\min}(A)^{2}-1)\max\|(A-D)^{-1}\|_{2}\leq 1+\|A^{-1}\|_{2},$
$(\sigma_{\min}(A)^{2}-1)\max\|(A-D)^{-1}\|_{2}\leq\|A+I\|_{2}+\|A-I\|_{2}.$ | $\|A^{-1}\|_{2}(\sigma_{\min}(A)^{2}-1)=$ ... | $\max\|(A-D)^{-1}\|_{2}\leq\dfrac{\|A+I\|_{2}+\|A-I\|_{2}}{\sigma_{\min}(A)^{2}-1}.$ | A |
In summary, our results on the LeadingOnes and Jump benchmarks show that several arguments and methods from the bit-string world can easily be extended to permutation search spaces; however, the combinatorially richer structure of the set of permutations also leads to new challenges and new research problems such as wha... | As discussed in the introduction, the theory of evolutionary computation has massively profited from having a small, but diverse set of benchmark problems. These problems are simple enough to admit mathematical runtime analyses for a broad range of algorithms including more sophisticated ones such as ant colony optimiz...
A closer look at these results [NW10, AD11, Jan13, DN20], however, reveals that the vast majority of these works only consider bit-string representations, that is, the search space is the space $\Omega=\{0,1\}^{n}$ of bit strin... | With this work, we aim at contributing to the foundations of a systematic and principled analysis of permutation-based evolutionary algorithms. Noting that the theory of evolutionary algorithms for bit-string representations has massively profited from the existence of widely accepted and well-understood benchmarks suc... | In this section, we describe the most relevant previous works. In the interest of brevity, we only concentrate on runtime analysis works, knowing well that other theoretical aspects have been studied for permutation problems as well. Since the theory of evolutionary algorithms using bit-string representations has start... | D |
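The classic bit-string setting referenced here can be made concrete with a (1+1) EA on LeadingOnes, a standard benchmark with expected runtime $\Theta(n^2)$; this is a textbook sketch, not code from the works cited.

```python
import random

def leading_ones(x):
    """Number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def one_plus_one_ea(n, seed=0, max_iters=200000):
    """(1+1) EA with standard bit mutation (rate 1/n) on LeadingOnes:
    flip each bit independently, accept the offspring if it is at least
    as good as the parent."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if leading_ones(y) >= leading_ones(x):
            x = y
        if leading_ones(x) == n:
            break
    return x

result = one_plus_one_ea(20)
```

The permutation-based analogues discussed in the text replace the bit-flip mutation with operators such as swaps or jumps on permutations, which is where the combinatorially richer structure enters.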
[BL89, NT02, AMS08]. The commonly adopted interior point geometry is based on Hessian metrics generated by self-concordant barrier functions, due to the provable optimality in connection with Newton-like second-order optimization [NN94], see e.g. [TP21] for recent related work. Closer to our work is the comprehensive p... |
In the present paper, we also consider such a Riemannian metric on a simpler structured bounded open convex feasible set, in order to focus on multilevel representation and accelerated first-order optimization that copes with large problem sizes. This necessitates, in particular, devising restriction and prolongation... | We introduce basic notation and briefly recall required properties of Bregman divergences in Section 2. Section 3 summarizes the basic scheme of two-level optimization in Euclidean spaces. The core part of this paper, Section 4, generalizes this scheme to a Riemannian setting. In particular, information geometry is emp...
We introduced a novel approach to geometric multilevel optimization. The approach employs information geometry in order to devise all ingredients of the iterative multilevel scheme. Invoking coarse level representations for computing descent directions effectively accelerates convergence. Experiments conducted for a r... | In this section we numerically evaluate the proposed approach. In a first experiment we compare the Riemannian Gradient (RG) descent, see Algorithm 2,
with a state-of-the-art Accelerated Bregman Proximal Gradient (ABPG) method [HRX21]. In a second experiment we evaluate one-level vs. two-level (2L) schemes and compar... | A
Namely $f(x_{1},\ldots x_{r})={\bf 1}(w_{1}x_{1}+\ldots w_{r}x_{r}\geq T)$... | It follows from Theorem 15 that if there was a monotone network (with threshold gates) of a given size approximating a harmonic extension $\hat{f}$, we could replace each gate with a polynomially sized monotone De Morgan circuit entailing a polynomial blowup to the size of the network... | The function $h$ we use is the harmonic extension of a graph invariant function, introduced by Éva Tardos in [43]. The Tardos function and its properties build upon the seminal works of Razborov [35], and Alon and Boppana [1]. The mentioned works constitute a highly influential line of work, about the limitatio... | By Theorem 15 the monotone circuit complexity of a circuit with only AND and OR gates (De Morgan circuit) computing a threshold function with positive coefficients is polynomial. Therefore, we claim that the existence of $C_{N}$ entails the existence o... | Now we show that given a Boolean circuit to compute a given function $f:\{0,1\}^{d}\to\mathbb{N}$, we can construct a general (not necessarily monotone) threshold network of comparable size to approximate f... | A
In this study, we assumed the data to allow for solutions $u\in H^{2}$ concerning the spatial variable. At the cost of some technical but standard extensions of the DG analysis, we can extend our results to the case that $u\in H^{3/2+\epsilon}$...
A. Rupp has been supported by the Academy of Finland’s grant number 350101 Mathematical models and numerical methods for water management in soils, grant number 354489 Uncertainty quantification for PDEs on hypergraphs, grant number 359633 Localized orthogonal decomposition for high-order, hybrid finite elements, Busi... | Figure 1 shows the numerical results for the affine case and locally linear DG or conforming finite element approximation. We recognize that the green plot for the SIPG method with $\eta=10$ does not show the expected convergence behavior (with respect to the number of cubature points). This nicely r... | A common feature of virtually all of the aforementioned QMC literature related to PDE uncertainty quantification is that the QMC rule is designed for the non-discretized PDE problem (1) whereas, in practical computations, one only has access to a discrete approximation of the PDE system. Of course, as long as one uses ...
Thus, the spatial grid of the discontinuous Galerkin method is significantly coarser than the conforming finite element grid. We do so to underline that it is generally considered ‘unfair’ to compare discontinuous Galerkin and conforming finite elements on the same grid, since DG usually has many more degrees of freed... | A |
For any $R\geq 1$ and any $T\geq c_{T}Hn^{\frac{1}{R}}/K$ for a univ...
The key to the analysis for each individual arm is that, because of the interleaved mean structure, Alice misses most information of half of the terms held by Bob. Without this information, her adaptivity cannot help much in the task of identifying which arm is more likely to be the best arm. On the other hand, the ti... | The (homogeneous) CL model was first used in the work (HKK+13) for studying multi-agent BAI, but the model was not formally defined there. The results for fixed-time BAI in (HKK+13) only consider the special case where there is only one communication phase (i.e., $R=2$). The CL model was rigorously ...
We would like to highlight a couple of points regarding Theorem 1. First, this is the first lower bound result that addresses the local agent adaptivity in the CL models. In particular, it shows that the capacity of each agent to utilize newly observed information within each round does not contribute to reducing the ... | In each round, each agent takes a sequence of pulls (one at each time step) and observes the outcomes. At the end of each round there is a communication phase; the agents communicate with each other to exchange newly observed information and determine the number of time steps for the next round (the length of the first... | C |
As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a).
It is impor... | As depicted in the figure, SPIRAL demonstrates significantly faster performance compared to the other algorithms. Even though the cost function in this scenario does not possess a Lipschitz continuous gradient, we evaluate the performance of adaSPIRAL, both in Bregman and Euclidean versions, in Figure 0(a).
It is impor... | Remarkably, adaSPIRAL-eucl performs well on cost functions without Lipschitz continuous gradients, as verified by this simulation. Consequently, adaSPIRAL-eucl demonstrates potential applicability to a wider range of cost functions in various applications.
Additionally, adaSPIRAL outperforms SPIRAL due to its ability t... |
In this section, we evaluate the proposed algorithm, SPIRAL, for both convex and nonconvex problems, considering cost functions with and without Lipschitz continuous gradients. We examine two versions of SPIRAL: 1) SPIRAL, which follows Algorithm 1, and 2) adaSPIRAL, an adaptive version with additional steps as outlin... | Figure 2 provides a comparison of different algorithms on six datasets. It is evident from the results that both SPIRAL and adaSPIRAL exhibit superior convergence performance compared to other algorithms, regardless of whether the datasets are synthetic or practical.
Also the same speed up by adaSPIRAL is evident for m... | B |
$\underline{\bf Y}=(\underline{\bf X}*\underline{\bf X}^{T})^{q}*\underline{\bf X}$ and this should be performed sequentially using the subspace... | In this paper, we proposed a new randomized fixed-precision algorithm for fast computation of tensor SVD (t-SVD). Unlike the existing randomized low tubal rank approximation methods, the proposed algorithm finds an optimal tubal rank and the corresponding low tubal rank approximation automatically given a data tensor a...
The sampling approach can also be used for low tubal rank approximation besides the random projection. Indeed, a randomized slice sampling algorithm was proposed in [35] in which horizontal and lateral slices are selected and a low tubal rank approximation is computed based on them, see Figure 3 for a graphical illust... |
Our algorithm is a generalization of the randomized fixed-precision algorithm developed in [37] which is a modification and improved version of the fixed-precision algorithm proposed in [38]. Similar to the idea presented in [37], we propose to gradually increase the size of the second and first modes of $\underline{\bf Q}$... | This decomposition has found many applications including deep learning ([25, 26]), tensor completion ([27, 28]), numerical analysis ([29, 30, 31, 32]), image reconstruction [33]. There are several randomized algorithms ([34, 35, 36]) to decompose a tensor into the t-SVD format but all of them need an estimation ... | B
We next compare semi-supervised random forests (SSL-RF) with supervised random forests (CLUS-RF). From the results, we can observe that SSL-RF improves over CLUS-RF on several datasets: Bibtex, Corel5k, Genbase, Medical, SIGMEA real, and marginally on Emotions and Enron datasets. However, as compared to single trees, ... | Figure 2 presents the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 MLC datasets, with an increasing amount of labeled data.
|
Figure 3 presents the learning curves in terms of the predictive performance ($\mathrm{AU}\overline{\mathrm{PRC}}$) of semi-supervised (SSL-PCT, SSL-PCT-FR, SSL-RF and SSL-RF-FR) and supervised methods (SL-PCT and CLUS-RF) on the 12 hierarchical multi-label classifi... | Figure 7: The graph depicts the magnitude of improvement in the predictive performance over supervised PCTs enabled by (i) the variance function that considers both the descriptive and target spaces ($x$-axis) and (ii) unlabeled data and the variance function that considers both the descriptive and target spa... | The statistical test is applied to the predictive performances ($\mathrm{AU}\overline{\mathrm{PRC}}$) of the supervised and semi-supervised single trees (SL-PCT, SSL-PCT and SSL-PCT-FR) on the datasets considered in this study: 12 for multi-label classification and 1... | B
Among various approaches in the field of autonomous driving, perception has always been the first stage as it is important to understand the surrounding area before planning and action. It can be achieved by performing various vision tasks such as semantic segmentation, depth estimation, and object detection [17][18][1... |
After achieving the ability to perceive the environment, a model also needs to leverage this ability to support the controller parts. In the field of end-to-end autonomous driving where perception and control are coupled together, better visual perception means better drivability as the controller gets better features... | Based on the experimental results, we disclosed several findings as follows. First, in line with our previous work [1], the BEV semantic feature is proven to improve the model performance in predicting waypoints and navigational controls. With a better perception, the model can leverage useful information which result... | Similar to Ishihara et al. [26] and Chitta et al. [27], the perception parts of DeepIPC are guided by completing a vision task to provide better features. However, it only uses semantic segmentation as auxiliary supervision since the depth is considered as an input. Then, the controller is equipped with two decision-ma...
End-to-end learning has become a preferable approach in autonomous driving as manual configuration to integrate task-specific modules is no longer needed. This technique allows the model to share useful features directly from perception modules to controller modules. Moreover, the model can learn and receive extra sup... | A |
Therefore, the algorithm works by $\mathcal{O}(n^{2})$ applications of the algorithm of 4.7, and therefore its time complexity is $2^{\mathcal{O}(k^{2})}n^{\mathcal{O}(k)}$... | There is an algorithm that, for a given graph $G$, an integer $k$, a tree decomposition of $G$ with independence number $\mathcal{O}(k)$, and a set $W\subseteq V(G)$ with $\alpha(W)\leq 6k$...
It remains to observe that by using iterative compression, we can satisfy the requirement of 4.8 to have a tree decomposition with independence number $\mathcal{O}(k)$ as an input (in particular, here the independence number will be at most $8k+1$), and therefore ... | The requirement for having a tree decomposition with independence number $\mathcal{O}(k)$ as an input in the separator algorithm is satisfied by iterative compression (see, e.g., [12]), as we explain at the end of Section 4.3.
| First, we give a $2^{\mathcal{O}(k^{2})}n^{\mathcal{O}(k)}$-time 2... | B
Proof Sketch.
We derive the privacy guarantee using Rényi Differential Privacy (RDP) mironov2017renyi as a bridge. We first leverage the RDP guarantee for the Gaussian mechanism mironov2017renyi to analyze the privacy cost for one communication round under local output perturbation. Then we use RDP Composition proper... | Additionally, the utility under client-level DP VFL is not directly comparable to sample-level DP in centralized ML abadi2016deep or client-level DP in standard (horizontal) FL mcmahan2018learning due to the unique properties of VFL.
For instance, (1) the dimension of DP-perturbed information in VFL can be smaller (... | Since DP mechanisms (i.e., clipping and noise addition) are applied to each client’s outputs (i.e., embedding or logits matrix) locally, these local outputs satisfy client-level local DP, protecting against privacy attacks from other clients, the server, or external attackers.
That is, by observing the local outputs matrix... | Regarding attack scenarios, these attackers may conduct membership inference attacks shokri2017membership to determine whether the data of a specific VFL client was included during training.
Our goal is to protect the local data of each client against potential attackers so that the attacker cannot make significant in... | Due to the privacy protection requirement of VFL, each client k𝑘kitalic_k does not share raw local feature set Xksubscript𝑋𝑘X_{k}italic_X start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT with other clients or the server. Instead, VFL consists of two steps: (1) local processing step: each client learns a local model th... | B |
In this paper, we proposed a robust knowledge propagation method for dynamic graph neural networks. We devised a reinforcement learning based strategy to dynamically determine whether the embedding of a node should be updated. In this way, we can effectively propagate knowledge to other nodes and learn robust node rep... | Dynamic graph neural networks aim to capture the temporal dynamics for updating the node embeddings, when new connections or links between nodes are established. Based on the properties of dynamic graphs, current dynamic graph neural networks can be roughly divided into two categories [54, 55, 56, 57]: discrete-time ba... | It is worth noting that, we devise our model based on two important assumptions in dynamic graphs: (1) newer interactions have a greater impact than older ones, and (2) neighbors should exhibit consistency.
These assumptions naturally hold in most dynamic graphs. | As a trade-off between speed and performance, we set a limit on the neighborhood size $k$ in our reinforcement-based agent, i.e., we only send the most recent $k$ neighbors to the agent for selection.
Thus, we study the impact of different numbers of neighbors on the model performance. |
In this paper, we proposed a robust knowledge propagation method for dynamic graph neural networks. We devised a reinforcement learning based strategy to dynamically determine whether the embedding of a node should be updated. In this way, we can effectively propagate knowledge to other nodes and learn robust node rep... | B |
In the experiment of the second line, we only add $\mathcal{L}_{nc,i}$ behind CCM, and the result shows that constraining the two-stage module only at the end of the final stage is not enough.
Compared with our methods, t... | The probe has been divided into three subsets according to the walking conditions and has been evaluated separately.
It should be noted that since our framework aims at improving the precision in realistic cloth-changing condition, we take the accuracy for CL as the main criteria, since it is the hardest target in our ... |
The sequence number of each cloth condition in original datasets and our benchmarks used for training can be seen in Figure 4. It can be seen that our benchmarks have a relatively smaller dataset volume compared with previous datasets, but the accuracy can be improved with further collected data. | With the four experiments, we can demonstrate that our improvement does not lie in the increased module parameters but in the two-stage module and the design of the triplets, which improves the CL condition (GaitSet: +3.3%, GaitGL: +1.2%).
It’s important to recognize that our approach is not solely focused on a... | However, our framework can bring large improvement for the baseline in both backbones for the CL conditions (+6.2% for GaitSet, +3.2% for GaitGL), indicating we can handle the more challenging cloth-changing problem.
The accuracy of NM and BG also shows some improvement compared with the baseline, and the results with our ... | C
Reinforcement learning (RL) algorithms, specifically Q-learning (Watkins and Dayan, 1992) based algorithms, have become a mainstream method for training the dialogue policy module (Peng et al., 2018; Zhang et al., 2020b). For each step, the policy agent updates its action value (footnote 1: this value is the expected return for ... | In this work, we propose dynamic partial average (DPAV), a novel approach to mitigate the overestimation problem specifically for the task-completion dialogue policy. DPAV utilizes the partial average between the predicted maximal action value and the predicted minimal action value to estimate the ground truth maximum ...
Q-learning suffers from overestimation bias because of the ME (Hasselt, 2010). To reduce the bias, in this work, we propose the dynamic partial average (DPAV) estimator. DPAV utilizes the partial average between the predicted maximal action value and the minimal action value to estimate the ground truth maximal action... |
The intuition behind this approach is that the predicted maximal action value overestimates the ground truth, so DPAV uses the predicted minimal action value to shift the estimate towards the ground truth. Because the accuracy of the predicted action values improves during training, DPAV assigns less weight to the predicted m... | $\lambda_{t}$ is a float number in [0,1] that is dynamic in time and problem-dependent such that DPAV can take the average between the maximum and minimum of the action values. The weights assigned to the maximum and minimum are not the same, ... | A
We also compare the influence of predicting intention correctly to the accuracy of action classification, illustrated in Table 3. The results show that there is a significant and direct relationship between noun and intention. By conditioning the action-level prediction framework through the intention, it is shown tha... |
We also compare the influence of predicting intention correctly to the accuracy of action classification, illustrated in Table 3. The results show that there is a significant and direct relationship between noun and intention. By conditioning the action-level prediction framework through the intention, it is shown tha... | Finally, we investigate the performance of our whole framework based on the end-to-end evaluation. First, H3M classifies the actions and the intention from the observed clips. Then, based on these predictions, our I-CVAE model anticipates the $Z=20$ actions in the future. In Table 4 we evaluate the L... | Moreover, we further evaluate the influence of the intention as the context for our H3M model. For a given intention label, only a few verbs and nouns are observed. Then, we define an out-of-context error as predicting a verb or a noun which is unseen in a given intention. For instance, if the model predicts the action...
Therefore, we develop a methodology that aims to constrain the variability of future actions based on the human intention estimated from past observations. We predict a hierarchical structure from a sequence of videos, each depicting a particular human action. From this given video clip sequence, we define two differe... | C |
This experiment quantitatively measures the interference of anomaly contamination with time series anomaly detectors; that is, we test the robustness of each anomaly detector w.r.t. different anomaly contamination ratios in the training set. Due to the continuity of time series data, we cannot directly remove or in...
The Proposed Approach. Based on the above motivation and insights, this paper proposes a novel Calibrated One-class classification-based Unsupervised Time series Anomaly detection method (COUTA for short). The approach fulfills contamination-tolerant, anomaly-informed normality learning by two novel normality calibrat... | The performance of all anomaly detectors downgrades with the increase of anomaly contamination. COUTA shows better robustness compared to its contenders, especially on datasets with a large contamination rate. It owes to the novel one-class classification loss function, which successfully masks these noisy data via unc... | We address two key challenges in the current one-class learning pipeline, i.e., the presence of anomaly contamination and the absence of knowledge about anomalies.
COUTA achieves this goal through two novel calibration methods – uncertainty modeling-based calibration (UMC) and native anomaly-based calibration (NAC). In... | According to the comparison results, COUTA successfully achieves state-of-the-art performance by addressing two key limitations in the current one-class learning pipeline. The superiority of COUTA can be attributed to the synergy of our two novel one-class calibration components, which achieves contamination-tolerant, ... | B |
Effective Linearization: In line with the point above, the preference for plan-based approaches to D2T (§5.1.3) stems from issues in coherence. While there has been extensive work in linearizing graphs and tables (§4.2) showcasing that linearization in LLMs can be as effective, if not more, compared to dedicated encode... | The practice of automating the translation of data to user-consumable narratives through such systems is known as data-to-text generation, as depicted in Fig. 1. Although encompassed by the general umbrella of Natural Language Generation, the nuance that differentiates D2T from the rest of the NLG landscape is that the... | Numeracy for Data-to-text Generation: The NLP niche of building LLMs capable of quantitative reasoning (often referred to as Math-AI or Math-NLP) has garnered significant interest from the research community (Wallace et al., 2019; Sharma et al., 2022; Thawani et al., 2021). Although there are works in D2T that incorpor... |
The majority of the prominent datasets discussed in §2.1 - §2.3 are either collected by merging aligned data-narrative pairs that occur naturally in the “wild” (Lebret et al., 2016; Wiseman et al., 2017) or are collected through dedicated crowd-sourcing approaches (Novikova et al., 2017; Gardent et al., 2017). Howev...
Following the seminal work by Reiter and Dale (Reiter and Dale, 1997), the most comprehensive survey on D2T to-date has been that by Gatt and Krahmer (Gatt and Krahmer, 2018). Although several articles have taken a close examination of NLG sub-fields such as dialogue systems (Santhanam and Shaikh, 2019), poetry genera... | B |
In this paper, we argued that two plausible assumptions about the ground-truth annotations are inapplicable to existing SGG datasets. To this end, we reformulated SGG as a noisy label learning problem and proposed a novel model-agnostic noisy label correction and sample training strategy: NICEST. It is composed of NICE... |
NIST. Unfortunately, there is no free lunch: sometimes newly assigned pseudo labels from NICE are not right (e.g., ⟨boy-near-pizza⟩ is mistakenly changed to ⟨boy-eating-pizza⟩ in Fig. 2). Like other debiasing methods, NICE may over-weight tail categories by increasing the sample per...
NIST Training Details. In the experiment, the models with the bias of head predicates were the widely used baseline models (i.e., Motifs [24] and VCTree [26]). The models with the bias of tail predicates were trained after NICE cleaning (i.e., Motifs+NICE and VCTree+NICE). The models obtained by NIST strategy training...
Limitations. The use of Multi-Teacher Knowledge Distillation in NICEST may introduce minor computational overhead. In the case of online training, it requires twice the GPU space compared to the baseline. However, with offline training, there is no additional GPU consumption overhead. Additionally, since certain hyperp...
As mentioned above, the labels corrected by NICE [33] are not always accurate. Since most of them are corrected from the head to tail predicates, it may lead to a somewhat over-weighting of tail predicates. We propose a novel multi-teacher knowledge distillation strategy for effective model training that enables the m... | C |
to verify whether the proof $\pi_{b}$ holds. If the above verification passes, the other miners can be sure that the attacker holds a specific nonce and $r$, which enables the has... | In PSM, an attacker follows the partial block sharing strategy and shares the partial block information with rational miners. The partial block has some data covered, e.g., nonce and part of arbitrary bytes in the coinbase transaction. Miners can mine after it to get a new block. The hidden data can be recovered by oth... | If the attacker does not want to share the block information with every miner, it can also share the partial block data $b$ with a specific miner. For this purpose, it can take advantage of the zero-knowledge contingent payment (ZKCP) protocol [campanelli2017zero]. We propose an example block-sharing strategy ... | The workflow of PSM is shown in Figure 3. In PSM, when the attacker finds a block, it keeps the block private instead of immediately releasing it. In the meantime, the attacker releases the partial block with proof of block possession. With these released data, rational miners could mine on the private branch. To assur... | Attacker: A miner or a colluding minority pool that has a newly mined block(s) and follows the PSM strategy. It can preserve a mined block(s), form a private branch, and share partial block data with rational miners like [mirkin2020bdos]. Besides, it also has access to the smart contract to share the proof of block pos... | B
Finally, in Figure 1(c), we vary $\beta_{2}\in\{0.9,0.99,0.999\}$ as we fix $\beta_{1}=0.9$ and $\eta\in\{$5e-5, 4e-4$\}$... | To test this hypothesis, in Figure 10(a), we use full-batch Adam at a range of learning rates to train a Wide Resnet on CIFAR-100 until reaching a training loss value of 0.1.
We plot the sharpness $\lambda_{1}(\mathbf{H}(\mathbf{x}^{*}))$... | In Figure 11, we train a WRN on CIFAR-100 (left two panes) and a fully-connected network on CIFAR-10 (right two panes) using both Adam and momentum GD.
It is impossible to make a blanket claim that Adam finds sharper solutions than momentum GD, since momentum at a small learning rate finds sharper solutions than Adam a... | Figure 10: Adam finds sharper solutions when $\eta$ or $\beta_{2}$ is small. We train a WRN on CIFAR-100 until reaching train loss 0.1, and we record the sharpness $\lambda_{1}(\mathbf{H}(\mathbf{x}^{*}))$... | In particular, we consider (a) a CNN on CIFAR-10; (b) an un-normalized Wide ResNet (WRN) [47] on CIFAR-10; and (c) a (batch-normalized) WRN on CIFAR-100.
Furthermore, Figure 7 in the subsequent section verifies that our minibatch findings apply to transformers on WMT machine translation. | D |
Comparing the free-energy barriers between the different embeddings in Fig. 4, we can see that they are similar, particularly for the mrse embedding and the free-energy surface spanned by the distance and the radius of gyration, i.e., from 10 to 15 kJ/mol. We can compare our results to the unbiased simulation data from... |
As an example for the two stochastic embedding methods mrse and stke, we consider folding and unfolding of a ten amino-acid miniprotein chignolin (CLN025) [73] in the solvent. We employ the CHARMM27 force field [74] and the TIP3P water model [75], and we perform the molecular dynamics simulation [59, 60] using the gromacs 201...
In this work, we consider the problem of using manifold learning methods on data from enhanced sampling simulations. We provide a unified framework for manifold learning to construct CVs using biased simulation data, which we call reweighted manifold learning. To this aim, we derive a pairwise reweighting procedure in... | We can circumvent this issue by using learning data set from enhanced sampling simulations where transitions between metastable states are more frequently observed and are no longer rare events. However, in this case, the simulation data set is biased and does not correspond to the real system, as it is sampled from a ... |
Note that the simulation of CLN025 performed in Ref. 76 is ${\sim}100$ μs long compared to our 1-μs simulation. This clearly illustrates the great benefit of combining manifold learning with the ability to learn from biased data sets. | D
The data consists of a point cloud that represents a partial left coronary arterial tree and the accompanying left ventricular surface represented by another point cloud and set of tetrahedral elements. Here, the former is called the vascular data and the latter the ventricular data. The vascular and ventricular data ... |
There are at least two possible approaches to removing disconnected segments: remove the disconnected components from the subtree, or to build a new subtree using only the connected components of the one to be cleaned. If properly implemented, these should result in the same network, but their implementations do diffe... |
Figure 2: Radius filtered and pruned trees with a threshold set at 0.6 mm. Connected subtrees are shown in solid black and disconnected segments in cyan. (a) The mean radius condition is applied; there are 47 connected segments and 21 disconnected. (b) A proportion threshold is set to 0.8 and applied; there are 26 con... |
The set $T$ contains 17945 nodes, each with a 3D position, a radius, and the nodes to which it is connected by a single edge. The collection of nodes and edges that represents the coronary arterial tree is a disconnected, simple, directed graph. There is a large connected network that consists of all but 41 of the nodes; th...
Terminal nodes lie at the end of the branches of the graph; body nodes lie... | C |
Software-defined networking (SDN) is a state-of-the-art approach to network management, which permits dynamic and programmatic network configurations (e.g., routing), aimed at improved network performance and monitoring, abstracted away from individual network elements into a centralized network control layer. Consequ... | An illustrative example of the above is shown in Fig. 1.
A successful approach to deal with the above-mentioned challenges is to employ ideas from the domain of open-world machine learning (OWL) [1]. Compared with traditional machine learning, OWL is expected to extrapolate towards unseen input distribution, classes, and ... | In our model, we leverage the conclusions from the aforementioned works and cast the graph-size-generalization problem to a transferred formulation of graph,
which is considered to be learnable by spectral graph-convolution methods. On the other hand, extrapolation towards out-of-distribution data is a fundamental chal... | (iv) The features computed in Eq. 1 and Eq. 2 are supported by queuing theory (footnote 2: a detailed proof is given in Appendix B-C; link to the appendix same as above).
here we consider domain knowledge to be the key in producing a size-invariant graph signal on G_i^ℒ... | R_i, for OD pairs to generate their trajectory in a different routing preference;
(3) Network topology samples introducing both sampling methods (1) and (2), as described above. A description of the used dataset is shown in Table II (footnote 3: Detailed in...). | A
We illustrate the aggregated mean of human normalized scores among all tasks in Figure 1. We report the score for each task in Appendix F. In our experiments, we observe that (1) both SPR and SPR-UCB significantly outperform baselines that do not learn temporally consistent representations, including DER, OTR, SimPLe, CU...
Algorithmic Framework. We propose an online UCB-type contrastive RL algorithm, Contrastive UCB, for MDPs in Algorithm 1. At the k-th episode, we execute the learned policy from the last round to collect the datasets {D̃_h^k}_{h=1}^H... | This section provides the analysis of the transition kernel recovery via contrastive learning and the proofs of the main results for single-agent MDPs and zero-sum MGs. Our theoretical analysis integrates contrastive self-supervised learning for transition recovery and low-rank MDPs in a unified manner. Part of our ana...
We study contrastive-learning empowered RL for MDPs and MGs with low-rank transitions. We propose novel online RL algorithms that incorporate such a contrastive loss with temporal information for MDPs or MGs. We further theoretically prove that our algorithms recover the true representations and simultaneously achieve... | Second, we propose the first provably efficient exploration strategy incorporated with contrastive self-supervised learning. Our proposed UCB-based method is readily adapted to existing representation learning methods for RL, which then demonstrates improvements over previous empirical results as shown in our experimen... | C |
We propose combining importance sampling with the multilevel DLMC estimator to address rare events. We employ the time- and state-dependent control developed by (Ben Rached et al., 2023) for all levels in the multilevel DLMC estimator. Numerical simulations confirm a significant variance reduction in the multilevel DL... |
We extend the DLMC estimator introduced by (Ben Rached et al., 2023) to the multilevel setting and propose a multilevel DLMC estimator for the decoupling approach (dos Reis et al., 2023) for MV-SDEs. We include a detailed discussion on the bias and variance of the proposed estimator and devise a complexity theorem, di... |
In (Ben Rached et al., 2023), we introduced the DLMC estimator, based on a decoupling approach (dos Reis et al., 2023) to provide a simple importance sampling scheme implementation minimizing the estimator variance. The current paper extends the DLMC estimator to the multilevel setting, achieving better complexity tha... | We combine importance sampling with MLMC to reduce the relative estimator variance in the rare-event regime by extending the results in (Ben Rached et al., 2023) to the multilevel setting. Combining importance sampling with MLMC has been previously explored in other contexts, including standard SDEs (Ben Alaya et al., ... |
The remainder of this paper is structured as follows. In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) a... | D |
This section analyzes and discusses the results obtained by applying the proposed autonomous GN&C approach. In Section 5.1 we make some comments on the far-approach phase of an asteroid mission, which is beyond the scope of this work. Section 5.2 shows results considering the close-approach, which are further analyzed... |
By “far-approach”, we consider the period of the mission when the spacecraft changes from heliocentric to relative navigation about the small-body, which is the same as phase 1 of the Hayabusa 2 campaign [49]. That phase ends with the spacecraft at an arbitrarily far distance from the small body, when a preliminary as... |
We have purposely selected those specific asteroids to underscore the fact that our proposed guidance, navigation, and control (GN&C) approach is not reliant on the size or shape of the asteroid. The mission profile can be customized based on the specific objectives of the missi...
A mission could have different profiles depending on the mission’s goals and the availability of prior knowledge about the asteroid’s environment and properties. We consider that after the preliminary environment assessment at the end of the far-approach phase, the spacecraft could opt between different profiles, depe... | Due to the asteroid’s low gravity, many different forces reasonably affect the spacecraft operating in its vicinity. The importance of considering or not the action of a particular force varies from case to case, and it should be thought of according to the objective of the analysis, distance to the asteroid, mass, and... | A |
More generally, we hope that the Finslerian lens on fixed point iterations provides a new perspective on the problem of computing Brascamp–Lieb constants. We believe that the tools developed in this work can be applied to a wider class of Picard iterations that arise in the context of Brascamp–Lieb constants (such as ... | This project was started during a visit of MW to MIT, supported by an Amazon Research Award. Part of this work was done while MW visited the Simons Institute for the Theory of Computing in Berkeley, CA, supported by a Simons-Berkeley Research Fellowship. SS acknowledges support from an NSF-CAREER award (1846088).
|
In this paper, we introduced a novel fixed-point approach for computing Brascamp–Lieb constants, which is grounded in nonlinear Perron–Frobenius theory. In contrast to much of the prior literature, which has analyzed the problem through a Riemannian lens, our approach utilizes a Finslerian geometry on the manifold of ... | Our analysis leverages the Thompson part metric on the manifold of positive definite matrices to model convergence of the fixed-point iteration. To our knowledge, this is the first work that analyzes the computation of Brascamp–Lieb constants via Thompson geometry.
We note that a similar Finslerian lens can be employed... | The map G𝐺Gitalic_G resembles the alternate scaling algorithm for Brascamp-Lieb constants (Garg et al., 2018, Alg. 1). The resemblance of both approaches derives from an exploitation of the difference-of-convex structure of problem 1.3 (see also (Weber and Sra, 2023)). However, the Thompson geometry perspective employ... | A |
To evaluate the effectiveness of the bounds established in Corollary 3.10, we perform the following analysis. We consider the 1-centrality distances (refer to Example 3.6) calculated for various perturbation levels. From the bounds provided by Corollary 3.10, we subtract the actual 1-centrality distances. These differe... | In this study, we propose novel centrality measures that leverage the persistence of homology classes and their merge history along the filtration. Integral to this is the development of an algorithm that captures the merge history of homology classes. These homology-based centrality measures produce, for all cycle gen... | We introduced novel centrality measures that leverage both persistence and merge dynamics of homology classes. These measures aim to capture a more comprehensive picture of the topological structure within point cloud data compared to traditional summaries. The algorithm for computing the merge dynamics of homology cla... |
Moving forward, we plan to assess the efficacy of these measures across diverse real-world point cloud datasets and within machine learning contexts. Furthermore, future research endeavors will focus on refining the centrality functions and investigating their mathematical properties in greater depth. | This chapter explores the application of centrality measures to self-similar point clouds. While toy datasets offer valuable starting points, their applicability to real-world scenarios may be limited. In contrast, fractal-like point clouds, with their inherent complexity and potential for higher dimensionality, have b... | D |
Evaluation on the independent BN and the independent classifier. In this part, we validate the effectiveness of the independent BN and the independent classifier in our method. Experimental results are listed in Table 6, where “w/ SBN” and “w/ SC” indicate using the shared BN and the shared classifier in our method, r... | Test on the supervised case. In the experiment, we train our model in the supervised setting, and compare it with some supervised DG methods, as reported in Tables 7 and 8. MLDG (DBLP:conf/aaai/LiYSH18, ), MASF (DBLP:conf/nips/DouCKG19, ) and MetaReg (DBLP:conf/nips/BalajiSC18, ) are meta-learning based methods. FACT (... | Several domain generalization methods have been developed to handle this issue (DBLP:conf/cvpr/NamLPYY21, ; DBLP:conf/eccv/SeoSKKHH20, ; DBLP:conf/iccv/YueZZSKG19, ; wang2022feature, ; qi2022novel, ; DBLP:journals/pr/ZhangQSG22, ). However, these methods need to label all data from source domains, which is expensive an... | To validate the effectiveness of the unlabeled data in SSDG, we train the supervised DG methods using the labeled samples, including ResNet18, CrossGrad (DBLP:conf/iclr/ShankarPCCJS18, ), DDAIG (DBLP:conf/aaai/ZhouYHX20, ), RSC (DBLP:conf/eccv/HuangWXH20, ). Experimental results are shown in Fig. 7. As observed in this... |
Similarly, in the conventional semi-supervised learning (SSL), most existing methods assume that all training samples are sampled from the same data distribution (DBLP:conf/nips/GrandvaletB04, ; DBLP:conf/iclr/LaineA17, ; DBLP:conf/nips/TarvainenV17, ; Nassar_2021_CVPR, ; Zheng_2022_CVPR, ; Wang_2022_CVPR, ; DBLP:conf... | C |
As a 1×1 convolution layer can only transfer channel-wise information, we design the adapting of functional convolutions, i.e., intermediate K×K convolutions, to remain locality-sensitive.
On the contrary, adapting the whole residual block considers the transferring of p... | As a 1×1 convolution layer can only transfer channel-wise information, we design the adapting of functional convolutions, i.e., intermediate K×K convolutions, to remain locality-sensitive.
On the contrary, adapting the whole residual block considers the transferring of p... | It needs a more sophisticated design on not only the Conv-Adapter architecture but also the adaptation location [59], and we empirically find that stage-wise adaptation produces inferior performance and requires much more parameters.
Conv-Adapter is flexible to be inserted into every residual block of the ConvNet backb... | Intuitively, adapting the whole residual blocks has a larger capacity for modulating task-specific features than adapting only the K×K convolution but may introduce more parameters.
Plugging Conv-Adapter stage-wisely is not considered as it is impractical to make the receptive field of Conv-... | However, on Structured datasets, adapting whole residual blocks is far better than adjusting only the middle convolutions with more parameters,
demonstrating the superior capacity of adjusting residual blocks when there is a more significant domain gap. | C |
There are many variations of PINNs, e.g., Physics-informed generative adversarial networks [14] which have stochastic differential equations induced generators to tackle very high dimensional problems; [15] rewrites PDEs as backward stochastic differential equations and designs the gradient of the solution as policy f... | We next explore a scenario involving the Navier–Stokes equation, a standard model for incompressible fluid flow, often applied in contexts such as water flow in pipeline [51], air dynamics over aircraft surfaces [52], and pollutant dispersion [53]. Here, we focus on a specialized 2D case of the equation.
|
While changepoints detection methods have shown promise in identifying significant shifts in data characteristics across various fields—from high-dimensional time series data [31, 32, 33], computer vision [34, 35], speech recognition [36, 37], real-time medical monitoring [38, 39], to disturbance localization in power... | Existing PINNs methods face challenges in managing abrupt variations or discontinuities in dynamical systems. Such changes often signal shifts in system dynamics or the influence of external factors. For example, detecting leakages in pipelines using limited sensor data [18]; traffic flow management by predicting conge... | The standard PINNs model assumes that the parameters of PDEs are constant values across the entire time domain. In order to accommodate Definition 2.1, we allow for the changes in the λ(t)𝜆𝑡\lambda(t)italic_λ ( italic_t ) and introduce additional regularization term in a form of total variation penalty on the first ... | C |
A number h which forms the boundary between two consecutive legs will be called a waypoint. We count the numbers 1 and m as waypoints by courtesy, and refer to them as terminal waypoints; all other waypoints are internal.
Thus, a walk consists of a sequence of legs from one waypoint to another. | This deals with all cases in which the minimal leg [V,W] is internal. We turn finally to the few remaining cases where it is terminal. Without loss of generality, we may assume that
we are dealing with an initial leg, i.e. V=1; hence W is an internal waypoint... | By a leg of f, we mean a maximal interval [i,j] ⊆ [1,m] such that, for h in the range i ≤ h < j, the difference d = f(h+1) − f(h)... | If h is an internal waypoint where the change is from an increasing to a decreasing leg, we call h a peak; if the change is from a decreasing to an increasing leg, we call it a trough. Not all waypoints need be peaks or troughs, because some legs may be flat; however, it is these waypoints that will... | that w′ = u^{f′} = v^{g′}... | C
(I_p)_{kl} = 𝔼[ ∂²/(∂q_k ∂q_l) 𝒥_red(q) ] = Q_{kl}. | As mentioned above, in an ideal world, one would define an information density based on the variances var_p(q)_k, for example by using the inverse of the s... | One could similarly analyze papers from many other disciplines that use inverse problems. They may be using different words, but a common feature of the many definitions of resolution, adjoints, sensitivity, and identifiability that can be found in the literature, is that most of these notions originate in, and were de...
In other words, the Fisher information matrix can be computed efficiently, unlike the covariance matrix. At the same time, the estimate above requires us to compute the inverse of the Fisher matrix, which for large and ill-conditioned problems is again not computable efficiently and accurately. However, we can use the... | However, [17] is concerned with choosing the mesh used for the solution of the state equation, not the discretization of the parameter we seek – although it seems reasonable to assume that the scheme could be adapted to the latter as well. At the same time, the scheme described in [17] requires the solution of a Bayesi... | C |
Continuing our long-standing interdisciplinary collaboration (Jänicke & Wrisley, 2017; Meinecke et al., 2021), we adopted a participatory design process (Jänicke et al., 2020) to address the above-mentioned issues. Our contributions can be summarized as follows:
|
All embeddings are added to a faiss (Johnson et al., 2019) index structure based on the Euclidean distance between them. For each image, the most similar images can be queried by their embedding. Pre-processing of image data in the form of object detection would be valuable; however, high-quality label hierarchies des... |
We study the coherence of the visual tradition of the corpus using the tools of computer vision, and in doing so we aim to put together new building blocks rooted in medieval imagery for further research. We also believe that computer vision applications in corpora of medieval illumination have great promise, particul... |
A label hierarchy for medieval illuminations produced by content specialists using the system. It can be straightforwardly applied to scenarios in which hierarchical classification or weakly supervised object detection (Inoue et al., 2018) is performed on specific historical sets of images with related themes. | Although the domain of medieval manuscripts might seem quite specialized, the situation of divergent common vocabularies and the desire to resolve and combine labels across knowledge bases is common to many research fields. Our system is designed to support subject specialists from different backgrounds in viewing thei... | C |
To further investigate the universality of the framework, we conduct additional experiments on two more parameter-efficient fine-tuning approaches, i.e., adapter [32] and prefix-tuning [9]. Specifically, we show the results of the vanilla approaches and those improved with PanDa in Figure 6. The results of model-tuning... | Table XII: Comparisons of the number of trainable parameters and training efficiency. Here, we use the STS-B as the target task, and train the BERT-large for 10 epochs. As for PoT methods, we additionally train on the MNLI (source) task for 1 epoch (the main reason for training latency in PoT). All experiments are done... |
To investigate the efficiency of PanDa, we use BERT-large as the backbone model, setting MNLI and STS-B as the source and target tasks respectively (footnote 11: We set the training epoch as 1 for the source task, and as 10 for the target task. All experiments are done with an NVIDIA A100 GPU in this analysis.), and show the co... | Table XII: Comparisons of the number of trainable parameters and training efficiency. Here, we use the STS-B as the target task, and train the BERT-large for 10 epochs. As for PoT methods, we additionally train on the MNLI (source) task for 1 epoch (the main reason for training latency in PoT). All experiments are done... | Table XII: Comparisons of the number of trainable parameters and training efficiency. Here, we use the STS-B as the target task, and train the BERT-large for 10 epochs. As for PoT methods, we additionally train on the MNLI (source) task for 1 epoch (the main reason for training latency in PoT). All experiments are done... | A
Defacement Motives. The conflict caught the attention of existing defacers, who performed many attacks against other countries but not Russia and Ukraine until just after the invasion, suggesting their choice of targets was influenced. We also found some ‘new faces’ e.g., the second most active defacer targeting Russi... | We annotate motives based on 1 341 unique messages left on the defaced pages. We consider a political sentiment and mark it as supporting Russia/Ukraine if a support/objection is expressed e.g., ‘We stand with Ukraine!’. We mark messages consisting of defacer signatures e.g., ‘Hacked by ABC’ without clear motives, or j... | Further steps are performed to enhance data reliability. First, many on-hold submissions are valid but were never verified; we perform a semi-automatic validation using the messages left on defaced pages (see Appendix §F). Second, submissions may be reported to multiple archives to broaden their visibility. We de-dupli... |
We find diverse motives, but despite targeting Russia and Ukraine, most messages do not refer to the conflict. 2 723 (46.16%) were for self-aggrandisement, 1 219 (20.66%) self-expression, 143 (2.42%) related to other conflicts (such as Israel-Palestine), 58 (0.98%) related to patriotism, and 89 (1.51%) were financiall... |
Volunteer Hacking Discussions. Two days after the invasion, the Ukrainian government called on pro-Ukraine ‘hacktivists’ to join the IT Army of Ukraine, which was stood up in an ad-hoc manner (Soesanto, 2022) to support the war effort (Mykhailo Fedorov, 2022; The Guardian, 2022c). The most tangible outcome is a public... | A |
This efficiency in attack methodology is a critical advantage in practical adversarial settings. It underscores a reduced risk for the attacker, as there is no longer a requirement to send to the victim a vast number of perturbations in the hope of stumbling upon a successful one. Instead, HET provides a systematic and... |
There has been a great amount of research done on adversarial transferability, discussing attacks (Naseer et al., 2019; Wang et al., 2021; Springer et al., 2021; Zhu et al., 2021), defences (Guo et al., 2018; Madry et al., 2018) and performing general analysis of the phenomena (Tramèr et al., 2017; Katzir and Elovici,... |
For the attacks, we use FGSM (Goodfellow et al., 2014), PGD (Madry et al., 2017) and PGD+Momentum (denoted as Momentum) (Dong et al., 2018), which should have increased transferability according to (Xie et al., 2019). All of these algorithms are considered accepted baselines when evaluating adversarial attacks (Goodfe... |
Moreover, the consistent performance of HET across various datasets and model architectures offers a deeper understanding of the intrinsic characteristics of adversarial attacks. As suggested in previous works, it supports the claim that different models, despite their distinct architectural designs, often share ...
Many popular and powerful adversarial attacks (such as PGD (Madry et al., 2018) and CW (Carlini and Wagner, 2017)) are whitebox attacks. This means that in order to use these algorithms to generate x′, the attacker must have access to the lear... | C
It is worth highlighting that the entanglement-induced bound in Theorem 2 is stated for a given pair of data-encoded states, and not as an average over all possible data pairs. It is thus natural to determine classes of data and embeddings where concentration will arise with high probability, e.g., cases when th...
Given that exponential concentration leads to trivial data-independent models, it is important to determine when kernel values will, or will not, concentrate. In this section, we investigate the causes of exponential concentration for quantum kernels. | It is worth highlighting that the entanglement-induced bound in Theorem 2 is stated for a given pair of data-encoded states, and not as an average over all possible data pairs. Hence, it is thus natural to determine classes of data and embeddings where concentration will arise with high probability, e.g., cases when th... |
Entanglement-induced concentration can also occur in cases where the embedding is not highly expressive but still leads to states satisfying a volume-law. Here, Theorem 2 implies that the kernel values of the projected quantum kernels will exponentially concentrate. | Entanglement can also be detrimental when combined with local quantum kernels such as the projected quantum kernels, and suggests that one should be mindful about using embeddings leading to states satisfying volume-laws of entanglement.
Our results on global measurements demonstrate that the fidelity kernel can expone... | C |
To investigate whether our models recognize the motion clues and learn spatio-temporal information, we generated class activation maps on X3D-S and RGB video data. The features of the last convolution layer were used to calculate the CAMs. The calculated scores were normalised across all patches and frames before visua... |
Two approaches involving seven 3D action recognition networks are adopted to classify and anticipate lane change events on the PREVENTION dataset. For our RGB+3DN method, the lane change recognition problem is formulated as an action recognition task only utilising visual information collected by cameras. The best per... | We extract the samples of each class using the annotation provided by the PREVENTION dataset. The samples are initially centre-cropped from 1920 × 600 to 1600 × 600 pixels, then resized to 400 × 400 pixels in spatial resolution. In order to reduce the computational cost, the data is further downsample... | To investigate whether our models recognize the motion clues and learn spatio-temporal information, we generated class activation maps on X3D-S and RGB video data. The features of the last convolution layer were used to calculate the CAMs. The calculated scores were normalised across all patches and frames before visua...
The 3D CNNs we employ in this work are initially designed for recognising general human behaviours and trained on human behaviours datasets such as Kinetics-400 and Kinetics-600. These datasets are formed by video clips with relatively high frame rates (25 fps) [3]. Therefore, in order to efficiently extract motion cl... | D |
define t_ϵ = 1/n. This ensures that the k authors
of the anonymity set lie within an interval that is smaller or equal to | [Plot residue: Uncertainty Score (GCJ) and (GH) over Norm., Imit., Obf. I, Obf. II; series: Abuhamad et al., Caliskan et al., Original]
Figure 6. Anonymization performance (uncertainty score) in the | [Plot residue: Accuracy (GCJ) and (GH) over Norm., Imit., Obf. I, Obf. II; series: Abuhamad et al., Caliskan et al., Guessing, Original]
Figure 5. Attribution performance (accuracy) of candidate techniques | [Plot residue: Uncertainty Score (GCJ) and (GH) over Norm., Imit., Obf. I, Obf. II; series: Abuhamad et al., Caliskan et al., Original]
Figure 4. Anonymization performance (uncertainty score) in the | of data in learning models (e.g., Chaudhuri et al., 2011; Su et al., 2016; Abadi et al., 2016; He et al., 2017; Jayaraman and Evans, 2019) and also in the field of
natural language processing (e.g., Weggenmann and Kerschbaum, 2018; Lyu et al., 2020; Fletcher et al., 2021; Mattern et al., 2022). | C |
During model training, CoOp solely focuses on learning the context by minimizing the cross-entropy loss on the training set while maintaining all other parameters constant. Depending on the setup, we can obtain two variants of CoOp: a class-agnostic version where all classes share a common context, and a class-specifi... | For this aim, we proposed the soft context shared prompt tuning, SoftCPT, which adopts a shared meta network across all tasks to generate the context of a task based on its task description text. The meta network involves extracting task features by pre-trained language model and transforming the task features to neede... | This work introduces SoftCPT, an extension of CoOp that effectively adapts to the multi-task scenario while facilitating the modeling of task relatedness. Fig. 2 illustrates the overall structure of SoftCPT. Unlike CoOp, SoftCPT incorporates a meta network, shared across all tasks, to generate task-specific contexts. T... | The remainder of the paper is organized in the following manner: In Section 2, we review related work including pre-trained models, vision-language models, parameter-efficient fine-tuning, prompt tuning for vision-language models, and multi-task learning. In Section 3, we detail the proposed SoftCPT. In Section 4, we d... |
We conduct extensive experiments on four datasets. In Section 4.1, the datasets are detailed. In Section 4.2, we detail the various methods used for comparison. In Section 4.3, the implementation details of different methods are presented. After that, Section 4.4 conducts ablation study. Lastly, in Section 4.5, the co... | B |