| context | A | B | C | D | label |
|---|---|---|---|---|---|
Table 1 summarizes the precision and recall scores of our methods compared to other models with three different GAN objectives, when the number of discriminators is set to 5 or 10 ($M=5$ or $10$) and the number of experts is 1 ($k=1$).
MCL-GAN achieves outstandin... | Table 7 shows that MCL-GAN achieves outstanding performance compared to other strategies, especially for the recall metric, even compared with GT-Assign without relying on class labels for discriminator specialization.
The proposed approach is also efficient since it is free from any time-consuming clustering procedu... | To achieve these goals, we employ a Multiple Choice Learning (MCL) [8] framework to learn multiple discriminators and update the generator via a set of expert discriminators, where each discriminator is associated with a subset of the true and generated examples.
Our approach, based on a single generator and multiple d... | In practice, our discriminators specialized to subsets of the training data outperform independent training of discriminators on the whole dataset (as in GMAN) and other discriminator-assignment strategies, such as minimum-score discriminator selection (the opposite of MCL-GAN) and random selection.
Also, the performa... | MCL-GAN also achieves better results than a clustering-based approach, e.g., self-conditioned GAN [45], which requires additional computation due to clustering during training and the extra model parameters for conditioning the generator’s inputs with cluster membership.
The results imply that MCL-GAN is effective to m... | D |
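A minimal sketch of the expert-assignment idea this row describes, where each generated sample consults only its best-scoring "expert" discriminator (the scoring and the $k=1$ loss below are illustrative assumptions, not the authors' implementation):

```python
import math

def assign_expert(scores):
    """scores: list of discriminator outputs D_m(x) for one sample.
    Returns the index of the best-scoring (expert) discriminator."""
    return max(range(len(scores)), key=lambda m: scores[m])

def mcl_generator_loss(score_matrix):
    """Non-saturating generator loss where each fake sample only
    consults its single expert discriminator (k = 1)."""
    total = 0.0
    for scores in score_matrix:
        total += -math.log(scores[assign_expert(scores)] + 1e-12)
    return total / len(score_matrix)
```

With $M$ discriminators, each one ends up specialized to the subset of samples it wins, which is the MCL-style alternative to training every discriminator on the whole dataset.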
The WER scores of different approaches are compared in Fig. 6. From the figure, the proposed DeepSC-SR provides lower WER scores and outperforms the speech transceiver under various channel conditions, as well as the text transceiver under the Rayleigh channels when the SNR is lower than around 8 dB. Moreover,... |
In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and m... | Regarding the semantic communications for speech information, our previous work developed an attention mechanism-based semantic communication system to restore the source message, i.e., reconstruct the speech signals [18]. However, in this paper, we consider an intelligent task at the receiver to recover the text informat... | In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and mi... | Inspired by the end-to-end (E2E) communication systems developed to address the challenges in traditional block-wise communication systems [9, 10], different types of sources have been considered for E2E semantic communication systems. Particularly, initial research on semantic communication systems for text information... | C |
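The WER metric this row evaluates is the word-level Levenshtein distance normalized by the reference length; a self-contained sketch (standard definition, independent of DeepSC-SR):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, `wer("the cat sat", "the bat sat")` is 1/3 (one substitution over three reference words).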
Failure cases on S3DIS Area-5. We observed that our model encounters difficulties in distinguishing between categories that have geometric similarities. Specifically, when two categories exhibit close geometric resemblances, the model often struggles to delineate clear boundaries between them.
|
Indeed, our methodology allows for the implementation of soft propagation, enabling the seamless transfer of supervision signals from labeled points in sample $\bm{P}_i$ to unlabeled points in sample $\bm{P}_j$... | We present several failure cases in Figure 6. Consistent with numerous existing studies, our approach struggles when two categories exhibit similar geometric properties. This specific problem is a well-documented challenge, and its presence is notable even in methods that employ full supervision. The inherent limitatio... |
Comparison with fully supervised methods: We compare our weakly supervised method with some fully supervised state-of-the-art methods[23, 24, 25, 26, 2, 26, 4] on the public dataset S3DIS Area-5. Also, we compare our method under weak supervision against full supervision. Our method produces even slightly higher resul... | Failure cases on S3DIS Area-5. We observed that our model encounters difficulties in distinguishing between categories that have geometric similarities. Specifically, when two categories exhibit close geometric resemblances, the model often struggles to delineate clear boundaries between them.
| B |
The geometric features are concatenated with the image features from the backbone for depth estimation.
Based on the depth and other 3D predictions from the base detection branch, the detector outputs the 3D object detection results. The symbol ⓒ indicates a concatenation operation. | M3D-RPN [3] focuses on the design of depth-aware convolution layers to improve 3D parameter estimation and post-optimization of the orientation by exploring the consistency between projected and annotated bounding boxes.
To address the common occlusion issue in monocular object detection, MonoPair [9] proposes to model... | In this paper, we propose an effective holistic geometric formula by principled modeling of the relationships between the depth and different geometry elements predicted from the deep network for the task of monocular 3D object detection, including 2D bounding boxes, 3D object dimensions, object poses, and object posit... | either weakly use the geometry considering the projection consistency between 2D and 3D
for post-processing or employ perspective projection regardless of the object poses and positions. However, object poses and positions can provide considerably stronger geometric constraints and are extremely important for accurate ... | There are several recent methods considering utilizing the geometric information for monocular 3D object detection [15, 31, 29, 5, 19].
One research direction mainly focuses on using geometry information to improve the detection performance in the inference stage via post-processing [3, 39]. For instance, M3D-RPN [3] e... | D |
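The projection consistency these rows refer to reduces, in its simplest form, to the pinhole-camera relation between a 3D object height and its 2D bounding-box height; a hedged sketch (illustrative of geometry-based monocular depth recovery in general, not any one cited method):

```python
def depth_from_height(focal_px: float, object_height_m: float,
                      bbox_height_px: float) -> float:
    """Pinhole-camera relation used by geometry-based monocular 3D
    detectors: a 3D height H at depth z projects to h = f * H / z
    pixels, so z = f * H / h."""
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * object_height_m / bbox_height_px
```

A 1.5 m tall object spanning 70 px under a 700 px focal length is thus placed at 15 m, which is why tighter pose-aware constraints (rather than height alone) matter for accuracy.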
1) We utilize the relational feature of GCNs to rectify text segments by globally considering their “characterness” and “streamline” in the same relational structure through a weakly supervised training process.
To the best of our knowledge this is the first time the classification ability, instead of link prediction, ... | we have proposed false positive/negative suppression strategies that take visual-relational feature maps into account to infer grouping of densely designed text segments with regard to GCN’s node classification and relational reasoning ability.
We have also proposed a simple but effective shape-approximation method to ... | However, sharing similar visual features sometimes leads to error accumulation. In this example, a text instance appears to be separated in both the text region map and its center line map as the FPN layer fails to build long-range dependency between the text segments in the middle. Although the relational information ... | The node classification process in GCNs can be used to further rectify false text segments. This is because the relational feature aggregation between the text segments enables GCNs to rectify a text segment by globally considering the ‘characterness’ and ‘streamline’ of text segments that are in the same relational str... | 2) We propose a novel visual-relational reasoning approach to increase the feature discriminability for falsely detected text segments in typical bottom-up arbitrary-shape text-detection approaches and take advantage of their strengths. This is demonstrated to be effective in capturing both visual
and continuity pr... | D |
undirected extension of a graph $(\mathcal{V},\mathcal{E})$ is the graph $(\mathcal{V},\mathcal{E}\cup\mathcal{E}^{-1})$. | graph $(\mathcal{V},\mathcal{E})$ is said to be directed if all edges are directed edges, or, equivalently, if $\mathcal{E}\cap\mathcal{E}^{-1}=\emptyset$, that is, when no | additional vocabulary and notation. In the graph $(\mathcal{V},\mathcal{E})$, the undirected edges are the elements of $\mathcal{E}\cap\mathcal{E}^{-1}$ — that | In the graph $(\mathcal{V},\mathcal{E})$, the directed edges are the elements of $\mathcal{E}\cap(\mathcal{E}^{-1})^{\mathsf{c}}$... | undirected extension of a graph $(\mathcal{V},\mathcal{E})$ is the graph $(\mathcal{V},\mathcal{E}\cup\mathcal{E}^{-1})$. | C |
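The set identities in this row translate directly into set operations on ordered pairs; a small sketch (edge sets as Python sets of tuples, an assumed but natural encoding):

```python
def inverse(edges):
    """E^{-1}: every edge reversed."""
    return {(v, u) for (u, v) in edges}

def undirected_extension(edges):
    """Edge set of the undirected extension: E ∪ E^{-1}."""
    return edges | inverse(edges)

def undirected_edges(edges):
    """Elements of E ∩ E^{-1}: pairs present in both directions."""
    return edges & inverse(edges)

def directed_edges(edges):
    """Elements of E ∩ (E^{-1})^c: pairs present in one direction only."""
    return edges - inverse(edges)

def is_directed(edges):
    """The graph is directed iff E ∩ E^{-1} = ∅."""
    return not undirected_edges(edges)
```

So `{(1, 2), (2, 1), (2, 3)}` has one undirected edge (the pair 1-2 in both directions) and one directed edge (2, 3).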
A hash table is an effective method for collecting the statistics of IP addresses Sanders2015HS . It uses a hash function to compute a hash code into an array of buckets holding the statistical results. Ideally, the hash function assigns each IP address key to a unique bucket. Unfortunately, the hash function can generate... |
The hardware architecture of modern processors usually consists of more than two independent central processing units (CPUs) or graphics processing units (GPUs). Parallel software platforms can be implemented using high-level programming frameworks for specific hardware architectures Chen2009SA . The Compute Unified D... |
The statistics collection algorithm should be stable, effective, and efficient for large-scale records. To overcome the disadvantages of general statistics collection methods, a number of parallel techniques have been developed for large-scale records by optimizing the efficiency and complexity. For example, these alg... |
A number of statistical algorithms for solving this problem have been studied in the last few decades Jing05DLBS ; Kapur2013SA ; Klein2013Sorting ; Fredman2014Sorting ; Agapitos2016RSA . A classic divide-and-conquer strategy has been proposed in which IP addresses are first divided into multiple subsets. Then, each su... | In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu... | B |
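The hash-table counting this row describes is a one-pass frequency count; a minimal sketch with Python's built-in hash table (collisions are resolved internally by the dict implementation, so this sidesteps the collision handling the excerpt warns about):

```python
from collections import Counter

def top_ips(records, k=3):
    """Single-pass hash-table statistics: count each IP address and
    return the k most frequent as (ip, count) pairs, count-descending."""
    counts = Counter(records)  # dict keyed by the hashed IP string
    return counts.most_common(k)
```

This serves as the pre-processing step the excerpt mentions: the frequently occurring IP addresses fall out of the counts directly.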
$\mathcal{A}=\left[\begin{array}{ccc}I&0&0\\ C_{1}A_{1}^{-1}&\cdots&\cdots\\ 0&0&I\end{array}\right]$... | In this study, we explore two methodologies for designing preconditioners tailored for 3-by-3 block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ... | or 4 (block-diagonal preconditioners). See [31, 25] for the proof. In contrast, the preconditioners based on the nested Schur complement satisfy polynomials whose degrees may be as high as $n$ (block triangular) or $n!$ (block-diagonal). Therefore, an additive Schur complement based preconditioner i... | $\mathcal{P}_{T_{1}}^{-1}\mathcal{A}=(\mathcal{L}\mathcal{D})^{-1}\mathcal{A}=\mathcal{U}$, which is the block upper triangular m... | We study both block-triangular and block-diagonal preconditioners for the system matrix (1). For block-triangular preconditioners, we focus on a lower triangular type with left preconditioning because an upper triangular one with right preconditioning can be discussed in a similar way [4, 25]. We consider the following... | D |
We note that for each sample $p$, $\mathbf{X}_j^{(p)}$ is only held in a single client in silo $j$.
A sample ID in the banking and insurance example corresponds t... | If we map this system model on the bank and insurance example in Section 1, there are two silos, a banking silo corresponding to a banking holding company and an insurance silo corresponding to an insurance holding company. The clients in the banking silo are subsidiary banks of the banking holding company,
and similar... | Each silo holds a distinct set of features (e.g., customer/patient features);
the data within each silo may even be of a different modality; for example, one silo may have audio features, whereas another silo has image data. At the same time, there exists an overlap in the sample ID space. More specifically, the silos ... | The features of the samples in the banking silo consist of balance, installments, debit history, credit history, etc., and the banking features for each customer are stored by the bank subsidiary that serves this customer. Similarly, the insurance silo has insurance-related features such as policy details, premium, age... | We note that for each sample $p$, $\mathbf{X}_j^{(p)}$ is only held in a single client in silo $j$.
A sample ID in the banking and insurance example corresponds t... | C |
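The silo setup in this row is the usual vertical-partitioning picture: silos share sample IDs but hold disjoint feature blocks. A minimal alignment sketch (dicts mapping sample ID to a feature list are an illustrative encoding, not the paper's protocol, which would do this intersection privately):

```python
def align_silos(*silos):
    """Vertical-federated-learning style alignment: each silo maps
    sample ID -> its own feature block; only IDs present in every
    silo can be used for joint training."""
    shared = set(silos[0])
    for silo in silos[1:]:
        shared &= set(silo)
    # concatenate the per-silo feature blocks for each shared ID
    return {pid: [f for silo in silos for f in silo[pid]]
            for pid in sorted(shared)}
```

In the banking/insurance example, a customer present in both silos contributes the concatenation of banking features and insurance features.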
In both pictures within Fig. 1, the cyan pentagrams represent true T-eigenvalues of $\mathcal{A}$. This observation highlights that opting for an appropriate transformation matrix can lead to tighter bounds or even yield the true T-eigenvalues.
The result presented in cao2021tensor is characterized by i... |
Hundreds of years have witnessed the power of the eigenvalue tool of matrices, not only useful in practice but also fundamental in concept trefethen2005spectra . However, in the nonnormal matrix case, eigenvalue analysis may reveal little significance, and therefore pseudospectra analysis springs up to remedy such a sit... | Since many scholars have focused their attention on matrix perturbation analysis Bauer-Fike ; 1986Generalization ; Rellich1969 ; shi2012sharp ; Sun1987 ; trefethen2005spectra , a wealth of results have been developed up to now. These include the Gershgorin disc theorem, the Bauer-Fike theorem, and the Kahan theor... |
Eigenvalue sensitivity analysis is a subject of particular interest to many researchers, especially in the context of the symmetric eigenproblem for real matrices, where orthogonal transformations are commonly employed. Notably, classical results such as the Gershgorin disk theorem and the perturbation result Wielandt... |
It is not hard to calculate that each frontal slice of the tensor $\mathcal{S}$ is exactly the symmetric matrix $S$. Then we know that all T-eigenvalues of $\mathcal{A}$ are the same as the T-eigenvalues of $\mathcal{S}$, which are all real and only appear in the real... | C |
We also show additional comparison to EdgeConnect [18] and PRVS [10] in Figure 6 as these methods all claim to improve results by reconstructing image structures. Comparatively, the proposed model recovers more reasonable and sharper structures, leading to better results. |
User Study. We further perform subjective user study. 10 volunteers with image processing expertise are involved in this evaluation. They are invited to choose the most realistic image from those inpainted by the proposed method and the representative state-of-the-art approaches. Specifically, each participant has 15 ... |
We evaluate the proposed method on the CelebA [16], Paris StreetView [4] and Places2 [39] datasets, which are widely adopted in the literature, and we follow their original training, testing, and validation splits. Irregular masks are obtained from [13] and classified based on their hole sizes relative to the entire i... | Motivated by global and local GANs [7], Gated Convolution [36] and Markovian GANs [9], we develop a two-stream discriminator to distinguish genuine images from the generated ones by estimating the feature statistics of both texture and structure. The discriminator is shown in Figure 2 (b). The texture branch includes t... |
Objective evaluation. We quantitatively evaluate the proposed method using three major metrics: LPIPS, PSNR and SSIM, and compare the scores to those of the state-of-the-art counterparts with irregular mask ratios of 0-20%, 20-40% and 40-60%. Table 1 shows the results achieved on the Places2 dataset, where the propose... | D |
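Of the three metrics this row reports, PSNR has the simplest closed form; a self-contained sketch on flat pixel sequences (standard definition, independent of the inpainting method):

```python
import math

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel
    sequences: 10 * log10(peak^2 / MSE)."""
    assert len(reference) == len(distorted)
    mse = sum((r - d) ** 2
              for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher is better; a maximally wrong 8-bit pixel gives 0 dB, identical images give infinity.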
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\varepsilon$. The concept of the BEC was first introduced by Elias in 1955 InfThe . Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ... |
The problem of decoding linear codes over the erasure channel has received renewed attention in recent years due to their wide application in the internet and distributed storage systems for analyzing random packet losses Byers ; Luby ; Lun . Three important decoding principles, namely unambiguous decoding, maximum ... | In this paper we carried out an in-depth study on the average decoding error probabilities of the random parity-check matrix ensemble $\mathcal{R}_{m,n}$ over the erasure channel under three decoding principles, namely unambiguous de... |
First recall that the error exponents of the average decoding error probability of the ensemble $\mathcal{R}_{(1-R)n,n}$ over the erasure channel under the three decoding principles are defined by | In particular in FFW , upon improving previous results, the authors provided a detailed study on the decoding error probabilities of a general $q$-ary linear code over the erasure channel under the three decoding principles. Via the notion of $q^{\ell}$... | A |
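The channel model in this row is easy to simulate: each symbol survives with probability $1-\varepsilon$ and is otherwise replaced by an erasure. A minimal sketch (using `None` as the erasure symbol is an illustrative choice):

```python
import random

ERASED = None  # erasure symbol

def bec(bits, epsilon, rng=None):
    """Binary erasure channel: each symbol is delivered intact with
    probability 1 - epsilon and replaced by an erasure otherwise."""
    rng = rng or random.Random()
    return [ERASED if rng.random() < epsilon else b for b in bits]
```

Unambiguous decoding then succeeds exactly when the erased coordinates are uniquely determined by the parity checks on the surviving ones.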
We train the transformer with the objective of predicting the $k$-th step ahead. The main advantages of this subgoal objective are simplicity and empirical efficiency. We used expert data to generate labels for supervised training. When offline datasets are available, which is the case for the environments consid... |
Finally, we formulate the following hypothesis aiming to shed some light on why kSubS is successful: we speculate that subgoal generation may alleviate errors in the value function estimation. Planning methods based on learning, including kSubS, typically use imperfect value function-based information to guide the sea... | MCTS-kSubS and BF-kSubS differ in the choice of the search engine: the former uses Monte-Carlo Tree Search (MCTS), while the latter is backed by Best-First Search (BestFS).
We provide two sets of implementations for the generator, the low-level policy, and the value functions. The first one uses transformer architectur... | The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals.
In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar... |
In this setup, one easily sees that the probability that the good subgoal will have the highest value estimate among the generated states grows with $k$. Consequently, kSubS can handle higher levels of noise than the baseline BestFS; see Table 4.
We conduct experiments on our substitution dataset and three general datasets to verify our method from different perspectives. Standard precision, recall, and F1 scores are calculated as evaluation metrics to show the performance of different model settings. In this paper, we set up experiments on PyTorch and FastNLP ... | Meanwhile, in order to relieve character substitution problems and enhance the robustness of NER models, researchers have also paid attention to utilizing glyph and phonetic features of Chinese characters. Jiang Yang and Hongman Wang suggested using the ‘Four-corner’ code, a radical-based encoding method for Chinese ch... | In this paper, we propose a lightweight method, Multi-feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), which fuses extra glyph and phonetic features to detect possible substitution forms of named entities in Chinese. On top of using pre-trained models to represent the semantic feature, we choose... | In order to verify whether our method has the ability to cope with the character substitution problem, we also build our own dataset. This specially designed dataset is collected from informal news reports and blogs. We label the Named Entities in raw materials first and then create their substitution forms by using si... | Nowadays, the informal language environment created by social media has deeply changed the way that people express their thoughts. Using character substitution to generate new named entities becomes a common linguistic phenomenon which is a big challenge for NER. In this paper, we propose a lightweight method fusing th... | C |
where $\mathcal{F}$ is the 2D Fourier transform, $\mathcal{S}$ is the SLM modulation, $U(\cdot)$ is the zeroth-order upsampling operator from the low-resolution SLM to the high-resolution neural étendue expander, and $\odot$ is the Hadamard product. | To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the generated image with th... | The differentiability of Eq. (2) with respect to the modulation variables $\mathcal{E}$ and $\mathcal{S}$ allows us to learn the optimal wavefront modulation of the neural étendue expander $\mathcal{E}$ by jointly optimizing the static neural étendue expander in conjunction with... | Next, we analyze the expansion of étendue achieved with the proposed technique. To this end, suppose we want to generate the étendue-expanded hologram of only a single scene.
Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, a... | Specifically, we model the holographic image formation in a fully differentiable manner following Fourier optics. We relate the displayed holographic image $I$ to the wavefront modulation of the neural étendue expander $\mathcal{E}$ as
| B |
To counter data scarcity in the multi-choice question answering task, Jin
et al. (2020) propose a multi-stage MTL model that is first coarsely pre-trained using a large out-of-domain natural language inference dataset and then fine-tuned on an in-domain dataset. | et al., 2020) where speech recognition and text translation are learned jointly. Similarly for video captioning (Pasunuru and
Bansal, 2017), the video prediction task and text entailment generation task are used to enhance the encoder and decoder of the model, respectively. A multimodal representation space also makes ... | et al. (2019) add an unsupervised auxiliary task that learns continuous bag-of-words embeddings on the retrieval corpus in addition to the sentence-level parallel data. \deletedRecently, \addedWang
et al. (2020d) build a \replacedmultilingualmulti-lingual NMT system with source-side language modeling and target-side de... | For text generation tasks, MTL is brought in to improve the quality of the generated text.
It is observed in (Domhan and Hieber, 2017) that adding a target-side language modeling task on the decoder of a neural machine translation (NMT) model brings moderate but consistent performance gain. | In some settings where MTL is used to improve the performance of a primary task, the introduction of auxiliary tasks at different levels could be helpful. Several works integrate a language modeling task on lower-level encoders for better performance on simile detection (Rei, 2017), sequence labeling (Liu
et al., 2018a... | C |
The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... | The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates, however, some publications have... | Be sure to use the \IEEEmembership command to identify IEEE membership status.
Please see the “IEEEtran_HOWTO.pdf” for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This w... | The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... |
IEEE recommends using the distribution from the TeXUser Group at http://www.tug.org. You can join TUG and obtain a DVD distribution or download for free from the links provided on their website: http://www.tug.org/texlive/. The DVD includes distributions for Windows, Mac OS X and Linux operating systems. | D |
extract a coloring of $V(G)$ as follows: for $i\in[n]$ we set $c(v_i)$ to be the minimum $\alpha$ such that $W_i\cap S_{\alpha}\neq\emptyset$... | and only if $\ell(u_1)\in W_i$ and $\ell(u_2)\in W_{i'}$... | the coloring $c:V(G)\to[3]$ defined in this way is proper. Consider $i,i'\in[n]$ such that $(v_i,v_{i'})\in E(G)$... | $H,\ell',\mathcal{C}'\models\phi_{adj}[x_{1}\setminus u_{1},\,x_{2}\setminus u_{2}]$... | $c(v_i)\neq c(v_{i'})$.
| B |
There has been prior work examining both the extension of public goods to static (exogenous) networks, and the provision of public goods on endogenous networks. In particular, Bramoullé et al. (2007) launched research into this environment, by showing that given a network shape, specialized Nash equilibria (in which a... | Prior extensions of public goods provision to environments with endogenous linking include Galeotti and Goyal (2010), which furthers the specialization result of Bramoullé et al. (2007). These papers emphasize the prevalence of core-periphery architectures as equilibrium networks, but in a setting where players choose ... | Our experiment allows us to examine the impact of the information structure on reciprocal behavior. In the control (or baseline) condition, players are given information only about the total inflow of benefits from others after each round, but cannot identify the source of those benefits. In this way, direct reciprocit... |
The estimation in column (3) of Table 1 shows the effects of the treatment on subjects’ abilities to coordinate on efficient structure. By efficient structure, we refer to a network topology that satisfies the conditions required for efficiency by Proposition 2, without necessarily satisfying the requirement of full c... | Elliott and Golub (2019) characterize outcomes in public goods games on exogenous networks by the spectrum of a matrix called the benefits matrix, in which each entry gives the marginal rate of substitution between decreasing own contribution and increased benefits from a neighbor in a fixed network. Their results tie ... | D |
The purpose of SISR is to enlarge a smaller size image into a larger one and to keep it as accurate as possible. Therefore, enlargement operation, also called upsampling, is an important step in SISR. The current upsampling mechanisms can be divided into four types: pre-upsampling SR, post-upsampling SR, progressive up... |
Due to the particularity of the SISR task, it is difficult to construct a large-scale paired real SR dataset. Therefore, researchers often apply degradation patterns on the aforementioned datasets to obtain corresponding degraded images to construct paired datasets. However, images in the real world are easily disturb... | et al., 2019), Zhao et al. proposed a deep Channel Splitting Network (CSN) to ease the representational burden of deep models and further improve the SR performance of MR images. In (Peng
et al., 2020), Peng et al. introduced a Spatially-Aware Interpolation Network (SAINT) for medical slice synthesis to alleviate the m... |
Interpolation is the most widely used upsampling method. The current mainstream of interpolation methods includes Nearest-neighbor Interpolation, Bilinear Interpolation, and Bicubic Interpolation. Being highly interpretable and easy to implement, these methods are still widely used today. Among them, Nearest-neighbor ... |
Many SISR methods have been studied long before, such as bicubic interpolation and Lanczos resampling (Duchon, 1979), which are based on interpolation. However, SISR is an inherently ill-posed problem, and multiple HR images corresponding to the same LR image always exist. To solve this issue, some numerical methods (... | C |
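Of the interpolation-based upsampling methods listed in this row, nearest-neighbor is the simplest; a minimal sketch on a 2D grid of values (lists of rows as an illustrative image encoding):

```python
def upsample_nearest(image, scale):
    """Nearest-neighbor upsampling of a 2D grid (list of rows):
    each source pixel is replicated into a scale x scale block."""
    out = []
    for row in image:
        widened = [px for px in row for _ in range(scale)]
        out.extend([widened[:] for _ in range(scale)])
    return out
```

Bilinear and bicubic interpolation replace the block replication with weighted averages of 2x2 and 4x4 neighborhoods respectively, trading sharpness of edges for smoothness.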
The effect of learning patch-based representation rather than direct pixel values has been illustrated in Figure 3 as part of the ablation study included in the experiments. It becomes quite clear that patch-based representation alone (third column), while helpful, may not yield satisfactory results for challenging syn... |
We begin our analysis with an ablation study of the proposed architecture to demonstrate the utility of each introduced loss component. Figure 3 illustrates the effect of the following adjustments to the conventional coordinate network (second column): i) patch output (third column), ii) cross-patch consistency loss ... |
Another important property to enforce, especially when some parts of the signal need to be synthesized, is for all predicted patches to come from a distribution of likely patches, derived from the available information in the source image. This is achieved with the aid of a discriminator tasked to predict which patche... |
Cross-Patch Consistency Loss. The ability to produce likely pixels or patches does not necessarily lead to consistent network output when the entire learned image is considered. By default, all patches for which ground truth is available are optimized to be close to that reference, but this does not guarantee that all...
To perform super-resolution, a Neural Knitwork has to translate the information contained in the patches of the original scale to a domain of patches of a finer scale. This can be done by matching the patch distribution across scales [8, 25, 26, 29]. For blind super-resolution, the Neural Knitwork core module is utilized wi... | C
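The cross-patch consistency idea discussed above can be illustrated with a minimal numpy sketch (ours, not the paper's implementation): overlapping patches are folded back into an image by averaging, and patches extracted from a real image agree on their overlaps, so the consistency penalty vanishes:

```python
import numpy as np

def extract_patches(img, k):
    """All overlapping k x k patches of a 2-D image (stride 1)."""
    h, w = img.shape
    return np.array([img[i:i+k, j:j+k]
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def fold_patches(patches, shape, k):
    """Overlap-average the patches back into an image."""
    h, w = shape
    acc, cnt = np.zeros(shape), np.zeros(shape)
    idx = 0
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            acc[i:i+k, j:j+k] += patches[idx]
            cnt[i:i+k, j:j+k] += 1
            idx += 1
    return acc / cnt

img = np.arange(16, dtype=float).reshape(4, 4)
patches = extract_patches(img, 3)
recon = fold_patches(patches, img.shape, 3)
# Patches from a real image agree on overlaps, so the cross-patch
# consistency penalty (disagreement with the folded image) is zero.
consistency = np.mean((recon - img) ** 2)
```

A network predicting patches independently has no such guarantee, which is precisely what a cross-patch consistency loss penalises.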
We identify a potential issue in the application of IDS to contextual problems, or those with non-stationary expected information gains. Namely, it may fail to take account of the magnitude of the information gain and make counter-intuitive selections as a result. We explain how a tunable variant avoids this issue... | Our theoretical results on TS are given in Section 2. In Section 3 we discuss the Polya-Gamma augmentation scheme necessary for a practical implementation of TS, and in Section 4 we discuss the limitations of a similar implementation of IDS. Finally, we demonstrate the efficacy of TS numerically in Section 5.
|
The present paper is the first work we are aware of that specifically applies TS to apple tasting, but previous work has considered its use for logistic bandits. For logistic contextual bandits, the implementation of exact TS (i.e. the policy that draws its sample from the exact posterior) is infeasible due to the intract...
A related algorithm, inspired by the link between information gain and the exploration necessary in sequential decision-making problems, is Information-Directed Sampling (IDS), introduced in Russo and Van Roy (2014a, 2018). Like TS, IDS also selects at random based on the posterior belief, but cons... | A
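To illustrate the deterministic core of IDS and the magnitude issue noted above, here is a small numpy sketch (ours; the numbers are hypothetical): the action minimising the information ratio Δ(a)²/g(a) is unchanged when all information gains are scaled down uniformly, i.e. the selection ignores the absolute magnitude of the gains:

```python
import numpy as np

def ids_action(expected_regret, info_gain, eps=1e-12):
    """Deterministic IDS-style selection: minimise the information
    ratio  Delta(a)^2 / g(a)  over actions."""
    ratio = expected_regret ** 2 / (info_gain + eps)
    return int(np.argmin(ratio))

# Hypothetical two-action example: action 1 has slightly higher
# expected regret but a much larger expected information gain.
delta = np.array([0.10, 0.15])
gain = np.array([0.01, 0.20])
a = ids_action(delta, gain)            # selects action 1
a_scaled = ids_action(delta, gain * 1e-3)  # same choice: ratio is scale-invariant
```

Because scaling every g(a) by the same factor rescales every ratio identically, the argmin (and hence the selection) is unchanged even when all gains are negligible.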
In a legal analytics scenario [11] where the identification of unfair clauses is done automatically, a system’s output of “potential unfairness” could be explained by the distribution of attention mass on specific segments of text. However, this in itself is not the type of explanation that a legal expert would provide... | As observed, in unfairness detection, SS regularization is highly effective in selecting the appropriate legal rationales when detecting an unfair clause (see Table 4). This is especially true with categories with a larger memory like CR, LTD, and TER. Compared to WS, SS regularization helps in filtering out irrelevant... | Online Terms of Service (ToS) often contain potentially unfair clauses to the consumer.
Unfair clause detection in online ToS is a binary classification task where each clause is labeled as either fair (negative class) or unfair (positive class) [10]. Multiple unfairness categories result in multiple binary classificat... | Here, the identification of relevant rationales is the result of an abstraction process carried out by the legal expert in an effort to explain an opinion. Similarly, an automatic system for unfair clause detection could use a shortlist of legal rationales to justify a prediction of unfairness, as they would define the... | In a legal analytics scenario [11] where the identification of unfair clauses is done automatically, a system’s output of “potential unfairness” could be explained by the distribution of attention mass on specific segments of text. However, this in itself is not the type of explanation that a legal expert would provide... | C |
Formal verification methods have a long-standing tradition in the computer science community. They have historically emerged in the context of hardware and software systems to provide strong guarantees about the correctness of the analyzed implementation by verifying the satisfaction of complex logical properties. From... | (see Legay et al., 2019, for a recent survey on the area), which is a simulation-based version of probabilistic model checking.
The idea of SMC is to calculate the probability of satisfaction of logical properties by Monte Carlo integration, that is, simulate many trajectories from the stochastic system, determine for ... |
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poste... | SMC applications usually consider the model parameters to be fixed to specific values (e.g., the maximum likelihood estimates) and simulate from the stochastic system conditioned on the fixed parameter values, without taking into account how the uncertainty on the parameter naturally propagates to the satisfaction prob... | B |
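The Monte Carlo idea behind SMC described above can be sketched in a few lines of plain Python (a toy system of our own, not one of the models discussed): estimate the satisfaction probability of a property as the fraction of simulated trajectories that satisfy it:

```python
import random

def smc_probability(simulate, phi, n_runs=2000, seed=0):
    """Statistical model checking by Monte Carlo integration: the
    fraction of simulated trajectories satisfying property `phi`."""
    rng = random.Random(seed)
    return sum(phi(simulate(rng)) for _ in range(n_runs)) / n_runs

# Toy stochastic system: a 20-step symmetric random walk.
def simulate(rng, steps=20):
    x, traj = 0, []
    for _ in range(steps):
        x += rng.choice([-1, 1])
        traj.append(x)
    return traj

# Safety-style property: the walk never exceeds level 10.
phi = lambda traj: max(traj) <= 10
p_hat = smc_probability(simulate, phi)
```

The same estimator applies unchanged when the trajectories are drawn from a Bayesian predictive distribution instead of a fixed-parameter simulator, which is the setting advocated in the text.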
where $U\in\mathbb{R}^{n\times O(k/\epsilon)}$
defines an embedding of data points into $\mathbb{R}^{O(k/\epsilon)}$... | In kernel $k$-Means,
the input is a dataset $X$ with weight function $w_{X}:X\to\mathbb{R}_{+}$ | since $\|c^{\star}-\varphi(u)\|^{2}=K(u,u)-\frac{2}{n}\sum_{x\in X}K(x,u)+\frac{1}{n^{2}}\sum_{x,y\in X}K(x,y)$ | define $w_{U}(S):=\sum_{u\in S}w_{U}(u)$.
For any oth... |
A weighted set $U$ is a finite set $U$ associated with a weight function $w_{U}:U\to\mathbb{R}_{+}$. | D
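The kernel-trick identity above for the distance to the mean in feature space can be checked numerically; the following sketch (ours) evaluates it purely through kernel values and verifies it against the explicit mean for the linear kernel, where the feature map is the identity:

```python
import numpy as np

def dist_to_kernel_mean(K_uu, K_uX, K_XX):
    """||phi(u) - c*||^2 via the kernel trick, where c* is the mean of
    the mapped dataset:
    K(u,u) - (2/n) * sum_x K(x,u) + (1/n^2) * sum_{x,y} K(x,y)."""
    n = K_uX.shape[0]
    return K_uu - 2.0 / n * K_uX.sum() + K_XX.sum() / n**2

# Sanity check with the linear kernel K(x,y) = <x,y>: the distance
# must equal ||u - mean(X)||^2 computed directly.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
u = rng.normal(size=3)
d = dist_to_kernel_mean(u @ u, X @ u, X @ X.T)
expected = np.sum((u - X.mean(axis=0)) ** 2)
```

The same three-term expression works for any positive-definite kernel (e.g. RBF), where the feature map is implicit and the distance could not be computed directly.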
For instance, replicability of standard equality in substructural logics has a neat algebraic explanation:
such an equality is defined by a left adjoint, as pioneered by Lawvere [Law69, Law70], and, as we will show, predicates defined in this way are always replicable. | how does this (standard) notion of equality relate to our quantitative equality?
To answer this question in a precise way, we first observe that elementary $R$-graded doctrines can also be organised in a 2-category, and then we compare it with the 2-category of $R$-Lipschitz doctrines.
Theorem 22 shows that the notion of quantitative equality given in this paper is coalgebraic, in the sense that Lipschitz doctrines are the coalgebras of a comonad over the category of graded doctrines. This generalizes a known situation that holds in the non-linear case, where elementary doctrines are the coalgebras ... | This shows that a quantitative equality cannot be given by a left adjoint;
however, thanks to the language of doctrines, we manage to compare quantitative equality with the standard one in a rigorous way, proving that they share other fundamental structural properties. | This provides us with a universal construction yielding an $R$-Lipschitz doctrine from an $R$-graded one, and we use it to generate semantics for the calculus.
In Section 5.2 we relate quantitative equality to the usual one defined by left adjoints, formally proving that the former indeed refines the... | C
In this section, we first define a new role similarity measure, namely ForestSim, based on spanning rooted forests. We then show that the ForestSim score can be expressed in terms of the diagonal elements in the forest matrix and prove that ForestSim is an admissible role similarity metric. After that, we propose Fores... |
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-$k$ similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node $u$ in spanni...
In this paper, on the basis of spanning rooted forests, we propose ForestSim, a new node similarity metric. ForestSim uses the average size of the trees rooted at the node $u$ in spanning rooted forests of the graph, denoted by $s(u)$, to capture its structural properties. Two node... | The key point of analyzing structural roles is figuring out how a vertex connects with its context nodes [43]. To some extent, the sizes of those trees rooted at $u$ in the spanning rooted forests of a graph reflect the connection mode between the node $u$ and its context vertices. Here, we use the a...
Figure 2: Each tree rooted at $u$ in the spanning rooted forest $F\in\mathcal{F}_{uu}$ for $u=1,2,3,4$ in the toy graph $G_{0}$... | C
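Since the ForestSim score is expressed in terms of the diagonal elements of the forest matrix, the following numpy sketch (ours; a toy path graph, not the paper's $G_{0}$) computes the forest matrix $\Omega=(I+L)^{-1}$ and checks two standard properties: its rows sum to one and its diagonal lies in $(0,1]$:

```python
import numpy as np

# Toy graph: a path on 4 nodes (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian D - A
Omega = np.linalg.inv(np.eye(4) + L)     # forest matrix (I + L)^{-1}
diag = np.diag(Omega)                    # diagonal entries used by ForestSim
```

The row-sum property follows from $L\mathbf{1}=0$, so $(I+L)\mathbf{1}=\mathbf{1}$ and hence $\Omega\mathbf{1}=\mathbf{1}$; the diagonal bound follows from the eigenvalues of $\Omega$ lying in $(0,1]$.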
Yet, some strides have been made on a similar topic, namely sentiment dependency. These approaches, featured in several studies Zhang et al. (2019); Huang and Carley (2019); Phan and Ogunbona (2020),
hypothesize that sentiments of aspects may be dependent and usually leverage syntax trees to reveal potential sentiment ... | However, sentiment dependency remains a somewhat ambiguous concept in the current research landscape.
Furthermore, previous methods Zhou et al. (2020); Zhao et al. (2020); Tang et al. (2020); Li et al. (2021a) tend to model context topological dependency (e.g., context syntax structure) rather than sentiment depende...
Aspect-based sentiment classification Pontiki et al. (2014, 2015, 2016) (ABSC) aims to identify sentiments associated with specific aspects within a text, as highlighted in several studies Ma et al. (2017); Fan et al. (2018); Zhang et al. (2019); Yang et al. (2021). | DGEDT-BERT Tang et al. (2020) is a dual-transformer-based network enhanced by a dependency graph, while SDGCN-BERT Zhao et al. (2020) is a GCN-based model designed to capture sentiment dependencies between aspects.
Dual-GCN Li et al. (2021a) is an innovative GCN-based model that enhances the learning of syntax and sema... | However, the progress of sentiment dependency-based methods, such as the work by Zhang et al. (2019); Zhou et al. (2020); Tian et al. (2021); Li et al. (2021a); Dai et al. (2021), has contributed to the improvement of coherent sentiment learning.
These studies explored the effectiveness of syntax information in ABSC, w... | A |
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L}:\mathbb{R}^{N}\rightarrow\mathcal{P}^{p}(\mathbb{S})$ | gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial
guess. The main ... | In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset, from 10%/90% to 90%/10%. The horizontal axis depic...
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. The horizontal axis shows the percentage of data used for training. The vertical axis corresponds to the rMSE, MAPE and me... | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of the predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... | D
Ablation on post-measurement normalization.
Table 5 compares the accuracy and signal-to-noise ratio (SNR) before and after post-measurement normalization on MNIST-4. We study 4 different QNN architectures and evaluate on 3 devices. The normalization can significantly and consistently increase SNR. | Figure 7. Ablation on different noise injection methods. Left: Without quantization, gate insertion and measurement perturbation perform similarly, both better than rotation angle perturbation. Right: With quantization, gate insertion is better as the perturbation effect can be canceled by quantization.
| As in Figure 5, during training, for each QNN gate, we sample error gates based on $\mathcal{E}$ and insert them after the original gate. A new set of error gates is sampled for each training step. In reality, the QNN is compiled to the basis gate set of the quantum hardware (e.g., X, SX, RZ, CNOT, and ID)... | Direct perturbation.
Besides gate insertion, we also experimented with directly perturbing measurement outcomes or rotation angles as noise sources. For outcome perturbation, with benchmarking samples from the validation set, we obtain the error $Err$ distribution between the noise-fr... | With different noise factors $T$, the gate insertion and measurement outcome perturbation have similar accuracy, both better than rotation angle perturbation. A possible explanation is that the rotation angle perturbation does not consider non-rotation gates such as X and SX.
The right side further investigates... | D
For the parameters used in the proposed EDA, we adopt the same confidence interval $[\alpha,\beta]$ as that of ETD chen2019asynchronous for a fair comparison. The number of time slices $N_{s}$ is empirically set to 10...
|
To evaluate the proposed EDA on visual tracking, we need to calculate the corresponding object bounding box based on the event trajectories associated by EDA. According to the frame-wise tracking protocol, for each tracking instance, we have the ground truth bounding box of the tracked object at the current frame. EDA... | For the object tracking task, since the event data is associated between two adjacent frames, we employ an evaluation protocol of frame-wise tracking to evaluate the performance of the proposed EDA approach. For the frame-wise tracking, the evaluation of these competing methods is based on object pairs, each of which i... | D |
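As an illustrative sketch (ours, not the authors' evaluation code), the AOR metric mentioned above can be computed as the mean overlap rate (IoU) between predicted and ground-truth boxes; the boxes below are hypothetical:

```python
def iou(box_a, box_b):
    """Overlap rate (IoU) of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def average_overlap_rate(preds, gts):
    """AOR: mean IoU over predicted / ground-truth box pairs."""
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)

# Two hypothetical frames: a partial overlap and a perfect match.
aor = average_overlap_rate([(0, 0, 2, 2), (0, 0, 4, 4)],
                           [(1, 1, 3, 3), (0, 0, 4, 4)])
```

The robustness metric AR would additionally count tracking failures (e.g. frames with IoU below a threshold), which we omit here.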
As a connected vertex-ordering of $H$ can be obtained in linear time using a standard graph traversal algorithm, and a colouring of $G'$ may be computed in $O(mn)$ time [12, Chapter 5.7], we concl... | class of perfect graphs. We also give a simple and constructive proof for comparability graphs (which are perfect). Note that there exist bad graphs in these graph classes; consider for example the fish graph, which is $K_{4}$-minor-free and comparability; see... | Recently, connected greedy edge-colourings (equivalently, connected greedy colourings of line graphs) have been studied in [3], and it was proved that there is no line graph of a bipartite graph that is ugly. Moreover, a careful analysis of the proof of [3] gives an algorithm running in time $O(n^{4})$...
We now prove our main result, that there are no ugly perfect graphs. This generalizes the same fact which was previously proved for Meyniel graphs [22] (a class which contains chordal graphs, HHD-free graphs, Gallai graphs, parity graphs, distance-hereditary graphs…) and line graphs of bipartite graphs [3]. Our proof ... | Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time $O(m+n)$ for chordal graphs using the LexBFS algorithm [21]; for Meyniel graphs this can be done in time $O(n^{2})$... | C
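A connected vertex-ordering and the associated greedy colouring are easy to sketch in Python (our illustration, not the paper's algorithm; BFS yields a connected ordering in linear time, since every vertex after the first is reached through an earlier neighbour):

```python
from collections import deque

def connected_order(adj, start=0):
    """Connected vertex ordering via BFS: every vertex after the first
    has at least one earlier neighbour in the ordering."""
    seen, order, q = {start}, [], deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                q.append(u)
    return order

def greedy_colouring(adj, order):
    """Greedy (first-fit) colouring along the given ordering."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Toy example: a 4-cycle, where this connected greedy colouring uses 2 colours.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
colours = greedy_colouring(adj, connected_order(adj))
```

Whether *some* connected ordering forces more colours than the chromatic number is exactly the "ugly graph" question studied in the text.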
The general goal of URL is identical across various URL tasks:
Given a finite set of samples $X=[x_{1},\dots,x_{n}]\in\mathbb{R}^{n\times D}$ | As for the societal impacts of GenURL, it can be regarded as a unified framework for the unsupervised representation learning (URL) problem that bridges the gap between various methods. The ablation studies of basic hyper-parameters can reflect the relationship between different URL tasks. The core idea of GenURL is to...
However, most research in URL has focused on individual data modalities or specific tasks, resulting in specific designs and different learning objectives. We take the following four widely used URL tasks as... | It is a fact that these two independent algorithms are designed to excel in their respective areas of applicability.
There is thus a natural question whether the intrinsic representation of data is determined by both the global data structure and data-specific prior assumptions in a unified framework. | Based on $\mathcal{M}$, the similarity between two non-adjacent samples within each $\mathcal{N}_{i}$ can be approximated by the shortest-path distance.
Since most URL algorithms are designed for specific tasks or data, the following... | B |
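The shortest-path approximation of similarity between non-adjacent samples mentioned above can be sketched with Dijkstra's algorithm on a toy neighbourhood graph (our hypothetical example; edge weights would normally be local distances on the k-NN graph):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra: geodesic (shortest-path) distances from `source` on a
    weighted neighbourhood graph {u: [(v, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy manifold: 4 points chained by their neighbourhood edges.
graph = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
         2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0)]}
dist = shortest_paths(graph, 0)
```

Non-adjacent samples (here 0 and 3) thus receive a geodesic distance accumulated along the neighbourhood graph rather than a direct Euclidean one.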
Note that MCUNetV2-M4 has a computation cost similar to MCUNet's (172M vs. 168M) but a much better mAP. This is because the expanded search space from patch-based inference allows us to choose a better configuration with a larger input resolution and a smaller model.
|
We provide the face detection results on WIDER FACE validation set with RNNPool-Face-Quant [43] and MCUNetV2-S. The quantitative results are shown in Table 7, where we follow [43] to calculate the peak memory. Our model has better mAP at 1.3× smaller peak memory. |
|
We benchmarked MCUNetV2 for memory-efficient face detection on WIDER FACE [51] dataset in Table 4. We report the analytic memory usage of the detector backbone in fp32 following [43]. We train our methods with S3FD face detector [53] following [43] for a fair comparison. |
| C
The ICDM 2020 Knowledge Graph Contest is a competition-style event co-located with the leading ICDM conference. This paper describes our solution for the consumer event-cause extraction task, and we won 1st place in the first stage leaderboard and 3rd place in the final stage leaderboard. Extracting causes of consumer... |
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and event-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a...
| The Consumer Event Cause Extraction (CECE) task aims to extract consumer events and the causes of those events from the text of a given brand or product. Traditional methods use a model structure similar to extractive machine reading comprehension (MRC) [7]. Most of the related work [6] extracted event types and event-causes ... | In the 2020 ICDM Competition (https://www.biendata.xyz/competition/icdm_2020_kgc/),
the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th... | C |
You et al. [35] reach similar conclusions after testing various augmentation strategies. For instance, edge perturbation is more suitable for social networks but hurts biochemical molecules.
Facing the aforementioned issue, many researchers have turned to exploring the possibility of discarding data augmentation from contrastiv... | To remedy the issue of unstable invariance from inappropriate data augmentations, we propose a novel graph-level contrastive learning framework named CGCL, where no handcrafted graph augmentation is needed. CGCL uses multiple GNN-based graph encoders to enforce contrastive learning in a collaborative way, remedying the...
In contrast to existing GCL methods, which use the same encoder to observe multiple augmented graphs, CGCL uses multiple encoders to observe the same graph and generate contrastive views. It is pivotal to recognize that the essence of contrastive learning is to learn invariance between different contrastive views ... | We propose a novel Collaborative Graph Contrastive Learning (CGCL) framework to reinforce unsupervised graph-level representation learning, which requires no handcrafted data augmentation. We explain the essence of the collaborative framework as generating multiple contrastive views from the encoder perspective.
| In this study, we introduce CGCL, a novel collaborative graph contrastive learning framework, designed to address the invariance challenge encountered in current GCL methods. Unlike the conventional practice of constructing augmented graphs by hand, CGCL employs multiple GNN-based encoders to generate multiple contrast... | A |
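The exact objectives of these frameworks differ, but the idea of contrasting two encoders' views of the same graphs can be illustrated with a generic InfoNCE loss in numpy (our sketch; the embeddings are random stand-ins for GNN outputs, and matching rows act as positives):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE between two sets of embeddings (one per encoder):
    matching rows are positives, all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # pairwise cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z_a = rng.normal(size=(8, 16))                    # encoder A's views of 8 graphs
z_b = z_a + 0.01 * rng.normal(size=(8, 16))       # near-identical views, encoder B
loss_aligned = info_nce(z_a, z_b)                 # small: views agree per graph
loss_random = info_nce(z_a, rng.normal(size=(8, 16)))  # large: no agreement
```

When the two encoders produce consistent views of the same graph, the loss is low; mismatched views are penalised, which is the invariance signal the collaborative framework exploits.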
In this paper, we theoretically show that inductive biases on both the training framework and the data are needed for compositionality to emerge.
A similar observation has been made by Kottur et al. (2017); however, our result is more fundamental and points out a common misconception that compositionality can be learn...
Here we study the impact of a different number of symbols on compositionality. In our experiments we used the communication channel with $|\mathcal{A}_{s}|=5$, giving a total of $25=(5\times 5)$... | We experimentally verify that a certain range of noise levels, dependent on the model and the data, promotes compositionality. We provide a wide range of experiments that illustrate the influence of different priors. For the inductive biases in the training framework, we look into the impact of the network architecture... | We perform a series of experiments in order to understand different aspects of the proposed framework better. We empirically validate that, indeed, a certain range of noise levels, dependent on the model and the data, promotes compositionality. Our work is foundational research and does not lead to any direct negative ... | The receiver’s network has a two-headed output and the training framework uses a factorized loss function.
In the standard setting, we compare the heads’ outputs with ’color’ and ’shape’ respectively, thereby reflecting human priors from the data. In this experiment, we distort this setting, | B
The previous result can be used to guarantee that the set $\mathcal{C}$ defined in equation (5) contains the set $\underline{\mathcal{D}}$, i.e., $\underline{\mathcal{D}}\subseteq\mathcal{C}$. Hence, ... | where $\oplus$ is the Minkowski sum operator. The set $\mathcal{N}$ should be thought of as a layer of width $\sigma$ surrounding the set $\mathcal{D}$; see Fig. 3 (right) for a graphical depiction. As will be made clear in the sequel, by enforcing that the value of th... | The previous section provides safety guarantees when $h(x)$ is a ROCBF. However, one is still left with the potentially difficult task of constructing a twice continuously differentiable function $h(x)$ such that (i) the set $\mathcal{C}$ defined in e... | We first learn a ROCBF controller in the case that the state $x$ is perfectly known, i.e., the model of the output measurement map is such that $\hat{X}(y)=X(y)=x$ and the error is $\Delta_{X}(y):=0$...
Propositions 1 and 2 guarantee that the level-sets of the learned function $h(x)$ satisfy the desired geometric safety properties. We now derive conditions that ensure that $h(x)$ is a ROCBF, i.e., that the ROCBF constraint (4) is also satisfied. | D
Since the work of Baker, Gill, and Solovay [BGS75], whenever complexity theorists were faced with an impasse like the one above, a central tool has been relativized or black-box complexity: in other words, studying what happens when all the complexity classes one cares about are fed some specially-constructed oracle. M... |
So what is it that distinguishes $\mathsf{BPP}$ from $\mathsf{BQP}$ in these cases? In all of the above examples, the answer turns out to be one of the fundamental properties of classical randomized algorithms: namely, that one can always “pull the randomness out” from suc...
A detailed description of the classical algorithm is given in Algorithm 1. Intuitively speaking, the simulation algorithm simply applies the algorithm from Lemma 52 $T$ times consecutively, recording into $f$ the queries that have been made so far. Additionally, any time the algorithm encounters a ro... |
In quantum complexity theory, even more than in classical complexity theory, relativization has been an inextricable part of progress from the very beginning. The likely explanation is that, even when we just count queries to an oracle, in the quantum setting we need to consider algorithms that query all oracle bits i... | D |
In order to prove the upper bound from (4), we represent the image of $k[x^{(\leqslant\ell)}]/\mathcal{I}_{m}^{(\infty)}$... | Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]).
In particular, they used Hilbert–Poincaré series of a similar nature to (1) (motivat... | We used Macaulay2 [19] and, in particular, the package Jets [18, 17] to explore possible analogues of our Theorem 3.1 for this more general case.
A related Sage implementation for computing the arc space of an affine scheme with respect to a fat point can be found in [37, Section 9] and [36, Section 5.4]. | In this direction, new results have been obtained recently in [1, 4, 7].
In [1], Afsharijoo used computational experiments to conjecture [1, Section 5] the initial ideal of $\mathcal{I}_{m}^{(\infty)}$... | The proofs of the results are given in Section 4.
Section 5 describes computational experiments in Macaulay2 we performed to check whether formulas similar to (2) hold for more general fat points in $k^{n}$. | D
DCDFM can also generate signed networks by setting $\mathbb{P}(A(i,j)=1)=\frac{1+\Omega(i,j)}{2}$ and $\mathbb{P}(A(i,j)=-1)=\frac{1-\Omega(i,j)}{2}$...
Eq (5) means that we only assume all elements of $A$ are independent random variables and $\mathbb{E}[A]=\Theta ZPZ'\Theta$ without any p...
Since our model DCDFM has no limitation on the choice of distribution ℱℱ\mathcal{F}caligraphic_F as long as Eq (5) holds, setting ℱℱ\mathcal{F}caligraphic_F as any other distribution (see, Double exponential, Exponential, Gamma and Uniform distributions in http://www.stat.rice.edu/~dobelman/courses/texts/distributions... | Follow similar analysis as [16], we let ℱℱ\mathcal{F}caligraphic_F be some specific distributions as examples to show the generality of DCDFM as well as nDFA’s consistent estimation under DCDFM. For i,j∈[n]𝑖𝑗delimited-[]𝑛i,j\in[n]italic_i , italic_j ∈ [ italic_n ], we mainly bound γ𝛾\gammaitalic_γ to show that γ𝛾\... |
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model is an extension of previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which nodes degrees vary, and it ... | B |
π~(a|s):=ρ(s,a)μ(a|s)∑b∈𝒜ρ(s,b)μ(b|s).assign~𝜋conditional𝑎𝑠𝜌𝑠𝑎𝜇conditional𝑎𝑠subscript𝑏𝒜𝜌𝑠𝑏𝜇conditional𝑏𝑠\tilde{\pi}(a|s):=\frac{\rho(s,a)\mu(a|s)}{\sum_{b\in\mathcal{A}}\rho(s,b)\mu(%
b|s)}.over~ start_ARG italic_π end_ARG ( italic_a | italic_s ) := divide start_ARG italic_ρ ( italic_s , italic... | In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the... | The theorem is an extended version of (Espeholt
et al., 2018, Theorem 1). First, we assume the vectorized statement, which is natural for the multi-agent setting. Second, the condition (6) admits more general importance sampling weights. We also fix a mathematical inaccuracy present in the original proof of (Espeholt | et al. (2018) propose the V-Trace algorithm to address the problem that in distributed (e.g. multi-node) training the policy used to generate experience is likely to lag behind the policy used for learning. Munos et al. (2016) considered earlier a similar off-policy corrections for the target of the Q𝑄Qitalic_Q-functi... |
In this work, we take a step towards amending this situation. We propose MA-Trace, a new on-policy actor-critic algorithm, which adheres to the centralized training and decentralized execution paradigm Lowe et al. (2017); Foerster et al. (2018); Rashid et al. (2018). The key component of MA-Trace is the usage of impor... | B |
Evaluation. While we already conducted a task-based user study with 12 participants that tested the applicability and effectiveness of VisRuler, additional review sessions with experts could help us to validate our tool further. However, as illustrated in Figure 3, our VA system is designed to be operated with a single... |
EnsembleMatrix Talbot2009EnsembleMatrix and Manifold Zhang2019Manifold are two VA tools specifically designed for model comparison. The former uses a confusion matrix representation for contrasting models. The latter produces and compares pairs of models across all data classes. We adopt a similar approach as with t... | We presented VisRuler, a VA tool that allows users to explore diverse rules extracted from bagged and boosted decision trees to reach a consensus about a final decision for each individual case. The multiple coordinated views facilitate the selection of diverse and performant models, the characterization of per-feature... |
The rest of this paper is organized as follows. In Section Related Work, we discuss relevant techniques for visualizing bagging and boosting decision trees, along with tree- and rule-based models and a bulk of relevant works of visual analytics systems for multi-model comparison. Section Random Forest vs. Adaptive Boo... | As in VisRuler, relevant works that utilize bagging methods use the RF algorithm to produce decision trees. Zhao2019iForest ; Neto2021Explainable ; Eirich2022RfX ; Nsch2019Colorful ; Neto2021Multivariate iForest Zhao2019iForest
provides users with tree-related information and an overview of the involved decision path... | B |
The conventional HS-MIMO scheme does not show full-matching of selected antenna indices with any of two PR-HS-MIMO schemes in the scenarios that Lt=4subscript𝐿𝑡4L_{t}=4italic_L start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT = 4 and 6666. On the other hand, the two PR-HS-MIMO schemes, i.e., EW and global polarization... |
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects Ltsubscript𝐿𝑡L_{t}italic_L start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT out of Ntsubscript𝑁𝑡N_{t}italic_N start_POSTSUBSCRIPT italic_t end_POSTSU... | It is worth emphasizing that estimation of optimal polarization vectors before the hybrid antenna selection stage is inevitable to have full benefit of joint polarization pre-post coding and the corresponding polarization reconfigurable antenna selection in PR-HS-MIMO spatial multiplexing.
|
It is worth emphasizing that the combination of polarization reconfiguration and hybrid antenna selection, i.e., PR-HS can provide significant improvement in effective channel gain, i.e., the squared envelop of hijeffsubscriptsuperscriptℎeff𝑖𝑗h^{\rm eff}_{ij}italic_h start_POSTSUPERSCRIPT roman_eff end_POSTSUPERSCR... | Even for a set of selected Tx antenna elements based on the conventional HS-MIMO, the channel capacity is improved via joint polarization pre-post coding after the hybrid antenna selection. However, the Tx antenna elements chosen by hybrid selection are different from the selection based on the proposed PR-HS-MIMO sche... | B |
In some packing problems, the “container” has no predefined boundaries (contrary to the cases of strip and bin packing and the study of critical densities), but the pieces can be placed anywhere in the plane and the container is dynamically updated as the bounding box or the convex hull of the pieces. | Lassak and Zhang [35] proved that the Potato Sack Theorem also holds for any dimension d≥1𝑑1d\geq 1italic_d ≥ 1 when the convex bodies appear online, if rotations are allowed. In order to achieve this, each convex body of volume V𝑉Vitalic_V is rotated so that it has an axis-parallel bounding box of volume at most d!⋅... | Then the density of the piece in its axis-parallel bounding box is at least 1/2121/21 / 2, and the algorithm for rectangles can be applied to the bounding box.
An interesting question that remained open was therefore whether there is a competitive algorithm for minimizing the perimeter when the pieces are convex polygo... | In Bin-Packing, the pieces have to be placed in unit squares, and the goal is to use a minimum number of these squares.
In Perimeter-Packing, we can place the pieces anywhere in the plane, and the goal is to minimize the perimeter of their axis-parallel bounding box. | Fekete and Hoffmann [25] studied online packing axis-parallel squares so as to minimize the area of their bounding square, and gave an 8888-competitive algorithm for the problem.
Abrahamsen and Beretta [1] gave a 6666-competitive algorithm for the same problem and studied the more general case where the pieces are axis... | D |
Q: What about other selection methods? We also try some other methods based on ideas from active learning. VAAL [31] is a typical active learning framework which we re-implement and start with random 5 instances. Next, we obtain N𝑁Nitalic_N(N=5𝑁5N=5italic_N = 5) instances suggested by VAAL to be templates. | Finally, we estimate the entropy (of one-hot vectors), uncertainty (difference of two classifiers) and loss as metrics to suggest templates to label.
As shown in Table 6, VAAL needs labeled data to initialize, and performs badly in only one iteration. | MSE works while uncertainty and entropy are not, where the probable reason is that patterns in medical images appear simple for both classifiers which give similar predictions and cause very low uncertainty for all images and low entropy to distinguish instances clearly.
| Uncertainty and Entropy are also common tools for active learning [28].
We re-implement the framework by connecting one encoder with two classifiers, and train by classifying all instances and enlarging the difference of outputs from two classifiers. |
Q: What about other selection methods? We also try some other methods based on ideas from active learning. VAAL [31] is a typical active learning framework which we re-implement and start with random 5 instances. Next, we obtain N𝑁Nitalic_N(N=5𝑁5N=5italic_N = 5) instances suggested by VAAL to be templates. | C |
(b) Let A(i,j)𝐴𝑖𝑗A(i,j)italic_A ( italic_i , italic_j ) be a random number generated from distribution ℱℱ\mathcal{F}caligraphic_F with expectation Ω(i,j)Ω𝑖𝑗\Omega(i,j)roman_Ω ( italic_i , italic_j ) for 1≤i<j≤n1𝑖𝑗𝑛1\leq i<j\leq n1 ≤ italic_i < italic_j ≤ italic_n, set A(j,i)=A(i,j)𝐴𝑗𝑖𝐴𝑖𝑗A(j,i)=A(i,j)... |
For simplicity, we do not consider missing edges in our simulation study. Actually, similar to numerical results in [38], DFSP performs better as sparsity parameter p𝑝pitalic_p increases when missing edges are generated from the Erdös-Rényi random graph G(n,p)𝐺𝑛𝑝G(n,p)italic_G ( italic_n , italic_p ). |
Let 𝒜∈{0,1}n×n𝒜superscript01𝑛𝑛\mathcal{A}\in\{0,1\}^{n\times n}caligraphic_A ∈ { 0 , 1 } start_POSTSUPERSCRIPT italic_n × italic_n end_POSTSUPERSCRIPT be a symmetric and connected adjacency matrix of an undirected unweighted network. To model real-world large-scale overlapping undirected weighted networks with mis... | In particular, when 𝒜𝒜\mathcal{A}caligraphic_A is generated from the Erdös-Rényi random graph G(n,p)𝐺𝑛𝑝G(n,p)italic_G ( italic_n , italic_p ) [43], ℙ(𝒜(i,j)=1)=pℙ𝒜𝑖𝑗1𝑝\mathbb{P}(\mathcal{A}(i,j)=1)=pblackboard_P ( caligraphic_A ( italic_i , italic_j ) = 1 ) = italic_p and ℙ(𝒜(i,j)=0)=1−pℙ𝒜𝑖𝑗01𝑝\math... | From Examples 1, 4, and 5, we find that A(i,j)𝐴𝑖𝑗A(i,j)italic_A ( italic_i , italic_j ) is almost always nonzero for i≠j𝑖𝑗i\neq jitalic_i ≠ italic_j, which is impractical for real-world large-scale networks in which many nodes have no connections [15]. Similar to [36, 38], an edge with weight 0 is deemed as a mis... | A |
Two classic setups of Incremental Learning are Class Incremental Learning (CIL)[26, 12, 20, 19, 13, 1, 22] and Task Incremental Learning (TIL)[16, 21, 2, 28, 31, 30, 24]. CIL and TIL both split all training classes into multiple tasks and learn them sequentially.
The difference between these two setups is that TIL allo... | The major challenge of CIL is that the model performance on previously learned classes usually degrades seriously after learning new classes, a.k.a. catastrophic forgetting [23, 9].
To reduce forgetting, most previous works [18, 26, 12, 8, 29] focus on phases after the initial one, e.g. introducing forgetting-reduction... |
Many CIL methods mitigate forgetting through knowledge distillation[18, 26, 12, 8, 29]. In these methods, when learning at a new phase, the model of the previous phase is used as the teacher, and the CIL Learner is regularized to produce similar outputs as the teacher. In this way, the knowledge of previously learned ... | However, the role of the initial phase in CIL (the phase before the CIL learner begins incrementally learning new classes) is largely neglected and much less understood.
We argue that the initial phase is of critical importance, since the model trained at this phase implicitly affects model learning in subsequent CIL p... | Specifically, at the initial phase, we regularize the CIL learner to produce similar representations as the model trained with data of all classes (i.e., the oracle model), since the upper bound of CIL is the oracle model.
According to our results, this additional regularization drastically improves CIL performance. | B |
Unsupervised method for intra-patient registration of brain magnetic resonance images based on objective function weighting by inverse consistency: Contribution to the brats-reg challenge, in: International MICCAI Brainlesion Workshop, Springer. pp. 241–251.
| Mok and Chung (Mok and Chung, 2022b) proposed a 3-step registration method, which comprises an affine pre-alignment, a convolutional neural network with forward-backward consistency constraint, and a nonlinear instance optimization. First, possible linear misalignments caused by the tumour mass effect were eliminated w... | Mok and Chung (Mok and Chung, 2022b) proposed a 3-step registration method, which comprises an affine pre-alignment, a convolutional neural network with forward-backward consistency constraint, and a nonlinear instance optimization. First, possible linear misalignments caused by the tumour mass effect were eliminated w... | Mok and Chung (Mok and Chung, 2022b) proposed a 3-step registration method, which comprises an affine pre-alignment, a convolutional neural network with forward-backward consistency constraint, and a nonlinear instance optimization. First, possible linear misalignments caused by the tumour mass effect were eliminated w... | Mok and Chung (Mok and Chung, 2022b) proposed a 3-step registration method, which comprises an affine pre-alignment, a convolutional neural network with forward-backward consistency constraint, and a nonlinear instance optimization. First, possible linear misalignments caused by the tumour mass effect were eliminated w... | A |
that comes with the constraint that the attribute idid\mathrm{id}roman_id functionally determines namename\mathrm{name}roman_name and deptdept\mathrm{dept}roman_dept. This means that the attribute idid\mathrm{id}roman_id is the key of 𝐸𝑚𝑝𝑙𝑜𝑦𝑒𝑒𝐸𝑚𝑝𝑙𝑜𝑦𝑒𝑒\mathit{Employee}italic_Employee. Consider also the ... | It is easy to see that D𝐷Ditalic_D is inconsistent since we are uncertain about Bob’s department, and the name of the employee with id 2222. To devise a repair, we need to keep one tuple from each conflicting pair, which leads to a maximal subset of D𝐷Ditalic_D that is consistent. Thus, we get the four repairs depict... | Concerning Question 1, we lift the dichotomy of [25] for primary keys and SJFCQs to the general case of FDs (Theorem 2). To this end, we build on the dichotomy for the problem of counting repairs (without a query) from [23], which allows us to concentrate on FDs with an LHS chain (up to equivalence) since for all the o... |
Observe now that the (Boolean) query that asks whether employees 1111 and 2222 work in the same department is true only in two out of four repairs, that is, 𝑅𝑒𝑝𝑎𝑖𝑟3𝑅𝑒𝑝𝑎𝑖𝑟3\mathit{Repair~{}3}italic_Repair italic_3 and 𝑅𝑒𝑝𝑎𝑖𝑟4𝑅𝑒𝑝𝑎𝑖𝑟4\mathit{Repair~{}4}italic_Repair italic_4. Therefore, since th... | of four repairs in total, only two of those entail the query. Of course, to compute the relative frequency of a tuple, we need a way to compute (i) the number of repairs that entail a tuple (the numerator), and (ii)
the total number of repairs (the denominator). | A |
In the present paper, we examine absorbing random walks on graphs in which different nodes can have different absorption rates, inducing an “effective” network structure that is reflected only partially by the edge weights of a network. Many notions of network community structure arise from the analysis of random walk... | We develop community-detection algorithms that account for node-absorption rates. We adapt the widely-used community-detection algorithm InfoMap [35, 36, 41] to absorbing random walks and thereby account for heterogeneous node-absorption rates in the detected communities. In our adaptation, we apply InfoMap to absorpti... |
In our adaptations of InfoMap to absorbing random walks, we introduce a family of associated absorption-scaled graphs and then apply Markov time sweeping to these absorption-scaled graphs. To illustrate how the node-absorption rates impact the communities that we detect, consider the matrix Plsubscript𝑃𝑙P_{{l}}itali... | Our adaptation of InfoMap to absorbing random walks involves a family of absorption-scaled graphs G~(Dδ,H)~𝐺subscript𝐷𝛿𝐻\tilde{G}(D_{\delta},H)over~ start_ARG italic_G end_ARG ( italic_D start_POSTSUBSCRIPT italic_δ end_POSTSUBSCRIPT , italic_H ), where H𝐻Hitalic_H is a scaling matrix that controls the relative i... |
The community-detection algorithm InfoMap is based on random walks, so it is natural to adapt it to absorbing random walks. However, there are numerous approaches to community detection [12, 33], and it is worthwhile to adapt other approaches, such as modularity maximization [29] and statistical influence using stocha... | A |
In this section, we consider a special case of the QNR problem, viz., the case wherein there is a single source-destination (s,d)𝑠𝑑(s,d)( italic_s , italic_d ) pair and the goal is to select a single swapping tree for the (s,d)𝑠𝑑(s,d)( italic_s , italic_d ) pair.
For this special case, we design an optimal algorith... | in that both
consider only balanced trees; however, we use a heuristic metric that facilitates a polynomial-time Dijkstra-like heuristic to select the optimal path, while their recursive metric 666We note that their formula (Eqn. 10 in [18]) is incorrect as it either ignores the 3/2 factor or assumes the EP generations... | First, we note that a Dijkstra-like shortest path approach which builds a shortest-path tree greedily doesn’t work for the QNR-SP problem—mainly, because the task is to find an optimal tree rather than an optimal path. As noted before, a routing path can have exponentially many swapping trees over it, with different ge... | Now, to compute the optimal path for each path-length, we can use a simple dynamic
programming approach that run in O(mτl)𝑂𝑚subscript𝜏𝑙O(m\tau_{l})italic_O ( italic_m italic_τ start_POSTSUBSCRIPT italic_l end_POSTSUBSCRIPT ) time where m𝑚mitalic_m is the number of edges | may be applicable for the QNR-SP problem. However, we need to “combine” trees
rather than paths in the recursive step of a DP approach. Consequently, we were unable to design a DP approach based on the Floyd-Warshall’s approach, but, are able to extend the Bellman-Ford approach for the QNR-SP problem after addressing a... | B |
1) Timing mechanism of explanations: Delivering timely explanations can help human drivers/passengers react to emergent situations, such as takeover requests, appropriately and prevent a potential danger in the vicinity. According to Koo et al.’s study [193], it is favorable to convey explanations before a driving even... | 3. An explanation component: This constituent of the framework provides understandable insights into the real-time action decisions made by autonomous driving, complying with and corresponding to an eeC𝑒𝑒𝐶eeCitalic_e italic_e italic_C and a srC𝑠𝑟𝐶srCitalic_s italic_r italic_C. The explanation component must j... |
Some investigations involve users in case studies to understand the effective strategies for explanation generation in autonomous driving tasks. The key idea of a user study is that getting people’s input in designated driving tasks can help improve the adequacy and quality of explanations in autonomous driving. Wiega... |
2) The impact of lead time on the safe transition from an automated mode to a human takeover: Another important criterion is determining the amount of time needed to alert human actors for a takeover request. In the user study measuring the impact of 4 s vs. 7 s as the lead time on takeover alert, Huang and Pitts [195... | 1) Timing mechanism of explanations: Delivering timely explanations can help human drivers/passengers react to emergent situations, such as takeover requests, appropriately and prevent a potential danger in the vicinity. According to Koo et al.’s study [193], it is favorable to convey explanations before a driving even... | C |
we establish and open a location dataset of the Tianjin University campus named TJU-Location Dataset, including images of 50 iconic locations. In addition, two public datasets named Pitts30k and Tokyo 24/7 are also used to discuss the performance of the proposed model and several VPR models. | Tokyo 24/7 dataset. The experimental results demonstrate that Patch-NetVLAD achieves the best performance on the Pitts30k test dataset and Tokyo 24/7 dataset, while Ghost-dil-NetVLAD performs the best on TJU-Location test dataset because most Recall@N of Ghost-dil-NetVLAD are greater than those of the remaining models.... |
The remaining framework of this paper is as follows: related works are presented in Section II. In Section III, we propose a lightweight model named Ghost-dil-NetVLAD. Section IV contains the experimental results and the ablation experiments of different fusion methods. The last section is the conclusion. |
In this section, six models including Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD (Considering our limited computational resources, we only use its built-in storage mode. In this paper, we uniformly call this method Patch-NetVLAD.), MobileNetV3-NetVLAD (lightweight CNN + NetVLAD), Ghost-NetVLAD (the Ghost module does n... | The framework of our proposed Ghost-dil-NetVLAD is shown in Fig. 1. The Ghost-dil-NetVLAD contains two parts. One is the lightweight feature extraction architecture (GhostCNN) shown in Section 3.1, and the remaining is the NetVLAD layer described in Section 3.2.
| B |
Traditionally, stream ciphers are attacked with two approaches: correlation attacks, that exploit possible correlations between some part of the keystream and a portion of the initial state, and approximation attacks, where the nonlinear part is approximated by a linear component. The design defenses against these type... |
In Chapter 2, we collect all the notations, definitions and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm to solve Boolean equations systems and the algebraic attack to nonlinear filter generators presented in [10]. | The main aim of our work is not to perform effectively the attack described in Section 3 on WG-PRNG, but to estimate how many keystream bits one needs to perform successfully the attack on WG-PRNG. We will show that knowing less than 218superscript2182^{18}2 start_POSTSUPERSCRIPT 18 end_POSTSUPERSCRIPT keystream bits, ... |
In Chapter 4, to validate our algebraic attack, first we apply it to two toy stream ciphers and then we show that it is feasible to perform it on WG-PRNG. We conclude showing that the security of WG-PRNG is less that claimed until now. For the sake of presentation, we will first describe the part regarding WG-PRNG, an... | In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ... | D |
Local best response Lisý and Bowling (2017) is an evaluation tool for poker. It uses a given abstraction in its action space. It picks the best action in each decision set, looking at the fold probability for the opponent in the next decision node and then assuming the game is called until the end. Our algorithm CDBR i... | The opponent modeling and exploitation process consists of two steps: opponent modeling and model exploitation. Opponent modeling requires building a model from previous data or actions observed during an online play. Model exploitation is finding a good strategy against the given model and is the main focus of this pa... |
Our contributions are: 1) We formulate the algorithms to find the responses given the opponent strategy and an evaluation function. This results in the best performing theoretically sound robust response applicable to large games. 2) We prove the soundness of the proposed algorithms. 3) We provide an analysis of probl... |
This work explores the full model exploitation and proposes continual depth-limited best response (CDBR). CDBR relies on the value function used in the standard limited look-ahead solving, and we prove theoretical guarantees on the performance. A drawback of using the same value function is decreased performance, and ... |
Approximate best response (ABR) Timbers et al. (2020) is also a generalization of the LBR and showed promising results in evaluating strategies. However, our approach focuses on model exploitation, which requires crucial differences, such as quick re-computation against unseen models. ABR needs to independently learn ... | D |
But now, since q𝒜(G)=q≤k∗(G)subscript𝑞𝒜𝐺subscriptsuperscript𝑞absent𝑘𝐺q_{\mathcal{A}}(G)=q^{*}_{\leq k}(G)italic_q start_POSTSUBSCRIPT caligraphic_A end_POSTSUBSCRIPT ( italic_G ) = italic_q start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT ≤ italic_k end_POSTSUBSCRIPT ( italic_G ) and since |𝒜′... | which completes the proof of (6.9). It is now possible to choose η𝜂\etaitalic_η small enough that the inequality holds for all t≥0𝑡0t\geq 0italic_t ≥ 0 and we may take 2222 as the coefficient of the exponential - the details of this calculation appear for example in the last part of the proof of Theorem 7.1 of [32]. ... |
The rough idea of the proof of Theorem 1.2 is that we can use the fattening lemma, Lemma 3.1, to bound the probability that a vertex partition behaves badly by the probability that a fat vertex partition behaves similarly badly, and we can use probabilistic methods to handle fat partitions. However, even after the str... | The proof follows that of the non-weighted case, Theorem 1.1, line by line with the following adaptations. In place of the fattening lemma used on the underlying graph, use the weighted version, Lemma 10.3; and replace instances of G𝐺Gitalic_G and Gpsubscript𝐺𝑝G_{p}italic_G start_POSTSUBSCRIPT italic_p end_POSTSUBSC... | As in the last proof, we may follow the proof of the non-weighted version, Theorem 1.2, with the following adaptations. In place of the fattening lemma, use the weighted version, Lemma 10.3; replace instances of G𝐺Gitalic_G and Gpsubscript𝐺𝑝G_{p}italic_G start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT by w𝑤witalic_w... | B |
- Defining the research question and keywords. The purpose of our systematic search is to collect all possible economic contributions to the impact of environmental factors on migration determinants. We define three keywords of the three phenomena under analysis: | Figure 2 shows that the scientific production in the specific field is quite recent, spanning from 2003 to 2020, with a peak of 20 contributions in 2016 and an annual growth rate for the overall period at 18.5 percent. Taking a closer look at the cited references, it is possible to trace back an article published befor... | Table 3 shows the results of the multiple MRA on the literature in slow-onset events (precipitation, temperature, and soil quality) in which potential biases are filtered out sequentially by the addition, in a stepwise manner, of statistically significant controls.
Column (1) presents results for the whole sample of st... | - climate change, as the most investigated environmental factor in the literature. The events connected to climate change are hereby intended as slow-onset events that gradually modify climatic conditions in the long run. We specifically focus on variations of temperature, precipitation, and soil quality (such as deser... |
The specific objective of the study is the impact of environmental variables on migration, thus on the right-hand side of the regression a proxy of the environmental change is included. Slow-onset events are typically defined as gradual modifications of temperature, precipitation, and soil quality. Respectively, three... | C |
Small-loss bounds are first introduced in the context of prediction with expert advice (Littlestone and Warmuth, 1994; Freund and Schapire, 1997), which replace the dependence on T𝑇Titalic_T by cumulative loss of the best expert. Later, Srebro et al. (2010) show that in the online convex optimization setting, OGD with... | which draws considerable attention recently (Zhang et al., 2018a; Zhao et al., 2020b; Cutkosky, 2020a; Zhao et al., 2021a; Baby and Wang, 2021; Zhang et al., 2021; Zhao et al., 2022c, 2023). The measure is also called the universal dynamic regret (or general dynamic regret), in the sense that it gives a universal guara... | There are many studies on the worst-case dynamic regret (Besbes et al., 2015; Jadbabaie et al., 2015; Mokhtari et al., 2016; Yang et al., 2016; Zhang et al., 2017, 2018b; Baby and Wang, 2019; Zhang et al., 2020b; Zhao and Zhang, 2021), but only few results are known for the universal dynamic regret. Zinkevich (2003) sh... | In addition, problem-dependent static regret bounds are also studied in the bandit online learning setting, including gradient-variation bounds for two-point bandit convex optimization (Chiang et al., 2013), as well as small-loss bounds for multi-armed bandits (Allenberg et al., 2006; Wei and Luo, 2018; Lee et al., 202... |
There are many developments for dynamic regret minimization after our work became publicly available (Zhao et al., 2021c), and we briefly mention a few here. For exp-concave or strongly convex online functions, optimal dynamic regret can be obtained by algorithms minimizing strongly adaptive regre (Baby and Wang, 2021... | C |
such that x(−∞,i)=uωvwkzsubscript𝑥𝑖superscript𝑢𝜔𝑣superscript𝑤𝑘𝑧x_{(-\infty,i)}={}^{\omega}uvw^{k}zitalic_x start_POSTSUBSCRIPT ( - ∞ , italic_i ) end_POSTSUBSCRIPT = start_FLOATSUPERSCRIPT italic_ω end_FLOATSUPERSCRIPT italic_u italic_v italic_w start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT italic_z whe... | with bounded gaps in ℒ(σ)ℒ𝜎\mathcal{L}(\sigma)caligraphic_L ( italic_σ ) and for every b∈ℒ(σ)𝑏ℒ𝜎b\in\mathcal{L}(\sigma)italic_b ∈ caligraphic_L ( italic_σ )
there is n≥1𝑛1n\geq 1italic_n ≥ 1 such that |σn(a)|b≥1subscriptsuperscript𝜎𝑛𝑎𝑏1|\sigma^{n}(a)|_{b}\geq 1| italic_σ start_POSTSUPERSCRIPT italic_n end_PO... | An endomorphism σ:A∗→A∗:𝜎→superscript𝐴superscript𝐴\sigma\colon A^{*}\to A^{*}italic_σ : italic_A start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT → italic_A start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT is primitive if
there is an n≥1𝑛1n\geq 1italic_n ≥ 1 such that for every a,b∈A𝑎𝑏𝐴a,b\in Aitalic_a , italic_b ∈ italic... | For every n≥1𝑛1n\geq 1italic_n ≥ 1, there is some N≥1𝑁1N\geq 1italic_N ≥ 1
and a∈A𝑎𝐴a\in Aitalic_a ∈ italic_A such that x[−n,i)subscript𝑥𝑛𝑖x_{[-n,i)}italic_x start_POSTSUBSCRIPT [ - italic_n , italic_i ) end_POSTSUBSCRIPT is a factor of σN(a)superscript𝜎𝑁𝑎\sigma^{N}(a)italic_σ start_POSTSUPERSCRIPT italic_N ... | and a∈A𝑎𝐴a\in Aitalic_a ∈ italic_A such that x[−n,n]subscript𝑥𝑛𝑛x_{[-n,n]}italic_x start_POSTSUBSCRIPT [ - italic_n , italic_n ] end_POSTSUBSCRIPT is a factor of σN(a)superscript𝜎𝑁𝑎\sigma^{N}(a)italic_σ start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ( italic_a ).
We may assume that a𝑎aitalic_a is growing. | C |
...C_n/n + (C_n/n)^{2/(2s+1)}), that is, R(𝒯^k_{n,d}(C_n)) = Ω(1/(n... | smoothness index k+1 in each coordinate direction, and any third index q ≥ 1, is indeed n^{−2s/(2s+1)} for s > 1/2 (or 2k+2 > d... | This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers
in Section 1.3). The models of smoothness for f_0 that we | The result in Theorem 4 for s ≥ 1/2 (that is, 2k+2 ≥ d) was already derived in Sadhanala et al. (2017). More precisely,
these authors established the third term on the right-hand side in | ℋ_d^k(1). This matches the optimal rate for estimation over Hölder
classes (see Sadhanala et al. (2017) for a formal statement and proof for | C |
The identification of W_b is based on the modification to Kruskal’s or Prim’s algorithm that identifies the MST (Lee et al., 2012; Songdechakraiwut and Chung, 2023). Then W_d ... | Figure 9: The estimated state spaces of dynamically changing brain networks. The correlations are averaged over every time point and subject within each state for k-means clustering (top) and Wasserstein distance based topological clustering (bottom). In k-means clustering, the connectivity patter... |
Like the majority of clustering methods such as k-means and hierarchical clustering that use geometric distances (Johnson, 1967; Hartigan and Wong, 1979; Lee et al., 2012), we propose to develop a topological clustering method using topological distances (Figure 5). For this purpose we use the Wasserstein d... | Thus, our topological clustering is equivalent to k-means clustering restricted to the convex set 𝒯_0 ⊗ 𝒯_1. The convergence of... | We validated the topological clustering in a simulation with the ground truth against k-means and hierarchical clustering (Lee et al., 2011). We generated 4 circular patterns of identical topology (Figure 6-top) and different topology (Figure 6-bottom). Along the circles, we uniformly sampled 60 nodes and ad... | B |
11.2.5 in [31], the ω-limit set ω(θ)
is independent of θ ∈ R^1 and is either R^1 or perfec... | orbits). If the rotation number of ϕ is rational,
ρ̄(ϕ) = p/q, then ϕ^q... | and if it is a rational number, ρ̄(ϕ) = p/q, then we can determine
the periodic orbits of the angle map by analyzing the | [θ̄_1, θ̄_2] is positively invariant
under the ϕ(·) map | ϕ(θ) = ϕ^2(θ̄) > θ̄ ≥ ϕ(θ). This contradic... | A |
The parameter k̄_5 essentially makes the safety definition (9) a practical ISSf. That is, with k̄_5 = 0, (9) becomes ISSf con... | In the subsequent sections, our approach to finding the control gains is as follows. First, in Section 3, we find the conditions on control gains that satisfy the pISSf criterion in (9). Next, in Section 4, we show that the pISSf conditions on control gains additionally guarantee ISSt for the system in the sense of (1... | Stability-Only Control (St-C): In this case, we have used the ISSt criterion (49) to design the closed-loop control gains. Note that, if the design is done solely based on the ISSt criterion, then there is no guarantee that it will also satisfy pISSf. To illustrate this point, and to highlight the potential advantage of combin... |
In light of the aforementioned discussion, the main contribution of this paper is the following: building upon the existing literature, we extend PDE safety research by designing a feedback-based control that satisfies both pISSf and ISSt under disturbances, utilizing pISSf barrier functional characterization and ISS... | In the present section, we will show that the control gain conditions in Theorem 1 simultaneously satisfy the pISSf criterion in the sense of (9) and the ISSt criterion in the sense of (10).
Following the results in the existing literature [34], we can say that if there exists a functional V(h) for... | A |
Driven by the early success in the assessment of wellbeing in organizations, researchers have turned their focus to the use of passive sensing in the assessment of workplace performance. By tracking behaviors, recent work has attempted to characterize workplace performance with the long-term goals of identifying the be... | Driven by the early success in the assessment of wellbeing in organizations, researchers have turned their focus to the use of passive sensing in the assessment of workplace performance. By tracking behaviors, recent work has attempted to characterize workplace performance with the long-term goals of identifying the be... |
On the behavioral side of workplace performance inventories, IOD-ID and IOD-OD are measures of “bad” conduct in the workplace. Behaviors that indicate IOD-ID can involve cursing at a co-worker, playing pranks, or making fun of someone. Behaviors that indicate IOD-OD can be tardiness or absenteeism, leaving work early wit... | Feng and Narayanan [10] propose a method for capturing behavioral consistency in wearable data using the activity curve model. They find that consistency features improve accuracy by up to 6% when compared to using only summary features from the Fitbit fitness tracker in a study of 97 hospital workers throughout 10 wee... |
Therefore, researchers in the pervasive computing community have often turned to job performance inventories developed by psychologists as ground truth to measure perceived workplace performance across organizations and industries in a generalizable manner. They have used passive sensing to predict participant scores ... | D |
In addition, FedACG adds a regularization term in the objective function of clients to make the local gradients more consistent across clients.
We show that subtle differences in federated learning algorithms can have a significant impact on the final results and discuss the behavior of FedACG together with related met... |
We introduce a federated learning benchmark to facilitate the evaluation of federated learning algorithms. The benchmark contains the implementations of various algorithms including FedACG. | In addition, FedACG adds a regularization term in the objective function of clients to make the local gradients more consistent across clients.
We show that subtle differences in federated learning algorithms can have a significant impact on the final results and discuss the behavior of FedACG together with related met... | One of the critical challenges in federated learning is the partial participation of clients, which can slow down the convergence of the global model.
To verify the robustness of FedACG to low client participation rates, we conduct experiments with 500 clients and a participation rate as low as 1%. |
To properly address the issue of client heterogeneity, we propose a novel federated learning algorithm, Federated averaging with Accelerated Client Gradient (FedACG), which conveys the momentum of the global gradient to clients and enables the momentum to be incorporated into the local updates in the individual client... | A |
Finally, non-trivial parameters such as weather, terrain and obstacles, PU transmitters being directional, etc., can be relatively easily incorporated in a learning approach (see §III-F), while they would require more sophisticated modelling techniques and algorithms to be incorporated
in non-learning approaches. | the input (features) being the primary-user parameters, spectrum sensor (SS) readings, and secondary user (SU) request parameters, and the output (label) being the maximum power that can be allocated to the SU without
resulting in any harmful interference to the PUs’ receivers. | Collecting Training Samples. Recall that a sample in PU-Setting is comprised of a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample is comprised of spectrum sensors’ received power readings. The location of entities is available by using a GPS don... | Gathering and Labeling Training Samples. Note that gathering a training sample for 𝒜 entails gathering feature values and determining its “label”—in our context, for a given feature vector (SU location and PUs/SSs parameters), the label is the maximum power that can be allocated to the SU wit... | in creating them.
In general, from a given sample {X, y} where X is the set of features (PU parameters or SS readings, and the requesting SU’s location) and y is the label (allocated power), we create synthetic samples of the type {X′, y}... | C |
The system T_αα = −c α^k T consists of two decoupled equations of the type u″(α) = −c α^k u(α)... | The affine curvature of 𝒞 at p is the curvature of the osculating conic⁷ ⁷The osculating conic to 𝒞 at p passes through p, and the derivatives of the affine arc-length parameterizations at α = 0 (with α = 0...
This work was performed during the REU 2020 program at North Carolina State University (NCSU) and was supported by the Department of Mathematics at NCSU and the NSA grant H98230-20-1-0259. At the time when the project was performed, Jose Agudelo was an undergraduate student at North Dakota State University, Brooke... | This paper is the result of an REU project, which turned out to be of great pedagogical value, as it taught the students to combine the results and methods from various subjects: differential geometry, algebra, analysis and numerical analysis. In addition, this project involved theoretical work and the work of designin...
The Euclidean curvature of a circle of radius r is constant and equal to 1/r. The Euclidean curvature of 𝒞 at p equals the curvature of its osculating circle⁵ ⁵The osculating circle to 𝒞... | B |
In the paper, we consider the case where only one coordinate is allowed to change per iteration. The results on random coordinate descent can be extended to other cases where probabilities of selections are unequal with potentially overlapping components [20], such as the random sleep scheme [34]. Intuitively speaking,... |
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in T is proved, where T is the lengt... | To the best of our knowledge, coordinate descent [31], as an important class of optimization algorithms, is not sufficiently analyzed by researchers in the online optimization community. In coordinate descent algorithms, most components of the decision variable are fixed during one iteration while the cost function is ...
A substantial review of variants of coordinate descent algorithms can be found in [4, Section 6.5.1]. The cyclic selection of coordinates is normally assumed to ensure convergence of the algorithm. On the other hand, the use of an irregular order is then considered by researchers to accelerate convergence. Particularl... |
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to look at possibilities of using coordinate descent methods to solve OCO... | C |
Advancements in technology offer a broader range of materials that could potentially facilitate the design of improved silicon-based brains. Architectures using this device challenge the conventional choices for abstraction level and partitioning in mixed-signal neuromorphic processors. These designs can employ more ad... |
The commonly used level of abstraction, which closely resembles direct biological replication, models neural computation using coupled ordinary differential equations. The temporal dynamics of this model enable information integration over time, while the coupling across state variables models spatial information inte... |
The last section (VI.C) explored the computational properties of STP and the double exponential dynamics. Both STP and double exponential decay dynamics increased the accuracy of the network compared to the original HOTS network with single exponentials and no STP. STP contributes to reducing “Noise” in the network. M... | Advancements in technology offer a broader range of materials that could potentially facilitate the design of improved silicon-based brains. Architectures using this device challenge the conventional choices for abstraction level and partitioning in mixed-signal neuromorphic processors. These designs can employ more ad...
Hybrid substrates present an intriguing design space that can leverage the advantages of both analog and digital domains [61]. The most prevalent hybrid model utilizes analog circuits for computation and digital circuits for communication [62, 63, 64]. This approach recognizes that digital circuits operate much faster... | A |
We highlight that our simulations not only confirm our theoretical results but also allow us to empirically pinpoint the constants hidden in the asymptotic analysis. For the simulations carried out on the path, our results indicate a constant of roughly 1.5, giving a total running time of roughly 1.5... | Interestingly, the total running times are sharply concentrated around their mean. In fact, the plot in Fig. 2 actually shows box plots, but starting with a number of agents as small as only 40 the upper and lower whiskers become almost identical with no outliers detected.
| In the following lemma, we prove that the projected system behaves similarly to the original system in the sense that the length of the edge e stays the same and the influence network does not change.
Furthermore, the agents in the original HKS move at least as much as the agents in the projected state, when... | Researchers have investigated the convergence to stable states and the corresponding convergence speed in many variants of the Hegselmann-Krause model. The existing work can be categorized along two dimensions: complete or arbitrary social network and synchronous or asynchronous updates of the opinions.
Synchronous opi... | For these systems, we can prove a better upper bound on the expected number of steps needed to reach a δ𝛿\deltaitalic_δ-stable state.
Examples of such graphs are the path, where all the nodes are positioned with equal distance of at most ε, as well as the graph from Theorem 9 if the social network... | A |
Lack of Medical History: The model solely relies on the information extracted from the X-ray images and does not consider the patient’s medical history. Medical history, including patient symptoms, previous diagnoses, and other relevant clinical information, plays a crucial role in disease diagnosis. The model could p... |
Evaluation by Medical Professionals: While the model demonstrates high sensitivity and specificity, it is essential to emphasize that medical professionals have not evaluated it. Comparing the model’s performance to human performance is challenging, as it requires expert evaluation and validation. Therefore, further a... |
However, there are still several areas for improvement and further research. Incorporating different types of X-ray images, such as lateral radiographs, could enhance the model’s performance and expand its applicability. Integration of patient medical history and contextual information may improve the accuracy and dia... |
Deep learning models should not be considered as a replacement for clinical diagnosis by medical professionals. These models should be used as complementary tools to aid medical professionals in making more accurate diagnoses. It is also crucial to validate the accuracy and reliability of these models on diverse and r... | In particular, the model achieved high sensitivity and accuracy on all conditions, indicating its ability to correctly identify positive cases. However, it is important to note that the positive predictive value (PPV) of the predictions can still be low. For example, for the Pneumonia condition, the sensitivity is 0.6,... | A |
(Reward distribution)
Although the frequentist results of Audibert et al. (2010) and Carpentier and Locatelli (2016) concern Bernoulli arms, the same results can be applied to Gaussian arms with unit variance because the empirical means of both the Bernoulli and Gaussian distributions are bounded by the Hoeffding inequ... | The KG algorithm (Gupta and Miescke, 1994) also adopts a Bayesian approach. However, the analytical properties of KG are little understood.
The results of Ryzhov et al. (2012) show the discounted version of KG most frequently drawing the best arm. Elsewhere, Wang and Powell (2018) have further characterized KG without g... | Elsewhere, while the results of Komiyama et al. (2021) also apply specifically to Bernoulli arms, the Θ(1/T) Bayesian simple regret can also be derived for Gaussian arms, provided we are not interested in the magnitude of the constant.
| Thompson sampling (Thompson, 1933) is among the oldest of the heuristics and is known to be asymptotically optimal in terms of the frequentist CRM (Agrawal and Goyal, 2012; Kaufmann et al., 2012; Komiyama et al., 2015). One of the seminal results regarding Bayesian CRM is the Gittins index theorem (Gittins, 1989; Weber... | (Reward distribution)
Although the frequentist results of Audibert et al. (2010) and Carpentier and Locatelli (2016) concern Bernoulli arms, the same results can be applied to Gaussian arms with unit variance because the empirical means of both the Bernoulli and Gaussian distributions are bounded by the Hoeffding inequ... | B |
This pattern continues for about 7 s such that most of the underlying agents are close to their targets.
A slight violation of the safety constraint (less than 0.03 m) happens around 2.6 s due to the strong air turbulence in the confined space, which however can be compensated by enlarging the safe... | trajectory generation is essential for multi-robot systems to perform various missions in a shared environment, such as cooperative inspection and transportation [1, 2, 3, 4].
However, it becomes especially challenging when a large number of agile robots navigate in a crowded space with high speed. | Furthermore, the completion time is evaluated for BVC [32] and the proposed method to illustrate the efficiency of MBVC-WB.
As shown in Table III, the proposed method achieves a significant decrease in transition time, especially in more crowded scenarios. | Effectiveness and performance of the proposed algorithm are validated extensively by numerical simulations against other state-of-the-art methods including iSCP [39], DMPC [40] and BVC [32].
Our method shows a significant increase in both success rate and feasibility, especially for large-scale crowded and high-speed sc... | It is a fully distributed method that requires only local communication and scales well with the number of robots.
Compared with other state-of-the-art baselines, its advantages especially in crowded and high-speed scenarios are significant, as demonstrated both in simulations and hardware experiments. | D |
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ... | We characterize the mathematical conditions of the group action function component and
we propose an explicit construction suitable for any group G. To the best of our knowledge, this is the first method for unsupervised learning of separated invariant-equivariant representations valid for any group. | The first consists in learning an approximate group action in order to match the input and the reconstructed data.
For instance, Mehr et al. (2018b) propose to encode the input in quotient space, and train the model with a loss that is defined by taking the infimum over the group G. While this is feasible fo... | In this work we proposed a novel unsupervised learning strategy to extract representations from data that are separated in a group invariant and equivariant part for any group G. We defined the sufficient conditions for the different parts of our proposed framework, namely the encoder, decoder and group func... |
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ... | A |
The second metric is the Euclidean distance between the estimated maximiser x̂ and the true maximiser x*. Again, the average is used, D = (1/P) ∑_{p=1}^{P} |x̂_p − x*_p|... | At each iteration a location is chosen at random from those available. The result is given as the average over 100 runs on each of the data snapshots in each subset. For BO one run is used. The metrics are calculated on the pre-processed data, i.e. after log transform and standardisation / mean-centring.
| Secondly, the measurements are treated as ground-truth readings. This simplifies the error metric calculations, but it also negates one of the advantages of probabilistically modelling the data, namely that this uncertainty can be modelled directly.
Another limitation is that the results on the satellite data a... |
Table 2: Confidence intervals for the means of the maximiser distance at the final iteration for the satellite data. Given are means ± one standard deviation of the mean. The best values are given in bold. This confirms the story from Figs. 4 and 5 that similar results are obtained on the selection su... |
Table 1: Confidence intervals for the means of the maximum ratio at the final iteration for the satellite data. Given are means ± one standard deviation of the mean. The best values are given in bold. This confirms the story from Figs. 4 and 5 that improved results are obtained on the strong and selec... | A |
The downside of these approaches is that f must be evaluated Kd times per training step instead of K times as in RODEO, a prohibitive cost when f is expensive and d ≥ 200 as in Section 6.
| In fact, both Stein et al. [56] and Dellaportas and Kontoyiannis [12] developed CVs of the form (P − I)h based on reversible discrete-time
Markov chains and linear input functions h(x) = x_i... | In particular, the gradient estimator of Liu et al. [34] is based on the Langevin Stein operator [19] for continuous distributions and coincides with the continuous counterpart of RELAX [20].
In contrast, our approach considers discrete Stein operators for Monte Carlo estimation in discrete distributions with exponenti... | The RELAX [20] estimator generalizes REBAR by noticing that their continuous relaxation can be replaced with a free-form CV.
However, in order to get strong performance, RELAX still includes the continuous relaxation in their CV and only adds a small deviation to it. | The benefits of using Stein operators to construct discrete CVs are twofold. First, the operator structure permits us to learn CVs with a flexible functional form such as those parameterized by neural networks.
Second, since our operators are derived from Markov chains on the discrete support, they naturally incorporat... | B |
Formally, a stopping set s^(n) of order n is a subset of n patterns such that in every slot there are either 0 or ≥ 2 users; that is, the decoding cannot proceed as there is no slot with a single tr... | stopping set of a given order, not
“at least one”. The reason is that stopping sets are closed under union, so their combination produces another stopping set of a higher order [31]. As such, by summing over n we would count some of the stopping sets multiple times. | Formally, a stopping set s^(n) of order n is a subset of n patterns such that in every slot there are either 0 or ≥ 2 users; that is, the decoding cannot proceed as there is no slot with a single tr... | In order to take into account stopping sets and augment the expression (16), we need to consider three cases.
If there is a stopping set of a certain order n, then with probability n/U the user in focus is its member and cannot be decoded. | Conversely, with probability 1 − n/U the user is not involved in that stopping set and decoding is possible; however, SIC is impaired since there are S = n noncancellable users.
Otherwise, if there are no stopping sets then S = 0 and there are no lim... | C |
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant C independent of the dimension N. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves... |
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant C independent of the dimension N. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper... | We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides’ 3/2 ratio. In fact, in our algorithm, the yielded path has length at
most (300)^{9/2} log 300... | In light of the above discussion, “nearly-optimal” algorithms have been explored. That is, algorithms that produce a path which may not be the optimal one but is comparable (or even arbitrarily close) in length to the optimal one. The nearest insertion algorithm [RSL74] computes in O(n^2)... | C |
Finally, we compute the gcd of N = 𝒪(m) Chow forms. As each Chow form has (r+1)(n+1) variables, bitsize 𝒪̃(nd^{r−1}(τ+n))... | In this section, we provide algorithms to compute supp(V) by means of Theorem 4.1. The idea is to compute dim π_I(V) for every I ⊂ [l]... | We have assumed in Alg. 2 that r = dim V is part of the input. We could also compute r using the algorithms in [41, 11], without changing the single exponential nature of the complexity of the algorithm.
| Our contribution. There is an extensive literature on the complexity of elimination theory procedures in general, and the complexity of polynomial system solving in particular; we provide a small sample here [23, 31, 30, 8, 25]. Despite the strong literature on the subject, we were not able to locate any results on the bit ... | The coefficients of the linear forms in this factorization correspond to the solutions of the zero-dimensional system. To force (some) of these solutions to have multiplicities
we compute the discriminant R_2 of R_1... | B |
Applying the whole Audiovisual Stimuli dataset in WEMAC would be unfeasible due to the excessive duration of the resulting experimentation. Thus, two batches of videos are generated so that the experimental procedure lasts 1 to 1.5 hours per volunteer. The selection criterion for the videos in these two bat... |
Self-reported annotations [19]: They contain the emotional labeling reported by the participants after watching each of the 14 videos in the experiment. The data are stored in one CSV file that contains 14 columns and 1,400 rows (100 volunteers × 14 clips). Regarding ... |
Volunteers’ annotations are collected at two instants: prior to the experiment and during the experimentation. Before the experiment, each volunteer is provided with informed consent, a personal data form, and a general questionnaire to supply additional information related to cognition, appraisal, attention, personal... | The speech signals recorded have a duration ranging from 20 s to 60 s and mostly contain speech.
Since the release of the raw speech signals is not possible due to ethics and privacy issues⁷ ⁷Regulation (EU) 2016/679 of the European Parliame... |
Upon arrival, participants were informed about the experimental procedure. Then, they signed the informed consent, filled out the personal data form, and answered the general questionnaire. Next, participants listened to instructions regarding the experiment. | B |
Malware Dataset: The Drebin dataset (Arp et al., 2014), which includes 5,560 malware applications collected between August 2010 and October 2012, is widely used in Android malware research. Another popular dataset is AMD (Wei et al., [n. d.]), which contains 24,650 malware applications collected between 2010 and 2016 and... | Similar to surrogate model training by Papernot et al. (Papernot et al., [n. d.]), Rosenberg et al. (Rosenberg et al., 2018) propose a GADGET framework that generates adversarial examples without access to malware source code in black-box settings. The authors perform mimicry attacks to mimic the system calls of the benign...
Malware encompasses any software that exhibits malicious activity, e.g., trojans, worms, backdoors, botnets, and spyware (Faruki et al., 2014). Trojans disguise themselves as benign apps but perform malicious actions without the users' consent (e.g., Fakeinst (Hidhaya and Geetha, 2013)). A backdoor employs root exploits to... | The Android operating system has one of the largest smartphone market shares and is one of the most targeted platforms by malware attackers (AV-Test, 2022). Researchers at the cybersecurity company Kaspersky have identified a concerning trend in recent years. Applications, called apps, downloaded from the Google Play Store, ... | Evasion robustness (ER): It measures the robustness of the detection system against adversarial attacks and is measured by the true positive rate (TPR) of the malicious applications (Berger et al., 2022b). We compute ER as the ratio of malicious instances successfully detected to the total number of malicious applicati...
…$\sum_{n=2}^{\infty}\frac{s_{2n}^{2}-s_{2n-1}^{2}}{\dots}=\infty$ | Roughly speaking, we consider a zero-sum game between an adversary and a statistician, in which the adversary chooses a deviation and the statistician, after observing the realization $s$, has to guess the deviator if $s\notin D$. A strategy for the statistician in this game is a blame f... | In our opening example, the target set is the set of realizations with long-run frequency $1/2$ of Heads,
and in the second example the target set is the set of all realizations where the induced random walk crosses the origin infinitely often. | The algorithm above can be adapted to this case.
Indeed, supposing that the realization $s$ is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1.
The idea of the proof is that since Deviator must move to the right during periods where Honest moves substantially to the left (to avoid going below zero), Deviator must then move to the left when Honest moves to the right, to avoid being clearly right-biased (Steps 1 and 2 detect right-biased behavior). Thus Deviator... | C
Physical-layer attacks by road object manipulation. The prediction component in AD systems predicts obstacle trajectory based on its detected physical properties (e.g., obstacle type, dimension, position, orientation, speed). Therefore, assuming upstream components such as AD perception are functioning correctly, the at... | GPS spoofing refers to sending fake satellite signals to the GPS receiver, causing it to resolve positions that are specified by the attacker. Prior works leverage GPS spoofing to attack MSF localization [92], LiDAR object detection [86], and traffic light detection [80].
| Cyber-layer attacks. Perhaps the most direct way to manipulate the inputs to the downstream components is via cyber-layer attacks. For example, a compromised ROS node [59] in the AD system can directly send malicious messages to prediction or planning. Unlike the road object manipulation method, this does not require t... |
Physical-layer attacks by localization manipulation. The planning component takes the prediction and localization outputs as inputs to calculate the optimal driving path in the near future. Therefore, changes in localization directly affect decision-making in planning. To attack the planning, one can l... | Among the vast majority (50/54) of attack works that target the more practical modular AD system designs (§II-A), so far none of them targeted downstream AI components such as prediction and planning; they predominantly focus on the upstream ones such as perception and localization. This is understandable since under t...
In their paper, each vertex and edge of the original cubic graph was represented by a set of intervals, called vertex and edge gadgets respectively.
The interval model consisted of first all the vertex gadgets, and then all the edge gadgets arranged from left to right. | For a vertex gadget, these two partitions were made to correspond to its membership in one of the partition sets for a maximum cut of the original cubic graph.
If two adjacent vertices of the cubic graph belonged to different sets, then the corresponding edge gadget would make more cut edges with link intervals than if... | On each of the figures, this gadget is colored in Red and Blue.
After we construct the whole graph $H$ of interval count two, we will argue that, for every Maximum Cut partition of $H$, the coloring of each of its gadgets is similar to the one displayed in the corresponding figure. | In their paper, each vertex and edge of the original cubic graph was represented by a set of intervals, called vertex and edge gadgets respectively.
The interval model consisted of first all the vertex gadgets, and then all the edge gadgets arranged from left to right. | The number of intervals in any gadget was much greater than the total number of link intervals in the graph.
It was shown that, in any Maximum Cut partition of this interval graph, each vertex gadget or edge gadget could have only two possible partitions. | D |
Finally, we compare our models and strategies to previous work in instrument anticipation [58, 86].
In Table 6, we observe that ConvNeXt and FrozenBN outperform previous work. The GN model only performs well when trained completely end-to-end but gives poor results with partial CNN freezing. As expected, BN models comp... | Across backbones, complete end-to-end training is more effective than freezing layers, possibly because anticipation is a more challenging task and requires more finetuning of CNN weights. Also, the CHE strategy outperforms CHT in this task. Presumably, the correlation between subsequent batches (and thus SGD steps) is... | We provide a comprehensive and detailed analysis of how BatchNorm affects end-to-end surgical workflow analysis. We show the advantage of end-to-end over 2-stage learning (Hypothesis 1), longer training sequences (H.2) and carrying hidden states across batches in online tasks (H.3) and how this can fail using BN-based ... | Further, our experiments focus on 2D backbones, due to limitations in the surgical domain, and LSTMs, to show the effectiveness of simple, state-based end-to-end models for online tasks.
While the incompatibility of BN and single-sequence batches is general to the video setting, investigating the feasibility and effect... | Interestingly, the CHT strategy is not effective with FrozenBN. We found models often to collapse and finally only achieve subpar performance by lowering the initial learning rate. The lack of proper normalization with FrozenBN might cause more instability than in GN or ConvNeXt models and we suspect that the non-i.i.d... | A |
Results on MegaFace. MegaFace consists of a gallery set of 1 M images with 690 K classes and probe photos of 100 K images with 530 classes. We followed the test protocol of ArcFace [4]. We removed noisy images and measured rank-1 accuracy under the 1 M distractor setting following the identification scenarios using the dev... | Test. Cosine similarity was used as a similarity score. Different evaluation metrics were applied depending on the FR tasks. In the verification task (1:1), verification accuracy using the best threshold was exploited for a dataset that has a small number of test images with the same ratio between positive and negative...
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3 M images with 6 accessories, 30 illuminations, 3 expressions, and 20 poses for 400 persons. We adopted the same training and test splits used in MixFace [41]. The training split was composed of 3.8 M images with 370 persons. In p... | Results on LFW, CFP-FP, AgeDB-30, and CALFW. FR on LFW, CFP-FP, AgeDB-30, and CALFW is straightforward. Thus, the performance was saturated. LFW, AgeDB-30, and CALFW contain 6,000 images, and CFP-FP has 6,000 images. They have 1:1 ratios between the positive and negative pairs. Verification accuracy was employed with t... |
This paper proposes unified negative pair generation (UNPG) by combining two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. Moreover, it includes filtering out noisy negative pairs, such as too-easy or too-hard negative pairs, in order to guarantee reliable convergence and improve perfo... | C
We employed three evaluation measures: the Fréchet Inception Distance (FID), a well-known GAN evaluation score [40]; the ability of human experts to identify the synthetic images; and the classification accuracy. FID is often used to assess the quality and variety of the generated images and, even though it has be... | The second experiment determines whether clinical experts, who are very experienced with the analysis of eye fundus images, can distinguish between synthetic and real images. Such a step is essential to evaluate the effectiveness of StyleGAN2-ADA for generating synthetic eye fundus images. The experts were provided wit...
Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) focus on pixel-wise comparisons and are limited in their capacity to assess higher-level features and perceptual quality. FID leverages the features extracted from a pre-trained Inception Network to measure the similarity of feature representati... | Figure 4 shows a comparison between (a) real images from the training dataset and (b) synthetic images produced by StyleGAN2-ADA. The trained model yields realistic-looking images for both, with and without AMD, conditioned by sampling from latent representations. Visual inspection shows that the generated images are s... | We employed three evaluation measures, i.e., the Fréchet Inception Distance (FID), a well-known GAN evaluation score [40], the ability of human experts to identify the synthetic images and the classification accuracy. FID is often used to assess the quality and variety of the generated images and, even though it has be...
From Table VI, it is observed that the performance with more local regions is superior to that with fewer local regions. This implies that the size of each local region is too large to capture multiple diverse pieces of local information when $M$ is set to a small value. However, it is also noticed that the computational complexi... | Note that the weights of some adjacent patches are also decreased along with the central patch, due to the pixels overlapping between two adjacent patches.
In practice, the local vector encoded from an obscured patch is given a small weight, which effectively diminishes the influence of that obscured patch on facial expression...
In the previous experiments, 1/3 of the whole pixels in each patch are applied as the overlapping pixels between two neighboring patches, which is a more appropriate value, since the number of pixels overlapping between the middle patch and both sides is only 2/3, and the information of 1/3 of ... |
Table VII shows the accuracies obtained by the proposed method with different numbers ($N$) of overlapping pixels. From the results, it is seen that the performance on the test set increases slowly to a plateau as the number of overlapping pixels increases. It illustrates that the more the overlapping pix... | The number $M$ of local regions is set to 16, each patch (local region) overlaps about 16 pixels with its adjacent patches, and the parameter $\alpha$ is set to 0.7 in Eq. (1).
The number of epochs is set to 24, the initial learning rate is 0.0003, and the weight decay is set to 0.95 per epoch. | B
$x_{t}\leq 1+\frac{1}{2}+y(t-1)=1+\frac{1}{2}+1+\frac{1}{2}f^{-1}(2t-2).$ | Why is there a jump in the upper bound from $f(k)^{1/3}$ to $f(k/n)$ at $n=k^{1/3}$? Can one find... | as $c\to\infty$, or any other family of $\sigma$ approaching the pathological $\frac{1}{2}\mathrm{sign}(x)+\frac{1}{2}$. The bound of Theorem 3.2 becomes we... | The first strategy does not depend at all on the pot function used, but requires that everyone’s initial ratings be exactly equal.
The second strategy has a small dependence on the pot function used, but works for any symmetric list of initial ratings. | The value of $f^{-1}$ for some selected families of pot functions. Most of the families are parameterized by some constant $c$, which correlates with the slope of each $\sigma$ at 0.
The logistic pot function in the top row is the us... | C |
Scalability for a large number of instances and features.
In general, the number of instances and features that can be visually expressed with our approach has no intrinsic limit. Collaris and van Wijk [CVW22] found that usually the top 10–20 features were impactful for the tabular data sets they experimented with. For... |
The undersampling phase is perhaps the most crucial, since removing unsafe instances without justification could severely harm the ML model. We choose to activate the de facto NCR algorithm without any tweaks to check the suggestions (Figure 3(c) and (d)). The distribution of instances changes according...
Table 1: Time taken to complete each activity of the sampling process for all use cases. The completion time is expressed in minute:second format. Please note that for the iris flower data set, the undersampling time refers to two consecutive rounds. |
Figure 3: At first, a comparison of different data types projections and then two consecutive undersampling phases with the NCR algorithm are shown in this arrangement of screenshots. The default value for the number of neighbors is 5 (see (a)), which is used as input for computing the type of each instance with KNN. ... | Completion time for each activity.
The frontend of HardVis has been developed in JavaScript and uses Vue.js [vue14], D3.js [D311], and Plotly.js [plo10], while the backend has been written in Python and uses Flask [Fla10] and Scikit-learn [PVG∗11]. More technical details are made available on GitHub [Har22]. All experi... | B |
We evaluate the performance of FIRST on real Ethereum traces over a month-long observation period.
We analyze FIRST’s suggested $\mathit{FIRST\_FEE}$ during our experiment and show the effectiveness of FIRST in terms of the percentage of frontrunnable transa... | As we discussed before, on Ethereum, the chance of transactions being frontrun is somewhat higher due to higher volatility (we theorize, because of NFT transactions and slower block confirmation times) compared to BSC, which is more stable due to the faster settling of transactions. For example, from our data, in... | To evaluate the cost of FIRST transaction verifications by the smart contract (Protocol 7), we deployed the aggregated signature [19] verification function on a smart contract using the Solidity programming language.
We used elliptic curve pairing operations, such as addition, multiplication, and pairing checks introduc... | In order to get the most accurate waiting times of transactions in the pending pool, we deployed a Geth (https://github.com/ethereum/go-ethereum) full node (v.1.11.0) running on an Amazon AWS Virtual Machine located in North Virginia. The AWS node had an AMD EPYC 7R32 CPU clocked at 3.30 GHz with 8 dedicated cores, 32 ... | Out of the total 30.6M transactions our node was able to detect the wait time for 29.65M transactions. Since our node did not receive a total of 944,807 transactions (roughly 3.08%), we conclude that these transactions were either never sent to the P2P layer because of the use of relayers (e.g., F...
In Section 5, we aim to learn the selection criterion that maximizes the equilibrium policy value. We develop a consistent estimator of the policy gradient, the gradient of the equilibrium policy value with respect to the selection criterion, and run gradient descent. Adapting the approach of Wager and Xu (2021), we e... |
In this work, we study the problem of capacity-constrained treatment assignment in the presence of strategic behavior. We frame the problem in a dynamic setting, where the decision maker assigns treatments at each time step. Suppose a decision maker deploys a fixed selection criterion for all time. At time step $t$... | Decision makers often aim to learn policies for assigning treatments to human agents under capacity constraints (Athey and Wager, 2021; Bhattacharya and Dupas, 2012; Kitagawa and Tetenov, 2018; Manski, 2004). These policies map an agent’s observed characteristics to a treatment assignment. For example, Bhattacharya and...
In policy learning, practitioners often assume that the data used for treatment choice is exogenous to the treatment assignment policy. Such an assumption is plausible if the treatment assignment policy is unknown to human agents or knowledge of the policy is unlikely to affect the agents’ observed characteristics. Fo... |
The problem of learning optimal treatment assignment policies has received attention in econometrics, statistics, and computer science (Athey and Wager, 2021; Bhattacharya and Dupas, 2012; Kallus and Zhou, 2021; Kitagawa and Tetenov, 2018; Manski, 2004). Most related to our work, Bhattacharya and Dupas (2012) study op... | D |
Early Exit Networks. OccamNet is a multi-exit architecture designed to encourage later layers to focus on samples that earlier layers find difficult. Multi-exit networks have been studied in past work to speed up average inference time by minimizing the amount of compute needed for individual examples [10, 31, 66, 74],... | OccamNets are biased toward using fewer spatial locations for prediction, which we enable by using spatial activation maps [24, 44, 54]. While most recent convolutional neural networks (CNNs) use global average pooling followed by a linear classification layer [25, 29, 32], alternative pooling methods have been propose... | We tested OccamNets implemented with CNNs; however, they may be beneficial to other architectures as well. The ability to exit dynamically could be used with transformers, graph neural networks, and feed-forward networks more generally. There is some evidence already for this on natural language inference tasks, where ... |
Here, we propose convolutional OccamNets which have architectural inductive biases that favor using the minimal amount of network depth and the minimal number of image locations during inference for a given example. The first inductive bias is implemented using early exiting, which has been previously studied for spee... | In experiments using biased vision datasets, we demonstrate that OccamNets greatly outperform architectures that do not use the proposed inductive biases. Moreover, we show that OccamNets outperform or rival existing debiasing methods that use conventional network architectures.
| A |
In this section, we focus our discussion on local temporal contexts. To begin with, we introduce the technical motivation of the proposed Coarse-to-Fine Feature Mining (CFFM) for mining the local temporal contexts in §3.1. Then, we introduce the first sub-operation of Coarse-to-Fine Feature Assembling (CFFA) in §3.2. ... | Before introducing our method, we discuss our technical motivation to help readers better understand the proposed technique. As discussed above, local temporal contexts include static contexts and motional contexts. The former has been well exploited in image semantic segmentation [18, 19, 20, 21, 22, 23, 106, 24, 77, ... | As widely accepted, the contextual information plays a central role in image semantic segmentation [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. When considering videos, the contextual information can be divided into two cases based on how much temporal information is used: local temporal contexts a... |
We first discuss local temporal contexts which are widely exploited in VSS [11, 14, 13, 12, 34, 35, 36, 37, 38, 39, 40, 41, 42]. The local temporal contexts can be further divided into static contexts and motional contexts among neighboring video frames, as shown in Fig. 1b. The former refers to the contexts within th... | Image semantic segmentation has always been a key topic in the vision community, mainly because of its wide applications in real-world scenarios. Since the pioneer work of FCN [2] which adopts fully convolution networks to make densely pixel-wise predictions, a number of segmentation methods have been proposed with dif... | A |
Embedding Prefetching: Previous studies [31, 30, 14] have investigated prefetching embeddings into a GPU-based cache for the next mini-batch of training. However, this prefetching-based approach introduces complexities such as data hazards, complex cache eviction policies, and asynchronous training with limited scalab... |
1. The Learning Phase: The Hotline accelerator actively determines the frequently-accessed embeddings at runtime. To achieve this, the accelerator performs mini-batch sampling in the first epoch. Our experiments demonstrate that sampling just 5% of the mini-batches is sufficient to identify over 90% of the frequently-... | Machine learning accelerators: There are proposals for accelerators designed to execute the compute portion of deep learning models [60, 61, 62, 63, 64], including some for collaborative filtering-based recommender models [65, 43, 66]. However, Hotline does not aim to design a specialized architecture for optimizing th... |
Recommendation models constitute a crucial and widely deployed class of machine learning (ML) workloads [1]. These models employ compute-intensive neural networks and memory-intensive embedding tables to store user and item features [2]. With the increasing number of interactions between users and items, the size of t... |
Table II presents the specifications of four open-sourced recommender models that were evaluated using Hotline. The models have varying numbers of sparse parameters, ranging from 5.1M for RM1 to 266M for RM3. These models consist of a top and bottom multi-layer perceptron (MLP) with a deep learning attention layer for... | B |
…$\begin{cases}0&\text{if }n_{1}\leq n\\ \leq\frac{1}{2}&\text{if }n_{1}\leq 2n+1.\end{cases}$ | Additionally, note that for architectures whose hidden layers all have the same dimension (a common choice), the upper bound given by Theorem 2 is significantly lower for deep networks than for shallow networks of the same total dimension. For ReLU neural networks with $>1$ hidden layer, we expect the true pr...
It is immediate that if $C$ is a cell for which the components of $\theta^{(\ell)}_{C}$ are all non-positive for some $\ell$, then $C$ is flat. The following... | We will now prove that the above expression gives an upper bound on the probability that $F:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is PL Morse in the case $\ell>1$. By the discussion above and bef... | Let $F:\mathbb{R}^{n}\rightarrow\mathbb{R}$ be a ReLU neural network map with hidden layers of dimensions $(n_{1},\ldots,n_{\ell})$... | A
Recent state-of-the-art neural networks have been shown to provide highly efficient representations of such complex states, making the overwhelming complexity computationally tractable [6, 7]. Beyond their success in industrial applications, such as image and speech recognition [8], autonomous driving,... |
Among these successful applications in the physical sciences, a more challenging task is to use neural networks to study nonequilibrium problems. Recently, an artificial-neural-network algorithm was proposed to solve the unitary time evolution of a quantum many-body system [15]. Later developments in this direction... |
In [15], the machine learning method was applied only to unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and exhibits richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic... |
We have realized the time evolution of the energy expectation value, the universal statistics of the topological defect numbers, and the kink-kink correlations in a quantum phase transition of a TFQIM using neural networks. The results were found to satisfy theoretical predictions. Thus, it numerically ver... | One of the most challenging problems in modern physics is the so-called many-body problem. In its quantum version, quantum many-body physics, the exponential complexity of the states in the Hilbert space makes strongly correlated systems difficult to deal with [1]. Only limited analytical solutions are amenable to... | A
Numerical modeling and simulation for the incompressible Navier-Stokes system is critical in a number of applications. Therefore, there have been a lot of efforts in designing numerical methods for solving the incompressible Navier-Stokes equations. It is well-known that the Navier-Stokes system has various conserved q... | The original PINN approach trains the NN model to predict the entire space-time at once. In complex cases, this can be more difficult to learn. Seq2seq strategy was proposed in [16], where the PINN learns to predict the solution at each time step, instead of all times. Note that the only data available of the first seq... | Numerical modeling and simulation for the incompressible Navier-Stokes system is critical in a number of applications. Therefore, there have been a lot of efforts in designing numerical methods for solving the incompressible Navier-Stokes equations. It is well-known that the Navier-Stokes system has various conserved q... |
The goal of this paper is to construct a neural network model that can preserve the fluid's helicity. Unlike standard finite element methods based on the weak formulation of the PDE models, the physics-informed neural network (PINN) model [27] is based on the strong form of the PDE, and thus conservation can be shown to be made... | Neural networks are popular and have a lot of potential. In this paper, we provide a first attempt to use a neural network function to preserve the helicity of the Navier-Stokes equations. Our observation is that since the PINN model is based on the strong form of the PDE, it is easier to demonstrate the conservation property, unlike the weak f... | C
In the following, we provide complementary experiments with alternative settings to those used previously.
In short, in these complementary experiments, we observed results consistent with our previous ones, further validating our proposal. | We repeat this experiment for two classification (AD and DCC) and two regression (HS and DI) data sets. The results are illustrated in Figure 20. In summary, the results are consistent with the previous observations, confirming the validity of our previous experiment and the minimal impact of removing the outliers fro... | Validation on Regression:
In this experiment, we study the effectiveness of our RU measures in the regression tasks. Accordingly, we used RN and HS data sets and computed strongRU and weakRU values for all the query points in the uniform sample. Thereafter, we repeated the bucketization process as we did in the last ex... | Having provided the visual validation results, we next validate our RU measures on classification tasks. In this regard, using SYN data set, we first computed the RU measures for all the query points in the uniform sample and bucketized the points w.r.t. their RU values in ranges of length 0.1. We repeated this for bot... |
We used Gaussian as the underlying distribution of our synthetic datasets. In this experiment, we study whether the underlying distribution of the data would affect the capacity of the RU measures in revealing unreliability. To do so, we follow the same procedure outlined in the construction of synthetic datasets in §... | D |