Dataset columns (string length ranges from the viewer):
- context: string, 250–5.39k characters
- A: string, 250–8.2k characters
- B: string, 250–7.25k characters
- C: string, 250–4.17k characters
- D: string, 250–3.2k characters
- label: string, 4 classes
Each discriminator learns to specialize in a subset of the reference data space, identified automatically during training, so the ensemble of discriminators not only differentiates fake data but also makes more accurate predictions over the clusters of real data. In this respect, a generator is enco...
We presented a generative adversarial network framework with multiple discriminators, where each discriminator behaves as an expert classifier and covers a separate mode in the underlying distribution. This idea is implemented by incorporating the concept of multiple choice learning.
Table 1 summarizes the precision and recall scores of our methods compared to other models with three different GAN objectives, when the number of discriminators is set to 5 or 10 ($M=5$ or $10$) and the number of experts is 1 ($k=1$). MCL-GAN achieves outstandin...
To achieve these goals, we employ a Multiple Choice Learning (MCL) [8] framework to learn multiple discriminators and update the generator via a set of expert discriminators, where each discriminator is associated with a subset of the true and generated examples. Our approach, based on a single generator and multiple d...
In practice, specializing discriminators to subsets of the training data outperforms independent training of discriminators on the whole dataset (as in GMAN) and alternative discriminator-assignment strategies such as minimum-score discriminator selection (the opposite of MCL-GAN) and random selection. Also, the performa...
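The MCL-style assignment described above can be sketched as follows. This is an illustrative toy, not the paper's code: each real sample is routed to its best-scoring discriminator so that experts specialize on subsets of the data; `assign_to_experts` and the plain score lists are hypothetical stand-ins for actual discriminator networks.

```python
# Hedged sketch of MCL-style expert assignment: each sample goes to the
# discriminator that scores it highest, so discriminators specialize.

def assign_to_experts(scores_per_disc):
    """scores_per_disc[m][i] = score of discriminator m on sample i.
    Returns, for each sample, the index of its best-scoring discriminator."""
    num_disc = len(scores_per_disc)
    num_samples = len(scores_per_disc[0])
    assignment = []
    for i in range(num_samples):
        best = max(range(num_disc), key=lambda m: scores_per_disc[m][i])
        assignment.append(best)
    return assignment

# Toy example: two discriminators, three samples.
scores = [
    [0.9, 0.2, 0.4],  # discriminator 0
    [0.1, 0.8, 0.7],  # discriminator 1
]
print(assign_to_experts(scores))  # -> [0, 1, 1]
```

In a full training loop, only the winning expert would receive the gradient for its assigned samples, which is what drives the specialization.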
C
6:         $\mathbf{T}_{\boldsymbol{\beta}}^{\mathcal{C}}(\mathbf{T}_{\boldsymbol{\alpha}}^{\mathcal{S}}(\boldsymbol{S}))\rightarrow\boldsymbol{X}$
In this section, we compare the performance of the proposed DeepSC-SR with that of traditional communication systems under AWGN channels and Rayleigh channels, where accurate CSI is assumed at the receiver. The traditional communication systems include benchmark 1 and benchmark 2, which are introduced i...
The CER results of DeepSC-SR and the two benchmarks under the AWGN channels and the Rayleigh channels are shown in Fig. 5, where the baseline is obtained by feeding the speech sample sequence into the ASR module directly, without considering communication impairments. From the figure, DeepSC-SR obtains lower CER score...
The rest of this article is structured as follows. Section II introduces the model of the semantic communication system for speech recognition and the performance metrics. In Section III, the details of the proposed DeepSC-SR are presented. Simulation results are discussed in Section IV, and Section V draws conclusions.
The WER scores of the different approaches are compared in Fig. 6. From the figure, the proposed DeepSC-SR provides lower WER scores and outperforms the speech transceiver under various channel conditions, as well as the text transceiver under the Rayleigh channels when the SNR is below about 8 dB. Moreover,...
A
We argue that joint training of the two modules would hamper the performance of the basic segmentation network, since the cross-regularization loss in the CSFR module and the self-regularization loss in the ISFR module may interfere with each other's training, as the CSFR module may bring wrong supervision from unc...
As shown in Figure 2, we use a two-stage training strategy to avoid interference between the two modules during training. In stage one, we train the basic segmentation network with the cross-sample feature reallocating module. In this stage, for each sample, the network learns from the weak labels of this sample and th...
The two modules implicitly guide the optimization of the basic network only at training time. At inference, we can simply discard the two modules and use the basic segmentation network as a normal point cloud segmentation network to obtain the segmentation predictions. Therefore, no extra memory or computational resource...
Two-stage training versus joint training: Table I compares the performance of one-stage and two-stage training with 10% and 1% labels. For one-stage training, we perform experiments with only the CSFR module or only the ISFR module; each module produces a performance gain over the baseline method for both 10%...
We argue that joint training of the two modules would hamper the performance of the basic segmentation network, since the cross-regularization loss in the CSFR module and the self-regularization loss in the ISFR module may interfere with each other's training, as the CSFR module may bring wrong supervision from unc...
B
Extensive experiments conducted on the challenging KITTI [11] dataset clearly demonstrate the effectiveness of the proposed approach and show that our method achieves 13.81% in terms of the $\mathrm{AP}_{40}$ metric, which is 2.80% absolute $\mathrm{AP}_{40}$...
However, the performance gap between LiDAR-based and monocular image-based approaches remains significant, mainly because of the lack of reliable depth information. A quantitative investigation is conducted by replacing the depth predictions with the ground-truth depth values on a baseline model. The detection performa...
Compared with the methods with LiDAR and stereo sensors, 3D object detection with monocular images is challenging due to the absence of reliable depth information. Existing works [6, 28, 26, 25, 5, 10] have considered using external pretrained networks, extra training data, and prior knowledge to improve the performanc...
M3D-RPN [3] focuses on the design of depth-aware convolution layers to improve 3D parameter estimation and post-optimization of the orientation by exploring the consistency between projected and annotated bounding boxes. To address the common occlusion issue in monocular object detection, MonoPair [9] proposes to model...
In this paper, we propose an effective holistic geometric formulation for monocular 3D object detection that principledly models the relationships between depth and the different geometric elements predicted by the deep network, including 2D bounding boxes, 3D object dimensions, object poses, and object posit...
B
To ensure that the text segments can depict the “characterness” of text, they are designed to be fine enough to reflect characters and the spacings between them (see the segment type definition in Fig. III-A2). Such a dense, overlapping design of text segments also
The Graph Guided Text Region map and the text segment classification results from the GCN are then used to rectify the TCL map to remove false detections. The relationship prediction and the dense overlapping text segments together ensure the completeness and accuracy of using the contour of the grouped text segments to...
The relational connection of text segments (acting as the ‘node’ in the graph) can thus reflect the ‘characterness’ and ‘streamline’ properties of text segments. Although link prediction is one of the tasks that GCNs are good at and has been well exploited by existing methods, our FPNS (Node) innovatively utilizes the ...
1) We utilize the relational feature of GCNs to rectify text segments by globally considering their “characterness” and “streamline” in the same relational structure through a weakly supervised training process. To the best of our knowledge this is the first time the classification ability, instead of link prediction, ...
The existing methods [7, 8, 12] have used the linkage reasoning ability of GCNs but ignored their node classification ability. When text segments are represented as nodes of a graph, segments of the same text instance are those that share similar “characterness” and “streamline” properties.
D
the restriction of the operator $\mathcal{L}$ on $\mathcal{S}_{0}(T)\cup\mathcal{S}_{1}(T)$ is the constant mapping
Chrel in Coq – defined by $((v_{1},v_{2})\,C_{\textsc{H}}\,(v_{3},v_{4})\iff v_{2}=v_{3})$...
$(v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,A_{\textsc{tr}}^{W}\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2})\iff$...
$((v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,C\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2})\iff v_{1}^{\sharp}=v_{2}^{\flat})$...
$\forall(v_{1},\ldots,v_{n})\in\prod_{i=1}^{n}T,\;\mathcal{L}_{n}\big($...
A
Assume that all IP addresses are logically divided into $q$ subsets according to the value of the first part of an individual IP address. Further assume there are $q$ computers for parallel computation, where the statistics collection task of each subset can be performed by an individual computer. A minimu...
In this section, we evaluate the performance of the proposed methods (https://github.com/chenjie20/IPStatistics) on three synthetic datasets that contain 5 million, 10 million, and 50 million randomly generated IP records. Each individual IP address is associated with one or more IP records. The average number of IP records...
IP Mapping. All IP records are partitioned into $q$ subsets according to the first part of each IP address. The statistics of the IP records in each subset are mapped into an array, whose memory is pre-allocated on a computer according to the last three parts of each IP address. The first $k$ most fr...
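The mapping step above can be sketched as follows. This is a hedged toy under simplifying assumptions: IPv4 addresses are partitioned by the first octet, per-subset statistics are indexed by the last three octets (a dict stands in for the pre-allocated array), and `collect_statistics` is an illustrative name, not from the paper.

```python
# Hedged sketch: partition IP records by first octet, count per-subset
# occurrences keyed by the last three octets, then take the k most frequent.
from collections import defaultdict
import heapq

def collect_statistics(ip_records, k):
    # subset id (first octet) -> {packed last-three-octets index: count}
    subsets = defaultdict(lambda: defaultdict(int))
    for ip in ip_records:
        a, b, c, d = (int(x) for x in ip.split("."))
        subsets[a][(b << 16) | (c << 8) | d] += 1
    # Merge per-subset counts and return the k most frequent addresses.
    counts = {}
    for a, table in subsets.items():
        for idx, cnt in table.items():
            ip = f"{a}.{(idx >> 16) & 255}.{(idx >> 8) & 255}.{idx & 255}"
            counts[ip] = cnt
    return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])

records = ["10.0.0.1", "10.0.0.1", "10.0.0.2", "192.168.0.1"]
print(collect_statistics(records, 1))  # -> [('10.0.0.1', 2)]
```

In the parallel setting described in the text, each first-octet subset would be processed on its own machine before the per-subset top-k results are merged.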
Parameter $k$ was set to 10 or 100, and we repeated each experiment 10 times. The average computational costs and standard deviations are reported in Table 4, and the mean memory use and standard deviations are given in Table 4. The results show that TLMB consistently outperformed all the other methods in term...
In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu...
A
In this work, we only assume that $A_{1}$ is invertible and the global system matrix $\mathcal{A}$ is invertible. Many special cases can be cast into the above forms of twofold saddle point systems. For example, (a) $A_{2}=0$...
In this study, we explore two methodologies for designing preconditioners tailored for 3-by-3 block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ...
The above 3-by-3 block linear problems (1) and (2) can be naturally extended to the $n$-tuple cases. For example, when the system matrix in (1) is extended to the $n$-tuple case, it is the block tridiagonal system discussed in [37]. When the system matrix in (2) is extended to the $n$-...
The outline of the remainder of this paper is as follows. In section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of its Schur complement; we then construct and analyze the block-triangular and block-diagonal preconditioners base...
Commencing with a twofold saddle point problem, we generalize our theory to $n$-tuple block tridiagonal saddle point problems. Our study demonstrates that judiciously selecting signs in front of Schur complements in preconditioners results in a positively stable preconditioned system [16]. By using the Routh–H...
B
To execute the local iterations, clients need embeddings of the data held by clients in other silos. The hubs orchestrate this information exchange every time they update their parameter blocks. By waiting for several training iterations before exchanging this information, the communication cost of the algorithm is reduced. O...
Further, we cannot use existing vertical federated algorithms directly since the data in each silo is also horizontally distributed across clients. Therefore, such a system setting calls for a new algorithm that combines aspects of both horizontal and vertical federated learning.
While this approach is similar to the multiple local iterations in horizontal federated learning, an important distinction is that in vertical federated learning, each silo updates only on its own subset of coordinates in contrast to updating all the coordinates in the case of horizontal federated learning.
The settings above have aspects of both the horizontal federated learning and vertical federated learning paradigms. However, it is not possible to directly apply any one paradigm in this setting. We cannot directly apply horizontal federated learning since horizontal federated learning requires each client to have the...
TDCD is novel since it interleaves both the horizontal and vertical federated learning paradigms and thus has to account for both the perturbed gradients from the horizontal federated learning component and the stale information from the vertical federated learning component. This combination leads to a different conve...
D
Figure 5: Boundaries of pseudospectra $\Lambda_{\varepsilon}(\mathcal{F})$ for $\varepsilon=10^{-1},10^{-2},\ldots,10^{-10}$...
Furthermore, by Theorem 4.6, one can explore more positive definite tensors that surround the positive definite tensor $\mathcal{A}$. We only need to ensure that the localization set $\Gamma_{\varepsilon}(\mathcal{A})$...
We consider the validation of T-positive definiteness for third-order symmetric tensors $\mathcal{A}$ through the application of the pseudospectra localization strategy. Let $\mathcal{A}$ be a symmetric tensor with three frontal slices
Consider the tensor $\mathcal{A}$ belonging to the complex space $\mathbb{C}^{m\times m\times\ell}$. Through the utilization of the normalized DFT matrix, the matrix $\operatorname{bcirc}(\mathcal{A})$...
In this section, we delve into the study of pseudospectra for third-order tensors within the tensor-tensor multiplication framework. Specifically, we explore different formulations of pseudospectra for third-order tensors in Subsection 4.1. Subsection 4.2 is dedicated to the examination of various properties of pseudos...
B
where $\lambda_{rec}$, $\lambda_{perc}$, and $\lambda_{style}$...
We evaluate the proposed method on the CelebA [16], Paris StreetView [4] and Places2 [39] datasets, which are widely adopted in the literature, and we follow their original training, testing, and validation splits. Irregular masks are obtained from [13] and classified based on their hole sizes relative to the entire i...
The model is implemented in PyTorch. Training is launched on a single NVIDIA 1080TI GPU (11GB) with a batch size of 6, optimized with the Adam optimizer. Analogous to [13], we first use a learning rate of $2\times 10^{-4}$ for initial training, th...
User Study. We further perform a subjective user study. 10 volunteers with image processing expertise are involved in this evaluation. They are invited to choose the most realistic image from those inpainted by the proposed method and the representative state-of-the-art approaches. Specifically, each participant has 15 ...
Objective evaluation. We quantitatively evaluate the proposed method using three major metrics: LPIPS, PSNR and SSIM, and compare the scores to those of the state-of-the-art counterparts with irregular mask ratios of 0-20%, 20-40% and 40-60%. Table 1 shows the results achieved on the Places2 dataset, where the propose...
A
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc...
In a $q$-ary erasure channel, the channel input alphabet is a finite field $\mathbb{F}_{q}$ of order $q$, and during transmission, each symbol $x\in\mathbb{F}_{q}$...
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\varepsilon$. The concept of the BEC was first introduced by Elias in 1955 [InfThe]. Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ...
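The BEC described above is easy to simulate. A minimal hedged sketch, not from the source: each symbol is delivered intact or replaced by an erasure marker (`None`) with probability `eps`; the function name `erasure_channel` is illustrative.

```python
# Toy binary erasure channel: each symbol is erased (None) with probability eps.
import random

def erasure_channel(bits, eps, rng=None):
    rng = rng or random.Random()
    return [None if rng.random() < eps else b for b in bits]

rng = random.Random(0)
sent = [0, 1, 1, 0, 1] * 2000
received = erasure_channel(sent, eps=0.3, rng=rng)
erased = sum(1 for b in received if b is None)
print(erased / len(sent))  # empirical erasure rate, close to 0.3
```

Unlike the BSC, the receiver knows exactly which positions were lost, which is what makes erasure decoding a well-posed interpolation problem.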
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc...
The ensemble $\mathcal{R}_{m,n}$ has been studied for a long time and many strong results have been obtained. For example, in the classical work of Gallager [Gallager2], an upper bound on the average number of codewords of a given we...
B
The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals. In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar...
MCTS-kSubS and BF-kSubS differ in the choice of the search engine: the former uses Monte-Carlo Tree Search (MCTS), while the latter is backed by Best-First Search (BestFS). We provide two sets of implementations for the generator, the low-level policy, and the value functions. The first one uses transformer architectur...
Subgoal Search consists of four components: a planner, a subgoal generator, a low-level policy, and a value function. The planner, coupled with the value function, is used to search over the graph induced by the subgoal generator. Namely, for each selected subgoal, the generator allows for sampling the candidates for the next...
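The planner-plus-value-function loop described above can be sketched as a value-guided best-first search over the subgoal graph. This is a hedged toy on a hand-made graph: `generate_subgoals` and `value` are stand-ins for the learned generator and value function, and `best_first_search` is an illustrative name, not the paper's implementation.

```python
# Hedged sketch: best-first search over a subgoal graph, expanding the
# frontier state with the highest value first (a max-heap via negated keys).
import heapq

def best_first_search(start, goal, generate_subgoals, value):
    frontier = [(-value(start), start)]
    parent, seen = {start: None}, {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in generate_subgoals(state):
            if nxt not in seen:
                seen.add(nxt)
                parent[nxt] = state
                heapq.heappush(frontier, (-value(nxt), nxt))
    return None  # goal unreachable from start

# Toy subgoal graph: states are integers, higher value = closer to the goal 4.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(best_first_search(0, 4, lambda s: graph[s], lambda s: s))  # -> [0, 2, 3, 4]
```

In the full method, the low-level policy would additionally check that each subgoal-to-subgoal transition is actually achievable, pruning edges the sketch above keeps.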
The subgoals are trained to predict states $k$ steps ahead of the current one. A higher $k$ should make planning easier, as the search graph is smaller. However, as $k$ increases, the quality of the generator may drop, and thus the overall effect is uncertain. Similarly, the task of the low-leve...
The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals. In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar...
A
In this paper, we propose to use ‘Five-Strokes’, a famous structure-based encoding method for Chinese characters, to obtain our glyph embedding. ‘Five-Strokes’ was put forward by Yongmin Wang in 1983. This special encoding method for Chinese characters is based on their structures. ‘Five-Strokes’ holds that Ch...
Chinese speakers use ‘Pinyin’, a special phonetic system, to represent the pronunciation of Chinese characters. The ‘Pinyin’ phonetic system has four tones, six single vowels, several plural vowels, and auxiliaries. Every Chinese character has its own expression, also known as a syllable, in the ‘Pinyin’ system. A comple...
Nowadays, the informal language environment created by social media has deeply changed the way that people express their thoughts. Using character substitution to generate new named entities becomes a common linguistic phenomenon which is a big challenge for NER. In this paper, we propose a lightweight method fusing th...
We propose the ‘Trans-pinyin’ system to represent character pronunciation, in which auxiliaries and vowels are transformed into standard forms while the tone in the ‘Pinyin’ system is kept. After the transformation, ‘c’ becomes ‘$ts$’ and ‘z’ becomes ‘$ts^{\prime}$’...
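The normalization above can be sketched as a simple substitution table. A hedged toy: only the two substitutions given in the excerpt (‘c’ → ‘ts’, ‘z’ → ‘ts′’) are encoded, the full table is not in the text, and `trans_pinyin` is an illustrative name. A real implementation would also handle multi-letter initials such as ‘ch’ and ‘zh’, which this sketch ignores.

```python
# Hedged sketch of 'Trans-pinyin'-style normalization: map a few initials to
# standard forms while keeping the trailing tone digit untouched.
SUBSTITUTIONS = {"c": "ts", "z": "ts'"}

def trans_pinyin(syllable):
    """Normalize a pinyin syllable with an optional tone digit, e.g. 'cai1'."""
    # Try longer initials first so multi-letter entries would take priority.
    for initial, standard in sorted(SUBSTITUTIONS.items(), key=lambda kv: -len(kv[0])):
        if syllable.startswith(initial):
            return standard + syllable[len(initial):]
    return syllable

print(trans_pinyin("cai1"))  # -> "tsai1"
print(trans_pinyin("zi3"))   # -> "ts'i3"
print(trans_pinyin("ma1"))   # unchanged -> "ma1"
```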
In this paper, we propose a lightweight method, Multi-feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), which fuses extra glyph and phonetic features to detect possible substitution forms of named entities in Chinese. On top of using pre-trained models to represent the semantic feature, we choose...
A
where $\mathcal{F}$ is the 2D Fourier transform, $\mathcal{S}$ is the SLM modulation, $U(\cdot)$ is the zeroth-order upsampling operator from the low-resolution SLM to the high-resolution neural étendue expander, and $\odot$ is the Hadamard product.
To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the generated image with th...
Specifically, we model the holographic image formation in a fully differentiable manner following Fourier optics. We relate the displayed holographic image $I$ to the wavefront modulation of the neural étendue expander $\mathcal{E}$ as
The differentiability of Eq. (2) with respect to the modulation variables $\mathcal{E}$ and $\mathcal{S}$ allows us to learn the optimal wavefront modulation of the neural étendue expander $\mathcal{E}$ by jointly optimizing the static neural étendue expander in conjunction with...
Next, we analyze the expansion of étendue achieved with the proposed technique. To this end, suppose we want to generate the étendue-expanded hologram of only a single scene. Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, a...
C
GLUE (Wang et al., 2019b) is a benchmark dataset for evaluating natural language understanding (NLU) models. The main benchmark consists of 8 sentence and sentence-pair classification tasks as well as a regression task. The tasks cover a diverse range of genres, dataset sizes, and difficulties. Besides, a diagnostic da...
ABC (Gonzalez et al., 2020), the Anti-reflexive Bias Challenge, is a multi-task benchmark dataset designed for evaluating gender assumptions in NLP models. ABC consists of 4 tasks, including language modeling, natural language inference (NLI), coreference resolution, and machine translation. A total of 4,560 samples ar...
GLUE (Wang et al., 2019b) is a benchmark dataset for evaluating natural language understanding (NLU) models. The main benchmark consists of 8 sentence and sentence-pair classification tasks as well as a regression task. The tasks cover a diverse range of genres, dataset sizes, and difficulties. Besides, a diagnostic da...
SuperGLUE (Wang et al., 2019a) is a generalization of GLUE. As the performance of state-of-the-art models has exceeded non-expert human baselines on GLUE, SuperGLUE contains a set of 8 more challenging NLU tasks along with comprehensive human baselines. Besides retaining the two hardest tasks in GLUE, 6 tasks are added...
LSParD (Shao et al., 2019) is a multi-task semantic parsing dataset with 3 tasks, including question type classification, entity mention detection, and question semantic parsing. Each logical form is associated with a question and multiple human annotated paraphrases. This dataset contains 51,164 questions in 9 categor...
C
The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ...
The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates, however, some publications have...
The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ...
IEEE recommends using the distribution from the TeX User Group at http://www.tug.org. You can join TUG and obtain a DVD distribution or download for free from the links provided on their website: http://www.tug.org/texlive/. The DVD includes distributions for Windows, Mac OS X and Linux operating systems.
Be sure to use the \IEEEmembership command to identify IEEE membership status. Please see the “IEEEtran_HOWTO.pdf” for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This w...
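A hedged illustration of the convention described above (the author names and the \thanks text are placeholders, not from the source):

```latex
\author{Jane~Doe,~\IEEEmembership{Member,~IEEE},
        and~John~Roe,~\IEEEmembership{Fellow,~IEEE}
  \thanks{Manuscript received Month DD, YYYY.}}
% Note: the author group's closing brace comes after the \thanks group.
```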
C
all $2k$ vertices of the cliques $Y_{j}^{1},Y_{j}^{2}$...
exists a $Y_{j}$ such that each vertex of $Y_{j}^{1}$ has the same neighborhood in $S$ as $W_{i}$...
For each $i\in[n]$, we attach a leaf to each vertex of $W_{i}$. For each $j\in[m]$, we attach two leaves to each vertex of $Y_{j}^{1}$
each vertex of $Y_{j}^{2}$, and four leaves to the remaining vertex of $Y_{j}$.
For each $i\in[n]$, number the vertices of $W_{i}$ arbitrarily as $w_{(i,1)},w_{(i,2)},\ldots,w_{(i,k)}$...
B
The next set of simulations, shown in Figure 5, shows the effects of a uniform increase in overall reciprocity. From this figure, we can see that our measure of overall reciprocity essentially captures the opposite effect of trust; in the baseline, overall reciprocity creates substantial increases in linking and co...
Finally, Figure 6 shows the effects of an increase in positive reciprocity, and thus a shift away from a punishment mindset and toward a focus on gains from mutual effort and collaboration. As we can see, shifting players toward positive reciprocity has some of the largest effects in the baseline, with large increases...
Finally, we use our structural framework to conduct three counterfactual simulations, each examining the effects of a uniform increase in one of the three principal attributes—trust, overall reciprocity, and positive reciprocity. Consistent with our estimates for the model with individual heterogeneity, an increase in ...
The next set of simulations, shown in Figure 5, shows the effects of a uniform increase in overall reciprocity. From this figure, we can see that our measure of overall reciprocity essentially captures the opposite effect of trust; in the baseline, overall reciprocity creates substantial increases in linking and co...
When provided with more detailed information, players exhibit a clear preference to target the positive externalities they generate toward others who have shared with them, reciprocating directly. Using our structural estimation methods, we characterize the tradeoff between altruism and different forms of reciprocity. ...
A
where $\hat{Y}_{u,v}$ represents the spectrum of the recovered image, and $Y_{u,v}$ represents the spectrum of the ground truth...
Mixed Loss: In SISR, there are also some classic combinations of loss functions that are widely used to guide the network towards generating high-quality HR images. These combinations aim to balance the quality, details, and visual perception of the generated image. Here are some commonly used classic combinations of l...
In the past, most SISR models relied on L1 loss or MSE loss. Although some other new loss functions like content loss, texture loss, and adversarial loss have been proposed, they still cannot achieve a good balance between reconstruction accuracy and perceptual quality. Therefore, it remains an important research topi...
The choice of loss function combinations depends on the specific requirements of the SISR task, such as the desired balance between perceptual quality and computational efficiency. In practical applications, researchers may adjust the weights of the loss functions based on experimental results to find the combination ...
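The weighted combination described above can be sketched in a few lines. A hedged toy, not from the source: the loss terms are computed by placeholder functions on plain lists of pixel values, and the weights `w_l1`/`w_mse` are illustrative defaults; in practice the terms would be tensor operations (often including a pretrained-network perceptual term) and the weights would be tuned experimentally.

```python
# Hedged sketch of a mixed SISR loss: a weighted sum of simple loss terms.

def l1_loss(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse_loss(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mixed_loss(pred, target, w_l1=1.0, w_mse=0.1):
    # Illustrative weights; tuning them trades off the two error measures.
    return w_l1 * l1_loss(pred, target) + w_mse * mse_loss(pred, target)

pred, target = [0.2, 0.8, 0.5], [0.0, 1.0, 0.5]
print(round(mixed_loss(pred, target), 4))  # -> 0.136
```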
In the SISR task, the loss function guides the iterative optimization of the model by measuring the error between the reconstruction and the ground truth. Meanwhile, compared with a single loss function, researchers find that combining multiple loss functions can better reflect the quality of image restoration. In this section, we brief...
A
$\mathcal{L}_{Recon}=\sum_{\mathrm{x}}^{N}\frac{(\phi_{(\mathrm{x})}-\hat{\phi}_{(\mathrm{x})})^{2}\cdot \mathrm{m}_{(\mathrm{x})}}{|\phi_{(\mathrm{x})}|}$
The effect of learning patch-based representation rather than direct pixel values has been illustrated in Figure 3 as part of the ablation study included in the experiments. It becomes quite clear that patch-based representation alone (third column), while helpful, may not yield satisfactory results for challenging syn...
We begin our analysis with an ablation study of the proposed architecture to demonstrate the utility of each introduced loss component. Figure 3 illustrates the effect of the following adjustments to the conventional coordinate network (second column): i) patch output (third column), ii) cross-patch consistency loss ...
The research on utilizing coordinate-based networks for image synthesis has developed significantly, yielding a range of impressive results [1, 2, 3, 4, 5, 6]. However, most of the published works propose architectures that have no capability to model directly the spatial relationships within the represented signal, t...
The resulting architecture performs the equivalent operation to a conventional coordinate-based network, since the network ultimately predicts a single pixel value. However, the intermediate patch-based representation of the proposed architecture forces the model to establish the natural relationship between the encoded coord...
A
$N(u):=\min_{t\in\mathbb{N}}\left\{t:\sum_{s=1}^{t}\mathbb{I}\{A_{s}=1\}=u\right\}$
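Concretely, the stopping time $N(u)$ can be computed from an observed action sequence as follows (a plain-Python sketch; the function name is ours):

```python
def first_time_u_informative(actions, u):
    """N(u): the first round t at which the informative action (A_s = 1)
    has been played exactly u times; None if this never happens within
    the observed horizon. `actions` is the observed sequence A_1, A_2, ...
    """
    count = 0
    for t, a in enumerate(actions, start=1):
        count += int(a == 1)
        if count == u:
            return t
    return None
```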
- Frequent Informative Actions. The sequence of contexts $\{x_t\}_{t=1}^{T}$ and prior distribution $\pi_0$...
The Bayesian regret measures the difference between the expected loss of an oracle decision maker who knows the status distribution of each item and acts optimally with respect to this information, and the expected loss of the learner making decisions according to the policy $\varphi$. We will be interested in...
Our first theoretical result is given in this section. It bounds the Bayesian regret of TS under any sequence of bounded feature vectors. To complete our formalisation of the setting in which theoretical guarantees can be established, we suppose w.l.o.g. that the parameter set $\Theta$ lies in the Euclidean un...
If the probability of choosing the informative action was bounded below by $1/C$ in every round, Assumption 1 would follow immediately. In practice, Assumption 1 may be satisfied when the prior has sufficiently heavy tails or the contexts are sufficiently variable and uncorrelated. The latter may be in...
D
The concept of WS is important since memory slots cannot always be explicitly associated with individual training examples (e.g., this might be costly in terms of the time required from an expert, but also intrinsically challenging). This is a very general setting. Yet, if such annotation information is provided, w...
Note that this strategy is purely similarity-based and does not consider each memory slot’s contribution to the task-specific objective. In other terms, it operates under the assumption that each knowledge descriptive concept contained in the memory should have high semantic and syntactic similarity with its associate...
We define target memory slots as those that have been linked to the input example during the annotation phase. We introduce a penalty term that enforces target memory slots to have a higher similarity score with respect to the input $x$ than the remaining slots, up to a margin $\gamma$:
In order to do that, at training time, each individual input example should modify the underlying sampling distribution by giving more priority to those memory slots that have been reputed as useful. We explore two different formulations of priority that ground the notion of usefulness to a specific architectural prope...
where for the n𝑛nitalic_n-th example M+nsubscriptsuperscript𝑀𝑛M^{n}_{+}italic_M start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT start_POSTSUBSCRIPT + end_POSTSUBSCRIPT is the set of target memory slots for a given input example x𝑥xitalic_x, M−nsubscriptsuperscript𝑀𝑛M^{n}_{-}italic_M start_POSTSUPERSCRIPT itali...
B
Figure 1 contains the crowdedness measure averaged over the 10-minute time intervals for the whole analyzed period over the $21\times 21$ grid. Areas with high levels of crowdedness are apparent in the central grid squares in the area surrounding the Milan Duomo and in the upper center, where the two ma...
In order to illustrate the temporal behavior of the crowdedness measure, we present in Figure 2 the time-series of the grid units containing the three representative districts of Duomo, Navigli and Bocconi. We observe higher activity in the area of Duomo compared to the other two districts, which pea...
The top panel in Figure 7 presents the three clusters identified by employing Binder’s loss on the samples of the cluster assignments vector, while the bottom panel presents the crowdedness measure averaged over all locations in one cluster. The blue cluster is one where the difference in the crowdedness between weeken...
The yellow cluster contains areas where the activity is high on the working days and lower on the weekends, with an intraday peak around noon. Typical locations in this cluster are university centers or the city center, where most office buildings are situated. The green cluster is the smallest one, with the characteri...
The seasonality in the data can be identified also in Figure 3, which contains a visualization of the whole dataset through a raster plot. Aside from the strong daily seasonality that is present in all locations, one can observe different temporal patterns among the areas. Most areas share the characteristic of relati...
A
A weighted set U𝑈Uitalic_U is a finite set U𝑈Uitalic_U associated with a weight function wU:U→ℝ+:subscript𝑤𝑈→𝑈subscriptℝw_{U}:U\to\mathbb{R}_{+}italic_w start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT : italic_U → blackboard_R start_POSTSUBSCRIPT + end_POSTSUBSCRIPT.
define $w_U(S) := \sum_{u\in S} w_U(u)$. For any oth...
In kernel k𝑘kitalic_k-Means, the input is a dataset X𝑋Xitalic_X with weight function wX:X→ℝ+:subscript𝑤𝑋→𝑋subscriptℝw_{X}:X\to\mathbb{R}_{+}italic_w start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT : italic_X → blackboard_R start_POSTSUBSCRIPT + end_POSTSUBSCRIPT
since $\|c^{\star}-\varphi(u)\|^{2}=K(u,u)-\frac{2}{n}\sum_{x\in X}K(x,u)+\frac{1}{n^{2}}\sum_{x,y\in X}K(x,y)$ ...
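This identity lets the squared distance to the mean of the mapped points be evaluated purely from kernel values, without ever forming $\varphi$ explicitly; a minimal numpy sketch (function name assumed, unweighted case):

```python
import numpy as np

def kernel_dist_to_mean(K_uu, K_uX, K_XX):
    """Squared distance ||c* - phi(u)||^2 from phi(u) to the mean c* of
    the mapped dataset, using only kernel evaluations:

        K(u,u) - (2/n) * sum_x K(x,u) + (1/n^2) * sum_{x,y} K(x,y)

    K_uu : scalar K(u, u)
    K_uX : length-n vector of K(x, u) for x in X
    K_XX : n x n Gram matrix of K(x, y) for x, y in X
    """
    n = len(K_uX)
    return K_uu - (2.0 / n) * np.sum(K_uX) + np.sum(K_XX) / n**2
```

With a linear kernel this reduces to the ordinary squared Euclidean distance to the centroid of $X$.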
A weighted set U𝑈Uitalic_U is a finite set U𝑈Uitalic_U associated with a weight function wU:U→ℝ+:subscript𝑤𝑈→𝑈subscriptℝw_{U}:U\to\mathbb{R}_{+}italic_w start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT : italic_U → blackboard_R start_POSTSUBSCRIPT + end_POSTSUBSCRIPT.
A
Working with explicit quantities makes it necessary to add specific rules to manage them (such as (Arch)) and to allow sequents with infinitely many hypotheses. Note also that infinitary sequents are needed to deal with infinitary function symbols, which are allowed in QETs, while LPLLR only deals with finitary ones to...
types are objects, contexts are products and terms are arrows whose Lipschitz constants agree with the grades reported in the context. To interpret labelled arrow type, we can exploit the structure of a Lipschitz doctrine to define a graded exponential, that intuitively collects all Lipschitz arrows having a fixed Lips...
while, to write QS0 as an LPLLR theory, one takes an $\mathbb{R}_{\geq 0}$-graded signature with one sort, no predicate symbols and whose function symbols are $+:\langle 1,1\rangle$ and $0:\langle\rangle$, meaning ...
Another important difference is that in QETs function symbols are forced to be non-expansive by the rule (NExp), while in LPLLR they can be arbitrary Lipschitz functions. Since non-expansive maps are Lipschitz with constant equal to $1$, LPLLR can treat them, but, as we will see, it can go beyond this limitation, deal...
Mardare et al. [MPP16, MPP17] introduced the notion of quantitative equational theory (QET) as a formal tool to describe and reason about quantitative algebras, that is, algebras whose carrier is a metric space and whose operations are non-expansive maps.
C
Note that for large networks, RoleSim, StructSim, and ForestSim-EX cannot finish the computation due to their high time and memory cost, while ForestSim-AP works well on all these real networks, which shows the significant efficiency advantages of ForestSim-AP.
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-k similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node $u$ in spanni...
We use six labeled networks [51, 52, 43] to evaluate the effectiveness of the studied measures. These datasets are extensively used in machine learning studies [52, 43, 53]. Each vertex in the datasets has a label related to its topology, and we thus use the labels as the ground truth of roles. Related information abo...
The key point of analyzing structural roles is figuring out how a vertex connects with its context nodes [43]. To some extent, the sizes of those trees rooted at $u$ in the spanning rooted forests of a graph reflect the connection mode between the node $u$ and its context vertices. Here, we use the a...
In this subsection, we evaluate the efficiency of studied metrics on 20 real networks, all of which are publicly available in Koblenz Network Collection [49] and Network Repository [50]. These real-world networks are preprocessed as simple graphs. In each network, we find the top-10 similar nodes for all vertices. Sta...
B
Evaluation: According to our extensive experimental results, LSA achieves impressive aspect sentiment coherency prediction results. Moreover, our ensemble LSA model also obtains state-of-the-art aspect sentiment classification performance on five public datasets.
In the review about a restaurant in Fig. 1, the reviewer expresses positive sentiments about the atmosphere, food, and service but remains neutral about dinner and drinks. This tendency to express similar sentiments about related aspects (e.g., atmosphere, food, and service) is what we refer to as sentiment coherency.
Table 6: Examples of aspect sentiment coherency found by LSA. The target aspects are denoted in bold and the underlined words indicate the aspects with coherent sentiments. “Pos”, “Neg” and “Neu” represent positive, negative and neutral, respectively.
where $p^{c}_{i}$ and $p^{a}_{j}$ are the positions of...
In this work, we make efforts to address an intriguing problem within ABSC that has been overlooked in existing research, i.e., “aspect sentiment coherency”, which focuses on modeling aspects that share similar sentiments. For instance, in the sentence “This laptop has a lot of storage, and so does the battery capacity...
A
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L}:\mathbb{R}^{N}\rightarrow\mathcal{P}^{p}(\mathbb{S})$...
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic...
The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop...
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me...
gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial guess. The main ...
B
Quantitatively, we adopt the signal-to-noise ratio, $SNR=\|\bm{A}\|_{2}^{2}/\|\bm{A}-\widetilde{\bm{A}}\|_{2}^{2}$...
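A minimal sketch of this metric follows; using the Frobenius norm as a stand-in for $\|\cdot\|_2$ on matrices is our assumption, not necessarily the text's convention:

```python
import numpy as np

def snr(A, A_tilde):
    """SNR = ||A||^2 / ||A - A_tilde||^2, where A is the reference matrix
    and A_tilde its (noisy or approximated) counterpart.

    Squared Frobenius norms are used here; larger values mean the
    approximation error is small relative to the signal energy.
    """
    return np.sum(A**2) / np.sum((A - A_tilde)**2)
```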
Figure 3. QuantumNAT Overview. (1) Post-measurement normalization matches the distribution of measurement results between noise-free simulation and real QC. (2) Based on realistic noise models, noise-injection inserts quantum error gates to the training process to increase the classification margin between classes. (3)...
Compatibility with existing noise mitigation. QuantumNAT is orthogonal to existing noise mitigation such as extrapolation method. It can be combined with post-measurement normalization (Table 4). The QNN model has 2 blocks, each with three U3+CU3 layers. For “Normalization only”, the measurement outcomes of the 3-laye...
QuantumNAT comprises a three-stage pipeline. The first step, post-measurement normalization normalizes the measurement outcomes on each quantum bit (qubit) across data samples, thus removing the quantum error-induced distribution shift. Furthermore, we inject noise to the PQC training process by performing error gate i...
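One plausible form of such per-qubit normalization across data samples is a standardization, sketched below; this specific zero-mean/unit-variance form is our assumption for illustration, not necessarily QuantumNAT's exact formula:

```python
import numpy as np

def normalize_measurements(M):
    """Standardize the measurement outcome of each qubit across data
    samples, shifting and scaling the (possibly noise-shifted) per-qubit
    distribution to zero mean and unit variance.

    M : (n_samples, n_qubits) array of measured expectation values.
    """
    mean = M.mean(axis=0, keepdims=True)      # per-qubit mean over samples
    std = M.std(axis=0, keepdims=True) + 1e-8  # avoid division by zero
    return (M - mean) / std
```

After this step, a constant offset or rescaling introduced by quantum errors on any single qubit no longer shifts its outcome distribution relative to the noise-free simulation.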
Although the normalization above mitigates error impacts, we can still observe small discrepancies on each individual measurement outcome, which degrade the accuracy. Therefore, to make the QNN model robust to those errors, we propose noise injection to the training process.
D
We extensively evaluate the proposed EDA on object tracking. The experimental results demonstrate the superiority of EDA over other state-of-the-art event-based tracking methods and several popular conventional tracking methods. In addition, the estimated true event trajectories corresponding to object motions are also...
Event-based methods have achieved promising performance on various tasks gallego2022event . However, the study of the fundamental event data association problem is still challenging and in its infancy. Unlike a traditional camera, an event camera only sparsely emits binary (i.e., On and Off) retinal events at the edges...
Compared with the conventional tracking methods, event-based tracking methods barranco2018real ; camunas2017event ; glover2017robust , which fuse the event data, show their superiority under fast motions and HDR scenes. The recently proposed event-based tracking methods have achieved state-of-the-art performance by lev...
As mentioned previously, event cameras have shown their great potential on various computer vision tasks gallego2022event . Different from conventional cameras that are synchronized to the global camera shutter, these bio-inspired vision sensors work in an asynchronous way with ultra low microsecond latencies in respon...
As one of the most delicate neural systems, biological retinas can precisely and efficiently capture object motions olveczky2003segregation . Inspired by the biological retina, asynchronous event cameras, such as DVS lichtsteiner2008128 , DAVIS brandli2014240 and ATIS posch2011qvga , have been proposed to mimic it. Th...
C
As a connected vertex-ordering of $H$ can be obtained in linear time using a standard graph traversal algorithm, and a colouring of $G'$ may be computed in $O(mn)$ time [12, Chapter 5.7], we concl...
We now prove our main result, that there are no ugly perfect graphs. This generalizes the same fact which was previously proved for Meyniel graphs [22] (a class which contains chordal graphs, HHD-free graphs, Gallai graphs, parity graphs, distance-hereditary graphs…) and line graphs of bipartite graphs [3]. Our proof ...
class of perfect graphs. We also give a simple and constructive proof for comparability graphs (which are perfect). Note that there exist bad graphs in these graph classes; consider for example the fish graph, which is $K_4$-minor-free and comparability; see...
Recently, connected greedy edge-colourings (equivalently, connected greedy colourings of line graphs) have been studied in [3], and it was proved that there is no line graph of a bipartite graph that is ugly. Moreover, a careful analysis of the proof of [3] gives an algorithm running in time $O(n...
Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time $O(m+n)$ for chordal graphs using the LexBFS algorithm [21]; for Meyniel graphs this can be done in time $O(n^{2})$...
A
KD was first proposed by [59], which aims to transfer knowledge from a trained neural network to a smaller one without losing too much generalization power. There are three types of existing KD methods: response-based [59, 60, 61] and feature-based [62] methods require labels to utilize intermediate-level supervision f...
Unlike previous DR and GE methods, the proposed GenURL can import extra prior knowledge and is robust to highly redundant data; additionally, different from GE and SSL methods, GenURL is agnostic to network structures and predefined proxy tasks. Extensive experiments conducted on benchmarks of four URL tasks (self-supe...
Firstly, we compare the effects of hyper-parameters in GenURL for DR and SSL tasks. As shown in Figure 12, GenURL prefers a smaller $\nu_Z$, i.e., using $\nu=0.01$ to balance the local and global structures. Figure 5 shows that...
We summarize and analyze the above URL approaches and propose GenURL, a framework that successfully links the two important elements of URLs, data geometry and task hypotheses, based on generalized similarity. GenURL not only takes into account the data geometries that are the focus of DE and GE but also introduces tas...
Then, we compare how GenURL deals with the negative samples in SSL and KD tasks. In Figure 7, we find that GenURL prefers similar $\nu_Z$ and $\sigma$ for both SSL and KD tasks, which indicates using $\nu_Z=100$...
C
Table 6: Our NAS method outperforms existing state-of-the-art tiny networks in terms of computation-accuracy trade-off, especially under tiny computation settings (<50M). All our models are derived from the same search space, while obtaining the best accuracy at different budgets. For models with *, we re-measure the M...
We provide the face detection results on WIDER FACE validation set with RNNPool-Face-Quant [43] and MCUNetV2-S. The quantitative results are shown in Table 7, where we follow [43] to calculate the peak memory. Our model has better mAP at 1.3× smaller peak memory.
We provide the face detection results on WIDER FACE validation set with RNNPool-Face-Quant [43] and MCUNetV2-S. The quantitative results are shown in Table 7, where we follow [43] to calculate the peak memory. Our model has better mAP at 1.3× smaller peak memory. The qualitative results are shown in Figure 12. O...
A
Main Results. Table I shows the results of different models for CECE on two leaderboards. The single model we propose overwhelmingly outperforms the baseline on both leaderboards and achieves encouraging 65.9% and 77.0% improvements in F1-score over the baseline meth...
The ICDM 2020 Knowledge Graph Contest is a competition-style event co-located with the leading ICDM conference. This paper describes our solution for the consumer event-cause extraction task, and we won 1st place in the first stage leaderboard and 3rd place in the final stage leaderboard. Extracting causes of consumer...
In the 2020 ICDM Competition 333https://www.biendata.xyz/competition/icdm_2020_kgc/, the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a ran...
D
To measure the destruction of invariance brought by those strategies quantitatively, we compute the cosine similarity between embeddings of the original graph and the augmented ones. The graph embeddings are extracted by Node2vec [7]. The upper part of each augmentation strategy shows an augmented graph preserving high invari...
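The similarity computation itself can be sketched as a generic cosine similarity over two embedding vectors (obtaining the Node2vec embeddings is out of scope here):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, e.g. graph-level
    embeddings of the original graph and an augmented one.

    Values near 1 indicate the augmentation preserved the embedding
    (high invariance); small or negative values indicate it was destroyed.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```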
You et al. [35] reach similar conclusions after testing various augmentation strategies. For instance, edge perturbation is more suitable for social networks but hurts biochemical molecules. Facing the aforementioned issue, many researchers have turned to exploring the possibility of discarding data augmentation from contrastiv...
Data augmentation has achieved great success in image data where the invariance of various views (e.g., color-invariant, rotation-invariant, and resizing-invariant) are well-understood [3, 31]. However, due to the complex structural information and the coupling between nodes in the graph, the changes induced by the da...
Recent research has been dedicated to exploring the non-necessity of graph data augmentation. AF-GRL [13] develops an augmentation-free framework, generating views by exploiting nodes sharing local structural information and global semantics.
To remedy the issue of unstable invariance from inappropriate data augmentations, we propose a novel graph-level contrastive learning framework named CGCL, where no handcrafted graph augmentation is needed. CGCL uses multiple GNN-based graph encoders to enforce contrastive learning in a collaborative way, remedying the...
A
Different number of symbols. Here we study the impact of a different number of symbols on compositionality. In our experiments we used the communication channel with $|\mathcal{A}_s|=5$, giving a total of $25=(5\times 5)$ ...
Interestingly, the topographic similarity values for the small noise regime (up to $0.08$) do not improve over the baseline value ($0.79$), see Figure 5 (the 8×8 grid). This behavior changes for medium to large values of noise, where we can observe a visible increase in topo, peaking at $0.88$...
It turns out that allowing for only $16$ messages makes the training less stable. For topographic similarity, see Figure 5 (the 4×4 grid); small to medium values of noise exhibit wide confidence intervals and it is statistically hard to distinguish between the metric values (this might be attributed to a bimo...
The accuracy drops as the noise level increases, as expected; however, the speed of the decline also increases. This shows that there is an interesting compositionality-accuracy trade-off. The bottom panel of Figure 4 complements the overall picture with a visualization of the metrics' distribution. We see interestin...
The results for topographic similarity are presented in the variable-noise panels of Figure 5, where $\epsilon_{0}=0$ and $\epsilon_{T}=0.1$...
B
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensio...
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The aut...
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while the authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] propose learning limited-duration CBFs, and the work in [39] learns signed ...
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an...
D
There exists an oracle relative to which $\mathsf{NP}^{\mathsf{BQP}}\not\subset\mathsf{BQP}^{\mathsf{NP}}$,...
Theorem 10 says, in effect, that there is no relativizing obstruction to $\mathsf{BQP}$ being inordinately powerful even while $\mathsf{NP}$ is inordinately weak. It substantially extends the Raz-Tal Theorem, that there is an oracle relative to which $\mathsf{BQP}\not\subset\mathsf{PH}$...
Given the experience of classical complexity theory, it would be reasonable to hope for a theorem showing that, if $\mathsf{NP}\subseteq\mathsf{BQP}$, then $\mathsf{PH}$ collapses, analogous to the Karp-Lipton Theorem [KL80], that if $\mathsf{NP}\subset\mathsf{P/poly}$...
As mentioned earlier, Theorem 3 resolves an open problem of Fortnow [For05], and demonstrates a clear difference between $\mathsf{BPP}$ and $\mathsf{BQP}$ that exemplifies the impossibility of pulling the randomness out of a quantum algorithm. Indeed, Theorem 3 shows that t...
So what is it that distinguishes $\mathsf{BPP}$ from $\mathsf{BQP}$ in these cases? In all of the above examples, the answer turns out to be one of the fundamental properties of classical randomized algorithms: namely, that one can always “pull the randomness out” from suc...
C
$\sum_{h=0}^{\infty}\left(\dim k[\mathbf{x}^{(\leqslant h)}]/I^{(\infty)}\right)\cdot t^{h}=\frac{m}{1-mt}?$
The starting point of our proof of the lower bound uses the insightful conjecture by Afsharijoo [1, Section 5] suggesting how the standard monomials of $\mathcal{I}_m^{(\infty)}$ with ...
Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]). In particular, they used Hilbert-Poincare series of similar nature to (1) (motivat...
Note that the series does not depend on the multiplicity $m$ of the point. One way to capture the scheme structure of $\mathcal{L}(X)$ could be to take the components of the projections in (3) with their multiplicities.
The authors are grateful to Joris van der Hoeven, Hussein Mourtada, Bernd Sturmfels, Dmitry Trushin, and the referees for helpful discussions. We thank Yassine El Maazouz and Claudia Fevola for their support with making the Mathrepo webpage. GP was partially supported by NSF grants DMS-1853482, DMS-1760448, and DMS-185...
D
DCDFM can also generate a signed network by setting $\mathbb{P}(A(i,j)=1)=\frac{1+\Omega(i,j)}{2}$ and $\mathbb{P}(A(i,j)=-1)=\frac{1-\Omega(i,j)}{2}$...
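The sampling rule above can be sketched as follows (a hypothetical helper; entries of `Omega` are assumed to lie in $[-1,1]$ so that both probabilities are valid):

```python
import numpy as np

def sample_signed_adjacency(Omega, rng=None):
    """Sample a signed adjacency matrix entrywise under
        P(A(i,j) =  1) = (1 + Omega(i,j)) / 2,
        P(A(i,j) = -1) = (1 - Omega(i,j)) / 2.

    Omega : array with entries in [-1, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    U = rng.random(Omega.shape)            # i.i.d. Uniform(0, 1) draws
    return np.where(U < (1 + Omega) / 2, 1, -1)
```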
Following a similar analysis to [16], we let $\mathcal{F}$ be some specific distributions as examples to show the generality of DCDFM as well as nDFA's consistent estimation under DCDFM. For $i,j\in[n]$, we mainly bound $\gamma$ to show that $\gamma$...
Eq. (5) means that we only assume all elements of $A$ are independent random variables and $\mathbb{E}[A]=\Theta ZPZ'\Theta$ without any p...
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model extends previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which node degrees vary, and it ...
Since our model DCDFM has no limitation on the choice of distribution $\mathcal{F}$ as long as Eq. (5) holds, setting $\mathcal{F}$ as any other distribution (see, e.g., the Double exponential, Exponential, Gamma, and Uniform distributions in http://www.stat.rice.edu/~dobelman/courses/texts/distributions...
D
To estimate the performance of MA-Trace we run training for 3 days or until convergence. We report the median win rate of 10 runs (with different random seeds) along with the interquartile range. Training curves for all the tasks can be found in Appendix G.
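For reference, the reported statistics reduce to a median and an interquartile range over the per-seed results; a minimal sketch with hypothetical win-rate values:

```python
import numpy as np

# Hypothetical win rates from 10 runs with different random seeds.
win_rates = np.array([0.91, 0.88, 0.95, 0.85, 0.90,
                      0.93, 0.87, 0.92, 0.89, 0.94])

median_win_rate = np.median(win_rates)           # reported central value
q25, q75 = np.percentile(win_rates, [25, 75])    # quartiles
iqr = q75 - q25                                  # interquartile range
```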
Below we present a comprehensive list of ablations to evaluate the design choices of our algorithm. In each case, we present training curves for the tasks that best illustrate our claims. For the complete training results and more details, we refer to Appendix E.
We evaluate MA-Trace on StarCraft Multi-Agent Challenge Samvelyan et al. (2019a) – a standard benchmark for multi-agent algorithms. Our approach achieves competitive performance on all tasks and exceeds state-of-the-art results on some of them. Additionally, we provide a comprehensive set of ablations to quantify the i...
In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the...
B
(ii) during this phase, the ML expert should choose which features are important for the active models compared to all models (see Figure 1(b)); (iii) in the next exploration phase, both experts should examine which decisions explain the data set globally and decide upon impactful decisions for a specific test instance...
VisRuler incorporates a single workflow for the synchronous co-located collaboration between the ML expert and the domain expert, as depicted in Figure 3. It is a VA system that comprises multiple coordinated views arranged in a single webpage to manage the entire process without any distractions occurring due to the nav...
The tool consists of five main interactive visualization panels (Figure 1): (a) models overview (T1), (b) global feature ranking (T2), (c) decisions space (T3), (d) manual decisions (T4), and (e) decisions evaluation (T5). Our proposed workflow is a two-party system with the ML expert on the one side and the domain exp...
(iv) in this same phase, the domain expert should interpret the manual decisions selected in order to gain insights about the models’ decisions—either globally or locally—for a particular test instance (Figure 1(d)); and (v) in the final phase, the domain expert can evaluate the agreement and extract suitable manual de...
C
An electromagnetic wave that propagates from the transmitter (Tx) to the receiver (Rx) of a wireless communications system is characterized, inter alia, by its polarization, i.e., orientation of the field vector. Interaction of a wave (multipath component, MPC) with environmental objects may change the orientation, an...
In this section, we present simulation results focusing on the system performance when applying the proposed PR-HS-MIMO scheme. The results include the improvement in channel capacity and the comparison of EW and global polarization reconfiguration schemes in terms of channel capacity and selected Tx antenna indices. In t...
In particular, several recent research works present the benefit of utilizing the polarization domain in recently proposed communication schemes including, but not limited to, MIMO spatial multiplexing [1]; spatial modulation (SM) [2, 3, 4]; non-orthogonal multiple access (NOMA) [5]; and beamforming [6, 7]. It is vali...
It has been demonstrated that utilizing the polarization domain may increase channel capacity and spectral efficiency, and improve symbol error rate (SER) [15, 16, 17, 1, 18]. For this reason, the impact of polarization on wireless communication systems has been regarded as a promising research topic [1, 19, 20, 21, 22...
Various other aspects of polarization in MIMO systems have been investigated as well. Ref. [16] showed that space-time block coding (STBC) with single polarization outperforms STBC with dual polarization in Rayleigh and Ricean fading channels. A MIMO system with dual-polarized antenna elements can have lower spatial di...
C
Packing problems are omnipresent in our daily lives and likewise appear in many large-scale industries. For instance, two-dimensional versions of packing arise when a given set of pieces has to be cut out from a large piece of material such that waste is minimized.
such that if she were to cut them out in an online fashion as directed by the butler, she would come short even if she had access to all the curtains in the entire von Trapp villa or even in the world. Remarkably, this even holds if all the pieces have diameter at most 1 cm or any other arbitrarily small fixed size.
Undoubtedly, this task did require careful planning of how to cut out the individual pieces of curtain fabric in order not to run out. In practical settings, it is often important that the inherent structure of the host material (grain of fabric, patterns, etc.) is respected, i.e., the pieces should not be arbitrarily ...
This is relevant in clothing production where pieces are cut out from a strip of fabric, and similarly in leather, glass, wood, and sheet metal cutting. In the 1965 American classic The Sound of Music, Maria sewed playclothes for the seven von Trapp children, Liesl, Friedrich, Louisa, Kurt, Brigitta, Marta, and Gretl, ...
The objects may in some applications appear in an online fashion, i.e., the pieces are given one after the other, and each of them must be placed before the next one is known. For example, in a scene which was not included in the final cinema edition of The Sound of Music, the butler of the von Trapp family was singing...
C
WFLW: This dataset, from [38], contains 10,000 faces, with 7,500 in the training set and 2,500 in the test set. All images are collected from the WIDER FACE dataset [40] and manually labeled with 98 landmarks. The dataset contains different test subsets where the image appearances vary due to variations in pose, ex...
Metrics: Following the official challenge [14, 37], mean radial error (MRE) and successful detection rate (SDR) in four radii (2mm, 2.5mm, 3mm, and 4mm) are applied, based on the Euclidean distance between prediction and ground truth. In addition, similarity via Eq. (9) is demonstrated for comparison. For the WFLW data...
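In code, MRE and SDR as defined above reduce to a few lines; a sketch with hypothetical landmark arrays, where `pred` and `gt` hold per-landmark (x, y) coordinates in mm:

```python
import numpy as np

def mre_and_sdr(pred, gt, radii=(2.0, 2.5, 3.0, 4.0)):
    """Mean radial error and successful detection rate: the fraction of
    landmarks whose Euclidean distance to ground truth is within each radius."""
    errors = np.linalg.norm(pred - gt, axis=-1)   # per-landmark radial error
    mre = float(errors.mean())
    sdr = {r: float((errors <= r).mean()) for r in radii}
    return mre, sdr
```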
Figure 1: The distribution of the mean radial error (MRE) when choosing a different image as a template in one-shot medical landmark detection task. The x-axis refers to MRE and the y-axis refers to the percentage of MRE lying in the corresponding ranges. Evidently, the choice of template affects the performance signi...
Cephalometric Xray: It is a widely-used public dataset for cephalometric landmark detection, containing 400 radiographs, and is provided in IEEE ISBI 2015 Challenge [14, 37]. There are 19 landmarks of anatomical significance labeled by 2 expert doctors in each radiograph. The averaged version of annotations by two doct...
However, during our research following the work of [42], we observe an interesting phenomenon (see Figure 1): the template choice highly impacts the final performance. The mean radial error (MRE) of our trained model varies from 2.9mm under the “best” template to 4.5mm under the “worst” template. It is evident that the...
A
(1) We provide a general Mixed Membership Distribution-Free (MMDF for short) model for overlapping weighted networks in which a node can belong to multiple communities and an edge weight can be any real number. MMDF allows edge weights to follow any distribution as long as the expected adjacency matrix has a block str...
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild c...
(Comparison to LFR benchmark networks) In [44], the authors proposed LFR benchmark graphs for testing community detection algorithms on non-overlapping unweighted networks. In [45], the authors proposed generalizations of LFR benchmark networks for testing community detection methods on overlapping weighted graphs. \ad...
(2) We use a spectral algorithm to fit MMDF. We show that the proposed algorithm stably yields consistent community detection under MMDF. In particular, theoretical results for edge weights following any specific distribution can be obtained immediately from our general results.
D
In addition, we also perform detailed ablation studies on how the effectiveness of CwD is influenced by factors such as the number of classes at the initial CIL phase, the number of exemplars for each class and regularization coefficient of the CwD term.
Inspired by this, we consider improving CIL from a novel perspective—encouraging the CIL learner to mimic the oracle model in the initial phase. To achieve this, we first need to understand the difference between representations produced by a naïvely-trained initial-phase model and the oracle model.
Specifically, at the initial phase, we regularize the CIL learner to produce similar representations as the model trained with data of all classes (i.e., the oracle model), since the upper bound of CIL is the oracle model. According to our results, this additional regularization drastically improves CIL performance.
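One simple way to encourage representations of each class to scatter uniformly, sketched below under our own illustrative formulation (not necessarily the paper's exact regularizer), is to penalize the mean squared entry of the per-class correlation matrix of the embeddings:

```python
import numpy as np

def decorrelation_penalty(z, eps=1e-8):
    """Mean squared entry of the correlation matrix of embeddings z (N x d).
    Highly correlated dimensions give a larger penalty; minimizing it
    pushes the embeddings to scatter more uniformly."""
    z = z - z.mean(axis=0, keepdims=True)          # center each dimension
    z = z / (z.std(axis=0, keepdims=True) + eps)   # normalize each dimension
    corr = (z.T @ z) / z.shape[0]                  # d x d correlation matrix
    return float((corr ** 2).sum() / corr.shape[0] ** 2)
```

Embeddings collapsed along one direction (strongly correlated dimensions) incur a larger penalty than well-spread ones.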
The contributions of this paper are as follows: 1) We empirically discover that encouraging the CIL learner to mimic the oracle model in the initial phase can boost the CIL performance. 2) We find that, compared with the naïvely-trained initial-phase model, data representations of each class produced by the oracle model sca...
We are thus motivated to enforce data representations of each class to be more uniformly scattered at the initial phase, which mimics the representations produced by the oracle model. To this end, we first theoretically show that, a group of embeddings will scatter more uniformly in the space if its correlation matrix ...
C
Each step of the proposed registration method can process only one MRI sequence at a time. Thus, we first verified on the training set of the BraTS-Reg challenge dataset which sequence was the best to guide the registration in both the rigid and non-rigid approaches. Our final solution used the T2 images in the paramet...
Team SuperX: This method consists of two steps: i) rigid registration using the Nelder-Mead method (also known as the downhill simplex method), with an affine transformation applied to the floating image, and ii) non-rigid registration using free-form deformations. In the ...
This method consists of two steps: i) rigid registration using the Nelder-Mead method (also known as the downhill simplex method), with an affine transformation applied to the floating image, and ii) non-rigid registration using free-form deformations. In the non-rigid re...
A
Assume, towards a contradiction, that there is an atom $\alpha$ of $Q$ over a relation name $R$ that uses orphan variables $x$ and $w$ at positions $(R,A)$ and $(R,A')$ ...
If the condition of line 7 is satisfied by $Q'$, then it is also satisfied by $Q$ as we have not changed constants or removed variables from primary-lhs positions, which contradicts Lemma 29.
If the condition of line 1 is satisfied by $Q'$, then it is also satisfied by $Q$ as we have only removed an orphan variable, which does not affect the complex part of the query. This is a contradiction to Lemma 29.
D
For this linearization, we require that $0<t<1/\max_{i}\{\omega_{i}/(h_{i}\omega_{i}+\delta_{i})\}$ ...
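Numerically, the admissible range of the Markov time $t$ can be checked directly; a sketch with illustrative parameter values (not from the paper), where `omega`, `h`, and `delta` stand for the quantities in the inequality:

```python
import numpy as np

# Illustrative values for the rates appearing in the bound.
omega = np.array([1.0, 2.0, 0.5])
h     = np.array([1.0, 1.0, 2.0])
delta = np.array([0.1, 0.5, 0.2])

# Upper end of the admissible range: 1 / max_i { omega_i / (h_i omega_i + delta_i) }.
t_max = 1.0 / np.max(omega / (h * omega + delta))
# Any Markov time t with 0 < t < t_max satisfies the requirement.
```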
In our adaptations of InfoMap to absorbing random walks, we introduce a family of associated absorption-scaled graphs and then apply Markov time sweeping to these absorption-scaled graphs. To illustrate how the node-absorption rates impact the communities that we detect, consider the matrix $P_{l}$ ...
We use the matrix $H$ in Definition 5 because this matrix allows us to tune the relative effects of the edge weights and the node-absorption rates on the communities that we detect using our adaptations of InfoMap. In Algorithms 1a and 1b, we summarize our adaptations of InfoMap.
We study an example that plays a similar role to the example of Salathé and Jones [38]. In our example, setting the absorption rates of bridge nodes to larger values than the absorption rates of other nodes is analogous to removing community bridges. Unlike in the example of Salathé and Jones, our example uses the sam...
Our paper proceeds as follows. In Section 2, we present the original InfoMap algorithm and the Markov time-sweeping technique that we use in our adaptations of InfoMap. In Section 3, we apply InfoMap to absorption-scaled graphs. In Section 4, we introduce a definition of a map function $L^{(a)}$ ...
B
In addition, we see that performance increases with increase in $p_b$, number of nodes, or network link density, as expected due to availability of better trees/paths; it also increases with decrease in $(s,d)$ distance ...
on a path may have very different lengths. In particular, we pick link lengths randomly in the range of 10 to 50 km. With this setting, we see that DP-Approx performs much better than Balanced-Tree, and in some cases, up to 100% better. Note that Balanced-Tree and Caleffi have similar performance over linear graphs, ...
Among our schemes, we use DP-OPT, DP-Approx and Balanced-Tree (see §IV-B) for the QNR-SP problem, and LP (Appendix A) and ITER schemes for the QNR problem. For ITER, we use three schemes: ITER-DPA, ITER-Bal and ITER-SP, which iterate over DP-Approx, Balanced-Tree and SP respectively. To be comprehensive,
Now, in Fig. 10(c), we demonstrate the effect of the decoherence time of the quantum memories used in nodes. Here, we use 30-35 km links. We see that even with a decoherence time as low as 100 ms, DP-Approx is able to create EPs over up to 200 km while Balanced-Tree can only create EPs for paths up to 120 km; they perform sim...
Fig. 10(a)-(b) shows EP generation rates and fidelity for path lengths of 500 km and 1000 km for varying link lengths, for the single-tree schemes DP-Approx and Balanced-Tree. Q-Cast and Delft-LP are not shown as their EP rate is near-zero ($\leq 10^{-20}$)...
D
Chen et al. [114] introduce a sequential latent environment model learned with RL and a probabilistic graphical model-based approach interpreting autonomous cars' actions via a bird's-eye mask. They use video cameras and LIDAR images as input in the CARLA simulator [115]. For the purpose of interpretability of actions an...
In another probabilistic decision-making model, Wang et al. [118] approach the lane-merging task as a dynamic process and integrate internal states into a joint Hidden Markov Model (HMM) and Gaussian Mixture Regression (GMM). The experiments conducted on the
Rjoub et al. [122] have shown that federated deep RL combined with XAI can lead to trusted autonomous driving. They use a federated learning approach for decision-making and leverage edge computing that enables different devices to train an ML model in a collaborative manner. The model is first developed on the paramet...
making has further been explored by subsequent studies as well [18, 8, 91]. While the mentioned studies focus on vision-based explanations of already obtained predictions of the model, there have been some recent studies paying attention to counterfactual explanations. In the context of automated driving, counterfactu...
A
From Table 4, Ghost-NetVLAD achieves better performance when GhostCNN is pre-trained on the Places-365 dataset. Compared with ImageNet, the Places-365 dataset includes more building categories, such as museums and palaces, of which only a small proportion appears in ImageNet. Compared with the model pre-...
Figure 3: Ghost module with dilated convolution filter. We add dilated convolutions to the first step of the Ghost module. Sub-figure (A) shows the 3×3 receptive field of the 1-dilated convolution filter (i.e., ordinary convolution). (B) reveals the 7×7 receptive field of the 2-dilated convoluti...
In this paper, to improve the original NetVLAD, we propose a lightweight model that makes a good trade-off between accuracy and model efficiency. The experimental results show that the proposed model, Ghost-dil-NetVLAD (i.e., Ghost-NetVLAD with dilated convolutions), achieves similar accuracy to VGG16-NetVLAD and outp...
Dilated convolution can expand the receptive field to get multi-scale context information. To further improve the accuracy of Ghost-NetVLAD, we try to apply dilated convolutions to GhostCNN on the premise of not increasing the model size and training speed. We vary the dilated rate to validate our hypothesis. From Tabl...
From Fig. 3, the dilated convolution part shows the different receptive fields of the convolution filter under different dilated rates. For different dilated rates, the number of parameters associated with each layer is identical. Intuitively, when multiple convolution kernels with different dilation rates are superimposed...
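The receptive-field growth described above follows a simple recurrence: with stride 1, each layer with kernel size k and dilation d widens the receptive field by (k-1)·d. A minimal sketch:

```python
def receptive_field(layers):
    """Receptive field (along one axis) of stacked stride-1 convolutions,
    each layer given as (kernel_size, dilation)."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d      # a k x k kernel with dilation d adds (k-1)*d
    return rf

# A single 1-dilated 3x3 conv sees 3x3; superimposing a 2-dilated 3x3 conv
# on top widens the field to 7x7, matching the receptive fields in Fig. 3.
```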
C
In Chapter 2, we collect all the notations, definitions and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm to solve Boolean equations systems and the algebraic attack to nonlinear filter generators presented in [10].
In Chapter 4, to validate our algebraic attack, we first apply it to two toy stream ciphers and then show that it is feasible to perform it on WG-PRNG. We conclude by showing that the security of WG-PRNG is less than claimed until now. For the sake of presentation, we will first describe the part regarding WG-PRNG, an...
Stream ciphers [16] are one of the main cryptographic primitives used in symmetric cryptography. Historically, the first stream ciphers were built with “linear” registers, where linearity is meant both in the register update function (which sends one state to the next) and in the output function, which computes the keys...
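A toy linear register of this kind (a hypothetical LFSR sketch, not any cipher discussed here) makes both sources of linearity explicit:

```python
def lfsr_keystream(state, taps, nbits):
    """Toy LFSR: the update (shift plus XOR of tap bits) and the output
    (the leading register bit) are both linear over GF(2)."""
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[0])                 # linear output function
        feedback = 0
        for t in taps:
            feedback ^= state[t]             # linear update function
        state = state[1:] + [feedback]
    return out
```

Nonlinear filter generators instead apply a nonlinear Boolean function to the register state to produce the keystream, which is the setting the attack targets.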
In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ...
The main aim of our work is not to effectively perform the attack described in Section 3 on WG-PRNG, but to estimate how many keystream bits one needs to successfully perform the attack on WG-PRNG. We will show that knowing less than $2^{18}$ keystream bits, ...
A
Figure 6. Gain comparison of best response (BR), local best response (LBR - only poker), and continual depth-limited best response (CDBR) in Leduc Hold’em (top) and IIGoofspiel 5 (bottom) against strategies from CFR using a small number of iterations (left) and random strategies (right). The a stands for the average o...
Opponent modeling and exploitation is an essential topic in computational game theory, with many approaches attempting to model and exploit opponents in various games. However, exploiting opponents in very large games is not trivial, and only recently was an algorithm created to exploit models in depth-limited solving....
Fully exploiting opponent models in small games boils down to computing a best response. This is infeasible in games with an intractable number of information sets, for which we use the continual depth-limited solving algorithms. The depth-limited setting no longer allows computing BR in one pass. The game we alr...
In practice, we will compute CDBR similarly to depth-limited solving with a few key changes. First, we fix the opponent’s strategy in the currently resolved part of the game to allow the player to respond to it, which corresponds to the argmax from the definition. Another key change that simplifies the algorithm is th...
C
By Theorem 2.2 we have concentration for $q^*(G_p)$ around $\mathbb{E}[q^*(G_p)]$ ...
In ecological networks, each observed interaction reveals that an edge is present in the underlying network, and the effect of sampling effort can be modelled by taking the observed network after varying numbers of observations. It was noted in multiple papers on ecological networks that a lower sampling effort, under-s...
Figure 1.1: Simulation results. The dolphin social network [29] with 62 vertices and 159 edges was taken to be the underlying graph $G$. It is known that $q^*(G)=0.529$ to three decimal places [7]. In the ...
Network data of interest for clustering often has weights associated with each edge. Though we have stated the modularity score of a partition for binary edge weights, it is simple to take the weight of edges inside each part (instead of the number of edges) and to take the degree of a vertex $v$ to be th...
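The weighted variant described here can be sketched directly from the definition (assuming a symmetric weight matrix `W` and one community label per vertex):

```python
import numpy as np

def weighted_modularity(W, labels):
    """Modularity with edge weights: edge counts are replaced by weights
    and vertex degrees by weighted degrees (row sums of W)."""
    two_m = W.sum()                  # twice the total edge weight
    k = W.sum(axis=1)                # weighted degrees
    same = np.equal.outer(labels, labels)   # same-community indicator
    return float(((W - np.outer(k, k) / two_m) * same).sum() / two_m)
```

On a graph that is two disjoint unit-weight dyads, the natural two-part partition scores 0.5, as expected for fully separated communities.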
The plan of the rest of the paper is as follows. In Section 2, we first show an application of our results to the stochastic block model in Section 2.1, then provide an overview of the further results of this paper in Section 2.2 and lastly give some background and the relation of this paper to previous results in Sec...
A
Note: Sample of academic contributions about migration and environmental factors from Scopus, Web of Science, Google Scholar, IDEAS RePEc, and previous meta-analyses (Hoffmann et al., 2020; Beine and Jeusette, 2021) collected, merged, screened and included by the authors.
This first step provides the most comprehensive sample of economic contributions on the relationship between climatic variations (and natural hazards) and human mobility, in all its different forms. We implement a systematic review aimed at mapping the body of literature and defining the boundaries of our focus. System...
Specifically, economic environmental migration is the object of publications in journals specialized in environmental sciences, geography, and social sciences such as urban studies, agriculture, demography, and political studies. A special mention must be made of development studies: many reviews and journals specia...
The first cluster (Cluster 1) is the most populated, counting 51 papers spanning the entire period considered (from 2003 to 2020). In terms of the type of analysis, it contains the largest variety: as in all clusters, quantitative studies represent the majority (76% of the full sample), but this cluste...
The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network anal...
B
We focus on the standard OCO setup as specified in Section 3.1. At iteration $t\in[T]$, the player first chooses the decision $\mathbf{x}_t\in\mathcal{X}$, then the environments reveal th...
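The protocol above is the usual OCO loop; as a point of reference, a plain online-gradient-descent player (a generic baseline, not the Sword algorithms of this paper) fits the same interaction pattern:

```python
import numpy as np

def ogd(grad_seq, x0, eta, project):
    """Online gradient descent under the OCO protocol: play x_t, observe
    the gradient of f_t at x_t, then step and project back onto the set X."""
    x = np.array(x0, dtype=float)
    played = []
    for g in grad_seq:               # environment reveals grad f_t(x_t)
        played.append(x.copy())
        x = project(x - eta * np.asarray(g, dtype=float))
    return played

project_box = lambda v: np.clip(v, 0.0, 1.0)   # X = [0, 1]^d as an example
```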
We resolve the question affirmatively. The new algorithm, called Sword++, also implements an online ensemble structure. Compared to Sword presented in Section 4.2, the key novel ingredient is the framework of collaborative online ensemble. We carefully introduce correction terms to the online loss and optimism, forming...
The overall algorithmic template implements a meta-base two-layer online ensemble. There are three crucial ingredients in collaborative online ensemble: (i) the surrogate loss, (ii) the surrogate optimism, and (iii) the correction terms. Additionally, the negative terms, hidden in the analysis, play a significant role ...
We emphasize that these ingredients are particularly important for achieving gradient-variation dynamic regret, which we will demonstrate to be more fundamental than the small-loss bound. In particular, our proposed Sword++ algorithm effectively utilizes negative terms and introduces correction terms to ensure effecti...
In this paper, we exploit the easiness of problem instances to enhance the universal dynamic regret. We propose two novel online ensemble algorithms, Sword and Sword++, for convex and smooth online learning. Both algorithms achieve a best-of-both-worlds dynamic regret of order $\mathcal{O}\big((1+P_T+\min\{V_T,F_T\})(1+P_T)\big)$ ...
B
and $x^-$ in ${}^{\omega}F(\sigma^n)$, then $x$ is formed of non-growing
Then $x^+$ and $x^-$ are one-sided fixed points. If $x^+$ is in $F(\sigma^n)^{\omega}$ ...
is right-prolongable on $u\in A^+$. If $x$ is in $F(\sigma^n)^{\omega}$ ...
non-growing letters and $x^+$ contains growing letters, then by Proposition 5.4 we have $x^+=\sigma^{n\omega}(u)$ ...
D
$R(\mathcal{T}_{n,d}^{k}(C_{n}))=\Omega\Big(\frac{1}{n}+\frac{C_{n}}{n}+\Big(\frac{C_{n}}{n}\Big)^{\frac{2}{2s+1}}\Big).$
This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers in Section 1.3). The models of smoothness for $f_0$ that we
$\mathcal{H}_d^k(1)$. This matches the optimal rate for estimation over Hölder classes (see Sadhanala et al. (2017) for a formal statement and proof for
smoothness index $k+1$ in each coordinate direction, and any third index $q\geq 1$, is indeed $n^{-2s/(2s+1)}$ for $s>1/2$ (or $2k+2>d$) ...
The result in Theorem 4 for $s\geq 1/2$ (that is, $2k+2\geq d$) was already derived in Sadhanala et al. (2017). More precisely, these authors established the third term on the right-hand side in
D
In contrast to previous studies that reported relatively low heritability in functional brain networks (Glahn et al., 2010; Xu et al., 2017; Korgaonkar et al., 2014; Wan et al., 2022), our findings indicate significantly higher heritability across various regions of the brain network. This discovery not only challenges ...
In this study, we proposed the topological clustering method for the estimation and quantification of dynamic state changes in time-varying brain networks. A coherent statistical theory, grounded in persistent homology, was developed, and we demonstrated the application of this method to resting-state fMRI data. Restin...
The predominant method for computing time-varying correlation in time series data, particularly in neuroimaging studies, involves Sliding Windows (SW). This technique entails computing correlations between brain regions across various time windows (Allen et al., 2014; Hutchison et al., 2013; Shakil et al., 2016; Mokht...
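The sliding-window computation described above can be sketched in a few lines. This is an illustrative sketch only; the function name, window length, and stride are our own choices, not taken from the cited studies:

```python
import numpy as np

def sliding_window_corr(x, y, win, step=1):
    """Pearson correlation between two regional time series over
    sliding windows of length `win`, advanced by `step` time points."""
    n = len(x)
    out = []
    for start in range(0, n - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        out.append(np.corrcoef(xs, ys)[0, 1])  # one correlation per window
    return np.array(out)
```

In practice neuroimaging pipelines compute this over all region pairs, and the window length must be chosen by hand, which is one source of the arbitrariness the passage alludes to.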
The Wasserstein distance or Kantorovich–Rubinstein metric, as originally defined between probability distributions, can be used to measure topological differences (Vallender, 1974; Canas and Rosasco, 2012; Berwald et al., 2018). Due to the connection to the optimal mass transport, which enjoys various optimal propertie...
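For intuition, the one-dimensional special case of the Wasserstein-1 distance between two equal-size empirical distributions has a closed form: under optimal transport on the line, sorted samples are matched in order. A minimal sketch (our own helper, not code from the cited works):

```python
import numpy as np

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two equal-size 1-D empirical
    distributions: optimal transport on the line matches sorted samples."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    return float(np.mean(np.abs(x - y)))
```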
Intraclass correlation (ICC) has long been recognized as a vital reliability and reproducibility metric, especially for gauging similarity in paired data when the order of pairing is not preserved (Chen et al., 2018; Sarker et al., 2023; Solís-Lemus et al., 2023). In brain imaging, it serves as a popular baseline for t...
D
$\theta\in[0,\pi)$ or $f_{s}(\theta,\tau)=0$ for all $\theta\in[0,\pi)$; if $\det(M(\tau))<0$ …
Note that, for any fixed $\tau\in\mathbb{R}_{>0}$, $\det(M(\tau))>0$ implies $|\mathrm{tr}(M(\tau))|>|a|$ …
$\theta\in[0,\pi)$ if, additionally, $\mathrm{tr}(M(\tau))=0$. Similarly, if $\det(M(\tau))<0$ then $f_{s}(\theta,\tau)=0$ …
First, we prove sufficiency. If $\det(M(\tau))>0$ for all $\tau\in(0,\tau_{1})$, then by Corollary 4 we know that for
$f_{s}(\theta,\tau)=0$ has no solutions; if $\det(M(\tau))=0$ then $f_{s}(\theta,\tau)=0$ …
A
This proof can be done by following the approach shown in [4]. Note that the parameter $\rho$ captures the effect of $\bar{k}_{5}$ presented in the pISSf definition (9).
In light of the aforementioned discussion, the main contributions of this paper are the following: Building upon the existing literature, we extend PDE safety research by designing a feedback-based control that satisfies both pISSf and ISSt under disturbances, utilizing pISSf barrier functional characterization and ISS...
According to the above proposition, if we can construct a safety barrier functional $\mathscr{B}$, we can guarantee the pISSf property for the PDE system given in (4). Let us now construct a pISSf barrier functional $\mathscr{B}$ that satisfies the two conditions presented in Proposition 1.
In the present section, we will show that the control gain conditions in Theorem 1 simultaneously satisfy the pISSf criterion in the sense of (9) and the ISSt criterion in the sense of (10). Following the results in existing literature [34], we can say that if there exists a functional $V(h)$ for...
In this section, we focus on the Input-to-State Safety condition mentioned in (9). First, we formulate the unsafe set $\mathscr{U}$ and the distance metric $|h|_{\mathscr{U}}$. Subsequently, we construct a control barrier fun...
B
Other health and wellbeing related topics that have been studied using passive sensing among workers include focus and awakeness. Soto et al. utilize biometric data from an arm-worn device (viz., physical activity, HR, skin response, skin temperature and respiration) to estimate workers’ stress, focus and awakeness [25]. The...
Passive sensing is increasingly involved in various aspects of our daily lives. Within the workplace, it has been used to monitor physiological factors of workers, promote work safety, enhance efficiency among other things [8]. Recently, the Tesserae [9] project involved over 700 information workers to investigate how...
We present a survey of existing research on the use of passive sensing technologies in the workplace to assess and promote wellbeing and productivity of the workforce. In this work, we consider “workplace” as the setting or place of employment where individuals perform tasks for their employer without regards to whethe...
Other works target efforts to support the design of future interventions in the workplace. Kimani et al. [30] created a conversational agent designed to assist information workers in achieving various work-related objectives, such as task scheduling and prioritization, task switching, providing reminders to take break...
Passive sensing technologies are also being embraced by organizations to promote employees’ wellbeing. Organizations make use of gamification, personalized recommendations or even offer incentive programs to encourage employees to be more active in their day-to-day life [8]. Researchers studying the use of wearable tec...
D
They formulate the problem and propose FedAvg as a solution to address main challenges in federated learning, such as massively distributed clients and partial client participation. Subsequent works attempt to address the challenge of non-i.i.d. client data in federated learning empirically [49] and derive convergence ...
Specifically, FedACG transmits the global model integrated with the global momentum in the form of a single message, which allows each client to perform its local gradient update step along the landscape of the global loss function. This approach is effective in reducing the gap between global and local losses.
Due to non-i.i.d. data and limited client participation rate in each round of training, FedAvg suffers from client drift [15]. Such a phenomenon results in the inconsistent updates of client models caused by overfitting to local data of individual clients, which consequently leads to the high variance of the global mod...
There exists a long line of research on client-side optimization aimed at reducing the divergence of clients from the global model. FedProx [25] penalizes the difference between the server and client parameters, while FedDyn [1] and FedPD [48] adopt cumulative gradients of each client for dynamic regularization of loca...
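The proximal penalty used by FedProx can be sketched as follows. The function name, quadratic toy loss, and constants below are our own illustration of the idea, not the authors' implementation:

```python
import numpy as np

def fedprox_local_update(w_global, grad_f, mu=0.1, lr=0.05, steps=300):
    """FedProx-style client update (illustrative): gradient steps on
    f_i(w) + (mu/2) * ||w - w_global||^2, where the proximal term
    penalizes drift of the client away from the server parameters."""
    w = np.asarray(w_global, dtype=float).copy()
    for _ in range(steps):
        g = grad_f(w) + mu * (w - w_global)  # local gradient + proximal pull
        w -= lr * g
    return w
```

With a client loss minimized at $w=2$ and server weights at $w=0$, the update settles between the two (at $4/3$ for $\mu=1$), which is exactly the drift-limiting effect the penalty is meant to have.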
FedDC [10] introduces the auxiliary drift variables of each client to reduce the impact of the local drift on the global objective. Another line of work adopts variance reduction techniques in client updates to eliminate inconsistent updates across local models.
C
See Fig. 10(b), which plots the $\mathcal{A}_{\mathrm{err}}$ metric in PU-Setting for the above models compared with our DeepAlloc. We see that our approach of pre-training using log-normal model-based images in DeepAlloc yields a notable per...
Irrespective, the fundamental techniques developed in our work are largely independent of the formulation or algorithm used to determine the optimal allocation power—since learning models and techniques are solely based on training examples. In §IV, we use the above formulation to generate the training examples for the...
Collecting Training Samples. Recall that a sample in PU-Setting is comprised of a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample is comprised of spectrum sensors’ received power readings. The location of entities is available by using a GPS don...
Computation Times and Complexity. The inference time complexity of all our ML approaches is linear in the size of the input, and thus, the inference time in practice is minimal (a fraction of a second). The training time complexity of most ML models depends on the training samples and the resulting convergence, and is ...
readings, while the ML algorithms have knowledge of only one of these inputs—thus, it is surprising and commendable that DeepAlloc is able to outperform the IP-Based approach. The above observations suggest that DeepAlloc is able to learn the spectrum allocation function effectively.
C
Rigid motions – compositions of translations, rotations and reflections – are fundamental transformations on the plane studied in a high-school geometry course. Two shapes related by these transformations are called congruent. The geometry studied in high-school is based on the set of axioms formulated by Euclid around...
Rigid motions – compositions of translations, rotations and reflections – are fundamental transformations on the plane studied in a high-school geometry course. Two shapes related by these transformations are called congruent. The geometry studied in high-school is based on the set of axioms formulated by Euclid around...
In this paper, we considered practical aspects of reconstructing planar curves with prescribed Euclidean or affine curvatures. An immediate extension of the current work would be the reconstruction of planar curves with prescribed projective curvatures, and obtaining distance estimates between curves, modulo a projecti...
To a human eye, two figures look the same if they are related by a rigid motion. However, since a reflection changes the orientation of an object, a group of orientation-preserving rigid motions, consisting of rotations and translations only, is often considered. This group is called the special Euclidean group and is...
In this work, we consider congruence of planar curves relative to the special Euclidean group $SE(2)$ and the special affine group $SA(2)$. The latter group consists of compositions of area and orientation preserving (i.e. unimodular) linear transformati...
C
Online learning [29], resource allocation [9], demand response in power systems [14], and localization of moving targets [2] are just a few examples where online convex optimization (OCO) has been applied. In the problem setup of OCO, the objective functions are time-varying and are not available to the decision maker ...
Online learning [29], resource allocation [9], demand response in power systems [14], and localization of moving targets [2] are just a few examples where online convex optimization (OCO) has been applied. In the problem setup of OCO, the objective functions are time-varying and are not available to the decision maker ...
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in $T$ is proved, where $T$ is the lengt...
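The online gradient descent update described here is one line per round. The sketch below (our own, with illustrative step sizes) makes the information structure explicit: the decision $x_t$ is committed before the gradient of $f_t$ is observed:

```python
import numpy as np

def online_gradient_descent(grads, x0, etas, project=lambda x: x):
    """Online gradient descent: play x_t, then observe the gradient of
    f_t at x_t and take one projected gradient step. With step sizes
    eta_t ~ 1/sqrt(t), static regret is O(sqrt(T))."""
    x = np.asarray(x0, dtype=float)
    played = []
    for g, eta in zip(grads, etas):
        played.append(x.copy())      # commit to x_t before f_t is revealed
        x = project(x - eta * g(x))  # one descent step, then project
    return played
```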
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to look at possibilities of using coordinate descent methods to solve OCO...
To the best of our knowledge, coordinate descent [31], as an important class of optimization algorithms, has not been sufficiently analyzed by researchers in the online optimization community. In coordinate descent algorithms, most components of the decision variable are fixed during one iteration while the cost function is ...
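To fix ideas about what the online variants extend, a minimal offline cyclic coordinate descent can be sketched as follows (the function and step size are our own illustration):

```python
import numpy as np

def coordinate_descent(grad, x0, lr=0.1, sweeps=100):
    """Cyclic coordinate descent: in each inner step only coordinate i
    of x moves, while all other components stay fixed."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] -= lr * grad(x)[i]  # update one coordinate at a time
    return x
```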
B
This paper studies the effect of different memristive dynamics on recognition accuracy by complex networks on complex tasks by simulating a neuromorphic architecture based on a LiWES artificial synapse: a three-terminal electrochemical memristor with double exponential decays in the order of ten to hundred milliseconds...
In this work, we used a Li$_{x}$WO$_{3}$ electrochemical memristor to test the effect of programmable time constants, double exponential decay and STP on the widely used N-MNIST dataset and POKERDVS dataset. We used...
To analyze the effects of stochastic dynamics on recognition rate on a neuromorphic architecture, we compared the performance of HOTS architectures on the N-MNIST dataset using the stochastic memristor model (the ’Noisy’ network) and the ideal memristor model (the ’Ideal’ network) defined in Section II-A.
We evaluate performance on the N-MNIST [29] and POKERDVS [30] datasets. First, we study whether inherent stochasticity in memristive dynamics could negatively impact accuracy. We compare the device model against an ideal noiseless memristor, finding no significant difference in the classification accuracy. Using inform...
This paper studies the effect of different memristive dynamics on recognition accuracy by complex networks on complex tasks by simulating a neuromorphic architecture based on a LiWES artificial synapse: a three-terminal electrochemical memristor with double exponential decays in the order of ten to hundred milliseconds...
C
Notably, this lower bound applies for arbitrary update sequences, while our upper bound applies when the updating agent is chosen uniformly at random. Thus, even when resorting to a smarter choice of the updating agent, one cannot drastically reduce the convergence time in the worst case.
In this section, we will prove two improved upper bounds, each for a more restricted set of graph classes. The first result holds when the social network is a complete graph, while the second holds when, in each step of the HKS, the influence network is the same as the social network.
Furthermore, by Lemma 4, we know that the influence network in both systems has the same set of edges (Eq. 3), the longest edge $e$ preserves its length in the projection (Eq. 1), and all other edges do not increase their length (Eq. 2). Therefore, the length of the longest edge in the influence network of $\bar{I}$ …
First, in Lemma 4 we show that for each HKS in $d$ dimensions there exists a mapping to a suitable $1$-dimensional HKS, such that the length of all edges does not increase, and the influence network (consisting of the active edges) as well as the length of the longest edge $\lambda$ is preserved. We ...
In the following lemma, we prove that the projected system behaves similarly to the original system in the sense that the length of the edge e𝑒eitalic_e stays the same and the influence network does not change. Furthermore, the agents in the original HKS move at least as much as the agents in the projected state, when...
C
We utilized the ChestX-ray8 dataset which consists of Chest X-ray images and is commonly used for research in the thoracic disease field. From this dataset, we randomly selected 99,000 images as our training set. Each image in the dataset was annotated with labels that identify 14 distinct pathological conditions. Thes...
Based on these findings, we hypothesize that the underperformance of the model can be attributed to underfitting and the limited size of the training dataset, as well as the relatively small number of training epochs. With a small dataset, the model may not have had enough examples to learn the complex patterns and va...
We utilized the ChestX-ray8 dataset which consists of Chest X-ray images and is commonly used for research in the thoracic disease field. From this dataset, we randomly selected 99,000 images as our training set. Each image in the dataset was annotated with labels that identify 14 distinct pathological conditions. Thes...
To evaluate the model’s performance and assess its generalization capabilities, we created a separate test set. This test set comprised 500 images randomly selected from the test dataset. We applied the trained model to the test set and measured various evaluation criteria as explained in Section 4.2.
We addressed the data leakage as explained in Section 3.1.1 and created a train and test set. We employed deep learning architecture as explained in Section 3.2 to develop a model that can predict the presence or absence of the 14 pathological conditions based on the input Chest X-ray images. The training process invo...
D
Moreover, Figure 2 shows the fraction of the runs in which arm $3$ receives a very small (5% of $T$) number of samples, demonstrating that the underestimation of arm $3$ consistently occurs for all values of $T$. This supports our finding that the initial underestimation of the best arm persists wi...
This paper addresses the problem of identifying the best arm (or treatment) among multiple options from a fixed number of samples. This problem associates each arm with an (unknown) parameterized distribution, which is a (noisy) signal of the quality of the treatment.
Thompson sampling (Thompson, 1933) is among the oldest of the heuristics and is known to be asymptotically optimal in terms of the frequentist CRM (Agrawal and Goyal, 2012; Kaufmann et al., 2012; Komiyama et al., 2015). One of the seminal results regarding Bayesian CRM is the Gittins index theorem (Gittins, 1989; Weber...
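For concreteness, Thompson sampling for Bernoulli rewards maintains a Beta posterior per arm, samples a plausible mean for each, and plays the argmax. A self-contained sketch (our own illustration, with uniform Beta(1, 1) priors):

```python
import numpy as np

def thompson_sampling(means, T, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.
    `means` holds the true success probabilities (unknown to the learner);
    returns the number of pulls of each arm after T rounds."""
    rng = np.random.default_rng(seed)
    k = len(means)
    alpha, beta = np.ones(k), np.ones(k)  # Beta posterior parameters
    pulls = np.zeros(k, dtype=int)
    for _ in range(T):
        theta = rng.beta(alpha, beta)  # one posterior sample per arm
        a = int(np.argmax(theta))      # play the arm that looks best
        r = rng.random() < means[a]    # Bernoulli reward
        alpha[a] += r
        beta[a] += 1 - r
        pulls[a] += 1
    return pulls
```

Because exploration is driven by posterior uncertainty, the better arm rapidly dominates the pull counts as its posterior concentrates.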
This section reviews related studies concerning Bayesian algorithms in the context of sequential decision-making. The multi-armed bandit problem (Robbins, 1952) involves multiple treatment arms and decisions made using samples obtained sequentially via experiments. The aim is to maximize the sum of the rewards, which b...
Unlike multi-armed bandit algorithms, BAI algorithms are designed solely to deliver the most effective exploration. Although the term “best arm identification” has appeared only recently, several strands of research share the same goal, among which ranking and selection (RS; Bechhofer 1954) is among the best known. S...
C
Important advances on deadlock analysis and resolution have recently been reported in [30, 36, 37]. The authors in [30, 36, 37] analyze the condition of deadlocks based on the Karush-Kuhn-Tucker (KKT) formulation for multi-robot systems and present a proportional-derivative control law to resolve deadlock, which can be theor...
Furthermore, an extra warning band is added to the terminal horizon to facilitate the resolution of potential deadlocks. An improved distributed MPC formulation is proposed based on these constraints and a novel cost function that deals with potential deadlocks.
To address the open problem of deadlock resolution with feasibility guarantee in a distributed manner, this work proposes a novel systematic trajectory generation method, called infinite horizon model predictive control with deadlock resolution (IMPC-DR). Firstly, the traditional buffered Voronoi cells (BVC) proposed i...
This scenario is designed to emphasize that the modified space partition constraint in (III-A) leads to a more accurate separation among the robots and thus a higher utility rate of the workspace. A comparison between the proposed method and the traditional BVC [32] is shown in Fig. 9 for a particular setup.
One thousand random tests are conducted with a success rate of 100% and zero deadlocks as well as no livelocks. This implies that the proposed method in this work performs well at both deadlock resolution and livelock avoidance.
B
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ...
We characterize the mathematical conditions of the group action function component and we propose an explicit construction suitable for any group $G$. To the best of our knowledge, this is the first method for unsupervised learning of separated invariant-equivariant representations valid for any group.
In this work we proposed a novel unsupervised learning strategy to extract representations from data that are separated into a group-invariant and an equivariant part for any group $G$. We defined the sufficient conditions for the different parts of our proposed framework, namely the encoder, decoder and group func...
The first consists in learning an approximate group action in order to match the input and the reconstructed data. For instance, Mehr et al. (2018b) propose to encode the input in quotient space, and train the model with a loss that is defined by taking the infimum over the group $G$. While this is feasible fo...
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ...
A
The details of this sampling process are given in the appendix. The result is $H$ samples of $(\eta^{(h)},\theta^{(h)})$. The $M$ sa...
Two data sets are used for evaluation. The first comes from the TROPOspheric Measuring Instrument (TROPOMI) aboard the Sentinel-5P satellite from the EU’s Copernicus programme (Copernicus 2018). The data set consists of 1083 images of 28x28 pixels, each giving the NO2 concentration in $\mathrm{mol/m^2}$ …
The full data set consists of years of data across multiple pollutants. For this paper, the NO2 readings were used, and a tuning set constructed from the 2015 data and a test set from the 2016 data. Examples from the tuning set are shown in Fig. 3. The days with less than 40 readings were discarded, and only the readin...
We believe the risks are very low from our proposed method. The only data used are NO2 concentrations, and the latitude and longitude of the stationary sensors. No images or personal data of any kinds are used. This is an advantage over other proposed solutions, e.g. using taxis to collect data. The potential benefits ...
Table 2: Confidence intervals for the means of the maximiser distance at the final iteration for the satellite data. Given are means $\pm$ one standard deviation of the mean. The best values are given in bold. This confirms the story from Figs. 4 and 5 that similar results are obtained on the selection su...
A
In particular, the gradient estimator of Liu et al. [34] is based on the Langevin Stein operator [19] for continuous distributions and coincides with the continuous counterpart of RELAX [20]. In contrast, our approach considers discrete Stein operators for Monte Carlo estimation in discrete distributions with exponenti...
The two main classes of gradient estimators used in machine learning are the pathwise or reparameterization gradient estimators [29, 47, 58] and the REINFORCE or score function estimators [65, 18]. The pathwise estimators have shown great success in training variational autoencoders [29] but are only applicable to cont...
while still maintaining a tractable correction term. Our method enables online adaptation of CVs to minimize gradient variance (similar to RELAX [20]) but does not assume $q_{\eta}$ has a continuous reparameterization.
Recently, Parmas and Sugiyama [44, App. E.4] used a probability flow perspective to characterize all unbiased gradient estimators satisfying a mild technical condition; our estimators fall into this broad class but were not specifically investigated.
We then develop a gradient estimation framework—RODEO—that augments REINFORCE estimators with mean-zero CVs generated from Stein operators. Finally, inspired by Double CV [60], we extend our method to develop CVs for REINFORCE leave-one-out estimators [49, 30] to further reduce the variance.
C
In Fig. 5 we present a comparison of the outage probability obtained by the simulation results and the developed approximations. Once again, we consider the performance of the Steiner system and Random selection across the range of received SNRs and traffic intensities.
In Fig. 4 we plot the outage probability as given by the proposed approximation (17) (dashed lines) and compare it to the results of the corresponding simulations in which the full procedure is implemented (markers). The derived approximations prove to be very close to the exact results across the whole SNR range and ...
The results are obtained based on the derived approximations (26) and (17), respectively, which are applied to (I). To improve the readability of the figures, we do not show the results for Random selection schemes in this section, noting that they are always strictly worse than the corresponding Steiner system (see ea...
The results from simulations, which implement the exact procedure, are given with markers. Solid and dashed lines (Steiner and Random respectively) correspond to the approximation based on eq. (26) (Approx. 1). Similarly, dotted and dash-dotted lines (Approx. 2) correspond to the simpler approximation by gamma distribu...
The markers represent the outage probability for the idealized version of the Random selection scheme and serve as a reference. The dotted lines show the actual performance of MRC in the presence of correlated interference. The dashed lines depict the scenario with a finite pool of $Q=24$ …
C
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
In light of the above discussion, “nearly-optimal” algorithms have been explored. That is, algorithms that produce a path which may not be the optimal one but is comparable (or even arbitrarily close) in length to the optimal one. The nearest insertion algorithm [RSL74] computes in $O(n^{2})$ …
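The nearest insertion heuristic mentioned here admits a compact sketch (our own illustrative code, not from the cited sources; for metric instances the classical guarantee from [RSL74] is a factor-2 approximation):

```python
import math

def nearest_insertion(points):
    """Nearest-insertion tour construction: grow a tour by repeatedly
    taking the unvisited point nearest to the current tour and inserting
    it where it lengthens the tour the least."""
    d = lambda i, j: math.dist(points[i], points[j])
    tour = [0]
    remaining = set(range(1, len(points)))
    while remaining:
        # Unvisited point nearest to any tour point.
        c = min(remaining, key=lambda p: min(d(p, t) for t in tour))
        m = len(tour)
        # Edge whose replacement by a detour through c is cheapest.
        best = min(range(m), key=lambda i: d(tour[i], c)
                   + d(c, tour[(i + 1) % m]) - d(tour[i], tour[(i + 1) % m]))
        tour.insert(best + 1, c)
        remaining.remove(c)
    return tour
```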
We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides’ $3/2$ ratio. In fact, in our algorithm, the yielded path has length at most $(300)^{9/2}\log 300$ …
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
B
Finally, we compute the gcd of $N=\mathcal{O}(m)$ Chow forms. As each Chow form has $(r+1)(n+1)$ variables, bitsize $\widetilde{\mathcal{O}}(nd^{r-1}(\tau+n))$ …
We have assumed in Alg. 2 that $r=\dim V$ is part of the input. We could also compute $r$ using the algorithms in [41, 11], without changing the single exponential nature of the complexity of the algorithm.
In this section, we provide algorithms to compute $\operatorname{supp}(V)$ by means of Theorem 4.1. The idea is to compute $\dim\pi_{I}(V)$ for every $I\subset[l]$ …
Our contribution There is an extensive literature on the complexity of elimination theory procedures in general, and complexity of polynomial system solving in particular; we provide a small sample here [23, 31, 30, 8, 25]. Despite the strong literature on the subject, we were not able to locate any results on the bit ...
The coefficients of the linear forms in this factorization correspond to the solutions of the zero-dimensional system. To force (some of) these solutions to have multiplicities, we compute the discriminant $R_{2}$ of $R_{1}$ …
A
Upon arrival, participants were informed about the experimental procedure. Then, they signed the informed consent, filled out the personal data form, and answered the general questionnaire. Next, participants listened to instructions regarding the experiment.
The study was conducted between October 2020 and April 2021. It took place in the Electronics Technology Department at the School of Engineering of the Universidad Carlos III de Madrid, Spain. The experimental methodology designed to be applied for each volunteer is schematized in Figure 1. Duri...
The participants were requested to avoid unnecessary actions or movements during the experiment (e.g., turning the wrist). They were also informed that they could skip any clip or quit the experiment at any time. Once the procedure was clear to the participants, the sensors were set up, as well as the Virtual Reality H...
As introduced before, only 8 of the 12 emotions initially selected were included in WEMAC (see the Stimuli Section), although the 12 emotions were considered for the discrete emotion labeling (see the Measures Section). It means that the number of targeted emotions is smaller than the reported ones in th...
Upon arrival, participants were informed about the experimental procedure. Then, they signed the informed consent, filled out the personal data form, and answered the general questionnaire. Next, participants listened to instructions regarding the experiment.
B
Support Vector Machine (SVM): SVM works by finding the hyperplane that maximizes the margin between classes in a high-dimensional feature space (Hearst et al., [n. d.]). SVMs are especially effective in scenarios where the decision boundary is nonlinear and complex, achieved through kernel tricks with linear, polynomia...
Model Reliability Assessment: In binary classification, the model outputs a probability ($p$) for the target class and its complement probability ($\hat{p}$) for the second class. Let the entropy of the target class and the second class for each sample be denoted as $s$, ...
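In the binary case, the reliability score described here reduces to the Shannon entropy of the pair $(p, \hat{p})$. A minimal sketch (the helper name is our own):

```python
import numpy as np

def prediction_entropy(p):
    """Entropy (in bits) of a binary prediction with probability p for
    the target class and 1 - p for the second class; low entropy means
    a confident prediction."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))
```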
DroidEye by Chen et al. (Chen et al., 2018a) is an adversarial defense approach using the concept of count featurization. In count featurization, the categorical output is represented by the conditional probability of the output class given the frequency of the feature observed in that class. DroidEye transforms the b...
K-nearest Neighbor (KNN): KNN is a non-parametric and instance-based learning method that doesn’t assume the distribution of the training data (Fix and Hodges, 1989). It operates by identifying the K closest instances or data points in the training data to a given input and then assigns the class label based on the ma...
Naive Bayes (NB): NB is a probabilistic machine learning algorithm based on Bayes’ theorem. It assumes feature independence within a dataset. The algorithm calculates the probability of a data point belonging to a certain class based on its feature values and the prior probability of that class. Subsequently, it assig...
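Of the classifiers described above, KNN is the simplest to illustrate end-to-end. The sketch below is ours (function name, toy data, and the choice of Euclidean distance are assumptions), showing the majority vote among the K closest training points.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs.
    """
    dists = sorted(
        (math.dist(x, query), label) for x, label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2D data: two points per class.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]
print(knn_predict(train, (0.05, 0.1)))
```

Because the method is instance-based, all work happens at query time; no distributional assumption is made about the training data.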
D
The algorithm above can be adapted to this case. Indeed, supposing that the realization s is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1.
Almost surely either a player was chosen on Step 1 or 2 or the sum of just the odd-numbered terms (given by B’s moves) of expression (2) diverges to ∞, by Lemma 3.6. In the latter case, the sum of the even-numbered terms must diverge to −∞ (as the sum of all terms is convergent), and therefore cannot d...
The idea of the proof is that since Deviator must move to the right during periods where Honest moves substantially to the left (to avoid going below zero), Deviator must thus move to the left when Honest moves to the right to avoid being clearly right-biased (Steps 1 and 2 detect right-biased behavior). Thus Deviator...
The algorithm above can be adapted to this case. Indeed, supposing that the realization s is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1.
Since s_n^2 increases by 1 in expectation on Honest’s moves, it should decrease on Deviator’s moves to keep the walk close to 0, and this discrepancy is what’s detected in Step 3.
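The fact that the squared position of an unbiased walk grows by one per step in expectation can be checked with a toy Monte Carlo simulation (the function name, trial count, and seed below are our choices for illustration):

```python
import random

random.seed(0)

def mean_square_after(n_steps, trials=20000):
    """Average of s_n^2 over many fair ±1 random walks from 0."""
    total = 0
    for _ in range(trials):
        s = 0
        for _ in range(n_steps):
            s += random.choice((-1, 1))
        total += s * s
    return total / trials

# E[s_n^2] = n for an unbiased walk: each honest ±1 step raises the
# expected square by exactly 1, since E[(s±1)^2] = s^2 + 1.
print(mean_square_after(10))  # close to 10
```

A walker who keeps s_n artificially close to 0 would drive this average below n, which is the discrepancy a statistical test can detect.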
B
We summarize 6 most substantial scientific gaps observed based on quantitative comparisons both vertically among existing AD AI security works and horizontally with security works from closely-related domains. With these, we are able to provide insights and potential future directions not only at the design level, but...
In this paper, we perform the first systematization of knowledge of the growing semantic AD AI security research space. We collect and analyze 53 such papers, and systematically taxonomize them based on research aspects critical for the security field. We summarize 6 most substantial scientific gaps based on both quan...
To address the most critical scientific methodology-level gap, we take the initiative to develop an open-source, uniform, and extensible system-driven evaluation platform PASS for the semantic AD AI security research community. We also use our implemented prototype to showcase the capabilities and benefits of such a p...
Among all the identified scientific gaps, the one on the general lack of system-level evaluation (§IV-A) is especially critical as proper evaluation methodology is crucial for valid scientific progress. In this section, we take the initiative to address this critical scientific methodology-level gap by developing an op...
In this paper, we thus take the initiative to address this critical scientific methodology-level gap by developing a uniform and extensible system-driven evaluation platform, named PASS (Platform for Autonomous driving Safety and Security), for the semantic AD AI security research community (§V). We choose a simulatio...
B
Second, in a more recent result [2], de Figueiredo et al. extend the result of the first paper by proving that Maximum Cut is NP-complete on graphs of interval count four. Using the technique of that work, de Figueiredo et al. prove the NP-completeness of Maximum Cut on permutation graphs as well, wh...
Thus, a cut of maximum size of the given cubic graph always corresponded to a cut of maximum size of the constructed interval graph and vice versa. As Maximum Cut on cubic graphs is NP-complete, this reduction implies that it is also NP-complete on interval graphs.
The problem remains NP-complete for many graph classes, such as cubic graphs [5], split graphs [6], co-bipartite graphs [6], unit disk graphs [7], total graphs [8], and interval graphs [1]. On the positive side, polynomial time algorithms are known for planar graphs [9], line graphs [8], graphs not contractible to K5...
The bounding of the number of interval lengths brings us closer to the final goal: to characterize Maximum Cut for unit interval graphs as they are exactly interval graphs of interval count one. There were attempts to provide a polynomial-time algorithm for unit interval graphs [12, 13], but they both were later shown ...
Recall that the Maximum Cut problem asks for a cut of the maximal value. Notice that, for interval graphs, Maximum Cut is equivalent to the problem of finding a coloring of an interval model in two colors, say R (Red) and B (Blue), where the number of differently colored pairs with non-empty inter...
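The two-coloring view of Maximum Cut can be made concrete with a brute-force sketch (exponential in the number of vertices, so for illustration only; the function name and toy instance are ours):

```python
from itertools import product

def max_cut(n, edges):
    """Brute-force Maximum Cut: try every Red/Blue colouring of the
    n vertices and count edges whose endpoints get different colours."""
    best = 0
    for colours in product("RB", repeat=n):
        cut = sum(1 for u, v in edges if colours[u] != colours[v])
        best = max(best, cut)
    return best

# A path on 3 vertices (an interval graph: three intervals where
# consecutive ones overlap).  Colouring R, B, R cuts both edges.
print(max_cut(3, [(0, 1), (1, 2)]))
```

In the interval-model view, each edge is an overlapping pair of intervals, and the cut value is the number of overlapping pairs coloured differently.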
C
Cholec80: For our main analysis, we train phase recognition and anticipation models on the widely-studied Cholec80 dataset [70], which consists of 80 recorded gallbladder removals (cholecystectomies). Videos range from 12 to 100 min (mean ca. 38 min), processed at 1fps with frame-wise labels for 7 instruments and 7 su...
Hypothesis 8: BatchNorm enables “cheating” in certain tasks. BatchNorm can leak information within batches to “cheat” certain objectives. While previous work finds empirical evidence for this effect through degrading performance [77], we conclusively demonstrate it by carefully designing a toy task: We propose an impos...
Fig. 7(a) shows how anticipation predictions immediately improve in batches which contain an instrument occurrence at some point later in the batch. These examples visualize the “cheating” effect explicitly, not only indirectly through impact on performance. Likely, the model has learned to recognize the instrument and...
Instrument anticipation is an online frame-wise regression task. At each time point, the objective is to predict the remaining time until occurrence of an instrument within a horizon of 5 minutes. Outside the horizon, a constant should be predicted (see Fig. 1). We follow the recently proposed task formulation [58]. W...
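The target construction described above — remaining time until the next instrument occurrence, clipped to a constant outside the horizon — can be sketched as follows. The helper name and the toy values are ours; the paper uses a 5-minute horizon, shrunk here to 3 frames for readability.

```python
def anticipation_targets(occurrences, n_frames, horizon):
    """Frame-wise regression targets: time (in frames) until the next
    occurrence of an instrument, clipped to `horizon` outside it."""
    targets = [horizon] * n_frames
    next_occ = None
    # Sweep backwards so each frame knows its next occurrence.
    for t in range(n_frames - 1, -1, -1):
        if t in occurrences:
            next_occ = t
        if next_occ is not None:
            targets[t] = min(next_occ - t, horizon)
    return targets

# Instrument appears at frames 4 and 9; horizon of 3 frames.
print(anticipation_targets({4, 9}, 12, 3))
```

Frames more than `horizon` ahead of any occurrence (and frames after the last one) receive the constant value, matching the online formulation.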
Fig. 1: Two surgical workflow tasks on the Cholec80 dataset [70]: phase recognition, a special case of temporal action segmentation, and instrument anticipation, defined as predicting the time until occurrence of an instrument within a specified horizon.
C
Results on LFW, CFP-FP, AgeDB-30, and CALFW. FR on LFW, CFP-FP, AgeDB-30, and CALFW is straightforward; thus, performance on these benchmarks is saturated. LFW, AgeDB-30, and CALFW contain 6,000 images, and CFP-FP has 6,000 images. They have 1:1 ratios between the positive and negative pairs. Verification accuracy was employed with t...
Test. Cosine similarity was used as a similarity score. Different evaluation metrics were applied depending on the FR tasks. In the verification task (1:1), verification accuracy using the best threshold was exploited for a dataset that has a small number of test images with the same ratio between positive and negative...
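A minimal sketch of the 1:1 verification protocol described above, using cosine similarity and a best-threshold sweep (the helper names and the toy scores are our illustration, not the paper's evaluation code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def best_threshold_accuracy(scores, labels):
    """Verification accuracy at the best similarity threshold:
    predict 'same identity' whenever score >= threshold."""
    best = 0.0
    for t in scores:  # candidate thresholds: the observed scores
        acc = sum((s >= t) == bool(y)
                  for s, y in zip(scores, labels)) / len(scores)
        best = max(best, acc)
    return best

scores = [0.9, 0.8, 0.4, 0.2]   # cosine similarities of 4 pairs
labels = [1, 1, 0, 0]           # 1 = positive (same identity) pair
print(best_threshold_accuracy(scores, labels))
```

Sweeping only the observed scores as candidate thresholds is sufficient, since accuracy can change only at those points.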
Results on MegaFace. MegaFace consists of a gallery set of 1 M images with 690 K classes and probe photos of 100 K images with 530 classes. We followed the test protocol of ArcFace[4]. We removed noisy images and measured rank-1 accuracy for the 1 M distractor after following the identification scenarios using the dev...
In deep feature learning paradigms for pair similarity optimization, loss functions in FR can be categorized based on two approaches: metric loss (ML; e.g., triplet loss[23, 8] and N-pair loss[26]) and classification loss (CL; e.g., softmax loss[1, 21, 30]). The former directly performs the optimization with a pair of...
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3 M images with 6 accessories, 30 illuminations, 3 expressions, and 20 poses for 400 persons. We adopted the same training and test splits used in MixFace [41]. The training split was composed of 3.8 M images with 370 persons. In p...
D
A promising solution to generate synthetic images lies in the Generative Adversarial Networks (GANs) [19]. Such an approach performs data augmentation by competitively creating new samples, i.e., a generator attempts to create synthetic images to fool the discriminator, which then tries to identify whether they are fa...
In retinal imaging, GANs have been used to create synthetic data. Li et al. [27] highlighted the importance of enhancing the quality of synthetic retinal images in their review, emphasizing that using synthetic images in training can improve performance and help mitigate overfitting.
Bellemo et al. [28] described the possible advantages and limitations towards synthetic retina image generation using GANs. The authors highlighted the potential clinical applications of GANs concerning early- and late-stage AMD classification. Burlina et al. [8] trained a Progressive GAN [29] on 133,821...
In the field of Optical Coherence Tomography (OCT) imaging, super-resolution GANs (like ESRGAN [24]) have demonstrated their value as a tool to enhance the quality of the image and improve AMD detection [25]. Das et al. [26] proposed a quick and reliable super-resolution approach concerning OCT imagery using GANs, achi...
The ODIR-2019 and RIADD datasets were organized into two subsets, AMD and non-AMD images. The preprocessing methodology and quality classification were the same as proposed by Fu et al. [37], which comprises a step that detects the retinal mask using the Hough Circle Transform and then crops it to remove the impact of the b...
C
In the local attention model, its input is the output of the last pooling layer in Simple-Net, and its output is one value between 0 and 1 obtained via the sigmoid function, regarded as the local attention weight w_i^l...
From Fig. 11, it can be seen that the non-local weight of each local patch is the same at the beginning of training, which implies that each local region is initially regarded as equally important. As the network trains, each local region is given a different weight, and higher weights are given to some more discr...
If the facial local region is obscured or missed, the information that it contains for expression recognition will be reduced, and then the weight value of the local attention is also reduced to alleviate the effect of patches including the obscured region. Furthermore, the weights will be multiplied by the correspondi...
In the proposed method, the local attention is designed to deal with the problem that local regions are missed or obscured. In this part, the visualization of local attention will be shown to validate the robustness of the proposed method for faces with missing regions, experimented on the RAF-DB database. Note that the si...
In LNLAttenNet, the local and non-local information of facial expressions is considered simultaneously to construct the two parts of the network: a local multi-network ensemble and a non-local attention network. The generated local and non-local feature vectors are then integrated and jointly optimized.
B
The highest Elo problem also bears a passing resemblance to the Toda lattice: players are particles whose positions on the real line are given by their ratings; they exhibit a repulsive force when a higher rated player beats a lower rated player and an attractive force if vice versa. Is it possible to use tools from sol...
Another interesting variant is to constrain the number of rounds of games. In a tournament, many games will be happening in parallel. Each round consists of any number of games, subject only to the constraint that each player participates in at most one game per round. Given n players and k ro...
A second discrepancy is that mass movement places no restriction on the amount of mass moved in each move, whereas for highest Elo we always have ‖ν‖₁ = 2. This would seem to imply an upper bound of the ‘k rounds’ ...
There are additional complications in real-world implementations of Elo. For legibility and practicality, fractional and negative rating points are avoided by scaling and shifting points up and rounding to the nearest integer, and by imposing an artificial floor on possible ratings (by gifting a player points if they w...
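The real-world implementation details mentioned above (integer rounding and a rating floor) can be sketched in a single update step. This is a hedged illustration: the K-factor of 32, the floor of 100, and the function name are our assumptions, not any federation's actual rules.

```python
def elo_update(r_a, r_b, score_a, k=32, floor=100):
    """One Elo rating update for players A and B.

    `score_a` is 1 if A won, 0 if A lost, 0.5 for a draw.  Ratings
    are rounded to integers and clamped to an artificial floor, as
    real-world implementations do for legibility and practicality.
    """
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    new_a = max(floor, round(r_a + delta))
    new_b = max(floor, round(r_b - delta))
    return new_a, new_b

# Upset: the lower-rated player wins and gains most of the K-factor.
print(elo_update(1400, 1600, score_a=1))
```

Note that rounding both players independently keeps the update approximately, but not exactly, zero-sum, which is one of the distortions the paragraph alludes to.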
Asymptotically in k, we seek the highest one of the n players may be rated after a total of k games are played amongst all of them. This question is interesting both for fixed n, and for n allowed to grow with k. We find a phase transition at n = k^{1/3}...
A
Another challenge related to the previous one is to identify common local characteristics of the instances in order to classify them into data types, as in the work of Napierala and Stefanowski [NS16] that acknowledges four types of data: safe, borderline, rare, and outliers (SBRO in short). As described before, depen...
In this paper, we present a VA system, called HardVis, that incorporates undersampling and oversampling techniques for the management of both instance hardness and class imbalance independent of the ML algorithm in use. It adopts validation metrics suitable for imbalanced multi-class classification problems and include...
Figure 3: At first, a comparison of different data types projections and then two consecutive undersampling phases with the NCR algorithm are shown in this arrangement of screenshots. The default value for the number of neighbors is 5 (see (a)), which is used as input for computing the type of each instance with KNN. ...
In this paper, we developed HardVis, a VA system that uses hardly-configurable undersampling and oversampling techniques to handle instance hardness. As part of an intensively iterative process, multiple coordinated views assist users in defining an ideal distribution of data types, undersampling particular safe for re...
The remainder of this paper is organized as follows. In Section 2, we review automated methods for the detection of different data types, visually-assisted identification of outliers and rare examples, and visualization approaches for data-centric ML error analysis. Afterwards, in Section 3, we outline the analytical t...
A
⟦·⟧ indicates Iverson brackets, such that ⟦i_fee ≥ tip_e⟧ ...
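The Iverson-bracket notation can be made concrete with a tiny sketch. The helper names and the (fee, tip) values below are hypothetical; they only illustrate how the bracket turns a condition into a 0/1 indicator that can then be averaged over transactions.

```python
def iverson(condition):
    """Iverson bracket: 1 if the condition holds, else 0."""
    return int(bool(condition))

def frontrunnable_fraction(fees_and_tips):
    """Mean of the indicator [fee < tip] over transactions: the
    fraction whose included fee fails to meet the required tip."""
    return sum(iverson(fee < tip)
               for fee, tip in fees_and_tips) / len(fees_and_tips)

# Hypothetical (fee, tip) pairs: only the last transaction fails.
txs = [(25, 20), (30, 30), (18, 20)]
print(frontrunnable_fraction(txs))
```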
With the VDF delay of 90s (k = 3) and the FIRST recommended fee, on the Ethereum blockchain, 196319 out of 198235 blocks (>99%) had no frontrunnable transactions! With t_1 = 15 secs (k = 3 ...
Figure 7: Percentage of frontrunnable transactions (Y-axis) for different values of FIRST parameter k (t_1 = k × t_2).
We analyzed the Ethereum and BSC data for different values of k. Figure 7 shows the percentage (fr × 100) of transactions that are frontrunnable out of the total transactions (24.34M in Ethereum and 5.14M in BSC) for different values of k.
As we discussed before, on Ethereum, the chance of transactions being frontrun is a bit higher on account of higher volatility (we theorize, due to NFT transactions and slower block confirmation time) compared to BSC, which is more stable on account of the faster settling of transactions. For example, from our data, in...
C
σ² > 1/(α_i √(2πe)) · (κ̄ + 1)/κ̄ ...
In Section 3, we give conditions on our model that guarantee existence and uniqueness of equilibria in the mean-field regime, the limiting regime where at each time step, an infinite number of agents are considered for the treatment. Furthermore, we show that under additional conditions, the mean-field equilibrium aris...
We describe some of the extensions of our model and learning procedure. First, our model assumes that the decision maker’s policy is fixed over time. Dynamic treatment rules, where the policy is time-varying, would extend this work and would likely require new equilibrium definitions. Second, we consider linear polici...
The lack of a fixed point in the expected score in low-noise regimes (rightmost plot, Figure 1) implies that there are distributions over agent unobservables for which there is no equilibrium threshold in low-noise regimes. As a result, when the noise condition for continuity of best response does not hold for all age...
We establish conditions on the variance σ² of the noise distribution that guarantee continuity and contraction properties of the agent best response. Recall that in the context of college admissions, the noise represents that students have imp...
C
For Biased MNISTv2, all the samples having the same class and the same value for all of the spurious factors are placed in a single group. For COCO-on-Places, objects placed on spuriously correlated backgrounds form the majority group, while the rest form the minority group. BAR does not specify oracle group labels, so...
In the main paper, we compared OccamResNet-18 with 8M parameters (feature width = 48) and ResNet-18 with 12M parameters (feature width = 64). To examine if the lower number of parameters is helping, e.g., due to implicit regularization, we test an OccamResNet-18 with 12M parameters by setting the feature width to...
To examine if OccamNets also work well on datasets with less bias, we train ResNet-18 and OccamResNet-18 on 100 classes of the ImageNet dataset [16]. OccamResNet-18 obtains competitive numbers compared to the standard ResNet-18 (OccamResNet-18: 92.1, vs. ResNet-18: 92.6, top-5 accuracies). However, as described in rest...
Because OccamNets are a new network architecture, we used OccamResNet-18 with each of the baseline methods instead of ResNet-18. These results are shown in Table 2, where we provide unbiased accuracy along with any improvement or impairment of performance when OccamResNet-18 is used instead of ResNet-18. All methods be...
Architectures. ResNet-18 is used as the standard baseline architecture for our studies. We compare it with an OccamNet version of ResNet-18, i.e., OccamResNet-18. To create this architecture, we add early exit modules to each of ResNet-18’s convolutional blocks. To keep the number of parameters in OccamResNet-18 compa...
D
Semantic segmentation aims at assigning a semantic label to each pixel in a natural image, which is a fundamental and hot topic in the computer vision community. It has a wide range of applications in both academic and industrial fields. Thanks to the powerful representation capability of deep neural networks [2, 3, 4...
We first discuss local temporal contexts which are widely exploited in VSS [11, 14, 13, 12, 34, 35, 36, 37, 38, 39, 40, 41, 42]. The local temporal contexts can be further divided into static contexts and motional contexts among neighboring video frames, as shown in Fig. 1b. The former refers to the contexts within th...
Figure 1: Illustration of various video contexts. (a) Illustration of local temporal contexts and global temporal contexts. (b) Illustration of static contexts (in blue) and motional contexts (in red) across neighbouring video frames. The human and horse are moving objects, while the grassland and sky are static backg...
Instead of directly modeling global relationships, we propose to model relationships only among necessary tokens for the joint learning of static and motional contexts. Our CFFM technique consists of two steps. The first step, Coarse-to-Fine Feature Assembling (CFFA), assembles the features extracted from neighbouring ...
The video contexts contain local temporal contexts which represent the contextual information from neighbouring/nearby frames and global temporal contexts which indicate the contexts from the whole video. This paper first studies local temporal contexts which can be further divided into static contexts and motional con...
B
2. The Acceleration Phase: The Hotline system’s acceleration phase commences once the frequently-accessed embeddings are replicated on each GPU. During this phase, the Hotline accelerator actively classifies a mini-batch into two μ-batches ...
The frequently-accessed embeddings of the popular μ-batch are synchronized across all GPUs with dense parameters via an all-reduce collective. On the other hand, for the non-popular μ-batch, frequently-accessed embeddings are updated on GPUs, while the remainder is updated on the CPU’s main m...
1. The Learning Phase: The Hotline accelerator actively determines the frequently-accessed embeddings at runtime. To achieve this, the accelerator performs mini-batch sampling in the first epoch. Our experiments demonstrate that sampling just 5% of the mini-batches is sufficient to identify over 90% of the frequently-...
2. Layout-aware Runtime Scheduling: Hotline employs a dynamic runtime scheduler to achieve optimal compute throughput with the new memory placement. This scheduler divides a mini-batch into two micro-batches (μ-batches), and subsequently, these μ-batches are categorized into two groups. The f...
Our Approach: Hotline partitions each mini-batch into two micro-batches (μ-batches). The inputs in a μ-batch either access only frequently-accessed embeddings or any arbitrary embeddings. First, Hotline schedules the μ-batches that access only frequently-accessed embeddings on the...
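The partitioning step described above can be sketched in software terms. This is a simplification under our assumptions (Hotline performs this classification in hardware at runtime; the helper name and toy indices here are ours): a sample goes into the "popular" micro-batch only if every embedding index it touches is in the hot set.

```python
def split_minibatch(minibatch, hot_embeddings):
    """Partition a mini-batch into two micro-batches: samples that
    touch only frequently-accessed ('hot') embedding indices, and
    samples that touch at least one arbitrary (cold) index."""
    popular, non_popular = [], []
    for sample in minibatch:
        if set(sample) <= hot_embeddings:
            popular.append(sample)
        else:
            non_popular.append(sample)
    return popular, non_popular

hot = {0, 1, 2}                      # hypothetical hot embedding rows
batch = [[0, 2], [1], [0, 7], [5, 2]]
pop, rest = split_minibatch(batch, hot)
print(pop, rest)
```

The popular micro-batch can then run entirely on GPUs, while the other one must also gather cold embeddings from CPU memory.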
A
Let C be a polyhedral complex in ℝ^n, let F : |C| → ℝ be linear on cells of C, and assume |C_{F ∈ [a,b]}|...
Note also that Theorem 4 is sufficient to guarantee that for a map that is affine-linear on cells of a finite polyhedral complex in ℝ^n, we need only perform computations at finitely many thresholds a ∈ ℝ...
of polyhedral sets of dimension k, for 0 ≤ k ≤ d, called the cells of C, such that i) if P ∈ C, then every face of P is in C, and ii) if P, Q ∈ C...
Because polytopal complexes admit finite simplicial subdivisions and there exist finite time algorithms for computing homology for finite simplicial complexes, Theorems 3 and 4 together have the following important practical consequence, which we state informally:
For every flavor of H-complexity described in this paper and every finite PL map F : ℝ^n → ℝ, there exists a finite time algorithm for computing the H-comple...
C
Fortunately, neural network methods are more universal: the same neural network can be used to represent the states or to study the dynamical processes of various systems, such as those with different dimensions or with different interactions.
The topological defects, i.e., kinks, form in the course of the quantum phase transition due to the KZM. It predicts that the mean kink number follows a power-law scaling in the quench rate, proportional to τ_Q^{−dν/(1+νz)} ...
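The KZM exponent quoted above can be evaluated directly. For the 1D transverse-field Ising model, whose critical exponents are ν = 1 and z = 1 with d = 1, it yields the familiar τ_Q^(−1/2) scaling of the kink number (the function name below is our choice):

```python
def kzm_exponent(d, nu, z):
    """Kibble-Zurek power-law exponent: the mean kink number scales
    with the quench rate as tau_Q ** (-d * nu / (1 + nu * z))."""
    return -d * nu / (1 + nu * z)

# 1D transverse-field Ising model: d = 1, nu = 1, z = 1 -> -1/2,
# i.e. kink number ~ tau_Q^(-1/2): slower quenches leave fewer kinks.
print(kzm_exponent(d=1, nu=1, z=1))
```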
Recent state-of-the-art neural networks have been shown to provide highly efficient representations of such complex states, making the overwhelming complexity computationally tractable [6, 7]. Beyond their success in industrial applications, such as image and speech recognition [8], autonomous driving,...
In [15], machine learning methods were applied only to unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and exhibits richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic...
We have computed the time evolution of the energy expectation value, the universal statistics of the topological defect numbers, and the kink-kink correlations in a quantum phase transition of a TFQIM using neural networks. The results were found to satisfy the theoretical predictions. Thus, it numerically ver...
B
This theorem shows that in the ideal case, when Re = ∞ and f = 0, we obtain helicity conservation. On the other hand, the natural pollution term for the helicity conservation is the effect of diff...
where Q_NN is the L² projection onto the space of neural network functions chosen in the model. In general, this cannot guarantee the divergence-free property of ω...
In this section, we shall consider helicity-conservative finite element methods defined on contractible domains. One can extend the definition of helicity to nontrivial topology and different space dimensions [3, Chapter 3]. We use the standard notation for the inner product and the norm of the L²...
The rest of the paper is organized as follows. In Section 2, we provide preliminaries, notation and helicity-conservative finite element scheme. In Section 3, we present a PINN-based algorithm that preserves the helicity. In Section 4, we present numerical results on the convergence and helicity-preserving properties o...
Numerical modeling and simulation for the incompressible Navier-Stokes system is critical in a number of applications. Therefore, there have been a lot of efforts in designing numerical methods for solving the incompressible Navier-Stokes equations. It is well-known that the Navier-Stokes system has various conserved q...
B
The literature on data coverage asudeh2019assessing; asudeh2021identifying; lin2020identifying focuses only on representation, and hence fails to capture uncertainty. Additionally, these works return only a binary signal of whether to trust the outcome of the model for a query point, which in practice is not very in...
Since the RU measures are model-independent, we perform the effectiveness validation experiments for both classification and regression tasks. For the classification tasks, we use SYN, DCC, AD, RS and GS data sets, and for the regression tasks, we employ RN, HS and DI data sets. To demonstrate the effectiveness of the ...
We compute strongRU and weakRU values for each query point in the uniform sample over the space using the default settings. In Figures 8(b) and 8(c), the query space is colored by assigning a tone based on the corresponding values of strongRU and weakRU respectively. As shown in Figure 8(b), the untrustworthy regions a...
The proposed measures can be extended to different data types and are independent of the model and prediction task (classification and regression). The measures are also agnostic to the choice of metric or approach for computing the two components. Proposing quantitative probabilistic outcomes, our measures are interpr...
While being agnostic to the choice of the uncertainty and lack-of-representation components, we propose an implementation based on the k-vicinity of a query point. In particular, given the radius of the k-vicinity and its uncertainty, we develop functions that return probabilities indicating the l...
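The k-vicinity on which this implementation is based can be sketched minimally. This is our illustration (helper name, toy data, and the Euclidean metric are assumptions); the paper's actual probability functions built on top of this radius are not reproduced here.

```python
import math

def k_vicinity_radius(data, query, k):
    """Radius of the k-vicinity: distance from `query` to its k-th
    nearest point in `data`.  A large radius signals that the query
    lies in a sparsely represented region of the data space."""
    dists = sorted(math.dist(x, query) for x in data)
    return dists[k - 1]

data = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(k_vicinity_radius(data, (0, 0), k=3))  # third-nearest distance
```

Since the measure is metric-agnostic, `math.dist` could be swapped for any distance appropriate to the data type.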
C