Columns: context, A, B, C, D (strings, 250 to 8.2k characters); label (stringclasses, 4 values)
Table 7 shows that MCL-GAN achieves outstanding performance compared to other strategies, especially on the recall metric, even against GT-Assign, without relying on class labels for discriminator specialization. The proposed approach is also efficient, since it is free from any time-consuming clustering procedu...
To achieve these goals, we employ a Multiple Choice Learning (MCL) [8] framework to learn multiple discriminators and update the generator via a set of expert discriminators, where each discriminator is associated with a subset of the true and generated examples. Our approach, based on a single generator and multiple d...
We presented a generative adversarial network framework with multiple discriminators, where each discriminator behaves as an expert classifier and covers a separate mode in the underlying distribution. This idea is implemented by incorporating the concept of multiple choice learning.
Then the loss is computed by the KL divergence of the probability distribution of discriminators for being selected as experts from $\bm{\mu}$. To obtain the probability for discriminator selection, we apply the $\mathtt{softmax}$ function to the vector of $M$...
The combination of generative adversarial networks and multiple choice learning turns out to be effective in alleviating the mode collapse problem. In addition, integrating the sparsity loss encourages our model to identify the proper number of discriminators and estimate a desirable distribution with low complexity.
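As a concrete reading of the selection loss above, here is a minimal NumPy sketch: a softmax over the $M$ discriminator scores gives the selection distribution, and the loss is its KL divergence from $\bm{\mu}$. The function names, inputs, and the direction of the KL divergence are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def expert_selection_loss(scores, mu):
    # KL( softmax(scores) || mu ): how far the discriminators'
    # selection distribution is from the target distribution mu
    p = softmax(scores)
    mu = np.asarray(mu, dtype=float)
    return float(np.sum(p * np.log(p / mu)))
```

With equal scores the selection distribution is uniform, so the loss against a uniform $\bm{\mu}$ vanishes.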
B
The CER results of DeepSC-SR and two benchmarks under the AWGN channels and the Rayleigh channels are shown in Fig. 5, where the baseline is obtained by feeding the speech sample sequence into the ASR module directly, without considering communication impairments. From the figure, DeepSC-SR obtains lower CER score...
The WER scores of the different approaches are compared in Fig. 6. From the figure, the proposed DeepSC-SR provides lower WER scores and outperforms the speech transceiver under various channel conditions, as well as the text transceiver under the Rayleigh channels when the SNR is lower than around 8 dB. Moreover,...
In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and mi...
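Since the evaluations above are reported in CER and WER, here is a minimal sketch of how these metrics are conventionally computed: a Levenshtein edit distance over characters (CER) or words (WER), normalized by the reference length. This is the standard definition, not code from the paper.

```python
def edit_distance(ref, hyp):
    # classic Levenshtein dynamic program, O(len(ref) * len(hyp))
    m, n = len(ref), len(hyp)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,              # deletion
                       d[j - 1] + 1,          # insertion
                       prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return d[n]

def cer(ref, hyp):
    # character error rate: edits per reference character
    return edit_distance(ref, hyp) / max(len(ref), 1)

def wer(ref, hyp):
    # word error rate: edits per reference word
    return edit_distance(ref.split(), hyp.split()) / max(len(ref.split()), 1)
```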
A
3D semantic segmentation methods fall into three categories: projection-based, voxel-based, and point-based methods. Multi-view projection-based methods [20, 21, 22] project the 3D data into 2D from multiple viewpoints, so they can easily process the projected data with 2D convolution networks. H...
Most 2D WSSS methods use image-level labels. Based on class activation maps (CAM) [12], many methods [28, 29, 30, 31, 32, 33, 17, 18, 19] refine the CAM generated from a classification network to produce pseudo pixel-level labels. Then, segmentation networks are trained using the pseudo pixel-level labels. Besides th...
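The CAM-to-pseudo-label step described above can be sketched as follows; the peak normalization, the 0.3 threshold, and the ignore index are illustrative assumptions, not the cited methods' exact refinement procedures.

```python
import numpy as np

def cam_to_pseudo_labels(cams, threshold=0.3, ignore_index=255):
    """cams: (C, H, W) class activation maps, one per class.

    Assigns each pixel the argmax class if its peak-normalized
    activation exceeds `threshold`, else marks it as ignored.
    """
    cams = np.asarray(cams, dtype=float)
    peak = cams.max(axis=(1, 2), keepdims=True) + 1e-8
    norm = cams / peak                 # normalize each map to [0, 1]
    labels = norm.argmax(axis=0)       # per-pixel winning class
    conf = norm.max(axis=0)            # per-pixel confidence
    labels[conf < threshold] = ignore_index
    return labels
```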
Existing 3D WSSS methods utilize different kinds of weak supervision. [10] utilizes dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. [11] proposes to generate pseudo point-level labels using 3D class activation maps [12] from subcloud-level anno...
The existing 3D WSSS methods formulate the problem in different directions. [10] utilizes dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. However, each 3D sample is projected to 2D in several views, and each projected 2D image needs pixel-lev...
As shown in Figure 2, we use a two-stage training strategy to avoid interference between the two modules during training. In stage one, we train the basic segmentation network with the cross-sample feature reallocating module. In this stage, for each sample, the network learns from the weak labels of this sample and th...
A
Table 5: Quantitative comparison on different variants of the proposed approach. The experiments are conducted on the KITTI val set for the car category with the evaluation metric of $\rm{AP}_{40}$, to investigate the effect of the proposed geometric formu...
Table 2: Monocular 2D object detection results on the KITTI test set for the All categories with the evaluation metric of $\rm{AP}_{40}$. The metric $\rm{AP}_{40}$ is used for detection evaluat...
For all evaluations, the $\rm{AP}_{40}$ metric is employed. We mainly investigate two aspects: the effect of the proposed geometric formula and module, and the effect of the geometry-guided representation learning for depth estimation.
Extensive experiments conducted on the challenging KITTI [11] dataset clearly demonstrate the effectiveness of the proposed approach and show that our method achieves 13.81% in terms of the $\rm{AP}_{40}$ metric, which is 2.80% absolute $\rm{AP}_{40}$...
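For reference, the $\rm{AP}_{40}$ metric averages interpolated precision over 40 equally spaced recall levels. A minimal sketch of this standard KITTI-style computation (not the official evaluation code, which also handles IoU matching and difficulty levels):

```python
import numpy as np

def ap_40(recalls, precisions):
    """Mean of interpolated precision at 40 recall points.

    Samples recall at 1/40, 2/40, ..., 1.0 and takes, at each point,
    the maximum precision achieved at any recall >= that point.
    """
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    samples = np.linspace(1.0 / 40, 1.0, 40)
    total = 0.0
    for r in samples:
        mask = recalls >= r
        total += precisions[mask].max() if mask.any() else 0.0
    return total / 40
```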
C
Figure 2: Route finding may produce suboptimal visiting orders when there are too many contour points. The resultant crossing lines are caused by incorrect visiting order of intermediate points. These crossing lines may lead to self-intersecting contours and result in detection failures.
In this paper, we propose designing the text segments in a dense and partially overlapping manner so as to retain the “streamline” characteristic of text, which allows flexible connection as much as possible to adapt text instances of arbitrary shapes.
The Graph Guided Text Region map and the text segment classification results from the GCN are then used to rectify the TCL map to remove false detection. The relationship prediction and the dense overlapping text segments together ensure the completeness and accuracy of using the contour of the grouped text segments to...
in a dense and partially overlapping manner and develop a simple but effective contour inference strategy to depict the “characterness” of text, which can handle complex situations of arbitrary-shape text with enhanced relational reasoning and type classification capabilities.
After we have obtained the dense overlapping text segments, the next step is to determine their types. The type information of text segments enables them to better reflect text’s “characterness” and allows building a character-to-character graph structure for further fine-grained relational reasoning and rectification.
A
a sequence $o\in\mathcal{O}^{|\varrho|}$ of orientations of the same size, we denote by $\rho=\pi^{-1}(\varrho,o)$...
$=\{\pi_{\mathcal{V}\times\mathcal{V}}(\rho_{i})\}_{i\in\llbracket 1,|\rho|\rrbracket}\in\mathcal{S}(\mathcal{V}\times\mathcal{V})\,,$
$=\{\pi_{\mathcal{O}}(\rho_{i})\}_{i\in\llbracket 1,|\rho|\rrbracket}\in\mathcal{S}(\mathcal{O})\,,$
$\pi^{-1}\colon(\varrho,o)\in\operatorname{Im}\pi\mapsto\rho\ \text{ with }\ \rho_{i}=(\varrho_{i},o_{i})\,,\ \forall i\in\llbracket 1,|\varrho|\rrbracket.$
$=\Big\{\{(v_{i}^{\flat},v_{i}^{\sharp},o_{i})\}_{i\in\llbracket 1,|\rho|\rrbracket}\Big\}\,,$
C
Assume that each memory address of a 64-bit system can be stored in an element 8 bytes in size, and the number of occurrences of an individual IP address is no more than $2^{64}$. There is an array of 256 elements that consists of $256\times 8$...
The first proposed mapping mechanism of IP addresses is TLMB. The four parts of the IP address are represented in four layers, where each layer is made up of one or more memory blocks. The first layer only contains one memory block, whereas the second layer contains 256 memory blocks. Each memory block contains 256 ele...
In the worst case, the distinct first three parts of the IP addresses cover all the binary combinations. Hence, the size of the memory blocks in the second layer is 32 GB. We present a parallel computation scheme on multiple computers for improving the computational efficiency and reducing memory use. Assume that there...
We first evaluate the memory use and computational complexity of the first proposed method, TLMB. The size of each memory block is $256\times 8$ bytes, and there are $256\times 256$ memory blocks in the first layer. Hence, the size of the memory blocks in the first layer is 128 MB. As...
We formally present a storage strategy for IP addresses that consists of two layers comprising a limited number of memory blocks. The first layer contains $256\times 256$ memory blocks. The first three parts of the IP address can be mapped into the corresponding position of the element in a pa...
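The layered lookup idea above can be sketched as follows, assuming a two-layer layout where the first two octets select one of the $256\times 256$ first-layer blocks, the third octet selects the element within that block, and the last octet indexes a per-element counter block. Names and the lazy allocation are illustrative, not the paper's exact TLMB design (which preallocates fixed-size blocks).

```python
from collections import defaultdict

def new_blocks():
    # second-layer counter blocks, allocated lazily here for brevity
    return defaultdict(lambda: [0] * 256)

def count_ip(blocks, ip):
    a, b, c, d = (int(p) for p in ip.split("."))
    # first two octets pick a first-layer block, the third octet picks
    # the element inside it; that element owns a 256-counter block
    # indexed by the last octet
    blocks[(a * 256 + b, c)][d] += 1

def lookup_ip(blocks, ip):
    a, b, c, d = (int(p) for p in ip.split("."))
    return blocks[(a * 256 + b, c)][d]
```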
A
Although we do not rigorously prove that $\bar{\mathcal{P}}_{D_{3}}$ and $\bar{\mathcal{P}}_{T_{1}}$...
For block tridiagonal systems, by comparing the theoretical analysis for the nested Schur complement based preconditioners and that for the additive type preconditioners, our argument is that permutation is important and necessary when designing preconditioners. These results are instructive for devising the correspond...
The outline of the remainder of this paper is as follows. In Section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of its Schur complement; we then construct and analyze the block-triangular and block-diagonal preconditioners base...
In this paper, both nested Schur complement and additive Schur complement based preconditioners are constructed for the twofold and block tridiagonal linear systems. The polynomial equations of the preconditioned matrices are analyzed. It is shown that by properly selecting the sign in front of each Schur complement, ...
Commencing with a twofold saddle point problem, we generalize our theory to $n$-tuple block tridiagonal saddle point problems. Our study demonstrates that judiciously selecting signs in front of Schur complements in preconditioners results in a positively stable preconditioned system [16]. By using the Routh–H...
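The role of the sign choice can be illustrated on the classic single-level saddle point problem: with the negative Schur complement in a block upper-triangular preconditioner, the preconditioned matrix has all eigenvalues equal to one, hence is positively stable. The following NumPy check of this well-known fact is a simplified illustration, not the paper's twofold or block tridiagonal construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def saddle_point_demo(n=6, m=3):
    """With S = B A^{-1} B^T and P = [[A, B^T], [0, -S]], the
    preconditioned matrix P^{-1} K of K = [[A, B^T], [B, 0]] has
    all eigenvalues equal to 1."""
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)           # SPD leading block
    B = rng.standard_normal((m, n))       # full row rank (generically)
    S = B @ np.linalg.solve(A, B.T)       # Schur complement
    K = np.block([[A, B.T], [B, np.zeros((m, m))]])
    P = np.block([[A, B.T], [np.zeros((m, n)), -S]])
    return np.linalg.eigvals(np.linalg.solve(P, K))
```

With $+S$ instead, the spectrum splits into several clusters, which is precisely why the sign matters for positive stability.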
C
Several works have focused on distributed training with vertical partitions in a federated setting. The authors in works (Chen et al., 2020; Hardy et al., 2017; Yang et al., 2019b; Wu et al., 2020; Feng and Yu, 2020; Kang et al., 2020) propose vertical federated learning algorithms for single-tier communication network...
In all figures, $N$ represents the number of silos (vertical partitions of the dataset), and $K_{j}$ represents the number of clients (horizontal partitions) in each silo. For simplicity in our experiments, we consider that all the silos have ...
While this approach is similar to the multiple local iterations in horizontal federated learning, an important distinction is that in vertical federated learning, each silo updates only on its own subset of coordinates in contrast to updating all the coordinates in the case of horizontal federated learning.
TDCD is novel since it interleaves both the horizontal and vertical federated learning paradigms and thus has to account for both the perturbed gradients from the horizontal federated learning component and the stale information from the vertical federated learning component. This combination leads to a different conve...
In this work, we consider vertical and horizontal partitions of the dataset simultaneously in the two tiers. TDCD performs model training in such a multi-tiered system architecture by fusing horizontal and vertical learning approaches in a novel manner.
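The two-tier data layout described above can be sketched as follows: features are split across silos (vertical partitions) and, within each silo, samples are split across clients (horizontal partitions). The equal-split policy and function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def partition_two_tier(X, num_silos, clients_per_silo):
    """Return silos[j][k]: the feature slice of silo j held by client k.

    Silos partition the columns (features) of X; clients within a silo
    partition its rows (samples).
    """
    feature_splits = np.array_split(np.arange(X.shape[1]), num_silos)
    sample_splits = np.array_split(np.arange(X.shape[0]), clients_per_silo)
    return [[X[np.ix_(rows, cols)] for rows in sample_splits]
            for cols in feature_splits]
```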
D
The approach of employing pseudospectra localizations for H- and Z-eigenvalues brazell_solving_2013 ; mo2019z ; qi2005eigenvalues , aimed at identifying more positive definite tensors, has been investigated by many researchers KostiPseudospectra2016 ; LiLiuWei2019 . However, to the best of the authors’ knowledge, there...
In this section, we delve into the study of pseudospectra for third-order tensors within the tensor-tensor multiplication framework. Specifically, we explore different formulations of pseudospectra for third-order tensors in Subsection 4.1. Subsection 4.2 is dedicated to the examination of various properties of pseudos...
The generalization of eigenvalues from matrices to tensors has been studied through the implementation of tensor-tensor multiplication. Significant attention and extensive research have been devoted to this field, resulting in a substantial body of work focused on their variants, applications, and theoretical analysis....
The multiplication of tensors, a fundamental and crucial operation analogous to matrix multiplication, has garnered considerable attention across various scientific disciplines. In 2008, Kilmer et al. Kilmer2008third introduced a novel form of tensor multiplication that enables the representation of a third-order tens...
The study of T-eigenvalues has emerged as a prominent research area within the field of tensor analysis. Motivated by the aforementioned research, we turn our attention to the perturbation analysis of third-order tensors under the novel tensor-tensor multiplication (3) in this paper, encompassing both the extension of ...
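The tensor-tensor multiplication of Kilmer et al. evaluates, after a DFT along the third mode, ordinary matrix products of the frontal slices. A minimal NumPy sketch of this standard t-product definition (a reference implementation, not code from the paper):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and
    B (n2 x l x n3): DFT along the third mode, facewise matrix
    products, inverse DFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    # multiply corresponding frontal slices in the Fourier domain
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.real(np.fft.ifft(Ch, axis=2))
```

The identity tensor for this product has the identity matrix as its first frontal slice and zeros elsewhere, which the test below exploits.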
C
We design a Bi-directional Gated Feature Fusion (Bi-GFF) module to share and combine information between the structure and texture features for consistency enhancement and a Contextual Feature Aggregation (CFA) module to yield more vivid details by modeling long-term spatial dependency.
The generator is a two-stream architecture, modeled by a U-Net variant, as shown in Figure 2 (a). At the encoding stage, the corrupted image and its corresponding edge map are individually projected into the latent space, where the left branch focuses on texture features and the right branch targets structure features....
Figure 2: Overview of the proposed method (best viewed in color). Generator: Image inpainting is cast into two subtasks, i.e., structure-constrained texture synthesis (left, blue) and texture-guided structure reconstruction (right, red), and the two parallel-coupled streams borrow encoded deep features from each other...
In this paper, we propose a novel two-stream network which casts image inpainting into two collaborative subtasks, i.e., structure-constrained texture synthesis and texture-guided structure reconstruction. In this way, the two parallel-coupled streams are individually modeled and combined to complement each other. Cor...
Bi-directional Gated Feature Fusion (Bi-GFF). This module is proposed to further combine the decoded texture and structure features. It exchanges messages between the two kinds of information, where soft gating is exploited to control the rate. Due to this integration operation, the feature is refined and simultaneousl...
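The soft-gating idea in the Bi-GFF description above can be sketched as follows, with scalar weights standing in for the module's learned convolutions; this is only a schematic illustration of bi-directional gated feature exchange, not the actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(texture, structure, w_t, w_s):
    """A gate in (0, 1), computed from both features, controls how
    much structure information is injected into the texture branch,
    and vice versa."""
    gate_t = sigmoid(w_t * (texture + structure))
    gate_s = sigmoid(w_s * (texture + structure))
    fused_texture = texture + gate_t * structure
    fused_structure = structure + gate_s * texture
    return fused_texture, fused_structure
```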
B
We obtained explicit formulas for the average decoding error probabilities of the ensemble under these three decoding principles and computed the error exponents. We also compared the results with the random $[n,k]_{q}$ code ensemb...
First recall that the error exponents of the average decoding error probability of the ensemble $\mathcal{R}_{(1-R)n,n}$ over the erasure channel under the three decoding principles are defined by
We establish a strong concentration result for the unsuccessful decoding probability of a random code in the ensemble $\mathcal{R}_{(1-R)n,n}$ towards the mean under unambiguous decoding.
For unambiguous decoding, we computed the variance of the decoding error probability of the ensemble and the error exponent of the variance, from which we derived a strong concentration result, that is, under general conditions, the ratio of the decoding error probability of a random code in the ensemble and the avera...
D
Sokoban is a single-player complex game in which a player controls an agent whose goal is to place boxes on target locations solely by pushing them; without crossing any obstacles or walls. Sokoban has recently been used to test the boundaries in RL [16, 28]. Sokoban is known to be hard [10], mainly due to its combina...
We train the transformer with the objective to predict the $k$-th step ahead. The main advantages of this subgoal objective are simplicity and empirical efficiency. We used expert data to generate labels for supervised training. When offline datasets are available, which is the case for the environments consid...
INT The difficulty of the problems in INT increases quickly with the proof length $L$ and the number of accessible axioms. We used $K=18$, i.e., all of the available axioms. We observe that BF-kSubS scales to proofs of length $L=10$ and $L=15$, which are significant...
We collect the expert dataset consisting of all successful trajectories occurring during the training of an MCTS agent (using an implementation of [28]). These are suboptimal, especially in the early phases of the training or for harder boards. For both expert training and kSubS evaluation, we generate Sokoban boards f...
Sokoban Using BF-kSubS allows for significantly higher success rates within the same computational budget, see Table 3. Our solution scales well to board sizes as big as $20\times 20$; note that $10\times 10$ boards are typically used in deep RL research [16, 37]. Importantly, we ob...
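The supervised labeling scheme described above, predicting the state $k$ steps ahead from expert trajectories, can be sketched as follows; clamping the target to the final state near the trajectory end is an assumption of this sketch, not necessarily the paper's convention.

```python
def subgoal_pairs(trajectory, k):
    """Build (state, k-th-step-ahead state) training pairs from one
    expert trajectory; near the end, the final state serves as the
    subgoal target."""
    last = len(trajectory) - 1
    return [(trajectory[i], trajectory[min(i + k, last)])
            for i in range(last)]
```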
C
Semantic embedding is vital to Named Entity Recognition. In Chinese sentences, a single character does not necessarily constitute a word, because there is no natural segmentation in Chinese grammar. So, technically, we have two choices for acquiring Chinese embeddings. The first way is word embedding, trying to separate Chinese sentences i...
The backbone of the NER model used in our work is mainly BiLSTM + CRF. The BiLSTM+CRF model is stable and has been verified in many research projects. Meanwhile, as our method is focused on providing a complementary lightweight module for current named entity recognition models, we select two pre-trained language model...
In recent years, large-scale pre-trained language models based on the Transformer [Vaswani et al., 2017] have shown their superiority in Natural Language Processing tasks. The self-attention mechanism better captures long-distance dependencies in sentences, and the parallel design is suitable for mass computing. Bidirectio...
Recently, pre-trained language models have been widely used in Natural Language Processing (NLP)  [Zhang et al., 2021], constantly refreshing the benchmarks of specific NLP tasks. By applying the transformer structure, semantic features can be extracted more accurately. However, in the Named Entity Recognition (NER) ar...
So far, pre-trained models have been widely used in the semantic domain. One typical method is Word2Vec [Mikolov et al., 2013], which starts to use static embedding vectors to represent Chinese characters in the semantic domain. Now, we have more options. BERT [Kenton and Toutanova, 2019] has its Chinese version and ca...
D
While our experimental prototype was built for a HOLOEYE-PLUTO which possesses a 1K-pixel resolution, corresponding to a 1 mm eyebox with $75.6^{\circ}$ horizontal and vertical FOV, the improvement in hologram fidelity persists across resolutions. Irres...
To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the generated image with th...
In this work, we introduce neural étendue expanders as an optical element that expands the étendue of existing holographic displays without sacrificing displayed hologram fidelity. Neural étendue expanders are learned from a natural image dataset and are jointly optimized with the SLM’s wavefront modulation. Akin to a...
Next, we analyze the expansion of étendue achieved with the proposed technique. To this end, suppose we want to generate the étendue-expanded hologram of only a single scene. Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, a...
The differentiability of Eq. (2) with respect to the modulation variables $\mathcal{E}$ and $\mathcal{S}$ allows us to learn the optimal wavefront modulation of the neural étendue expander $\mathcal{E}$ by jointly optimizing the static neural étendue expander in conjunction with...
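The single-scene argument above can be checked numerically: taking the inverse Fourier transform of a target amplitude as the modulation reproduces the scene exactly under a forward transform. The phase handling here is simplified and illustrative only; it is not the paper's full optimization.

```python
import numpy as np

def single_scene_modulation(target_amplitude):
    """For one target scene, the ideal complex modulation is the
    inverse Fourier transform of the target field; propagating it
    with a forward Fourier transform reproduces the scene."""
    modulation = np.fft.ifft2(target_amplitude)
    reproduced = np.abs(np.fft.fft2(modulation))
    return modulation, reproduced
```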
C
Table 2. A summary of joint MTL studies according to types of tasks involved. ‘W’, ‘S’, ‘D’, and ‘O’ in the four rightmost columns represent the word-level, sentence-level, and document-level tasks, and tasks of other abstract levels such as RE, respectively. A single checkmark could mean joint learning of multiple ta...
In recent years, data-driven neural models have achieved great success in machine learning problems. In the field of Natural Language Processing (NLP), the introduction of transformers (Vaswani et al., 2017) and pre-trained language models (PLMs) such as BERT (Devlin
et al., 2018) develops a formality-sensitive translation system from English to French where formality labels are only available in English. Besides, effort has also been made to learn unified cross-lingual language representations (Singla et al., 2018; Huang et al., 2019). Such cross-lingual representations could subs...
MTL trains machine learning models from multiple related tasks simultaneously or enhances the model for a specific task using auxiliary tasks. Learning from multiple tasks makes it possible for models to capture generalized and complementary knowledge from the tasks at hand besides task-specific features. Tasks in MTL ...
Multilingual machine learning has always been a hot topic in the NLP field, with a representative example being the NMT systems mentioned in Section 4.1. Since monolingual data sources may be limited and biased, leveraging data from multiple languages through MTL can benefit multilingual machi...
D
Welcome to the updated and simplified documentation for using the IEEEtran LaTeX class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy-to-follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer bac...
The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ...
It is assumed that the reader has a basic working knowledge of LaTeX. Those who are new to LaTeX are encouraged to read Tobias Oetiker’s “The Not So Short Introduction to LaTeX”, available at: http://tug.ctan.org/info/lshort/english/lshort.pdf which provides an overview of working with LaTeX.
The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates, however, some publications have...
C
$\phi_{WY}(x_{1},x_{3})\land\phi_{WY}(x_{2},x_{4})\land\exists x_{5}(\neg\phi_{S}(x_{5})\land x_{3}\sim x_{5}\land x_{4}\sim x_{5})$
$H,\ell,\mathcal{C}\models\phi_{WY}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$
definition of $c(v_{i})$. Then $H,\ell^{\prime},\mathcal{C}^{\prime}\models\phi_{adj}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$...
In other words, $H,\ell,\mathcal{C}\models\phi_{adj}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$...
$H,\ell,\mathcal{C}\models\phi_{adj}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$
C
We study a voluntary collaboration game in which players choose how much, and with whom, to share. In this setting, shared resources generate externalities for other players, but the player sharing the resources does not derive any immediate or direct benefit from doing so. Viewing the decisions about who to share with...
While the examples above are distinct from each other in several ways, they each highlight some key common characteristics of voluntary sharing in collaborative environments. First, the decision to share naturally entails some individual cost to the sharer. Second, shared resources are, in many cases, congestible. The...
Finally, Figure 6 shows the effects of an increase in positive reciprocity, and thus a shift away from a punishment mindset and toward a focus on gains from mutual effort and collaboration. As we can see, shifting players toward positive reciprocity has some of the largest effects in the baseline, with large increases...
Our experiment allows us to examine the impact of the information structure on reciprocal behavior. In the control (or baseline) condition, players are given information only about the total inflow of benefits from others after each round, but cannot identify the source of those benefits. In this way, direct reciprocit...
Consider, for example, the time allocation problem faced by a researcher involved in multiple projects with different sets of coauthors. The researcher has a limited amount of time and concentration power to dedicate towards coauthored projects and her own research activity. Allocating attention to coauthored projects...
D
Dense Connection: A dense connection mechanism was proposed in DenseNet (Huang et al., 2017), which is widely used in computer vision tasks in recent years. Different from the structure that only sends the hierarchical features to the final reconstruction layer, each layer in the dense block receives the features of a...
et al., 2018d), dense connections are combined with the residual learning to form the residual dense block (RDB), which allows low-frequency features to be bypassed through multiple skip connections, making the main branch focusing on learning high-frequency information. Apart from the aforementioned models, the dense ...
et al., 2018), a dual-state recurrent network is proposed, where recurrent signals are exchanged between these states in both directions via delayed feedback. In SFRBN (Li et al., 2019a), a feedback block is proposed, in which the input of each iteration is the output of the previous one as the feedback information. Fo...
Motivated by the dense connection mechanism, Tong et al. (Tong et al., 2017) proposed SRDenseNet. SRDenseNet uses not only layer-level dense connections but also block-level ones, where the output of each dense block is connected by dense connections. In this way, the low-level features and high-level featur...
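The dense-connection feature flow described above can be sketched as follows, with plain callables standing in for convolutional layers; this is a schematic of DenseNet-style concatenation, not any specific SR model.

```python
import numpy as np

def dense_block(x, layers):
    """Each layer receives the concatenation of the block input and
    all preceding layers' outputs; the block output concatenates
    everything, so low-level features reach the final layer directly."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))
        features.append(out)
    return np.concatenate(features, axis=0)
```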
D
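The layer-level dense connectivity described in the passages above can be sketched in a few lines. This is a minimal NumPy sketch, not the DenseNet/SRDenseNet implementation: plain linear layers with ReLU stand in for the convolutional stages, and all names and sizes are illustrative.

```python
import numpy as np

def dense_block(x, weights):
    """Layer-level dense connectivity: each layer receives the
    concatenation of the block input and all preceding layer outputs,
    and the block emits all features fused together.
    `weights[i]` maps the concatenated features to layer i's output;
    linear + ReLU stands in for a conv stage (illustrative only)."""
    features = [x]
    for W in weights:
        inp = np.concatenate(features, axis=-1)    # reuse all earlier features
        features.append(np.maximum(0.0, inp @ W))  # ReLU(inp @ W)
    return np.concatenate(features, axis=-1)       # hierarchical feature fusion

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))        # 4 samples, 8 input channels
growth = 8                         # channels added per layer
# layer i sees 8 + i*growth input channels
ws = [rng.normal(size=(8 + i * growth, growth)) * 0.1 for i in range(3)]
out = dense_block(x, ws)
print(out.shape)  # (4, 32): 8 input channels + 3 * 8 grown channels
```

Note how the block input itself survives unchanged in the output, which is exactly the "low-level and high-level features combined" property the passages describe.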
Coordinate-Based Networks. The interest in using fully connected networks to represent signals in an implicit manner has grown over the last few years, which can be attributed to the potential of such methods to be used for 3D shape representations [3, 4, 6, 13, 14, 15].
An important issue for learning coordinate-based representations is the tendency of neural networks to interpolate and attenuate high-frequency changes in the output [1, 2, 16]. Two effective solutions to this problem are to either map the input coordinates (known as positional encoding) [1] or use sinusoidal activatio...
The Patch is a network of 4 ReLU layers with 256 units, identical to the one used in [1]. The role of this component is to map each coordinate vector to an appropriate pixel patch. The coordinate input is mapped using random Fourier features before passing to the network. This processing step is known as positional e...
There have been some works where coordinate-based networks are used as a core for a generative model using techniques such as a hypernetwork predicting the weights of a sample coordinate  [11], or by modulating the weights of a base coordinate  [12]. These approaches are fundamentally different as they attempt to crea...
The potential of applying a network as an encoding of a signal has recently been explored in a number of works [1, 2, 3, 4, 6, 10, 11, 12, 13, 14, 15]. The learned signals can be of any dimensionality, however, encoding of spatial coordinates is a particularly popular theme, involving a network that learns to produc...
A
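The random Fourier feature mapping (positional encoding) discussed in the passages above has a compact form. A minimal NumPy sketch, assuming the common formulation gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)] with a random Gaussian matrix B; the frequency count and scale here are illustrative choices, not values from the cited works.

```python
import numpy as np

def fourier_features(coords, B):
    """Map low-dimensional coordinates to random Fourier features,
    gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], so a ReLU MLP can fit
    high-frequency detail instead of attenuating it.
    B is a fixed random Gaussian projection matrix (an assumption)."""
    proj = 2.0 * np.pi * coords @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(128, 2))  # 128 random frequencies for 2-D pixels
xy = rng.uniform(size=(5, 2))              # 5 normalized pixel coordinates
feats = fourier_features(xy, B)
print(feats.shape)  # (5, 256): 128 cosines + 128 sines per coordinate
```

The scale of B controls the bandwidth: larger scales let the downstream network represent sharper spatial variation.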
To go further than these results - i.e. to establish that the regret guarantees associated with exact TS carry to PG-TS - is likely to be much more complex. The most advanced results on the regret of approximate TS in simple multi-armed bandits, for instance, rely on complex Bayesian non-parametric theory, which, to t...
Both the PG-TS and PG-IDS approaches given in Algorithms 1 and 2 are perhaps the most straightforward possible in terms of their use of the Gibbs samples. It may be beneficial, for instance, to allow M to vary as a function of t, to remove some burn-in from the sample {θ_t^(1), …, θ_t^(M)}...
The PG-augmentation scheme can also be used to devise an approximate Information Directed Sampling (IDS) scheme, based on the framework proposed by Russo and Van Roy, (2018). IDS algorithms are randomised policies which construct an action sampling distribution, in each round t, based on a trade-off of regre...
In this paper we have explored the use of heuristic Bayesian decision-making rules for logistic contextual apple tasting problems. We have shown that both Thompson Sampling and Information Directed Sampling methods are highly efficient for such problems, and indeed more so than confidence bound based approach of Bartók...
A related algorithm, inspired by the link between information gain and the necessary exploration in sequential decision-making problems is Information-Directed Sampling (IDS), introduced in Russo and Van Roy, 2014a ; Russo and Van Roy, (2018). Like TS, IDS also selects at random based on the posterior belief, but cons...
B
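The generic Thompson Sampling loop that PG-TS instantiates can be illustrated on a toy Beta-Bernoulli bandit. This is a sketch of the TS template only: the Polya-Gamma Gibbs machinery of PG-TS is replaced here by an exact conjugate posterior draw, and all parameters are illustrative.

```python
import numpy as np

def thompson_step(successes, failures, rng):
    """One round of Beta-Bernoulli Thompson Sampling: draw one posterior
    sample per arm and play the argmax. PG-TS follows the same template,
    with Polya-Gamma Gibbs samples standing in for this exact draw."""
    theta = rng.beta(successes + 1, failures + 1)  # Beta(1, 1) prior
    return int(np.argmax(theta))

rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.7])      # unknown arm success probabilities
s = np.zeros(2)                    # per-arm success counts
f = np.zeros(2)                    # per-arm failure counts
for _ in range(2000):
    a = thompson_step(s, f, rng)
    r = rng.uniform() < true_p[a]  # Bernoulli reward
    s[a] += r
    f[a] += 1 - r
print(int(s[1] + f[1]), int(s[0] + f[0]))
```

Posterior sampling concentrates play on the better arm while still occasionally exploring the other one.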
The design choices behind our memory-augmented transformers architecture are motivated by our intended use of external textual knowledge. Past research on deep learning models tackled the problem of handling external content by introducing external memory blocks the model could interact with in a differentiable way [53...
Another frequent use of the memory block is for external knowledge integration. Transformers have been directly applied as advanced input encoding methods in traditional memory-augmented architectures for information retrieval in dialogue systems [25, 26, 27, 28], question-answering [29] and aspect-based sentiment anal...
The main building block of our architecture is a transformer-based model. In our experiments, we considered BERT [1] and DistilBERT [55], a distilled version of BERT that achieves competitive performance while limiting the overall computational burden. However, our approach is general and is not restricted to these two...
In contrast, we do not propose an architectural modification of transformer architecture. Instead, we leverage the general-purpose architecture of memory-augmented neural networks and use transformers as a possible implementation of one of the constituting blocks of such an architecture. Furthermore, our role of memor...
Therefore, they seem an ideal candidate architecture for the integration of natural language explanations. However, the main purpose of our extension of transformers is not to improve classification performance, but to generate explanations in the form of grounding to elements of a textual knowledge. Accordingly, the k...
D
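The differentiable interaction with an external memory block mentioned in the passages above is typically a soft attention read. A minimal NumPy sketch under that assumption; the slot count, dimensions, and scaled dot-product scoring are illustrative choices, not the architecture of the cited works.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, keys, values):
    """Differentiable read from an external memory block: soft attention
    over memory keys returns a convex combination of memory values, so
    gradients can flow to both the query encoder and the memory content."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])  # scaled dot product
    attn = softmax(scores)
    return attn @ values, attn

rng = np.random.default_rng(0)
keys = rng.normal(size=(10, 16))    # 10 memory slots (e.g. encoded knowledge snippets)
values = rng.normal(size=(10, 16))
q = keys[3] + 0.01 * rng.normal(size=16)  # query close to slot 3
read, attn = memory_read(q, keys, values)
print(np.isclose(attn.sum(), 1.0))  # True: attention is a distribution over slots
```

In the grounding use case described above, the attention vector itself is the explanation: it says which knowledge elements the prediction leaned on.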
In terms of three-step-ahead prediction, the CAR-AR-BNP model is superior, but we also observe that the performance of all CAR-AR models is worse than the baseline in the hours following midnight and between 07:00-09:00 (see negative slopes in the log Bayes factor curves). Furthermore, the gain in performance of t...
In particular, we compare the posterior predictive satisfaction and robustness measures estimated on the M trajectories with the ex-post evaluation of the satisfaction and robustness of the properties on the observed data y^o_{t+1:h} after time t...
Table 1. Posterior means and standard deviations (in parentheses) of the predictive measures of accuracy and F1 scores for property satisfaction and the RMSE of the robustness for the four properties. The reported measures are averages over all test samples.
We achieve this by leveraging an existing stream of literature in the computer science field of SMC and verification to approximate the posterior predictive probability of satisfaction of these properties, as well as a posterior predictive measure of property reliability or robustness. We then introduce the Bayesian pr...
Table 1 shows the posterior mean and standard deviation of the satisfaction accuracy, satisfaction F1 score and robustness RMSE for all four properties. We observe that the CAR-AR-BNP model is the best-performing one in terms of the measures inspected, however, the difference in performance for some properties is not l...
B
dist(φ(x), φ(y)) = ‖φ(x) − φ(y)‖ = √(K(x,x) + K(y,y) − 2K(x,y)).
cost_z^φ(X, C) := Σ_{x∈X} w_X(x) (dist(φ(x), C))^z,
w_X(x) · (dist(φ(x), C))^z / cost_z^φ(X, C)
cost^φ(X, C) = Σ_{x∈X} w_X(x) · min_{c∈C} ‖φ(x) − c‖².
cost_z^φ(S, C) ∈ (1 ± ε) · cost_z^φ(X, C).
A
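The kernel-space distance above never needs the feature map φ explicitly: it is computed from three kernel evaluations. A short sketch verifying the identity for a quadratic kernel, where φ happens to be available in closed form (the kernel choice is illustrative).

```python
import numpy as np

def kernel_dist(x, y, K):
    """Distance in feature space computed purely from kernel evaluations:
    dist(phi(x), phi(y)) = sqrt(K(x,x) + K(y,y) - 2*K(x,y))."""
    return np.sqrt(K(x, x) + K(y, y) - 2.0 * K(x, y))

# The quadratic kernel K(x, y) = (x.y)^2 has the explicit feature map
# phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2), which lets us check the identity.
K = lambda x, y: float(x @ y) ** 2
phi = lambda x: np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.isclose(kernel_dist(x, y, K), np.linalg.norm(phi(x) - phi(y))))  # True
```

This is what lets the coreset costs above be evaluated without ever materialising φ, e.g. for an RBF kernel whose feature space is infinite-dimensional.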
There are also dependent type theories using grading [Atk18, MIO21], but they do not consider Id-Types. Therefore, an interesting direction will be to use the machinery of the present paper, possibly extended to fibrations, to study quantitative Id-Types in linear type theories.
how does this (standard) notion of equality relate to our quantitative equality? To answer this question in a precise way, first of all we observe that also elementary R-graded doctrines can be organised in a 2-category, and then we compare it with the 2-category of R-Lipschitz doctrines.
[Pit00, Jac01]: it maps contexts and terms to objects and arrows of the base and formulas to elements of the fibres, respecting the entailments. The interpretation of LPLL_R in an R-Lipschitz doctrine has to be defined with the additional requirement that gr agrees with the struct...
Finally, in [PON21] it is shown that certain proof systems of intuitionistic Linear Logic with subexponentials can be used to model and reason about concurrent programming under the processes-as-formulas interpretation. It would be interesting to investigate to what extent this paradigm applies to our calculus for qu...
A syntactic study of equality in non-modal Linear Logic can be found in [Doz96]. Affineness and replicability of equality are derived from the (non-quantitative) substitution rule. In [CM96], full Linear Logic is considered. Equality is required to be intuitionistic, in the sense that x ≖ y ⊢ !(x ≖ y)...
C
In this section, we first define a new role similarity measure, namely ForestSim, based on spanning rooted forests. We then show that the ForestSim score can be expressed in terms of the diagonal elements in the forest matrix and prove that ForestSim is an admissible role similarity metric. After that, we propose Fores...
In this paper, on the basis of spanning rooted forests, we propose ForestSim, a new node similarity metric. ForestSim uses the average size of the trees rooted at the node u in spanning rooted forests of the graph, denoted by s(u), to capture its structural properties. Two node...
Figure 2: Each tree rooted at u in the spanning rooted forest F ∈ ℱ_uu for u = 1, 2, 3, 4 in the toy graph G_0...
The key point of analyzing structural roles is figuring out how a vertex connects with its context nodes [43]. To some extent, the sizes of those trees rooted at u in the spanning rooted forests of a graph reflect the connection mode between the node u and its context vertices. Here, we use the a...
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-k similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node u in spanni...
C
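The forest matrix underlying the passages above is W = (I + L)^{-1}, with L the graph Laplacian; its entries give fractions of spanning rooted forests. A NumPy sketch of just this matrix; the exact ForestSim similarity built from its diagonal is not reproduced here, so treat the interpretation comments as assumptions.

```python
import numpy as np

def forest_matrix(A):
    """Forest matrix W = (I + L)^{-1} for a graph with adjacency matrix A,
    where L is the graph Laplacian. Entry W[u, v] is the fraction of
    spanning rooted forests in which v lies in the tree rooted at u
    (assumption-labeled reading); ForestSim builds s(u) from the diagonal."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(len(A)) + L)

# Toy path graph 0 - 1 - 2: the two endpoints play the same structural
# role, so their diagonal entries coincide even though they are far apart.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
W = forest_matrix(A)
print(np.isclose(W[0, 0], W[2, 2]))    # True: symmetric structural roles
print(np.allclose(W.sum(axis=1), 1.0))  # True: rows sum to 1
```

The endpoint example illustrates why forest-based quantities suit role similarity: role-equivalent nodes get equal scores regardless of their distance in the graph.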
It is observed that the sentiment clusters predicted by LSA are very close to the ground truth, which demonstrates the effectiveness of our models in modeling sentiment coherency. The small clusters (e.g., clusters containing 1 or 2 aspects) are easier to predict, while the large clusters (e.g., ≥ ...
Table 4: The traditional aspect sentiment classification performance on five public datasets; the best results are highlighted in bold font. † indicates the results are the best performance over multiple runs, while other methods report the average performance.
We utilize LSA to classify aspect sentiments and aggregate the sentiment clusters. The cluster prediction performance in Table 3 shows that our models consistently outperform the baseline models on all datasets. The performance of LSA is dependent on the base model.
When it comes to sentiment classification performance, the results in Table 4 clearly demonstrate the superiority of our models over significant baselines, particularly in the case of the LSAE model. The experimental results are as expected and show the proficiency of LSA.
Evaluation: According to our extensive experimental results, LSA achieves impressive aspect sentiment coherency prediction results. Besides, our ensemble LSA model also obtains state-of-the-art aspect sentiment classification performance on five public datasets.
C
X. Suppose that there exist orthogonal projections L : ℝ^N → 𝒫^p(𝕊)...
The operators L and W are constructed from a multilevel decomposition of the locations of the predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop...
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me...
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic...
gradient estimate of γ_W, where γ_W^0 is the initial guess. The main ...
A
f̂(y_i) = (f(y_i) − E[f(y)]) / √Var(f(y)) = ŷ_i ...
Figure 3. QuantumNAT Overview. (1) Post-measurement normalization matches the distribution of measurement results between noise-free simulation and real QC. (2) Based on realistic noise models, noise-injection inserts quantum error gates to the training process to increase the classification margin between classes. (3)...
Visualization of QNN extracted features. The MNIST-2 classification result is determined by which of the two features is larger: feature 1 is the sum of measurement outcomes of qubits 0 and 1; feature 2 is that of qubits 2 and 3. We visualize the two features obtained from experiments on Belem in a 2-D plane as in Figu...
Compatibility with existing noise mitigation. QuantumNAT is orthogonal to existing noise mitigation such as extrapolation method. It can be combined with post-measurement normalization (Table 4). The QNN model has 2 blocks, each with three U3+CU3 layers. For “Normalization only”, the measurement outcomes of the 3-laye...
Figure 4 compares the noise-free measurement result distribution of 4 qubits (blue) with their noisy counterparts (yellow) for MNIST-4. Qualitatively, we can clearly observe that the post-measurement normalization reduces the mismatch between two distributions.
D
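The post-measurement normalization described in the passages above is, under a plain reading of the formula, a standardization of measurement outcomes. A minimal sketch under that assumption; the data here are synthetic stand-ins, not real quantum-hardware outcomes.

```python
import numpy as np

def standardize(outcomes):
    """Post-measurement normalization sketch (assumption-labeled reading
    of the formula above): f_hat(y_i) = (f(y_i) - E[f(y)]) / sqrt(Var(f(y))),
    applied to a batch of measurement outcomes."""
    return (outcomes - outcomes.mean()) / outcomes.std()

rng = np.random.default_rng(0)
ideal = rng.normal(size=1000)    # noise-free simulation outcomes (synthetic)
noisy = 0.5 * ideal + 0.2        # scale/offset mismatch on hypothetical hardware
# After standardization, both batches have identical statistics again,
# which is the distribution-matching effect the figure discussion describes.
print(np.allclose(standardize(noisy), standardize(ideal)))  # True
```

Any affine distortion of the outcomes (gain plus offset) is removed exactly; more general noise is only partially corrected, which is why the cited work combines this with noise-injection training.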
Figure 1: An illustration of event trajectories triggered by a flying drone. (a) The triggered retinal events on the image plane. The retinal events marked by the blue rectangle are triggered by the moving drone. (b) The triggered retinal events in the spatio-temporal domain. (c) The event trajectories triggered by th...
However, gallego2018unifying and other event-based data association methods zhu2017event ; gallego2019focus ; peng2022globally show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a...
Event-based methods have achieved promising performance on various tasks gallego2022event . However, the study of the fundamental event data association problem is still challenging and in its infancy. Unlike a traditional camera, an event camera only sparsely emits binary (i.e., On and Off) retinal events at the edges...
However, in the current event-based studies, most methods usually handle the fundamental event-based data association problem in implicit ways, which are designed for their specific tasks. As a result, event-based data association has not been effectively solved by the current event-based works. There are relatively f...
In this paper, we propose a novel unifying event data association (EDA) approach to effectively and explicitly handle the essential event data association and event information fusion problem. The proposed EDA performs a model fitting on event data, which can asynchronously associate and fuse the event data over time ...
B
A good connected ordering of any connected perfect graph on n vertices can be computed in time O(n^{c+4}) provided that an optimal colouring of a perfect graph can be obtained in O(n^c)...
This is formalized by Algorithm 1. The size of the maximum clique of Line 1 is computed in O(n^c) time using the algorithm in [13], bringing the total time complexity of Algorithm 1 to O(n^{c+2})...
Note that the time bound of the above corollary relies to date on the complexity of the polynomial-time algorithm from [13], whose precise exponent has not been made explicit by the authors and which is most probably large. This is in contrast to the algorithm for comparability graphs given by Theorem 7 which runs in O...
Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time O(m + n) for chordal graphs using the LexBFS algorithm [21], for Meyniel graphs this can be done in time O(n^2)...
As a connected vertex-ordering of H can be obtained in linear time using a standard graph traversal algorithm, and a colouring of G′ may be computed in O(mn) time [12, Chapter 5.7], we concl...
B
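The linear-time connected vertex-ordering via a standard graph traversal, mentioned in the passages above, can be sketched with BFS: every vertex after the first has at least one earlier neighbour. The adjacency-dict representation is an illustrative choice.

```python
from collections import deque

def connected_ordering(adj, start=0):
    """Connected vertex-ordering via BFS in O(m + n): each vertex after
    the first is adjacent to some vertex that appears earlier."""
    order, seen = [], {start}
    q = deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}  # small connected toy graph
order = connected_ordering(adj)
print(order)  # [0, 1, 2, 3]
pos = {v: i for i, v in enumerate(order)}
# connectivity check: every later vertex has an earlier neighbour
print(all(any(pos[w] < pos[v] for w in adj[v]) for v in order[1:]))  # True
```

A DFS traversal yields a valid connected ordering just as well; the harder algorithmic question in the passages is finding a *good* such ordering.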
Based on ℳ, the similarity between two non-adjacent samples within each 𝒩_i can be approximated by the shortest-path distance. Since most URL algorithms are designed for specific tasks or data, the following...
To model the global structures, we combine the graph distance calculated on both the raw feature space and predefined graphs, which define the task-specific knowledge, e.g., the provided graph 𝒢 for GE tasks and the proxy task of SSL tasks. To learn the embedding according to data structures,...
Over-uniformity. Without the global structure, the optimal solution for constructing the neighbourhoods 𝒩 is to place each x evenly in the embedding space, as shown in the middle of Fig. 3. In other words, the decision boundaries are maximized for discrimination between the samples x.
As for the societal impacts of GenURL, it can be regarded as a unified framework for the unsupervised representation learning (URL) problem that bridges the gap between various methods. The ablation studies of basic hyper-parameters can reflect the relationship between different URL tasks. The core idea of GenURL is to...
Firstly, we compare the effects of hyper-parameters in GenURL for DR and SSL tasks. As shown in Figure 12, GenURL prefers smaller ν_Z, i.e., using ν = 0.01 to balance the local and global structures. Figure 5 shows that...
B
To prevent over-fitting the real validation set, we evaluate the performance of each sub-network on the split validation set. The weights are taken from the super network using indexing. We re-calibrate the batch normalization statistics using 20 batches of data with a batch size 64.
We used evolutionary search to find the best sub-network architecture under certain constraints. We use a population size of 100. We randomly sample 100 sub-networks satisfying the constraints to form the first generation of population. For each iteration, we only keep the top-20 candidates with the highest accuracy. T...
We used evolutionary search to find the best sub-network architecture under certain constraints. We use a population size of 100. We randomly sample 100 sub-networks satisfying the constraints to form the first generation of population. For each iteration, we only keep the top-20 candidates with the highest accuracy. T...
We used evolutionary search to find the best sub-network architecture under certain constraints. We use a population size of 100. We randomly sample 100 sub-networks satisfying the constraints to form the first generation of population. For each iteration, we only keep the top-20 candidates with the highest accuracy. ...
We used evolutionary search to find the best sub-network architecture under certain constraints. We use a population size of 100. We randomly sample 100 sub-networks satisfying the constraints to form the first generation of population. For each iteration, we only keep the top-20 candidates with the highest accuracy. T...
A
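The evolutionary search loop described in the passages above (random initial population, keep the top candidates by accuracy, refill by mutation and crossover) can be sketched generically. The bit-string "architectures" and the count-the-ones fitness below are toy stand-ins for sub-networks and validation accuracy; population sizes mirror the passages but the other details are illustrative.

```python
import random

def evolutionary_search(fitness, sample, mutate, crossover,
                        pop_size=100, topk=20, iters=10, rng=None):
    """Evolutionary architecture search sketch: start from random
    candidates, keep the top-k by fitness, and refill the population
    with mutations and crossovers of the survivors (elitist)."""
    rng = rng or random.Random(0)
    pop = [sample(rng) for _ in range(pop_size)]
    for _ in range(iters):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:topk]          # keep the top-k candidates
        pop = parents[:]
        while len(pop) < pop_size:    # refill via mutation / crossover
            if rng.random() < 0.5:
                pop.append(mutate(rng.choice(parents), rng))
            else:
                pop.append(crossover(rng.choice(parents),
                                     rng.choice(parents), rng))
    return max(pop, key=fitness)

# Toy stand-in: "architectures" are bit-strings, fitness counts ones.
n = 16
best = evolutionary_search(
    fitness=sum,
    sample=lambda r: [r.randint(0, 1) for _ in range(n)],
    mutate=lambda a, r: [b ^ (r.random() < 0.1) for b in a],
    crossover=lambda a, b, r: [x if r.random() < 0.5 else y
                               for x, y in zip(a, b)],
)
print(sum(best))
```

In the real setting, `fitness` would evaluate a weight-shared sub-network on the split validation set (with batch-norm re-calibration), and `sample` would reject candidates violating the resource constraints.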
Main Results Table I shows the results of different models for CECE on two leaderboards. The single model we proposed overwhelmingly outperforms the baseline on all leaderboards and achieves encouraging 65.9% and 77.0% improvements in F1-score over the baseline meth...
In the 2020 ICDM Competition (https://www.biendata.xyz/competition/icdm_2020_kgc/), the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a...
The ICDM 2020 Knowledge Graph Contest is a competition-style event co-located with the leading ICDM conference. This paper describes our solution for the consumer event-cause extraction task, and we won 1st place in the first stage leaderboard and 3rd place in the final stage leaderboard. Extracting causes of consumer...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a ran...
D
Table 2: The RDMs Correlation between graph encoders in CGCL_{GIN/GCN/GAT} on PROTEINS and IMDB-BINARY.
To cope with the problem of model collapse, we devise an asymmetric structure for CGCL. The asymmetry lies in the differences among the GNN-based encoders' message-passing schemes. Besides, graph encoders in CGCL are supposed to be complementary for a stronger fitting ability. Specifically, high complementarity indicates that...
We propose the concepts of asymmetric structure and complementary encoders as foundational principles for the collaborative learning paradigm. To provide a more comprehensive theoretical analysis, we put forth two quantitative metrics to assess both the asymmetry and complementarity inherent in the collaborative frame...
As described in Section 3.2, the proposed CGCL consists of multiple graph encoders. Naturally, the assembly of different types of graph encoders has a significant impact on the performance of CGCL. Therefore, in this section we first explain the essence of the collaborative framework. Subsequently, we delve into the two c...
In this study, we introduce CGCL, a novel collaborative graph contrastive learning framework, designed to address the invariance challenge encountered in current GCL methods. Unlike the conventional practice of constructing augmented graphs by hand, CGCL employs multiple GNN-based encoders to generate multiple contrast...
D
(Higgins et al., (2017), Kim and Mnih, (2018), Locatello et al., (2019)). In machine learning context, compositionality is perceived as a generalization mechanism (Lake et al., (2017)) and has been used e.g. for goal composition (Jiang et al., (2019)) or knowledge transfer (Li and Bowling, (2019)).
In this paper, we theoretically show that inductive biases on both the training framework and the data are needed for compositionality to emerge. A similar observation has been made by Kottur et al., (2017); however, our result is more fundamental and points out a common misconception that compositionality can be learn...
In emergent communication studies, one often considers agents who can share information about a set of objects described by the common features. Such a situation is common in multi-agent systems with partial observation (Foerster et al., (2016), Lazaridou et al., (2017), Jaques et al., (2019), Raczaszek-Leonardi et al....
We experimentally verify that a certain range of noise levels, dependent on the model and the data, promotes compositionality. We provide a wide range of experiments that illustrate the influence of different priors. For the inductive biases in the training framework, we look into the impact of the network architecture...
The topic of communication is actively studied in multi-agent RL, see Hernandez-Leal et al., (2020, Table 2) for a recent survey. Compositionality is often investigated in the context of signaling games (Fudenberg and Tirole, (1991), Lewis, (1969), Skyrms, (2010), Lazaridou et al., (2018)). Recent research has shown th...
A
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an...
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensio...
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an...
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] proposed learning limited duration CBFs and the work in [39] learns signed ...
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The aut...
D
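The minimally invasive CBF correction via a convex quadratic program, described in the passages above, has a closed form when there is a single scalar input. A sketch under the standard CBF constraint dh(x)*(f(x) + g(x)*u) >= -alpha*h(x); the single-integrator example and all names are illustrative, not any cited paper's formulation.

```python
def cbf_qp_filter(u_nom, x, h, dh, f, g, alpha=1.0):
    """Minimally invasive safety filter for a scalar input: solve
        min (u - u_nom)^2   s.t.   dh(x)*(f(x) + g(x)*u) >= -alpha*h(x).
    With one input, the QP reduces to clipping u_nom at the constraint
    boundary a*u >= b, where a = dh*g and b = -alpha*h - dh*f."""
    a = dh(x) * g(x)
    b = -alpha * h(x) - dh(x) * f(x)
    if a > 0:
        return max(u_nom, b / a)
    if a < 0:
        return min(u_nom, b / a)
    return u_nom  # the input does not affect the constraint

# Single integrator x' = u with safe set {x >= 0}, i.e. h(x) = x.
h, dh = (lambda x: x), (lambda x: 1.0)
f, g = (lambda x: 0.0), (lambda x: 1.0)
# The nominal controller pushes toward the boundary; the filter clips it
# just enough to keep the safe set forward invariant.
print(cbf_qp_filter(u_nom=-1.0, x=0.1, h=h, dh=dh, f=f, g=g))  # -0.1
```

When the nominal input is already safe, the filter returns it unchanged, which is the "minimally invasive" property.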
For every k ∈ ℕ, Σ_{k+1}^P ⊄ BQP^{Σ_k^P}...
In this section, we prove that BQP^{Σ_k^P} does not c...
Theorem 8 extends the breakthrough of Håstad, Rossman, Servedio, and Tan [HRST17], who (solving an open problem from the 1980s) showed that PH is infinite relative to a random oracle with probability 1. Our result shows, not only that a random oracle creates a gap between every two succ...
The proof idea is as follows. First, we take a random oracle, which makes PH infinite [HRST17, RST15]. Then, we encode the answers to all possible P^{#P} queries in instances of the Forrelati...
The proof of Theorem 10, giving an oracle where P = NP ≠ BQP = P^{#P}, follows a similar recipe to the proof of Theorem 9. W...
B
Our result (2) suggests that one possibility is to define the multiplicity of a solution as the growth rate of multiplicities of its truncations, and this definition will be consistent with the usual algebraic multiplicity for the case of a fat point on a line.
We used Macaulay2 [19] and, in particular, package Jets [18, 17] to explore possible analogues of our Theorem 3.1 for this more general case. A related Sage implementation for computing the arc space of an affine scheme with respect to a fat point can be found in [37, Section 9] and [36, Section 5.4].
Note that the series does not depend on the multiplicity m of the point. One way to capture the scheme structure of ℒ(X) could be to take the components of the projections in (3) with their multiplicities.
From the point of view of algebraic geometry, I^{(∞)} defines the arc space ℒ(X) [13] of the scheme X. Geometrically, the points of the arc space correspond to the Taylor coefficients...
Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]). In particular, they used Hilbert-Poincare series of similar nature to (1) (motivat...
D
(b) To fit DCDFM, an efficient spectral clustering algorithm called nDFA is designed. We build a theoretical framework on consistent estimation for the proposed algorithm under DCDFM. Benefiting from the distribution-free property of DCDFM, our theoretical results under DCDFM are general. In particular, when DCDFM reduces t...
Meanwhile, Q, OE, error, τ and T obtained by applying DFA and nDFA to the adjacency matrices A for the above four real-world networks with known node labels are reported in Table ...
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model extends previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which node degrees vary, and it ...
Both simulated and empirical data are presented to compare nDFA with the existing algorithm DFA developed in [16] for weighted networks, where DFA applies k-means to all rows of Û with K clusters to estimate node labels. Meanwhile, codes for all experimental results in...
(c) To measure the performance of different methods on real-world weighted networks with unknown node labels, we propose a general modularity as an extension of classical Newman's modularity [23]. For a weighted network in which all edge weights are nonnegative, the general modularity is exactly the Newman's ...
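To make this concrete, here is a minimal sketch (with a hypothetical helper name; the paper's exact general-modularity formula is not reproduced here) of Newman's modularity evaluated directly on a nonnegative weighted adjacency matrix:

```python
import numpy as np

def weighted_modularity(A, labels):
    """Newman's modularity for a weighted network with nonnegative edge
    weights: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)              # weighted node degrees
    two_m = A.sum()                # 2m: total edge weight counted twice
    same = np.equal.outer(labels, labels)   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m)[same]).sum() / two_m
```

For two disconnected equal-sized cliques with the correct partition, this returns the familiar value Q = 0.5.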
D
The aim is to win a battle against a built-in AI by using your team of agents. In easier tasks, often rudimentary coordination is enough. However, harder tasks involve engaging a stronger enemy (e.g., having more units), which requires inventing smart techniques and tricks. Each unit has a limited line of sight, which...
We use standard feed-forward networks for the actor and critic networks with two hidden layers of 64 neurons and ReLU activations. The critic network of MA-Trace (obs) takes stacked observations of agents as input, while MA-Trace (full) utilizes the full state provided by SMAC. DecMA-Trace has a critic using si...
In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the...
We evaluate MA-Trace on StarCraft Multi-Agent Challenge Samvelyan et al. (2019a) – a standard benchmark for multi-agent algorithms. Our approach achieves competitive performance on all tasks and exceeds state-of-the-art results on some of them. Additionally, we provide a comprehensive set of ablations to quantify the i...
We found that MA-Trace (full) performs slightly worse than MA-Trace (obs). Usually the differences are small. However, in two harder tasks, corridor and 6h_vs_8z, MA-Trace (full) learns much slower and often fails. This is perhaps surprising, as the full state contains additional information (e.g., about invisible oppo...
B
Existing work on single decision tree visualization has experimented with different visualization techniques, such as node-link diagrams, Elzen2011BaobabView ; Nguyen2000A ; Lee2016An ; Cavallo2019Clustrophile ; Barlow2001Case ; Phillips2017FFTrees ; Bremm2011Interactive ; Hongzhi2004Multiple ; Munzner2003TreeJuxtapose...
In the InfoVis/VA communities, most of the research in explainable ML focuses on assisting ML experts and developers in understanding, debugging, refining, and comparing ML models Chatzimparmpas2020A ; Chatzimparmpas2020The . In this paper, we expand our method to involve another target group: the various domain exper...
EnsembleMatrix Talbot2009EnsembleMatrix and Manifold Zhang2019Manifold are two VA tools specifically designed for model comparison. The former uses a confusion matrix representation for contrasting models. The latter produces and compares pairs of models across all data classes. We adopt a similar approach as with t...
Finally, multiple static visualizations and a few interactive VA tools have been developed for specific domains of research, such as medicine, Hummelen2010Deep ; Viros2008Improving ; Li2020A ; Niemman2014Learning ; Carlson2008Phylogenetic biology, Abramov2019RuleVis ; Sydow2014Structure security, Aupetit2016Visualiz...
According to a recent survey Streeb2021Task that has extensively analyzed tree- and rule-based classification, several VA systems have been developed for this topic in the InfoVis and VA communities. However, most of these tools do not employ algorithms and measures (except for the accuracy metric) in order to compar...
C
Various other aspects of polarization in MIMO systems have been investigated as well. Ref. [16] showed that space-time block coding (STBC) with single polarization outperforms STBC with dual polarization in Rayleigh and Ricean fading channels. A MIMO system with dual-polarized antenna elements can have lower spatial di...
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects L_t out of N_t ...
By optimally employing the polarization reconfigurable antennas in conjunction with MIMO spatial multiplexing, this paper will show that the system performance is substantially enhanced compared to that of a conventional scheme with the same number of antenna ports. Throughout the paper, we will assume perfect knowledg...
On the other hand, polarization diversity is not taken into account in the majority of previous research works on antenna selection. Although there are previous reports that consider polarization diversity with antenna selection, they consider fixed antenna polarization such as dual-polarized antennas in [44] or tri-pol...
Tx and Rx have co-located dual-polarized antennas, such that two antenna ports are available for each antenna element at a distinct spatial location. Compared to the case where the same number of antenna elements with only a single polarization is available, this leads to an increase in diversity and capacity, although...
D
Fiat and Woeginger [27] studied a scheduling problem following the online-list paradigm that seems particularly related to online sorting: The goal is to minimize the average job completion time in a system with n𝑛nitalic_n jobs and a single machine.
As we describe in Section 1.3, this can be seen as an asymptotically tight analysis of an online version of the travelling salesperson problem (TSP) on the real line. Indeed, we can imagine that we must visit n cities on [0,1] at time steps 1,…,n. The position of ...
In every step, a single new job arrives and must be scheduled to its time slot immediately and irrevocably and without knowledge of the jobs that arrive in later steps. The offline optimum is to schedule the jobs according to their processing times in sorted order.
In particular, we use the lower bound from Theorem 1 to create an adaptive stream of pieces that will force any packing algorithm to use excessive space. In the reduction, the numbers to be sorted in the online sorting problem correspond to the slopes of the spine segments in the packing problems, and the impossibility...
Fiat and Woeginger [27] studied a scheduling problem following the online-list paradigm that seems particularly related to online sorting: The goal is to minimize the average job completion time in a system with n𝑛nitalic_n jobs and a single machine.
B
Uncertainty and Entropy are also common tools for active learning [28]. We re-implement the framework by connecting one encoder with two classifiers, and train by classifying all instances and enlarging the difference of outputs from two classifiers.
MSE works while uncertainty and entropy do not; the probable reason is that patterns in medical images appear simple to both classifiers, which give similar predictions, resulting in very low uncertainty for all images and entropy too low to distinguish instances clearly.
Finally, we estimate the entropy (of one-hot vectors), uncertainty (difference of two classifiers) and loss as metrics to suggest templates to label. As shown in Table 6, VAAL needs labeled data to initialize, and performs badly in only one iteration.
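As an illustration of such metrics, a minimal sketch (hypothetical helper names; not the paper's implementation) computing per-instance entropy from softmax outputs and a disagreement-based uncertainty from two classifiers:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of softmax probabilities (shape (n, C))."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def disagreement(p1, p2):
    """Uncertainty proxy: L1 distance between the two classifiers' outputs."""
    return np.abs(p1 - p2).sum(axis=1)
```

Near one-hot outputs give near-zero entropy, which matches the observation above that overly confident classifiers make these metrics uninformative.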
As shown in Table 1, our approach cannot find the best templates; instead, it finds a fairly good choice well above the average performance. When there is only one template, the best template achieves 2.863 mm in MRE. Although the template we choose achieves an MRE of only 3.083 mm, it is m...
Uncertainty and Entropy are also common tools for active learning [28]. We re-implement the framework by connecting one encoder with two classifiers, and train by classifying all instances and enlarging the difference of outputs from two classifiers.
B
In particular, when 𝒜 is generated from the Erdős–Rényi random graph G(n,p) [43], ℙ(𝒜(i,j) = 1) = p and ℙ(𝒜(i,j) = 0) = 1 − p...
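A small sketch of sampling such an adjacency matrix under G(n, p) (illustrative; `erdos_renyi_adjacency` is a hypothetical helper, not from the paper):

```python
import numpy as np

def erdos_renyi_adjacency(n, p, rng=None):
    """Sample a symmetric adjacency matrix of G(n, p): each pair {i, j}
    is an edge independently with probability p; no self-loops."""
    rng = np.random.default_rng(rng)
    upper = rng.random((n, n)) < p     # Bernoulli(p) trials
    A = np.triu(upper, k=1)            # keep strict upper triangle
    return (A | A.T).astype(int)       # symmetrize
```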
In the DFSP algorithm, the number of communities K should be known in advance, which is usually impractical for real-world networks. Here, we introduce fuzzy weighted modularity, then we combine it with DFSP to estimate K for overlapping weighted networks.
Table 2 records the estimated number of communities of KDFSP and its competitors for real-world networks used in this paper. The results show that, for networks with known K, our KDFSP correctly determines the number of communities for these networks while NB and BHac fail to determine the correct K...
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild c...
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does ...
A
Figure 2: The effectiveness of directly mimicking the representations of the oracle model at the initial phase. (a) Initially trained on 50 classes, and then incremented with 10 classes per phase for 5 more phases. (b) Initially trained on 10 classes and then incremented with 10 classes per phase for 9 more phases...
Specifically, since the oracle model is trained with more classes, we investigate how representations are affected by the number of training classes. To this end, we compute and analyze the eigenvalues of the covariance matrix of representations of each class.
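A minimal sketch of this analysis step, assuming each class's representations are given as a (num_samples, dim) array (hypothetical helper; the paper's exact pipeline is not shown):

```python
import numpy as np

def sorted_covariance_eigenvalues(features):
    """Eigenvalues (descending) of the covariance matrix of one class's
    representation vectors; features has shape (num_samples, dim)."""
    X = features - features.mean(axis=0, keepdims=True)   # center
    cov = X.T @ X / (len(X) - 1)                          # sample covariance
    return np.sort(np.linalg.eigvalsh(cov))[::-1]
```

The decay of these eigenvalues indicates how many directions of variation each class's representation occupies.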
For one of the 10 shared classes among the four models, we visualize how α_k^(c) changes with increasing k.
Using ImageNet100, we generate four subsets containing 10/25/50/100 classes, where a subset with more classes contains the subsets with fewer classes (the 10 classes of the first subset are shared by all 4 subsets). We train four ResNet18 models, one on each subset, and analyze the differences in their representations.
Figure 2: The effectiveness of directly mimicking the representations of the oracle model at the initial phase. (a) Initially trained on 50 classes, and then incremented with 10 classes per phase for 5 more phases. (b) Initially trained on 10 classes and then incremented with 10 classes per phase for 9 more phases...
C
Therefore, R is the relative value (in the range of 0 to 1) of how many landmarks across all scan pairs have an improved MEE after registration, when compared to the initial MEE. When R is equal to 1, the average dis...
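Under this reading, R could be computed as follows (a sketch with a hypothetical helper name; "improved" is taken to mean a strictly lower MEE after registration):

```python
import numpy as np

def robustness_R(mee_before, mee_after):
    """Fraction of landmarks (over all scan pairs) whose mean Euclidean
    error (MEE) improved after registration; R = 1 means every landmark's
    error decreased, R = 0 means none did."""
    before = np.asarray(mee_before, dtype=float)
    after = np.asarray(mee_after, dtype=float)
    return float(np.mean(after < before))
```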
We also evaluated the smoothness of the displacement field by calculating its determinant of the Jacobian and examining the number and percentage of voxels with a non-positive Jacobian determinant for each method. These voxels correspond to locations where the deformation is not diffeomorphic.
We also evaluated the smoothness of the displacement field by calculating its determinant of the Jacobian and examining the number and percentage of voxels with a non-positive Jacobian determinant for each method. These voxels correspond to locations where the deformation is not diffeomorphic.
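A 2-D sketch of this smoothness check (the challenge data are 3-D; this simplified version uses finite differences via `np.gradient` and a hypothetical helper name):

```python
import numpy as np

def nonpositive_jacobian_fraction(disp):
    """disp: displacement field of shape (H, W, 2) for a 2-D example.
    The deformation is phi(x) = x + disp(x); its Jacobian is I + grad(disp).
    Returns the fraction of pixels with a non-positive determinant, i.e.
    locations where the deformation folds and is not diffeomorphic."""
    u0, u1 = disp[..., 0], disp[..., 1]
    d00, d01 = np.gradient(u0)   # d u0 / d axis0, d u0 / d axis1
    d10, d11 = np.gradient(u1)   # d u1 / d axis0, d u1 / d axis1
    det = (1.0 + d00) * (1.0 + d11) - d01 * d10
    return float(np.mean(det <= 0))
```

The identity deformation (zero displacement) has determinant 1 everywhere, so the fraction is 0; a strongly compressive field folds every pixel.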
ISBI BraTS-Reg Challenge: The testing cohort consisted of 43 scan pairs. Participants were asked to containerize their algorithms such that they produced i) warped landmark locations in the B scan and ii) the determinant of the Jacobian of the displacement field.
Also, the annotation protocol might benefit from stricter rules regarding the landmark locations, since the error measured at the annotations is biased by the fact that the landmarks are annotated at salient locations chosen by the clinical experts (Peter et al., 2021). Furthermore, the evaluation process is likely to ...
A
does not hold. Therefore, by definition of 𝖠, BPPAlgo_{γ,ε,k} rejects φ with probability at most 1/3...
Before concluding this section, let us discuss the relative frequency problem, which is interesting in its own right. In Section 5, for establishing Theorem 2, we focused on the problem of computing the relative frequency of a CQ w.r.t. a database and a set of FDs.
In Section 5, for establishing Theorem 2, we focused on the problem of computing the relative frequency of a CQ w.r.t. a database and a set of FDs. Recall that the relative frequency of a CQ Q w.r.t. a database D and a set Σ of FDs is the ratio that computes the percentage of repairs...
From Proposition 2, and the fact that checking whether a set Σ of FDs has an LHS chain (up to equivalence) is feasible in polynomial time [23], we conclude that to obtain Theorem 2 for FDs, it suffices to provide an analogous result for FDs with an LHS chain (up to equivalence). To this end, following wha...
Unlike Theorem 2, Theorem 8 (the main result of Section 6) was shown by directly considering the problem ♯Repairs(Σ, Q), without relying on the relative frequency problem. An interesting question that comes up is whether...
B
For this linearization, we require that 0 < t < 1/max_i {ω_i/(h_i ω_i + δ_i)}...
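The bound can be evaluated directly (a sketch with hypothetical parameter vectors standing in for ω, h and δ):

```python
import numpy as np

def max_step_size(omega, h, delta):
    """Upper bound on t for the linearization:
    t must satisfy 0 < t < 1 / max_i { omega_i / (h_i * omega_i + delta_i) }."""
    omega = np.asarray(omega, dtype=float)
    h = np.asarray(h, dtype=float)
    delta = np.asarray(delta, dtype=float)
    return 1.0 / np.max(omega / (h * omega + delta))
```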
Our paper proceeds as follows. In Section 2, we present the original InfoMap algorithm and the Markov time-sweeping technique that we use in our adaptations of InfoMap. In Section 3, we apply InfoMap to absorption-scaled graphs. In Section 4, we introduce a definition of a map function L^(a)...
We use the matrix H in Definition 5 because this matrix allows us to tune the relative effects of the edge weights and the node-absorption rates on the communities that we detect using our adaptations of InfoMap. In Algorithms 1a and 1b, we summarize our adaptations of InfoMap.
We study an example that plays a similar role to the example of Salathé and Jones [38]. In our example, setting the absorption rates of bridge nodes to larger values than the absorption rates of other nodes is analogous to removing community bridges. Unlike in the example of Salathé and Jones, our example uses the sam...
In our adaptations of InfoMap to absorbing random walks, we introduce a family of associated absorption-scaled graphs and then apply Markov time sweeping to these absorption-scaled graphs. To illustrate how the node-absorption rates impact the communities that we detect, consider the matrix P_l...
B
In addition, we see that performance increases with an increase in p_b, number of nodes, or network link density, as expected due to the availability of better trees/paths; it also increases with a decrease in (s,d) distance ...
Now, in Fig. 10(c), we demonstrate the effect of decoherence time of quantum memories used in nodes. Here, we use 30-35 km links. We see that even with decoherence time of as low as 100 ms, DP-Approx is able to create EPs for up to 200 kms while Balanced-Tree can only create EP for paths up to 120 kms; they perform sim...
Fig. 10(a)-(b) shows EP generation rates and fidelity for path lengths of 500 km and 1000 km for varying link lengths, for the single-tree schemes DP-Approx and Balanced-Tree. Q-Cast and Delft-LP are not shown as their EP rate is near-zero (≤ 10^(−20)...
on a path may have very different lengths. In particular, we pick link lengths randomly in the range of 10 to 50 kms. With this setting, we see that DP-Approx performs much better than Balanced-Tree, and in some cases, up to 100% better. Note that Balanced-Tree and Caleffi have similar performance over linear graphs, ...
Among our schemes, we use DP-OPT, DP-Approx and Balanced-Tree (see §IV-B) for the QNR-SP problem, and LP (Appendix A) and ITER schemes for the QNR problem. For ITER, we use three schemes: ITER-DPA, ITER-Bal and ITER-SP, which iterate over DP-Approx, Balanced-Tree and SP respectively. To be comprehensive,
B
Transparency and accountability: Explanations can help achieve accountability, which can resolve the potential liability and responsibility gaps in foreseeable post-accident investigations with the involvement of autonomous cars as described by Burton et al. [29]. For example, Mercedes-Benz has recently taken a promis...
As explainees are classified based on their domain knowledge and needs, explanations and their design and evaluation techniques also vary depending on the context and knowledge of the category of explainees. In fact, explanation construction is one of the major challenges in current XAI research. Zablocki et al. [58] ...
3. An explanation component: This constituent of the framework provides understandable insights into the real-time action decisions made by autonomous driving, complying with and corresponding to an eeC and an srC. The explanation component must j...
The details, types, and delivery of explanations vary in accordance with users’ identities, technical background knowledge in autonomous driving, and their various functional and cognitive abilities [45, 54]. For instance, a user having little technical expertise on how AVs operate may be satisfied with a simple explan...
Presenting live natural language explanations during the trip: The promising work in this context is Wayve’s LINGO-1 [168] and LINGO-2 [169] architectures. LINGO-1 is a vision-language-action (VLAM) model that provides live natural language explanations for describing a vehicle’s chosen actions in end-to-end autonomous...
C
From Table 4, Ghost-NetVLAD achieves better performance when GhostCNN is pre-trained on the Places-365 dataset. Compared with the ImageNet dataset, the Places-365 dataset includes more building categories, like museums and palaces, whereas only a small proportion of such categories appears in the ImageNet dataset. Compared with the model pre-...
Dilated convolution can expand the receptive field to capture multi-scale context information. To further improve the accuracy of Ghost-NetVLAD, we apply dilated convolutions to GhostCNN without increasing the model size or training time. We vary the dilation rate to validate our hypothesis. From Tabl...
From Fig. 3, the dilated convolution part shows the different receptive fields of a convolution filter with different dilation rates. For different dilation rates, the number of parameters associated with each layer is identical. Intuitively, when multiple convolution kernels with different dilation rates are superimposed...
Figure 3: Ghost module with dilated convolution filter. We add dilated convolutions to the first step of the Ghost module. Sub-figure (A) shows the 3×3 receptive field of the 1-dilated convolution filter (i.e., ordinary convolution). (B) reveals the 7×7 receptive field of the 2-dilated convoluti...
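For intuition, the receptive-field growth can be sketched as follows, assuming the quoted 7×7 field refers to stacking a 2-dilated 3×3 filter on top of an ordinary (1-dilated) one, as is common in the dilated-convolution literature:

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field of a stack of dilated convolutions with stride 1:
    each layer with dilation d adds (kernel_size - 1) * d to the field."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

With this rule, one 1-dilated layer yields a 3×3 field and adding a 2-dilated layer grows it to 7×7, while the parameter count per layer is unchanged.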
In this paper, to improve the original NetVLAD, we propose a lightweight model that makes a good trade-off between accuracy and model efficiency. The experimental results show that the proposed model, Ghost-dil-NetVLAD (i.e., Ghost-NetVLAD with dilated convolutions), achieves similar accuracy to VGG16-NetVLAD and outp...
A
In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ...
Other versions of the XL-algorithm can be found in [9]. If t is big enough, we expect to find one solution for the system. In this case, the complexity of XL will be essentially the complexity of one single Gaussian reduction in Step 2. Let N be the number of equations generated in XL, and T ...
In Chapter 2, we collect all the notations, definitions and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm to solve Boolean equations systems and the algebraic attack to nonlinear filter generators presented in [10].
The core idea of our new algebraic attack is to use many annihilators simultaneously, instead of only one, and provide a good estimate of the number of keystream bits needed to perform the attack, which is strictly related to the number of linearly independent equations after the multiply phase in the XL-Algorithm. Ind...
The XL-Algorithm, introduced by Courtois, Klimov, Patarin, and Shamir in [8], is a computation method for solving a system like the one in (1). Assume that all the f_i, for i = 1,…,t, have the same positive d...
B
The last theorem is more complex, and it bounds the exploitability by the gain of the strategy against the model. With p = 0, it reduces to continual resolving, and with p = 1 to CDBR with unbounded exploitability. The theorem shows the parameter p directly links the allowe...
More intuitively, Theorem 5 says that by playing the proposed algorithm, we have at least the same safety guarantees we would get by playing a NE against the opponent we modeled correctly. Theorem 5 allows us to choose a trade-off between the exploitation of the opponent behaving according to the model and safety again...
The last theorem is more complex, and it bounds the exploitability by the gain of the strategy against the model. With p = 0, it reduces to continual resolving, and with p = 1 to CDBR with unbounded exploitability. The theorem shows the parameter p directly links the allowe...
et al. (2014) Resolving gadget constructs a game that allows the opponent to choose whether he wants to play in the subgame we created or terminate. It is done by inserting nodes above the roots of the subgame, and the opponent has two actions before each root, either to follow and play the game or to terminate and rec...
In practice, we will compute CDBR similarly to depth-limited solving with a few key changes. First, we fix the opponent’s strategy in the currently resolved part of the game to allow the player to respond to it, which corresponds to the argmax from the definition. Another key change that simplifies the algorithm is th...
A
The plan of the rest of the paper is as follows. In Section 2, we first show an application of our results to the stochastic block model in Section 2.1, then provide an overview of the further results of this paper in Section 2.2 and lastly give some background and the relation of this paper to previous results in Sec...
In Section 6 we give a ‘robustness’ lemma showing that q*(G) and q_𝒜(G) do not change much if we change a small proportion o...
The plan of the rest of the paper is as follows. In Section 2, we first show an application of our results to the stochastic block model in Section 2.1, then provide an overview of the further results of this paper in Section 2.2 and lastly give some background and the relation of this paper to previous results in Sec...
The later sections, Sections 8 and 9, contain further related results. In Section 8 we see that under-sampling tends to lead to over-estimation of modularity (using Theorem 1.1). In Section 9 we show that Theorem 1.1 implies results on the expected modularity of random graphs G_p...
Sections 3 to 5 give the proofs of the main results of the paper: Section 3 gives a crucial preliminary lemma for the proofs, the ‘fattening lemma’; and Theorem 1.1 and Theorem 1.2 are proven in Section 4 and Section 5 respectively. (Indeed we also prove versions of these results which take into account the number of p...
D
Analyses on South American countries also contribute to giving a hint of the complexity of the phenomenon. Thiede et al. (2016) show how internal migration is indeed impacted by rising temperature when considering the general effect; however, this hides an extreme heterogeneity of outcomes when specific characteristic...
The macro literature, in line with most validated theoretical models of migration, also investigates whether the effect is conditioned to income levels of the country of origin of potential migrants (Marchiori et al., , 2012; Beine and Parsons, , 2015, 2017). The role of income in a specific origin country experiencin...
The single countries that receive the most attention are Mexico, with 10 case studies, and the U.S., with 9 case studies. This should not be a surprise for two reasons: firstly, the stock of Mexican emigrants has been constantly the highest in the world (in absolute terms) as well as the migratory flo...
An evident gap in the literature emerges in Figure 4: European countries have rarely been the object of studies of the impact of environmental factors on mobility. This might be because the European continent is mostly seen as a destination for migrants rather than an origin. It should not surprise that the t...
The overall sample includes both unpublished and published papers, so we add some moderator variables describing different features of the published studies. In particular, we introduce a dummy for published articles and a control for the quality of the journal in which a study is published by adding the va...
C
Static regret has been extensively studied in online convex optimization. Let T be the time horizon and d the dimension; there exist online algorithms with static regret bounded by 𝒪(√T), 𝒪(d log T)...
Although the rate is minimax optimal for convex functions, we would like to design algorithms with problem-dependent regret guarantees beyond the worst-case analysis (Roughgarden, 2021). Specifically, we aim to enhance the guarantee for some easy problem instances, particularly when the online functions are smooth, by ...
Finally, we mention that problem-dependent regret minimization falls under the wider umbrella of adaptive online convex optimization (McMahan and Streeter, 2010; Duchi et al., 2010), with more recent explorations discussed in (McMahan, 2017; Joulani et al., 2020; Cutkosky, 2020b) and the monograph (Orabona, 2019). How...
In addition to exploiting the convexity of functions, there are studies improving static regret by incorporating smoothness, whose main proposal is to replace the dependence on T by problem-dependent quantities. Such problem-dependent bounds enjoy many benign properties; in particular, they can safeguard th...
The two problem-dependent quantities are both at most 𝒪(T) under standard assumptions of online learning, while they could be much smaller in easier problem instances. We propose two novel online algorithms called Sword and Sword++ (“Sword” is short for Smoothness-aware online ...
C
For each e ∈ A and n ≥ 0, consider the derivation tree T_σ(e,n) of e at order n.
letter a. Let T_σ(a, k_n) be the derivation tree at order k_n root...
Case 2. There is a special tree T_σ(e,n) with n > Card(A)^2. Since a
Case 1. The integers n such that the tree T_σ(e,n) is special are bounded by Card(A)^2.
Since |ub| < (2ρ+1)(Card(A)+1)Card(A), this shows that the minimal period of x is bounded by (2ρ+1)(Card(A)+1)Card(A)^2...
C
R(𝒯_{n,d}^k(C_n)) = Ω(1/n + C_n/n + (C_n/n)^{2/(2s+1)}).
smoothness index k+1 in each coordinate direction, and any third index q ≥ 1, is indeed n^{−2s/(2s+1)} for s > 1/2 (or 2k+2 > d ...
This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers in Section 1.3). The models of smoothness for f_0 that we
The result in Theorem 4 for s ≥ 1/2 (that is, 2k+2 ≥ d) was already derived in Sadhanala et al. (2017). More precisely, these authors established the third term on the right-hand side in
ℋ_d^k(1). This matches the optimal rate for estimation over Hölder classes (see Sadhanala et al. (2017) for a formal statement and proof for
C
𝕍𝒳 = (1/n) ∑_{k=1}^{n} 𝒟(𝔼𝒳, 𝒳_k).
The topological mean $\mathbb{E}\mathcal{X}$ of networks $\mathcal{X}_{1},\cdots,\mathcal{X}_{n}$ is the graph given by
The topological variance $\mathbb{V}\mathcal{X}$ of networks $\mathcal{X}_{1},\cdots,\mathcal{X}_{n}$ is
The sum (9) does not uniquely define networks. As in the toy example in Figure 5, we can have many topologically equivalent brain networks that give the identical distance. Thus, the average of two graphs is also not uniquely defined. The situation is analogous to the Fréchet mean, which frequently does not result in a uniq...
The topological variance can be interpreted as the variability of graphs from the topological mean $\mathbb{E}\mathcal{X}$. To compute the topological mean and variance, we only need to identify a network with identical topology as the topological mean or the topological variance.
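A minimal numerical sketch of the mean/variance computation, using an element-wise mean of adjacency matrices and a Frobenius-norm distance as illustrative stand-ins for $\mathbb{E}\mathcal{X}$ and $\mathcal{D}$ (the paper's topological versions differ):

```python
import numpy as np

def topological_mean(adjs):
    """Element-wise mean of adjacency matrices, a stand-in for the
    topological mean E[X] of the networks X_1, ..., X_n."""
    return np.mean(adjs, axis=0)

def topological_variance(adjs, dist):
    """V[X] = (1/n) * sum_k dist(E[X], X_k), following the displayed formula."""
    mean = topological_mean(adjs)
    return np.mean([dist(mean, a) for a in adjs])

# Placeholder distance D: Frobenius norm of the adjacency difference.
frobenius = lambda a, b: np.linalg.norm(a - b)

# Three toy symmetric 3-node networks.
rng = np.random.default_rng(0)
adjs = [(lambda m: (m + m.T) / 2)(rng.random((3, 3))) for _ in range(3)]
var = topological_variance(adjs, frobenius)
```

The variance is zero exactly when all networks coincide under the chosen distance.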
D
As the inter-event time function $\tau_{s}(\theta)$ is periodic with period $\pi$, $\phi(\theta+\pi)=\arg(G(\tau_{s}(\theta+\pi))x_{\theta+\pi})=\arg(-G(\tau_{s}(\theta))x_{\theta})=\phi(\theta)+\pi$...
$\exists\theta$ s.t. $\phi(\theta)=\theta$. Then $t_{k+1}-t_{k}=\tau_{s}(\theta)$...
We can write $t_{k+1}-t_{k}=\tau_{e}(x(t_{k}))$...
$$t_{k+1}-t_{k}=\min\{\tau>0:f(x(t_{k}),\tau):=x^{T}(t_{k})M(\tau)x(t_{k})=0\},$$
$$u(t)=Kx(t_{k}),\quad\forall t\in[t_{k},t_{k+1}).$$
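A numerical sketch of computing the inter-event time from the implicit condition $x^{T}(t_{k})M(\tau)x(t_{k})=0$. Here $M(\cdot)$ is a user-supplied callable, and the toy choice of $M$ below is ours for illustration, not the paper's construction:

```python
import numpy as np

def inter_event_time(x_k, M, tau_max=5.0, steps=5000):
    """Find the smallest tau > 0 with f(x_k, tau) = x_k^T M(tau) x_k = 0
    by a sign-change grid search followed by bisection refinement."""
    f = lambda tau: x_k @ M(tau) @ x_k
    taus = np.linspace(1e-6, tau_max, steps)
    vals = np.array([f(t) for t in taus])
    change = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(change) == 0:
        return tau_max  # no triggering within the horizon
    lo, hi = taus[change[0]], taus[change[0] + 1]
    for _ in range(50):  # bisection: keep the bracket containing the root
        mid = (lo + hi) / 2
        if np.sign(f(mid)) == np.sign(f(lo)):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Toy M: f(tau) = |x|^2 (1 - 0.5 e^tau), which crosses zero at tau = ln 2.
M = lambda tau: np.eye(2) - np.exp(tau) * 0.5 * np.eye(2)
tau_star = inter_event_time(np.array([1.0, 1.0]), M)
```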
A
Stability and safety verification in Ordinary Differential Equations (ODEs) has been explored widely and a detailed list of works can be found in survey papers [1] and [2]. Two different notions have emerged in existing literature that characterize stability and safety, respectively:
Consider the system (4) with boundary conditions (8). Let us also consider the unsafe set for this system to be (12) and the metric measuring the distance from this unsafe set to be given by (13). If the controller gains are chosen such that the following inequalities are satisfied,
Next, we present the temperature response and control variables under the two control strategies, as shown in Figs. 5-7. The spatiotemporal temperature distribution of the battery module in Fig. 5 shows an increase in temperature at $700\,\mathrm{s}$ for both strategies after the disturbance injection by t...
The goal of this work is to design a control strategy that will guarantee safety of the battery system under anomalies. The criterion for safety is that the spatial norm of the temperature deviation of the battery from a set-point remains below a prescribed threshold $\overline{h}$. M...
Input-to-state safety (ISSf) [4, 5, 6]: Here the objective is to ensure that the system state trajectories stay away from a predefined unsafe region or, in other words, stay close to the safe region. Specifically, trajectories moving from the safe zone towards the unsafe region will violate the safety boundary only in a sense proport...
D
As ubiquitous devices become more embedded in our lives, they offer an unprecedented ability to passively capture our daily behavior at a high resolution via multiple sensors. Researchers have been leveraging passive sensing techniques to model users’ behavior and their environment from various perspectives. Before the...
RealityMining [5] uses location signals and Bluetooth log data to recognize social patterns in daily user activity, identify socially significant locations and model organizational rhythms. CenceMe [6] uses a wider range of sensors – including embedded cameras, microphones, accelerometer, GPS, temperature, light, humid...
As ubiquitous devices become more embedded in our lives, they offer an unprecedented ability to passively capture our daily behavior at a high resolution via multiple sensors. Researchers have been leveraging passive sensing techniques to model users’ behavior and their environment from various perspectives. Before the...
Ideally, the set of constructs that researchers want to investigate drives the choice of sensors. However, especially while estimating new measures, it can be challenging to ascertain a finite combination of sensors to deploy. Moreover, deployments are expensive and challenging. In longitudinal in-situ studies, even if...
Other health and wellbeing related topics that have been studied using passive sensing among workers include focus and awakeness. Soto et al. utilize biometric data from an arm-wear (viz., physical activity, HR, skin response, skin temperature and respiration) to estimate worker’s stress, focus and awakeness [25]. The...
A
Since both the training speed as well as the final accuracy are important factors in federated learning, we measure: (i) the performance achieved at a specified number of rounds and (ii) the number of rounds required for an algorithm to attain the desired level of target accuracy, following Al-Shedivat et al. [2]. For ...
Since the numbers of local epochs and iterations are set to 5 and 50, respectively, each client has little training opportunity with few training examples and client heterogeneity increases significantly. As shown in Table 2, FedACG outperforms the other methods in most cases, with the performance gap between FedACG an...
Results on three benchmarks with two different federated learning settings. For (a) a moderate-scale experiment, the number of clients and the participation rate are set to 100 and 5%, respectively, while (b) a large-scale setting has 500 clients with a 2% participation rate. The Dirichlet parameter is commonly set t...
Note that FedCM and FedDC respectively require $1.5\times$ and $2\times$ network costs for each communication round since they communicate the current model and the associated gradient information per round, while the rest of the algorithms only need to transmit model parameters.
Table 2: Results from reduced participation rates (2% for 100 clients, 1% for 500 clients) on CIFAR-10 and CIFAR-100 with the Dirichlet parameter 0.3. FedCM† and FedDC‡ require 50% and 100% additional communication costs for each communication round, respectively.
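The two evaluation criteria, performance at a fixed round and rounds needed to reach a target accuracy, can be made concrete with a small helper; the accuracy curve below is hypothetical:

```python
def rounds_to_target(accuracies, target):
    """Return the first (1-indexed) communication round whose accuracy
    reaches `target`, or None if the run never attains it."""
    for r, acc in enumerate(accuracies, start=1):
        if acc >= target:
            return r
    return None

# Hypothetical per-round accuracy curve over 6 communication rounds.
curve = [0.41, 0.55, 0.63, 0.71, 0.74, 0.78]
rounds = rounds_to_target(curve, 0.70)
final_acc = curve[-1]  # performance at the specified round budget
```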
D
We develop a novel technique to represent the spectrum allocation function input (i.e., the location and transmission/received powers of primary users or spectrum sensors, and the request parameters of the secondary user) as an image; such an image representation is essential to effectively use a CNN-based learning mo...
Second, in our context, an unsupervised approach is meaningless as unlabelled samples have minimal information (actually, zero information in the PU-Setting), and as explained in §III, a reinforcement-learning approach is also not suitable for our setting.
We develop a novel technique to represent the spectrum allocation function input (i.e., the location and transmission/received powers of primary users or spectrum sensors, and the request parameters of the secondary user) as an image; such an image representation is essential to effectively use a CNN-based learning mo...
Paper Organization. The rest of the paper is organized as follows. In the following section, we develop our spectrum allocation model and setting, discuss related work, and give a high-level overview of our approach. In §III, we develop our CNN-based deep learning model and associated techniques for spectrum allocatio...
Irrespective, the fundamental techniques developed in our work are largely independent of the formulation or algorithm used to determine the optimal allocation power—since learning models and techniques are solely based on training examples. In §IV, we use the above formulation to generate the training examples for the...
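A minimal sketch of the image-encoding idea, assuming primary-user locations in a unit square and a nearest-cell rasterization of powers; the paper's exact representation may differ:

```python
import numpy as np

def to_image(pu_locations, pu_powers, grid=64, area=1.0):
    """Rasterize primary-user (x, y) locations and powers into a
    grid x grid image; each PU deposits its power at its nearest cell.
    This is an illustrative stand-in for the image representation."""
    img = np.zeros((grid, grid))
    for (x, y), p in zip(pu_locations, pu_powers):
        i = min(int(y / area * grid), grid - 1)
        j = min(int(x / area * grid), grid - 1)
        img[i, j] += p
    return img

# Two hypothetical PUs with transmission powers 1.5 and 2.0.
img = to_image([(0.1, 0.2), (0.8, 0.5)], [1.5, 2.0])
```

Such a grid can then be fed directly to a CNN as a single-channel input.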
C
$$=b_{1}\,\Gamma\Big(\frac{1}{K}\Big)\sum_{i=1}^{\infty}\frac{1}{\Gamma\big(\frac{1}{K}+i+1\big)}\frac{(-c\alpha^{K})^{i}\alpha}{i!\,K^{2i+1}}.$$
The system $T_{\alpha\alpha}=-c\alpha^{k}T$ consists of two decoupled equations of the type $u''(\alpha)=-c\alpha^{k}u(\alpha)$...
The power series for the affine arc-length parameterization $\gamma(\alpha)$ is obtained by integrating the series $T(\alpha)$. See Figures 16 and 16 for reconstructions of curves with curvatures $\mu(\alpha)=\alpha$ and ...
Convergence of series (95) for all $\alpha$ follows from a known general result (Theorem 39.22, p. 560, [22]). Directly, absolute convergence of the sub-series (96) and (97) can be verified by the ratio test, implying absolute convergence of series (95).
The difference between uniform and point-wise convergence is that one can choose an $n_{\varepsilon}$ which “works” for all $p\in P$. If $P$ is an interval $[0,L]$, then uniform convergence of $\{f_{n}\}$...
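The distinction can be stated formally; the following display (our addition, in the section's notation) contrasts the two modes of convergence of $f_n\to f$ on $P$:

```latex
% Pointwise convergence on P: n_eps may depend on the point p.
\forall p\in P,\;\forall\varepsilon>0,\;\exists n_{\varepsilon}(p):\quad
  |f_{n}(p)-f(p)|<\varepsilon \quad\text{for all } n\ge n_{\varepsilon}(p).
% Uniform convergence on P: one n_eps works for every p simultaneously.
\forall\varepsilon>0,\;\exists n_{\varepsilon}:\quad
  \sup_{p\in P}|f_{n}(p)-f(p)|<\varepsilon \quad\text{for all } n\ge n_{\varepsilon}.
```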
C
$$C_{T}=\sum_{t=1}^{T}|x_{t}^{*}-x_{t-1}^{*}|^{2}$$
The regret bounds summarized in Table 1 are consistent with regret bounds of full-gradient based online optimization algorithms proved in the existing literature [29, 11, 7, 23] under similar settings. Our dynamic regret bounds for strongly convex functions proved in Theorems 6 and 7 might need multiple updates at each...
In this work, we have proposed online coordinate descent algorithms to deal with optimization problems that may change over time. Three widely used update rules of coordinate descent are considered. Under different assumptions, we have provided different upper bounds on the regrets of these online algorithms. In par...
By using Proposition 5.1, we can establish regret bounds for Algorithms 2 and 3 using known regret bounds for online gradient descent algorithms. Moreover, if the regret bounds for online gradient descent are sublinear in $T$ and $\sum_{t=1}^{T}\alpha_{t}$...
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in $T$ is proved, where $T$ is the lengt...
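The online gradient descent step described in [41] can be sketched as follows; the quadratic loss stream and step size are illustrative choices of ours:

```python
import numpy as np

def online_gradient_descent(grads, x0, eta):
    """At each time step, take one gradient step using the most recently
    revealed gradient oracle (unconstrained OGD sketch)."""
    xs = [np.asarray(x0, dtype=float)]
    for g in grads:
        xs.append(xs[-1] - eta * g(xs[-1]))
    return xs

# Toy OCO stream: f_t(x) = (x - c_t)^2 with slowly drifting minimizers c_t.
cs = [0.0, 0.1, 0.2, 0.3, 0.4]
grads = [lambda x, c=c: 2 * (x - c) for c in cs]
xs = online_gradient_descent(grads, x0=1.0, eta=0.25)

# Static regret against the best fixed point in hindsight
# (for quadratics, the minimizer of the summed losses is the mean of c_t).
f = lambda x, c: (x - c) ** 2
best_fixed = np.mean(cs)
regret = sum(f(x, c) for x, c in zip(xs[:-1], cs)) - sum(f(best_fixed, c) for c in cs)
```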
A
We tested the two-layer network shown in Fig. 4(g to m) on the N-MNIST [29] and POKERDVS [30] datasets. The sub-sampling factor was 7, which reduces the N-MNIST resolution from 28x28 in layer 1 to 4x4 in layer 2 and the POKERDVS resolution from 35x35 in layer 1 to 5x5 in layer 2. The Time-Surface lateral dimensions for...
We compare the classification accuracy of single-layer HOTS networks built with Ideal Memristor, a simulated Memristor without STP, and two traditional single-decay HOTS architectures. In order to simulate a memristor without STP, we use the ideal model with Eq. (5,6), which causes the memristor response to reset to A...
The computation time for this test was heavily dependent on the number of clusters and the number of files. For this reason, we set the number of clusters to $N^{[1]}=32$ for layer 1 and $N^{[2]}=64$...
Since the implementation of in-situ learning on the memristive chip was beyond the scope of this paper, we limited noise analysis to inference only. Calculation and clustering of time surfaces was performed using the Ideal network model only. The Noisy network and the Ideal network share the same sets of clusters, but ...
Since our implementation of HOTS uses K-means for learning the time surfaces, which requires relatively little data to train, we only use 10% of the training set for the N-MNIST results. Files were randomly selected at each run. However, our testing results are reported based on performance on the entire test set.
D
This theoretical lower bound on the expected convergence time is the first proven non-trivial lower bound for asynchronous opinion updates. A challenging open problem is to improve this lower bound by finding a graph class with $\Phi(S_{0})\in\Omega(n^{2}\varepsilon^{2})$...
Researchers have investigated the convergence to stable states and the corresponding convergence speed in many variants of the Hegselmann-Krause model. The existing work can be categorized along two dimensions: complete or arbitrary social network and synchronous or asynchronous updates of the opinions. Synchronous opi...
Our opinions are not static. On the contrary, opinions are susceptible to dynamic changes, and this is heavily exploited by (social) media, influencers, politicians, and professionals for public relations campaigns and advertising. The way we form our opinions is not a solitary act that simply combines our personal exp...
Moreover, another direction for future work is to consider social networks with directed and possibly weighted edges. This would more closely mimic the structure of real-world neighborhood influences, allowing us to study asymmetric influence settings found in online social networks like Twitter.
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot DeGroot (1974). In this model the o...
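A minimal asynchronous update sketch in the Hegselmann-Krause style, on a complete network with confidence bound eps (our simplification; the paper's model may include intrinsic opinions and arbitrary graphs):

```python
import random

def hk_async_step(opinions, eps, rng):
    """One asynchronous Hegselmann-Krause update on a complete network:
    a uniformly random agent adopts the mean opinion of all agents
    within confidence distance eps (itself included)."""
    i = rng.randrange(len(opinions))
    neigh = [x for x in opinions if abs(x - opinions[i]) <= eps]
    opinions[i] = sum(neigh) / len(neigh)
    return opinions

rng = random.Random(42)
ops = [0.0, 0.05, 0.1, 0.9, 0.95, 1.0]  # two opinion clusters
for _ in range(500):
    hk_async_step(ops, eps=0.2, rng=rng)
# Each cluster converges internally; the clusters stay more than eps apart.
```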
C
To initialize the network, we employed transfer learning by utilizing pre-trained weights from ImageNet. The early layers of the DenseNet, which capture general image features like edges, were left unchanged. We skipped the top layers, which contain more specific image features, and added two additional layers: a Globa...
We utilized the ChestX-ray8 dataset which consists of Chest X-ray images and is commonly used for research in the thoracic disease field. From this dataset, we randomly selected 99,000 images as our training set. Each image in the dataset was annotated with labels that identify 14 distinct pathological conditions. Thes...
We utilized the ChestX-ray8 dataset Wang et al. (2017a) as our primary dataset and randomly selected 99,000 images. Random selection of datasets introduces an element of stochasticity that helps a model learn from different parts of data distribution for robust representation. However, we implemented a patient-level da...
We addressed the data leakage as explained in Section 3.1.1 and created train and test sets. We employed the deep learning architecture explained in Section 3.2 to develop a model that can predict the presence or absence of the 14 pathological conditions based on the input Chest X-ray images. The training process invo...
Most existing studies on disease diagnosis using chest X-rays primarily focus on detecting a single pathology, such as pneumonia or COVID-19 (Bar et al. (2015); Cicero et al. (2017); Rajpurkar et al. (2017); Dasanayaka and Dissanayake (2021); Hussain et al. (2023)). However, an X-ray image can exhibit multiple patholog...
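The multi-pathology setting calls for independent per-label probabilities rather than a single softmax over classes; a minimal sketch with hypothetical logits:

```python
import numpy as np

def multilabel_predict(logits, threshold=0.5):
    """Independent sigmoid per pathology: an X-ray can be positive for
    several of the 14 conditions at once, unlike single-label softmax."""
    probs = 1 / (1 + np.exp(-np.asarray(logits, dtype=float)))
    return (probs >= threshold).astype(int)

# Hypothetical logits for 5 of the 14 conditions.
pred = multilabel_predict([2.0, -1.0, 0.3, -3.0, 1.2])
```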
A
On the contrary, we show that the Bayes optimal algorithm performs sub-optimally with some of the worst model parameters, which implies that maximizing the Bayesian objective differs substantially from maximizing the frequentist objective. A Bayesian measure requires it to optimize the performance averaged over the pr...
Unlike multi-armed bandit algorithms, BAI algorithms are designed solely to deliver the most effective exploration. Although the term “best arm identification” has appeared only recently, several strands of research share the same goal, among which ranking and selection (RS; Bechhofer 1954) is among the best known. S...
Note that this discrepancy between Bayesian and frequentist measures differs considerably from the situation in standard statistical inference with non-adaptive samples. For a fixed-sample problem, the Bernstein–von Mises theorem describes the asymptotic equivalence of Bayesian and frequentist inference. However, in an...
Moreover, Figure 2 shows the fraction of the runs in which arm $3$ receives a very small (5% of $T$) number of samples, demonstrating that the underestimation of arm $3$ consistently occurs for all values of $T$. This supports our finding that the initial underestimation of the best arm persists wi...
On the contrary, we show that the Bayes optimal algorithm performs sub-optimally with some of the worst model parameters, which implies that maximizing the Bayesian objective differs substantially from maximizing the frequentist objective. A Bayesian measure requires it to optimize the performance averaged over the pr...
B
Specifically, since the BVC only considers the current positions of all robots for space partition, all future positions in the planned trajectory are limited to this partition. Thus, it often leads to an overly conservative navigation structure with excessive braking and low efficiency.
In contrast, the heuristic approach of choosing a detour point during deadlock suffers from the livelock problem. Specifically, at time $t=3.0$ s, the robot in the middle chooses a temporary target position and moves away once the deadlock is detected, as all robots are static.
Furthermore, the completion time is evaluated for BVC [32] and the proposed method to illustrate the efficiency of MBVC-WB. As provided in Table III, the proposed method has a significant decrease in transition time especially in a more crowded scenario.
This yields a much smoother and significantly more efficient navigation strategy. This difference is apparent in Fig. 9, where the robots accomplish the navigation task via the proposed method at $t=3.0$ s while it takes $4.0$ s for the traditional BVC method.
This scenario is designed to emphasize that the modified space partition constraint in (III-A) leads to a more accurate separation among the robots and thus a higher utility rate of the workspace. A comparison between the proposed method and the traditional BVC [32] is shown in Fig. 9 for a particular setup.
C
Now, the correct orbit element is identified when $\psi(\hat{x})=e$, since $\psi(x)=g^{-1}\cdot\psi(\hat{x})=g^{-1}$...
Figure 1: a) Schematic of the learning task this work is concerned with. Data points $x\in X$ are encoded to and decoded from latent space $Z$. Points in the same orbit in $X$ are mapped to the same point (orbit) $z\in Z=X/G$...
In this work we proposed a novel unsupervised learning strategy to extract representations from data that are separated into a group-invariant and an equivariant part for any group $G$. We defined the sufficient conditions for the different parts of our proposed framework, namely the encoder, decoder and group func...
Our proposed framework does not constrain the decoding function $\delta$ other than that it has to map elements from the latent space $Z$ to the data domain $X$. Hence, $\delta$ can be designed independently of the group of interest. The main challenge is in defining the group functi...
As discussed in Section 2.2 and visualized in Figure 1b, the main components of our proposed framework are the encoding function $\eta$, the decoding function $\delta$ and the group function $\psi$. As stated in Property 2.1, the only constraint for the encoding function $\eta$ is th...
D
In Bayesian optimisation the GP tuning step needs to be repeated many more times than in standard regression, as new data points are added iteratively. Therefore, using standard MCMC on the full hierarchical model for each new data point would slow down the process. By using importance weighting in the BO tuning step t...
Secondly, the measurements are treated as ground-truth readings. This simplifies the error metric calculations, but also makes irrelevant one of the advantages of probabilistically modelling the data, as this allows that uncertainty to be modelled directly. Another limitation is that the results on the satellite data a...
The results on the London data are given in Figs. 6 and 3. As there are no previously published results for comparison, an additional random baseline is provided – random selection without replacement. As all readings are assumed to be noise-free this leads to more efficient exploration. For the satellite data the diff...
These pitfalls can be avoided using a prior that is informed by relevant data from other contexts. To that end, our model is structured hierarchically as described in Fig. 1, and the prior is inferred from a related tuning set $(\mathcal{D}_{1},\cdots,\mathcal{D}_{N})$...
As expected, the models performed worse on the ground-level data. This is due in part to the data snapshots having fewer available samples. Some exploration is needed before the model can usefully discern the areas of interest, and where the data is sparse compared to the local variations this is harder.
D
RODEO generally achieves the best training ELBOs. One difference we noticed when comparing the variance results with those obtained from single-stochastic-layer VAEs is that these estimators have very different behaviors towards the beginning and the end of training.
Training binary latent VAEs with $K=2,3$ (except for RELAX, which uses $3$ evaluations) on MNIST, Fashion-MNIST, and Omniglot. We report the average ELBO ($\pm 1$ standard error) on the training set after 1M steps over 5 independent runs. Test data bounds are reported in Table 4....
Figure 4(a) explores the impact of the RODEO Stein operator choice on non-binarized MNIST. As expected, the less stable difference (8) and MPF (6) operators lead to significantly higher gradient variances and worse training ELBOs. In fact, the same operators led to divergent training on binarized MNIST.
To investigate the performance of RODEO when scaling to hierarchical discrete latent variable models, we follow DisARM [14, 68] to train VAEs with 2/3/4 stochastic layers, each of which consists of $200$ Bernoulli variables. We set $K=2$ and compare our estimator with DisARM and Double CV on dyna...
Impact of surrogate functions.  To tease apart the benefits of our new surrogate functions (13) and the remaining RODEO components, we replace the surrogate function $c_{\phi}(z)$ in RELAX [20] with our surrogates, which only req...
B
This is a significant challenge for random access schemes that rely on fully random selection of access patterns. Since any user can be active in any slot, the only way to avoid pilot collisions would be to assign a unique orthogonal sequence to each of the $N$ devices.
The improvement compared to a system without SIC (cf. Fig. 3) is significant and allows ultra-reliability to be achieved at much lower SNRs. Particularly important is the fact that Random selection exhibits a performance floor even when the mean traffic intensity is as low as $1$ user/frame. This is a consequence of the s...
The patterns can be also periodically updated. This may happen, for example, when the user population size changes and the resource pool needs to be adjusted; however such updates will occur relatively infrequently compared to the duration of the frame.
In this work we have proposed and investigated the usage of deterministic access patterns to provide ultra-reliable communication for a group of intermittently active users sharing a pool of resources. The patterns, which are a realization of a Steiner system, aim to control the number of collisions and interference a...
In many cases, however, this is not feasible or practical (e.g. with a large population of intermittently active devices similar to the scenario addressed in this work). Instead, a common approach is to provide a pool of $Q<N$ pilot sequences from which users pick one at random every time they bec...
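The pilot-collision risk under fully random selection can be checked with a quick Monte Carlo sketch; the pool size Q, the number of active users K, and the collision event are our illustrative choices:

```python
import random

def collision_prob(Q, K, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a tagged active user's
    randomly chosen pilot (out of Q) collides with at least one of the
    other K-1 active users' independent uniform choices."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        picks = [rng.randrange(Q) for _ in range(K)]
        if picks[1:].count(picks[0]) > 0:
            hits += 1
    return hits / trials

# Analytically, the collision probability is 1 - (1 - 1/Q)^(K-1).
est = collision_prob(Q=16, K=4)
```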
D
The classification of sets in $\mathbb{R}^{N}$ for which the ATSP can be solved was done by Jones [Jon90] (for $\mathbb{R}^{2}$) and by Okikiolu [Oki92] (for higher dimensio...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
The classification of sets in $\mathbb{R}^{N}$ for which the ATSP can be solved was done by Jones [Jon90] (for $\mathbb{R}^{2}$) and by Okikiolu [Oki92] (for higher dimensio...
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves...
We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides' $3/2$ ratio. In fact, in our algorithm, the yielded path has length at most $(300)^{9/2}\log 300$ ...
A
We should emphasize that the generalization from the projective Chow forms to the multiprojective ones is far from straightforward from both the mathematical and the algorithmic complexity points of view. Even though a multiprojective space is isomorphic to a projective variety via the Segre embedding, this requires adding...
In [46], Osserman and Trager gave a generalization of Chow forms to multiprojective varieties, i.e., varieties in the multiprojective space $\mathbb{P}^{\bm{n}}\coloneqq\prod_{i=1}^{l}\mathbb{P}^{n_{i}}$...
As demonstrated in Figure 1, the set of formats $\alpha$ for which $\mathcal{CZ}_{V,\alpha}$ is a hypersurface equals the set of lattice points that lie “below” $\operatorname{supp}(V)$...
For the remainder of the section, the chief example of a polymatroid to us is the support of a multiprojective variety (and its downward closure). The polymatroids of this form are now called Chow polymatroids. (This naming is unfortunate for us because we will later associate a polymatroid to the formats of non-degen...
In addition, we discuss a multihomogeneous generalization of the Hurwitz form. To the best of our knowledge, our paper provides the first result in this area. Contrary to the homogeneous case, multigraded Chow forms and Hurwitz form require a choice of a non-degenerate multidimension vector for the linear subspace, in ...
D
Physiological signals [17]: BVP, GSR, and SKT physiological signals captured during the experimentation by the BioSignalPlux research toolkit are provided in a binary MATLAB® file (.mat). It contains a cell array with 100 rows (one per volunteer) and 14 columns (one per video). Each cell contains four f...
Self-reported annotations [19]: They contain the emotional labeling reported by the participants after watching each of the 14 videos in the experiment. The data are stored in one CSV file that contains 14 columns and 1,400 rows (100 volunteers × 14 clips). Regarding ...
As introduced before, only 8 of the 12 emotions initially selected were included in WEMAC (see the Stimuli Section), although the 12 emotions were considered for the discrete emotion labeling (see the Measures Section). It means that the number of targeted emotions is smaller than the reported ones in th...
Deepspectrum-Resnet50, Deepspectrum-VGG19, and VGGish. They correspond to the six feature sets described in the Data Processing and Cleaning section for Speech Signals. Each folder includes a CSV file per audiovisual stimuli and volunteer. Each CSV has as many columns as the number of features calculated, where an addi...
The libraries recommended for further processing of the WEMAC dataset are the ones we found most useful for data cleaning and filtering of physiological and speech signals. On the one hand, Matlab® was employed for the physiological data processing using the TEAP toolbox (https://github.com/Gijom/TEAP). On the other...
C
Problem space attacks can be further classified into three forms. Camouflage attacks use obfuscation or encryption to conceal the app information (Demontis et al., 2017). The second type of attack adds noise to the applications (Pierazzi et al., 2020; Cara et al., 2020). Noise in malware refers to additiona...
Chen et al. (Chen et al., 2019a) propose Android HIV for adding perturbations to AndroidManifest.xml and classes.dex files. In order to attack the MaMaDroid classifier (Onwuzurike et al., 2019), they extract features from the classes.dex file as a Control Flow Graph (CFG). These features are represented as transition pro...
Malware Detection: Malware detection can either use signature-based methods or machine learning methods. Signature-based methods rely on a repository of known malware signatures, which are used to test whether a new application is malicious or not (Moser et al., 2007). It is not an effective method since there is a ne...
MaMaDroid (Onwuzurike et al., 2019) extracts the Control Flow Graph (CFG) of an Android application to use as features for classification. It extracts API calls from applications using static analysis and abstracts the calls to class, package, and family. The classification system analyzes the sequence of API calls and...
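The abstraction-then-Markov-chain idea behind MaMaDroid's features can be sketched as follows; the `abstract` mapping and the call sequence below are illustrative stand-ins, not the classifier's actual implementation.

```python
from collections import defaultdict

def transition_probabilities(call_sequence, abstract):
    """Abstract each API call (e.g., to its package or family) and build a
    Markov chain of transition probabilities between abstracted states."""
    states = [abstract(c) for c in call_sequence]
    counts = defaultdict(lambda: defaultdict(int))
    for src, dst in zip(states, states[1:]):
        counts[src][dst] += 1
    probs = {}
    for src, dsts in counts.items():
        total = sum(dsts.values())
        probs[src] = {dst: n / total for dst, n in dsts.items()}
    return probs

# Illustrative only: abstract a call to its top-level package segment.
calls = ["java.io.File.delete", "java.io.File.open",
         "android.net.Uri.parse", "java.io.File.delete"]
abstract = lambda call: call.split(".")[0]
probs = transition_probabilities(calls, abstract)
```

The resulting per-state probability rows are what would be flattened into a feature vector for classification.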
While ML-based malware classifiers identify inputs more accurately than even a cybersecurity expert would, they perform poorly on subtly yet maliciously crafted data. Malware classifiers are vulnerable to such perturbations, known as adversarial samples (Szegedy et al., [n. d.])...
A
A goal is a pair $(\sigma^{\ast},D)$ where $\sigma^{\ast}\in\prod_{i}\Sigma_{i}$ ...
In our opening example, the target set is the set of realizations with long-run frequency 1/2 of Heads, and in the second example the target set is the set of all realizations where the induced random walk crosses the origin infinitely often.
A goal is a pair $(\sigma^{\ast},D)$ where $\sigma^{\ast}\in\prod_{i}\Sigma_{i}$ ...
The strategy profile $\sigma^{*}$ is a prescribed way for the players to play. The target set $D$ is a set of realizations that they are supposed to reach if they follow their prescribed strategy.
We are interested in cases in which the probability $\mathbf{P}_{\sigma^{*}}(D)$ that the prescribed strategy profile attains the target set is 1 or close to 1.
C
Open-source hardware implementation references. Since the reproduction cost and effort for hardware implementations are generally higher than software-only ones, it is actually more desired for researchers to release as many details about their hardware design as possible. This often means the circuit diagram, printed ...
GPS spoofing refers to sending fake satellite signals to the GPS receiver, causing it to resolve positions that are specified by the attacker. Prior works leverage GPS spoofing to attack MSF localization [92], LiDAR object detection [86], and traffic light detection [80].
Open-source attack modeling code. Another interesting trend in recent sensor attack works is that they often model the attack capability in the digital space for large-scale evaluation. For example, Man et al. [58] modeled the camera lens flare effect caused by attacker’s light beams in digital images, Ji et al. [65] ...
Before this paper, a few AD security-related surveys have been published [22, 23, 20, 21], but none of them focus on the emerging semantic AD AI research space (i.e., our scope defined in §II-C). For example, Kim et al. [20] and Ren et al. [21] focus on AD-related sensor/hardware security and in-vehicle network securi...
Adversarial robustness improvement. Several defenses try to improve the robustness of the AI component against attacks. For example, Chen et al. [105] applied adversarial training [13] to make the camera object detection model more robust. Jia et al. [106] improved the model robustness by predicting and removing poten...
B
A link chain is a sequence of link intervals with a join gadget between every two consecutive link intervals, where one link interval terminates at the gadget and the other one starts from it. The link intervals of all other link chains that intersect this join gadget cover it. The join gadgets ensure that, in a maxim...
A link chain is a sequence of link intervals with a join gadget between every two consecutive link intervals, where one link interval terminates at the gadget and the other one starts from it. The link intervals of all other link chains that intersect this join gadget cover it. The join gadgets ensure that, in a maxim...
The link chains do not use long intervals of two separate lengths anymore. The issue of non-consecutive link chains partially intersecting the same edge gadget is resolved by using a switch gadget that switches the relative positions of join gadgets of link chains.
If we straightforwardly connect link chains $\mathcal{L}^{1}_{13},\mathcal{L}^{3}_{13}$ ...
Every join gadget has at most 3 long intervals terminating at it and the same number of long intervals starting from it. The main purpose of a join gadget is to connect a vertex gadget to edge gadgets by link chains of long intervals. The distance between blocks may be modified in order to readjust the relative distanc...
B
The only end-to-end anticipation model for natural video of which we are aware (AVT, [26]) does not use BatchNorm as it is entirely Transformer-based. Although not mentioned as motivation for this model choice, BN would likely have caused both train-test discrepancy and “cheating” since single-sequence batches are used...
We analyze limitations and impact of BatchNorm for end-to-end learning on two surgical tasks (Fig. 1). In surgical phase recognition, an online temporal segmentation task, we show how BatchNorm can affect training strategies and performance. In particular, we show that supposedly outdated CNN-LSTMs can be very effective w...
Our premise is that end-to-end learning is preferable over multi-stage approaches and thus we propose a simple, intuitive strategy for training CNN-LSTM models on online surgical workflow tasks. We use the LSTM to demonstrate that even simple temporal aggregation can be effective when visual features are improved throu...
We provide a comprehensive and detailed analysis of how BatchNorm affects end-to-end surgical workflow analysis. We show the advantage of end-to-end over 2-stage learning (Hypothesis 1), longer training sequences (H.2) and carrying hidden states across batches in online tasks (H.3) and how this can fail using BN-based ...
We investigate the limitations of BatchNorm for end-to-end video learning, which are especially relevant in surgical workflow tasks where CNN backbones require finetuning. In a detailed literature review, we reveal how research has circumvented these problems by moving towards complex, multi-stage approaches. We identi...
A
Effect of Noise-negative Pair Filtering. To approximate WDFS, $\mathcal{N}^{ml}$ was assumed to include extremely hard negative pairs because it can produce similarity scores similar to $\sup\mathcal{S}^{n}$ ...
This paper is based on two insights. First, from a unified perspective, CL and ML have the same purpose of approaching WDFS, except for PG. Second, CL and ML show a mismatch between two similarity distributions of sampled pairs and all negative pairs. Based on these insights, we developed UNPG by combining two PG strat...
In deep feature learning paradigms for pair similarity optimization, loss functions in FR can be categorized based on two approaches: metric loss (ML; e.g., triplet loss[23, 8] and N-pair loss[26]) and classification loss (CL; e.g., softmax loss[1, 21, 30]). The former directly performs the optimization with a pair of...
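The two loss families can be illustrated with minimal numpy stand-ins; the margin value and the toy embeddings below are arbitrary choices for the sketch, not values from the cited works.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Metric-loss example: pull the anchor toward the positive embedding
    and push it away from the negative, up to a margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

def softmax_ce(logits, label):
    """Classification-loss example: cross-entropy over class logits."""
    logits = logits - logits.max()          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])

a = np.array([1.0, 0.0])    # anchor
p_ = np.array([0.9, 0.1])   # positive (same identity)
n = np.array([0.0, 1.0])    # negative (different identity)
loss = triplet_loss(a, p_, n)
```

The metric loss operates directly on sampled pairs, while the classification loss compares an embedding against all class weights, which is the root of the pair-distribution mismatch discussed above.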
Effect of Noise-negative Pair Filtering. To approximate WDFS, $\mathcal{N}^{ml}$ was assumed to include extremely hard negative pairs because it can produce similarity scores similar to $\sup\mathcal{S}^{n}$ ...
This paper proposes unified negative pair generation (UNPG) by combining two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. Moreover, it includes filtering noise-negative pairs, such as too-easy/hard negative pairs, in order to guarantee reliable convergence and improve perfo...
A
These images were then resampled to $390\times 390$ pixels, followed by a cropping procedure keeping the center of the image at $256\times 256$ pixels. Such a procedure is required to drive StyleGAN2-ADA to generate images focused on the macula area (Figure 2-c). Ultimately, image...
The final round was aimed at comparing the best deep model obtained in the previous phase, ResNet-18, against human experts concerning the task of AMD identification. In this step, twenty real images (ten AMD images and ten non-AMD images) were randomly selected from the test set and provided to human experts for clas...
Figure 4 shows a comparison between (a) real images from the training dataset and (b) synthetic images produced by StyleGAN2-ADA. The trained model yields realistic-looking images for both, with and without AMD, conditioned by sampling from latent representations. Visual inspection shows that the generated images are s...
iChallenge-AMD: Comprises 1,200 retinal fundus images that have been annotated for drusen and hemorrhage. The training set was made of 400 images (89 images of eyes with AMD and 311 from eyes without AMD), while the test set contained the remaining images [14].
After quality assessment and selecting the images for the final dataset using the criterion described earlier, the resulting single dataset comprised a total of 7,106 images. Of these, 6,896 images were used to train the models (275 with AMD and 6,621 without AMD) ...
D
In the local attention model, its input is the output of the last pooling layer in Simple-Net, and its output is one value between 0 and 1 obtained via the sigmoid function, regarded as the local attention weight $w_{i}^{l}$ ...
In LNLAttenNet, the local and the non-local information of facial expressions is considered simultaneously to construct two parts of the network: a local multi-network ensemble and a non-local attention network; the generated local and non-local feature vectors are then integrated and jointly optimize...
In the proposed method, the local attention is designed to deal with the problem that local regions are missing or obscured. In this part, the visualization of local attention will be shown to validate the robustness of the proposed method for faces with missing regions, experimented on the RAF-DB database. Note that the si...
If a facial local region is obscured or missing, the information that it contains for expression recognition will be reduced, and then the weight value of the local attention is also reduced to alleviate the effect of patches including the obscured region. Furthermore, the weights will be multiplied by the correspondi...
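The weighting scheme described here can be sketched with a toy linear-plus-sigmoid gate over pooled patch features; the layer shapes and random parameters below are illustrative assumptions, not LNLAttenNet's actual configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_attention(pooled_feature, w, b):
    """Map a patch's pooled feature to a scalar weight in (0, 1) via a
    linear layer + sigmoid; an occluded patch should receive a low weight."""
    return sigmoid(float(pooled_feature @ w + b))

def weighted_patches(patch_features, w, b):
    """Scale each patch's feature vector by its local attention weight."""
    weights = np.array([local_attention(f, w, b) for f in patch_features])
    return weights, patch_features * weights[:, None]

rng = np.random.default_rng(0)
patches = rng.normal(size=(4, 8))    # 4 local patches, 8-dim pooled features
w, b = rng.normal(size=8), 0.0
weights, scaled = weighted_patches(patches, w, b)
```

Down-weighted patches contribute less to the fused feature vector, which is the intended robustness mechanism for occluded regions.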
From Fig. 11, it can be seen that the non-local weight of each local patch is the same at the beginning of training, which implies that each local region is initially regarded as equally important. As the network trains, each local region is given a different weight, and higher weights are given to some more discr...
C
The Elo Rating system was proposed by Arpad Elo in the mid 20th century to estimate the relative skill of chess players [2]. It was quickly adopted by the international chess community, and in the decades since has seen adoption in many competitive contexts. This paper considers a simple combinatorial question about th...
There are additional complications in real-world implementations of Elo. For legibility and practicality, fractional and negative rating points are avoided by scaling and shifting points up and rounding to the nearest integer, and by imposing an artificial floor on possible ratings (by gifting a player points if they w...
This strategy is guaranteed to produce a player of either very high or very low rating. If it produces a player of very low rating, simply re-do the strategy picking the same sequence of pairs of players but have the opposite player win. Since game outcomes are symmetric, this will produce a player of high rating inste...
When players $A$ and $B$ play, the numbers of points they ante up are $K\cdot\sigma(r_{A}-r_{B})$ and $K\cdot\sigma(r_{B}-r_{A})$ ...
Each player is given some ‘rating’ value (measured in ‘points’ or simply ‘Elo’), which updates as they play games. These rating points are somewhat analogous to poker chips: when player $A$ and player $B$ play a game, they each place some of their rating points into a pot. In the case of a draw, the p...
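Under this ante interpretation, a rating exchange with the standard logistic σ can be sketched as follows; the K = 32 factor and the 400-point scale are conventional choices for the sketch, not values fixed by the paper.

```python
def expected_score(r_a, r_b, scale=400.0):
    """Logistic expected score of A against B (the usual Elo sigma)."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / scale))

def elo_update(r_a, r_b, score_a, k=32.0):
    """Ante/pot view: each player risks K * sigma(self - opponent) points;
    the winner collects the pot and a draw splits it. Equivalent to the
    standard update r += K * (score - expected)."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

ra, rb = elo_update(1500.0, 1500.0, 1.0)   # equal players, A wins
```

Note that the two updates cancel, so rating points are conserved across a game, exactly as chips are in a pot.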
D
In our first application, we observe that the maximum SDC value is 94.80% (high correlation, Figure 3(b)), suggesting that the projection is most probably trustworthy. Another reassurance stems from the visual inspection of points in the middle of two classes that appear clearly confused, with most rare and borderline instance...
After selecting the projection (which results in a specific distribution of data types), we decide to apply the OSS undersampling algorithm. Nevertheless, the default settings cause the van class to disappear completely, thus the predictive performance gets extremely penalized (see step 1 in Figure 1(h), right). We pic...
Figure 3: At first, a comparison of different data types projections and then two consecutive undersampling phases with the NCR algorithm are shown in this arrangement of screenshots. The default value for the number of neighbors is 5 (see (a)), which is used as input for computing the type of each instance with KNN. ...
The undersampling phase is perhaps most crucial since removing unsafe instances without justifying one’s action could cause a severe issue to the ML model. We choose to activate the de-facto NCR algorithm without any tweaks to check the suggestions (Figure 3(c) and (d)). The distribution of instances changes according...
To understand if a new round of undersampling would be beneficial, we activate the OSS algorithm again with the same settings (step 7). However, the outcome decreases the relatively safe population so much that the result becomes worse. Therefore, we disable the algorithm and stop the undersampling phase...
C
Intel SGX: Intel’s Software Guard Extensions (SGX) is a feature of Intel architecture that aims to protect the integrity and confidentiality of a program and its code through Trusted Execution Environment (TEE) technology. Our initial design anticipated adopting Intel SGX to protect the confidentiality of the transacti...
Compatibility with Automated Market Makers: In addition to the previously identified applications, FIRST can also be effectively integrated with AMMs. When an AMM adopts FIRST to prevent frontrunning on a certain pool, all transactions that interact with the pool will experience a uniform VDF delay, which will ensure ...
Transaction ordering: An alternative solution to prevent frontrunning could be enforcing an ordering on transactions using timestamps. However, challenges such as synchronization, delay on the peer network, the limited resilience of a centralized timestamp server, and dependence on off-chain services for timestamps make...
We propose a general-purpose solution to the frontrunning problem using cryptographic protocols such as verifiable delay functions (VDFs) and aggregate signatures [17, 19] whose outputs are publicly verifiable. Slowswap [11] utilizes VDFs to introduce delays for transactions related to AMMs only. However, the current i...
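A VDF's core sequential-delay property can be illustrated with iterated squaring in an RSA-style group; this toy omits the succinct proof that makes real VDFs (e.g., Wesolowski's) fast to verify, and the tiny modulus below is purely illustrative.

```python
def vdf_eval(x, T, N):
    """Toy VDF evaluation: T sequential squarings modulo N, i.e.
    y = x^(2^T) mod N. The squarings cannot be parallelized, which is
    what enforces the delay."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def vdf_verify_naive(x, T, N, y):
    """Naive verification by recomputation (for the sketch only; real
    VDFs ship a short proof that is much cheaper to check)."""
    return vdf_eval(x, T, N) == y

N = 3233              # tiny modulus (61 * 53), illustration only
y = vdf_eval(5, 10, N)
```

In a deployment the delay parameter T would be tuned so that evaluation takes roughly the desired ordering window.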
Frontrunning prevention strategies: Research in frontrunning prevention falls into three broad categories: (a) solutions that require direct interaction with miners to include the transaction $tx_{A}$ in the upcoming block (private relayer...
B
We consider college admissions as a running example for capacity-constrained treatment assignment in the presence of strategic behavior. The agents are a population of students who are heterogeneous in their baseline test scores and grades and in their ability to invest effort to change them. The decision maker is a co...
This example aims to capture the phenomenon that college admissions has become increasingly competitive since the 1980s; Bound et al. (2009) demonstrates that between 1986-2003, the 75th percentile math SAT score of accepted students at the top 20 public universities, top 20 private colleges, and top 20 liberal arts co...
The decision maker runs the algorithm for $J$ epochs. In Section 2, we describe that it may be infeasible for the decision maker to update the selection criterion at each time step. This algorithm requires the decision maker to deploy an updated selection criterion at each epoch $j$. In other words, ...
In this work, we study the problem of capacity-constrained treatment assignment in the presence of strategic behavior. We frame the problem in a dynamic setting, where the decision maker assigns treatments at each time step. Suppose a decision maker deploys a fixed selection criterion for all time. At time step $t$ ...
We consider college admissions as a running example for capacity-constrained treatment assignment in the presence of strategic behavior. The agents are a population of students who are heterogeneous in their baseline test scores and grades and in their ability to invest effort to change them. The decision maker is a co...
A
Dataset Bias and Bias Mitigation. Deep networks trained with empirical risk minimization (ERM) tend to exploit training set biases resulting in poor test generalization [23, 45, 62, 70]. Existing works for mitigating this problem have focused on these approaches: 1) focusing on rare data patterns through re-sampling [...
We tested OccamNets implemented with CNNs; however, they may be beneficial to other architectures as well. The ability to exit dynamically could be used with transformers, graph neural networks, and feed-forward networks more generally. There is some evidence already for this on natural language inference tasks, where...
Analysis of Early Exits. OccamResNet has four exits, the first exit is used for bias amplification and the rest are used to potentially exit early during inference. To analyze the usage of the earlier exits, we plot the percentage of samples that exited from each exit in Fig. 4. For Biased MNISTv2 dataset, a large por...
In general, we find that earlier exits are triggered more often for in-distribution (easier) test samples as compared to shifted distribution (more difficult) test samples. As shown in Table A10, for BiasedMNIST, when $p_{bias}$ ...
Early Exit Networks. OccamNet is a multi-exit architecture designed to encourage later layers to focus on samples that earlier layers find difficult. Multi-exit networks have been studied in past work to speed up average inference time by minimizing the amount of compute needed for individual examples [10, 31, 66, 74],...
D
In this paper, we propose a Coarse-to-Fine Feature Mining (CFFM) technique to learn local temporal contexts, which consists of two parts: Coarse-to-Fine Feature Assembling (CFFA) and Cross-frame Feature Mining (CFM). Specifically, we first apply an efficient deep network [52] to extract features from each frame. Then,...
Influence of the number of global temporal contextual prototypes. In our experiments, we set the number ($N_{p}$) of contextual prototypes as 100 when extracting global temporal information. Here, we study the influence of this parameter. The results...
For global temporal contexts, few VSS methods [17, 53] have exploited the contexts from the whole video. The modeling of global temporal contexts is usually achieved by a memory module in the form of a memory bank [17] or a tiny network [53] which is updated during inference. Although promising results have been achiev...
The video contexts contain local temporal contexts which represent the contextual information from neighbouring/nearby frames and global temporal contexts which indicate the contexts from the whole video. This paper first studies local temporal contexts which can be further divided into static contexts and motional con...
Figure 3: Overview of the proposed CFFM++ for additionally mining global temporal contexts. Due to the large number of frames in the video, we uniformly sample frames by a fixed step. The sampled video frames go through the encoder trained by CFFM and corresponding features are generated. After tokenizing the feature m...
B
Figure 10: The Hotline system features an accelerator situated between the CPU and GPU(s), responsible for accessing the main memory to retrieve training inputs and embeddings, which are then efficiently relayed to the GPU(s). In multi-node distributed training, each node utilizes its own Hotline accelerator to overse...
In a single node GPU-only system, transferring embeddings across all devices requires all-to-all collectives. For instance, in a 4-GPU system, we observed that this step consumes nearly 12% of the total training time even after employing the fast NVLink interconnect. As the number of nodes increases, the communication ...
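The all-to-all exchange can be pictured as a transpose of per-device send buffers; the sketch below simulates the collective in plain Python rather than invoking an actual communication library such as NCCL, and the device count and slice contents are illustrative.

```python
import numpy as np

def all_to_all(chunks):
    """Simulate an all-to-all collective: chunks[i][j] is the buffer that
    device i sends to device j; the result transposes the ownership, so
    device j receives one buffer from every device."""
    n = len(chunks)
    return [[chunks[src][dst] for src in range(n)] for dst in range(n)]

n_dev, dim = 4, 2
# Device i computed an embedding slice destined for every device j;
# the value 10*i + j just encodes (producer, consumer) for inspection.
send = [[np.full(dim, fill_value=10 * i + j) for j in range(n_dev)]
        for i in range(n_dev)]
recv = all_to_all(send)
```

With real embeddings, every device would end this step holding exactly the rows its local mini-batch needs, which is why the exchange volume grows with the number of participating devices.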
The Hotline accelerator relies on a driver to communicate with the CPU’s main memory and GPU devices. The driver interacts with the DMA engine to access not-frequently-accessed embeddings on CPU main memory and GPU devices via a PCIe link. It uses instructions, as listed in Table I, to read/write the necessary data in...
The system comprises an InfiniBand-based multi-core server. Each node has multiple GPU devices with inter-GPU communication achieved via NVLink [18]. The Hotline system places the accelerator on the low profile PCIe slot that GPUs do not use, enabling it to access the DMA engine through the PCIe switch and communicate ...
Table III provides information about the server used for the experiments. The server employs a 24-core Intel Xeon Silver 4116 (2.1 GHz) processor based on Skylake architecture and is equipped with 4 NVIDIA Tesla-V100 GPUs. The communication between the GPUs, Hotline accelerator, and the rest of the system is facilitate...
C
Let $F:|M|\rightarrow\mathbb{R}$ be a map that is linear on cells of a finite polyhedral complex $M$, let $a\in\mathbb{R}$ be a nontransversal threshold, and $K$ a connected component of the intersection of its corresponding ...
It is immediate that any face of a flat cell will also be flat, so the set of flat cells relative to a map $F:|M|\rightarrow\mathbb{R}$ forms a subcomplex of $M$, which we will denote $M_{\mathrm{flat}}(F)$.
The flat cells of $F$ are, by definition, the cells of $\mathcal{C}(F)$ on which $F$ is constant. Flat cells in the PL category should be viewed as the appropriate analogues of critical points in the smooth category, with the caveat that not every flat cell is critica...
In [7], the authors define the notion of H-criticality and H-regularity only for flat 00–cells and not for (connected unions of) flat higher-dimensional cells. Since the functions of interest to us have a non-zero probability of having higher-dimensional flat cells, we extend the definition.
If there is an $n$–cell of $\mathcal{C}(F)$ with ternary labeling $(-1,\ldots,-1)$, Lemma 3.12 tells us that $F$ is not PL Morse. If there is no $n$–cell with ternary labeling $(-1,\ldots,-1)$, then the o...
C
One of the most challenging problems in modern physics is the so-called many-body problem. In its quantum version – quantum many-body physics, the exponential complexity of the states in the Hilbert space makes the strongly correlated systems difficult to deal with [1]. Only limited analytical solutions are amenable to...
The states of a quantum system can be characterized by wave functions. One can approximate the wave functions with neural-network quantum states (NQS) as $\Psi_{NN}(s,\mathcal{W})$, where $s=(s_{1},s_{2},\ldots$ ...
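A common concrete choice of NQS is the restricted Boltzmann machine ansatz of Carleo and Troyer; the sketch below evaluates its unnormalized amplitude with small random real-valued parameters purely for illustration (practical NQS typically use complex weights and many more hidden units).

```python
import numpy as np

def rbm_amplitude(s, a, b, W):
    """RBM neural-network quantum state: unnormalized amplitude
    Psi(s; W) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i)
    for spins s_i in {-1, +1}."""
    theta = b + s @ W          # hidden-unit pre-activations
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(1)
n_spins, n_hidden = 4, 8
a = rng.normal(scale=0.1, size=n_spins)        # visible biases
b = rng.normal(scale=0.1, size=n_hidden)       # hidden biases
W = rng.normal(scale=0.1, size=(n_spins, n_hidden))
s = np.array([1, -1, 1, 1])                    # one spin configuration
psi = rbm_amplitude(s, a, b, W)
```

Optimizing the parameters $\mathcal{W}=(a,b,W)$ by variational Monte Carlo is what makes the exponentially large Hilbert space tractable in practice.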
In [15], the machine learning methods were applied only to unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and exhibits richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic...
Fortunately, neural network methods are more universal. The same neural network can be used to represent the states or to study the dynamical processes of various systems, such as those with different dimensions or with different interactions.
Recent state-of-the-art neural networks have been shown to provide highly efficient representations of such complex states, making the overwhelming complexity computationally tractable [6, 7]. Beyond their success in industrial applications, such as image and speech recognition [8], autonomous driving,...
C
The original PINN approach trains the NN model to predict the entire space-time solution at once. In complex cases, this can be more difficult to learn. A seq2seq strategy was proposed in [16], where the PINN learns to predict the solution at each time step instead of at all times. Note that the only data available for the first seq...
The original PINN approach trains the NN model to predict the entire space-time solution at once. In complex cases, this can be more difficult to learn. A seq2seq strategy was proposed in [16], where the PINN learns to predict the solution at each time step instead of at all times. Note that the only data available for the first seq...
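The sequence-by-sequence handoff can be sketched independently of the network itself: below, a plain explicit Euler march on a toy ODE stands in for "training a PINN on one sequence", showing only how each sequence's final state seeds the next sequence's initial condition. The solver, the ODE, and the step counts are illustrative assumptions.

```python
import numpy as np

def solve_sequence(u0, t0, t1, rhs, n_steps=100):
    """Stand-in for training on one time sequence: an explicit Euler
    march from t0 to t1 (a real seq2seq PINN would train a network on
    this window instead)."""
    dt = (t1 - t0) / n_steps
    u = u0
    for _ in range(n_steps):
        u = u + dt * rhs(u)
    return u

def seq2seq_solve(u0, t_end, n_seq, rhs):
    """Seq2seq handoff: the solution at the end of each sequence becomes
    the initial condition of the next one."""
    edges = np.linspace(0.0, t_end, n_seq + 1)
    u = u0
    for t0, t1 in zip(edges[:-1], edges[1:]):
        u = solve_sequence(u, t0, t1, rhs)
    return u

# Toy problem du/dt = -u with u(0) = 1, so u(1) should approach exp(-1).
u_final = seq2seq_solve(1.0, 1.0, n_seq=10, rhs=lambda u: -u)
```

Only the initial condition of the first sequence comes from data; every later sequence inherits its initial condition from the previous one, which is the point of the strategy.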
We report the numerical error analysis obtained by solving the Navier-Stokes equations with $Re=10^{4}$ for the model problem. The interval of the mesh $h=1/16$ ...
Figure 2: Seq2seq PINN. The blue line is the initial condition for each sequence. The red line is the boundary condition for each sequence. The domain will be uniformly sectioned. When training of the first sequence has finished, the solution at $t=0.01$ will be calculated and used as the initial conditio...
where $N_{b}$ is the number of sampling points for the boundary condition. Next we consider the loss function for the momentum equation as well as the loss function for the divergence-free condition for $\bm{u}$, respectively as follows:
C
As a result, looking at the distribution of uncertainty values, one should be able to identify an estimation of $u$ by finding the sharp slope in the distribution of uncertainty values. For example, Figures 6 and 7 highlight our experiment results for two different settings for regression and classification. I...
Our preprocessing time consists of two parts. The first is the time to build the $k$-NN data structure, identifying the $k$-vicinity radius (in Algorithm 1) and computing uncertainty (in Algorithm 2) for each tuple in the data set. The second is the time to construct the sorted multi-sets of $k$...
Following this intuition, we calculate the $k$-vicinity uncertainty for each tuple in $\mathcal{D}$, and create the reverse cumulative distribution $\mathsf{V}:[0,1]\rightarrow\mathbb{R}$ such that,
for every value $r$, the ratio of tuples in $\mathcal{D}$ with an uncertainty value larger than $\mathsf{V}(r)$ is $r$. For example, $\mathsf{V}(0.1)$ returns the value $u_{0.1}$ ...
During the preprocessing time, we first construct the $k$-NN data structure. Next, for every tuple in $\mathcal{D}$, we identify its $k$-vicinity radius and add it to the list $\Gamma_{\mathcal{D}}$.
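The reverse cumulative distribution $\mathsf{V}$ can be approximated by sorting the per-tuple uncertainty values in descending order; the index arithmetic below is a simplified stand-in for the paper's construction, and the sample uncertainties are made up for illustration.

```python
import numpy as np

def build_v(uncertainties):
    """Sort uncertainty values in descending order so that position
    r * n holds the value exceeded by (roughly) a ratio r of tuples."""
    return np.sort(np.asarray(uncertainties))[::-1]

def V(sorted_desc, r):
    """Reverse cumulative distribution: the uncertainty value such that a
    ratio r of tuples in the data set have a larger uncertainty."""
    n = len(sorted_desc)
    idx = min(int(r * n), n - 1)
    return sorted_desc[idx]

u = [0.9, 0.1, 0.5, 0.7, 0.3, 0.2, 0.8, 0.4, 0.6, 1.0]
v = build_v(u)        # descending: 1.0, 0.9, 0.8, ...
top10 = V(v, 0.1)     # value exceeded by ~10% of the tuples
```

Scanning this sorted list for a sharp drop is then a direct way to locate the slope mentioned above.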
B