context
stringlengths
250
3.86k
A
stringlengths
250
5.11k
B
stringlengths
250
3.39k
C
stringlengths
250
8.2k
D
stringlengths
250
5.02k
label
stringclasses
4 values
To alleviate this limitation, our discriminators share the parameters of all layers in the feature extractor while branching the last layer only. This implementation is also sensible since the discriminators partially have the same objective to distinguish fake examples.
Since the abilities of individual discriminators are severely off-balanced, they are highly prone to assign all samples to a few specific models. Especially at an early phase of training, the model’s capability is more sensitive to the number of updates in the discriminators.
The common representations of all real samples are prone to be learned in the earlier layers despite being clustered in different subsets, whereas the critical information for high-level understanding is often encoded in deeper layers. Moreover, the number of model parameters and training time are saved significantly w...
Figure 1: Snapshots of 256 random samples drawn from the generators of the baseline and MCL-GAN with (left) the standard GAN loss and (right) the Hinge loss after 1K, 5K, 10K, 20K and 50K steps. Data sampled from the true distribution are in orange while the generated ones are in green.
To alleviate this limitation, our discriminators share the parameters of all layers in the feature extractor while branching the last layer only. This implementation is also sensible since the discriminators partially have the same objective to distinguish fake examples.
B
Notation: The single boldface letters are used to represent vectors or matrices and single plain capital letters denote integers. Given a vector 𝒙𝒙\boldsymbol{x}bold_italic_x, xisubscript𝑥𝑖x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT indicates its i𝑖iitalic_ith component, ‖𝒙‖norm𝒙\left\|\boldsymb...
The semantic communication system for speech recognition aims to transmit and recover the information-related semantic features. In this section, we introduce the details of the considered system model and the adopted performance metrics are presented.
In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and mi...
Regarding the semantic commutations for speech information, our previous work developed an attention mechanism-based semantic communication system to restore the source message, i.e., reconstruct the speech signals[18]. However, in this paper, we consider an intelligent task at the receiver to recover the text informat...
The rest of this article is structured as follows. Section II introduces the model of semantic communication system for speech recognition and performance metrics. In Section III, the details of the proposed DeepSC-SR is presented. Simulation results are discussed in Section IV and Section V draws conclusions.
A
With the development of 3D sensors, and point cloud data are playing important roles in multimedia applications. Semantic point cloud segmentation is vital for 3D scene understanding, which provides fundamental information for further applications like augmented reality and mobile robots.
The existing 3D WSSS methods formulate the problem in different directions. [10] utilize dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. However, each 3D sample is projected to 2D in several views and each projected 2D image needs pixel-lev...
Recent developments in deep learning-based point cloud analysis methods have made considerable progress in 3D semantic segmentation[1, 2, 3, 4]. However, 3D semantic segmentation requires point-level labels, which are much more expensive and time-consuming than the labels of 3D classification and detection tasks.
Existing 3D WSSS methods utilize different kinds of weak supervisions.[10] utilize dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. [11] proposes to generate pseudo point-level label using 3D class activation map[12] from subcloud-level anno...
There are three categories for 3D semantic segmentation methods: projection-based methods, voxel-based methods and point-based methods. Multi-view projection-based methods[20, 21, 22] project the 3D data into 2D from multiple viewpoints, therefore they can easily process the projected data on 2D convolution networks. H...
B
Setup. The KITTI dataset [11] provides widely used benchmarks for various visual tasks in the autonomous driving, including 2D Object detection, Average Orientation Similarity (AOS), Bird’s Eye View (BEV), and 3D Object Detection. The official data set contains 7481 training and 7518 test images with 2D and 3D bounding...
Extensive experiments conducted on the challenging KITTI [11] dataset clearly demonstrate the effectiveness of the proposed approach and show that our method achieves 13.81% in terms of the AP40subscriptAP40\rm{AP}_{40}roman_AP start_POSTSUBSCRIPT 40 end_POSTSUBSCRIPT metric, which is 2.80% absolute AP40subscriptAP40\r...
Table 2: Monocular 2D object detection results on the KITTI test set for the All categories with the evaluation metric of AP40subscriptAP40\rm{AP}_{40}roman_AP start_POSTSUBSCRIPT 40 end_POSTSUBSCRIPT. The metric AP40subscriptAP40\rm AP_{40}roman_AP start_POSTSUBSCRIPT 40 end_POSTSUBSCRIPT is used for detection evaluat...
Table 3: Monocular 3D object detection results on the KITTI val set for the car category with the evaluation metric of AP40subscriptAP40\rm{AP}_{40}roman_AP start_POSTSUBSCRIPT 40 end_POSTSUBSCRIPT. The results of the previous works are from [9]. Our approach significantly outperforms the previous state-of-the-arts on...
Table 1: Monocular 3D object detection results on the KITTI test set for the Pedestrian and Cyclist categories with the evaluation metric of AP40subscriptAP40\rm{AP}_{40}roman_AP start_POSTSUBSCRIPT 40 end_POSTSUBSCRIPT. The IoU threshold is set to 0.5. The bold black/blue color indicates the best/the second best perf...
C
During the training stage, the training of the TCL map, GGTR map, geometric features, link prediction and node classification are guided by the loss function designed in Sect. III-D. During the inference stage (see Sect. III-E for details), the GGTR map that contains visual-relational features is used to rectify the TC...
Figure 3: \addThe overall structure of our network. The “1/2,64”, “1/4,128”,… and “1/16,512” indicate the scale ratio and the channel number of the input image. In the training flow, the TCL map, GGTR map, geometric features, link prediction and node classification are guided by the loss function. In testing flow, the...
After feature aggregation, the relational reasoning results can be used to combine text segments that have a strong relationship. The weakly supervised node classification results are used to refine the detection results inherited from the previous FPN layer.
The weakly supervised node classification results are used to further rectify the false positive/negative text segments. Finally, a novel text-instance-based contour inference module is used to approximate the contour of the rectified text segments to obtain the final results.
The Graph Guided Text Region map and the text segment classification results from the GCN are then used to rectify the TCL map to remove false detection. The relationship prediction and the dense overlapping text segments together ensure the completeness and accuracy of using the contour of the grouped text segments to...
C
\mid{\cal V},{\cal E}}]=\varpi^{-1}(\mathcal{R})\subset P({{\cal V},{\cal E}})\;,∀ caligraphic_R ⊂ caligraphic_V × caligraphic_V , italic_D start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT [ caligraphic_R ∣ caligraphic_V , caligraphic_E ] = italic_ϖ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( caligraphic_R ) ⊂ italic...
ℛ⊂𝒱×𝒱ℛ𝒱𝒱\mathcal{R}\subset{\cal V}{\times}{\cal V}caligraphic_R ⊂ caligraphic_V × caligraphic_V, we associate the subset DP⁢[ℛ∣𝒱,ℰ]subscript𝐷𝑃delimited-[]conditionalℛ𝒱ℰD_{P}[{\mathcal{R}\mid{\cal V},{\cal E}}]italic_D start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT [ caligraphic_R ∣ caligraphic_V , caligraphic_E...
The deployment DU⁢[ℛ∣𝒱,ℰ]subscript𝐷𝑈delimited-[]conditionalℛ𝒱ℰD_{U}[{\mathcal{R}\mid{\cal V},{\cal E}}]italic_D start_POSTSUBSCRIPT italic_U end_POSTSUBSCRIPT [ caligraphic_R ∣ caligraphic_V , caligraphic_E ] is made of the extended-oriented paths whose endpoints satisfy the
in (4c). The deployment in edge paths DP⁢[ℛ∣𝒱,ℰ]subscript𝐷𝑃delimited-[]conditionalℛ𝒱ℰD_{P}[{\mathcal{R}\mid{\cal V},{\cal E}}]italic_D start_POSTSUBSCRIPT italic_P end_POSTSUBSCRIPT [ caligraphic_R ∣ caligraphic_V , caligraphic_E ] is made of the edge paths whose endpoints satisfy the binary relation ℛℛ\mathcal{R}c...
The opposite or complementary ℛ𝖼superscriptℛ𝖼\mathcal{R}^{\mathsf{c}}caligraphic_R start_POSTSUPERSCRIPT sansserif_c end_POSTSUPERSCRIPT of a binary relation ℛℛ\mathcal{R}caligraphic_R is the relation ℛ𝖼=𝒱×𝒱∖ℛsuperscriptℛ𝖼𝒱𝒱ℛ\mathcal{R}^{\mathsf{c}}={\cal V}\times{\cal V}\setminus\mathcal{R}caligraphic_R start_...
C
The first proposed mapping mechanism of IP addresses is TLMB. The four parts of the IP address are represented in four layers, where each layer is made up of one or more memory blocks. The first layer only contains one memory block, whereas the second layer contains 256 memory blocks. Each memory block contains 256 ele...
The first proposed mapping mechanism of IP addresses is TLMB. The four parts of the IP address are represented in four layers, where each layer is made up of one or more memory blocks. The first layer only contains one memory block, whereas the second layer contains 256 memory blocks. Each memory block contains 256 ele...
We formally present a storage strategy for IP addresses that consists of two layers that consist of a limited number of memory blocks. The first layer contains 256×256256256256\times 256256 × 256 memory blocks. The first three parts of the IP address can be mapped into the corresponding position of the element in a pa...
We traverse all elements of the memory blocks of the second layer to obtain the maximum number of occurrences of elements if k=1𝑘1k=1italic_k = 1. Otherwise, we construct a minimum heap of size k𝑘kitalic_k. The statistical results of the first k𝑘kitalic_k IP addresses are saved in the heap, which is a special binary...
In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu...
B
In this paper, both nested Schur complement and additive Schur complement based preconditioners are constructed for the twofold and block tridiagonal linear systems. The polynomial equations of the preconditioned matrices are analyzed. It is shown that by properly selecting the sign in front of each Schur complement, ...
In this study, we explore two methodologies for designing preconditioners tailored for 3333-by-3333 block systems exhibiting block tridiagonal form, as well as their extensions to n𝑛nitalic_n-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ...
Commencing with a twofold saddle point problem, we generalize our theory to n𝑛nitalic_n-tuple block tridiagonal saddle point problems. Our study demonstrates that judiciously selecting signs in front of Schur complements in preconditioners results in a positively stable preconditioned system [16]. By using the Routh–H...
In this paper, both nested Schur complement and additive Schur complement based preconditioners are constructed for the twofold and block tridiagonal linear systems. The polynomial equations of the preconditioned matrices are analyzed. It is shown that by properly selecting the sign in front of each Schur complement, ...
For block tridiagonal systems, by comparing the theoretical analysis for the nested Schur complement based preconditioners and that for the additive type preconditioners, our argument is that permutation is important and necessary when designing preconditioners. These results are instructive for devising the correspond...
D
We note that for each sample p𝑝pitalic_p, 𝐗j(p)superscriptsubscript𝐗𝑗𝑝\mathbf{X}_{j}^{(p)}bold_X start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( italic_p ) end_POSTSUPERSCRIPT is only held in a single client in silo j𝑗jitalic_j. A sample ID in the banking and insurance example corresponds t...
The features of the samples in the banking silo consist of balance, installments, debit history, credit history, etc., and the banking features for each customer are stored by the bank subsidiary that serves this customer. Similarly, the insurance silo has insurance-related features such as policy details, premium, age...
We note that for each sample p𝑝pitalic_p, 𝐗j(p)superscriptsubscript𝐗𝑗𝑝\mathbf{X}_{j}^{(p)}bold_X start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( italic_p ) end_POSTSUPERSCRIPT is only held in a single client in silo j𝑗jitalic_j. A sample ID in the banking and insurance example corresponds t...
If we map this system model on the bank and insurance example in Section 1, there are two silos, a banking silo corresponding to a banking holding company and an insurance silo corresponding to an insurance holding company. The clients in the banking silo are subsidiary banks of the banking holding company, and similar...
Each silo holds a distinct set of features (e.g., customer/patient features); the data within each silo may even be of a different modality; for example, one silo may have audio features, whereas another silo has image data. At the same time, there exists an overlap in the sample ID space. More specifically, the silos ...
A
Figure 5: Boundaries of pseudospectra Λε⁢(ℱ)subscriptΛ𝜀ℱ\Lambda_{\varepsilon}(\mathcal{F})roman_Λ start_POSTSUBSCRIPT italic_ε end_POSTSUBSCRIPT ( caligraphic_F ) for ε=10−1,10−2,…,10−10𝜀superscript101superscript102…superscript1010\varepsilon=10^{-1},10^{-2},\ldots,10^{-10}italic_ε = 10 start_POSTSUPERSCRIPT - 1 end...
Consider the tensor 𝒜𝒜\mathcal{A}caligraphic_A belonging to the complex space ℂm×m×ℓsuperscriptℂ𝑚𝑚ℓ\mathbb{C}^{m\times m\times\ell}blackboard_C start_POSTSUPERSCRIPT italic_m × italic_m × roman_ℓ end_POSTSUPERSCRIPT. Through the utilization of the normalized DFT matrix, the matrix bcirc⁡(𝒜)bcirc𝒜\operatorname{bci...
We consider the validation of T-positive definiteness for third-order symmetric tensors 𝒜𝒜\mathcal{A}caligraphic_A through the application of the pseudospectra localizations strategy. Let 𝒜𝒜\mathcal{A}caligraphic_A be a symmetric tensor with three frontal slices
In this section, we delve into the study of pseudospectra for third-order tensors within the tensor-tensor multiplication framework. Specifically, we explore different formulations of pseudospectra for third-order tensors in Subsection 4.1. Subsection 4.2 is dedicated to the examination of various properties of pseudos...
Furthermore, by Theorem 4.6, one can explore more positive definite tensors that surround the positive definite tensor 𝒜𝒜\mathcal{A}caligraphic_A. We only need to ensure that the localization set Γε⁢(𝒜)subscriptΓ𝜀𝒜\Gamma_{\varepsilon}(\mathcal{A})roman_Γ start_POSTSUBSCRIPT italic_ε end_POSTSUBSCRIPT ( caligraphi...
B
Objective evaluation. We quantitatively evaluate the proposed method using three major metrics: LPIPS, PSNR and SSIM, and compare the scores to those of the state-of-the-art counterparts with irregular mask ratios of 0-20%, 20-40% and 40-60%. Table 1 shows the results achieved on the Places2 dataset, where the propose...
Motivated by global and local GANs [7], Gated Convolution [36] and Markovian GANs [9], we develop a two-stream discriminator to distinguish genuine images from the generated ones by estimating the feature statistics of both texture and structure. The discriminator is shown in Figure 2 (b). The texture branch includes t...
User Study. We further perform subjective user study. 10 volunteers with image processing expertise are involved in this evaluation. They are invited to choose the most realistic image from those inpainted by the proposed method and the representative state-of-the-art approaches. Specifically, each participant has 15 ...
Objective evaluation. We quantitatively evaluate the proposed method using three major metrics: LPIPS, PSNR and SSIM, and compare the scores to those of the state-of-the-art counterparts with irregular mask ratios of 0-20%, 20-40% and 40-60%. Table 1 shows the results achieved on the Places2 dataset, where the propose...
In this paper, we propose a novel two-stream image inpainting method, which recovers corrupted image by simultaneously modeling structure-constrained texture synthesis and texture-guided structure reconstruction. In this way, the two subtasks exchange useful information and thus facilitate each other. Furthermore, a Bi...
B
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc...
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability ε𝜀\varepsilonitalic_ε. The concept of BEC was first introduced by Elias in 1955 InfThe . Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ...
The ensemble ℛm,nsubscriptℛ𝑚𝑛\mathcal{R}_{m,n}caligraphic_R start_POSTSUBSCRIPT italic_m , italic_n end_POSTSUBSCRIPT has been studied for a long time and many strong results have been obtained. For example, in the classical work of Gallager Gallager2 , an upper bound of the average number of codewords of a given we...
In a q𝑞qitalic_q-ary erasure channel, the channel input alphabet is a finite field 𝔽qsubscript𝔽𝑞\mathbb{F}_{q}blackboard_F start_POSTSUBSCRIPT italic_q end_POSTSUBSCRIPT of order q𝑞qitalic_q, and during transmission, each symbol x∈𝔽q𝑥subscript𝔽𝑞x\in\mathbb{F}_{q}italic_x ∈ blackboard_F start_POSTSUBSCRIPT ita...
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc...
A
Our experiments consist of three stages. First, we collect domain-specific expert data, see Section 4.2. Secondly, we train the subgoal generator, low-level conditional policy, and value function networks using the data and targets described in Section 3. For more details see Appendix D. Eventually, we evaluate the pl...
Our experiments consist of three stages. First, we collect domain-specific expert data, see Section 4.2. Secondly, we train the subgoal generator, low-level conditional policy, and value function networks using the data and targets described in Section 3. For more details see Appendix D. Eventually, we evaluate the pl...
The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals. In this paper, we assume that the generator predicts subgoals that are k𝑘kitalic_k step ahead (towar...
For INT and Rubik’s Cube, we represent states as sequences modeled with a transformer. Following the practice routinely used in language modeling, [48], sub_net_generate employs beam search to generate a set of high likelihood outputs.333In language modeling, typically, only one beam search output is used. In our case...
As baselines, we use BestFS and MCTS (being a single-player implementation of AlphaZero). In INT and Rubik’s Cube, both the algorithms utilize policy networks (trained with behavioral cloning, on the same dataset, which we used to train kSubS). Note that distribution over actions induces a distribution over states; thu...
D
Ablation study is thus made to investigate how glyph and phonetic features bring improvement by themselves. We separately add glyph embedding or phonetic embedding to pre-trained language models for comparisons. From the results on all four datasets, it is clear that models with the glyph or phonetic embedding almost a...
In this paper, we propose a lightweight method, Multi-feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), which fuses extra glyph and phonetic features to detect possible substitution forms of named entities in Chinese. On top of using pre-trained models to represent the semantic feature, we choose...
our MFE-NER is a lightweight Named Entity Recognition method fusing the glyph and phonetic feature embeddings for Chinese character substitution, which is complementary to pre-trained language models in the representation of Chinese characters. As shown in Figure 2, MFE-NER introduces an extra module, fusing glyph emb...
Nowadays, the informal language environment created by social media has deeply changed the way that people express their thoughts. Using character substitution to generate new named entities becomes a common linguistic phenomenon which is a big challenge for NER. In this paper, we propose a lightweight method fusing th...
Based on the experiment results above, MFE-NER is able to reduce the negative influence of the character substitution phenomenon in Chinese Named Entity Recognition, while slightly improving the overall performance of NER models. It makes sense that MFE-NER is suitable to solve character substitution problems because g...
C
While our experimental prototype was built for a HOLOEYE-PLUTO which possesses a 1K-pixel resolution, corresponding to a 1 mm eyebox with 75.6∘superscript75.675.6^{\circ}75.6 start_POSTSUPERSCRIPT ∘ end_POSTSUPERSCRIPT horizontal and vertical FOV, the improvement in hologram fidelity persists across resolutions. Irres...
The differentiability of Eq. (2) with respect to the modulation variables ℰℰ\mathcal{E}caligraphic_E and 𝒮𝒮\mathcal{S}caligraphic_S allows us to learn the optimal wavefront modulation of the neural étendue expander ℰℰ\mathcal{E}caligraphic_E by jointly optimizing the static neural étendue expander in concunction with...
Next, we analyze the expansion of étendue achieved with the proposed technique. To this end, suppose we want to generate the étendue-expanded hologram of only a single scene. Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, a...
To assess whether the optimized neural étendue expander ℰℰ\mathcal{E}caligraphic_E, shown in Fig. 1b, has learned the image statistics of the training set we evaluate the virtual frequency modulation ℰ~~ℰ\widetilde{\mathcal{E}}over~ start_ARG caligraphic_E end_ARG, defined as the spectrum of the generated image with th...
In this work, we introduce neural étendue expanders as an optical element that expands the étendue of existing holographic displays without sacrificing displayed hologram fidelity. Neural étendue expanders are learned from a natural image dataset and are jointly optimized with the SLM’s wavefront modulation. Akin to a...
B
Auxiliary MTL aims to improve the performance of certain primary tasks by introducing auxiliary tasks and is widely used in the NLP field for different types of primary task\addeds, \replacedsuch asincluding sequence tagging, classification, text generation, and representation learning. Table 1 \replacedsummarizes theg...
et al., 2019), and task-oriented dialogue generation (Zhou et al., 2019). \addedLi and Caragea (2019) add sentence-level sentiment classification and attention-level supervision to assist the primary stance detection task. \addedNishino et al. (2019) add attention-level supervision to improve consistency of the two pri...
Søgaard (2017) add five auxiliary tasks for scientific keyphrase boundary classification, including syntactic chunking, frame target annotation, hyperlink prediction, multi-word expression identification, and semantic super-sense tagging. \addedLi and Lam (2017) use opinion word extraction and sentence-level sentiment ...
et al., 2020a), the organization evaluation for student essays is learned together with the sentence and paragraph discourse element identification tasks. \addedLi and Caragea (2019) model the stance detection task with the help of the sentiment classification and self-supervised stance lexicon tasks.
et al. (2018) build an MTL model that extracts existing questions related to the current one and looks for question-comment threads that could answer the question at the same time. To analyze the argumentative structure of scientific publications, \addedLauscher et al. (2018) optimize argumentative component identifica...
B
MSE=1n⁢∑i=1n(Yi−Yi^)2MSE1𝑛superscriptsubscript𝑖1𝑛superscriptsubscript𝑌𝑖^subscript𝑌𝑖2\text{MSE}=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y_{i}})^{2}MSE = divide start_ARG 1 end_ARG start_ARG italic_n end_ARG ∑ start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( ita...
The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ...
There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends to compose your article in the base 2-column format to make sure all your equations, tables and graphics will fit the final 2-column format. Please refer to the document “IEEEtran_HOWTO.pdf...
Make sure that your equations are numbered sequentially and there are no equation numbers missing or duplicated. Avoid hyphens and periods in your equation numbering. Stay with IEEE style, i.e., (1), (2), (3) or for sub-equations (1a), (1b). For equations in the appendix (A1), (A2), etc..
In this section, we will consider three types of lists: simple unnumbered, numbered and bulleted. There have been numerous options added to IEEEtran to enhance the creation of lists. If your lists are more complex than those shown below, please refer to the “IEEEtran_HOWTO.pdf” for additional options.
C
b(β−1)⁢k+1⁢…⁢bβ⁢ksubscript𝑏𝛽1𝑘1…subscript𝑏𝛽𝑘b_{(\beta-1)k+1}\ldots b_{\beta k}italic_b start_POSTSUBSCRIPT ( italic_β - 1 ) italic_k + 1 end_POSTSUBSCRIPT … italic_b start_POSTSUBSCRIPT italic_β italic_k end_POSTSUBSCRIPT and we use these bits to determine the connections between w(i,β)subscript𝑤𝑖𝛽w_{(i,\beta)...
b(β−1)⁢k+1⁢…⁢bβ⁢ksubscript𝑏𝛽1𝑘1…subscript𝑏𝛽𝑘b_{(\beta-1)k+1}\ldots b_{\beta k}italic_b start_POSTSUBSCRIPT ( italic_β - 1 ) italic_k + 1 end_POSTSUBSCRIPT … italic_b start_POSTSUBSCRIPT italic_β italic_k end_POSTSUBSCRIPT and we use these bits to determine the connections between w(i,β)subscript𝑤𝑖𝛽w_{(i,\beta)...
More precisely, for β,γ∈[k]𝛽𝛾delimited-[]𝑘\beta,\gamma\in[k]italic_β , italic_γ ∈ [ italic_k ], we set that w(i,β)subscript𝑤𝑖𝛽w_{(i,\beta)}italic_w start_POSTSUBSCRIPT ( italic_i , italic_β ) end_POSTSUBSCRIPT is connected to sk+γsubscript𝑠𝑘𝛾s_{k+\gamma}italic_s start_POSTSUBSCRIPT italic_k + italic_γ end_POST...
connected to sβsubscript𝑠𝛽s_{\beta}italic_s start_POSTSUBSCRIPT italic_β end_POSTSUBSCRIPT, but the only neighbor of sβsubscript𝑠𝛽s_{\beta}italic_s start_POSTSUBSCRIPT italic_β end_POSTSUBSCRIPT in Wi′subscript𝑊superscript𝑖′W_{i^{\prime}}italic_W start_POSTSUBSCRIPT italic_i start_POSTSUPERSCRIPT ′ end_POSTSUPERS...
the binary representation of i−1𝑖1i-1italic_i - 1 and b1′⁢…⁢bk2′superscriptsubscript𝑏1′…superscriptsubscript𝑏superscript𝑘2′b_{1}^{\prime}\ldots b_{k^{2}}^{\prime}italic_b start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT … italic_b start_POSTSUBSCRIPT italic_k start_POSTSUPERSCRIPT...
B
Subjects played a repeated version of a purely congestive resource sharing game, which consisted of a fixed group size of n=4𝑛4n=4italic_n = 4, and a homogenous cost/externality structure defined by an MPCR function of mi⁢(Ni)=m⁢(Ni)=1.6|Ni|subscript𝑚𝑖subscript𝑁𝑖𝑚subscript𝑁𝑖1.6subscript𝑁𝑖m_{i}(N_{i})=m(N_{i})...
The treatment variation was implemented in the second part. In three baseline sessions, consisting of a total of 72 subjects in 18 groups, subjects were told that the second part of the experiment would be exactly the same as the first part, except that subject IDs would be randomly reassigned. In five treatment sessi...
After the main parts of the experiment, subjects were asked a series of survey questions to elicit measures of their individual characteristics. All of these questions were aimed at eliciting a subject’s heterogeneous preferences toward trust and reciprocity. The full set of survey questions is reproduced in Appendix ...
Our experiment allows us to examine the impact of the information structure on reciprocal behavior. In the control (or baseline) condition, players are given information only about the total inflow of benefits from others after each round, but cannot identify the source of those benefits. In this way, direct reciprocit...
Subjects played a repeated version of a purely congestive resource sharing game, which consisted of a fixed group size of n=4𝑛4n=4italic_n = 4, and a homogenous cost/externality structure defined by an MPCR function of mi⁢(Ni)=m⁢(Ni)=1.6|Ni|subscript𝑚𝑖subscript𝑁𝑖𝑚subscript𝑁𝑖1.6subscript𝑁𝑖m_{i}(N_{i})=m(N_{i})...
A
Among them, ℒℒ\mathcal{L}caligraphic_L is the loss function that is used to minimize the distance between the reconstructed image and the ground-truth image. By using different loss functions, the model can achieve different performance. Therefore, an effective loss function is also crucial for SISR.
Supervised Learning: In SISR, we often call the method of using pairs of LR-HR images for training a supervised learning paradigm. In simulated SISR, LR images are often obtained by downsampling HR images. In real SISR, LR images and HR images are obtained by adjusting the zoom of the camera. In general, the LR and HR...
In this work, we use a common division method in the SR field, that is, whether paired LR-HR images are used for model training. It is worth noting that the HR image here refers to the additional introduced high-resolution image, not the image itself. In addition, learning strategy has no clear definitions in SISR. Acc...
Unsupervised Learning: The simulated paired images have poor versatility, while the real paired images are difficult to collect. To address this issue, some methods began to try to no longer use paired LR-HR images for training. We often call this type of method an unsupervised learning method. This type of unsupervise...
In SISR, the idea of cycle consistency has also been widely discussed. Given the LR images domain X𝑋Xitalic_X and the HR images domain Y𝑌Yitalic_Y, we not only learn the mapping from LR to HR but also the backward process. Researchers have shown that learning how to perform image degradation first without paired data...
C
The resulting architecture performs the equivalent operation to a conventional coordinate-based since the network ultimately predicts a single pixel value. However, the intermediate patch-based representation of the proposed architecture forces the model to establish the natural relationship between the encoded coord...
An important issue for learning coordinate-based representations is the tendency of neural networks to interpolate and attenuate high-frequency changes in the output [1, 2, 16]. Two effective solutions to this problem are to either map the input coordinates (known as positional encoding) [1] or use sinusoidal activatio...
The Patch is a network of 4 ReLU layers with 256 units, identical to the one used in [1]. The role of this component is to map each coordinate vector to an appropriate pixel patch. The coordinate input is mapped using random Fourier features before passing to the network. This processing step is known as positional e...
To perform super-resolution, a Neural Knitwork has to translate the information contained in the patches of the original scale to a domain of patches of finer scale. This can be done by matching the patch distribution across scales [8, 25, 26, 29]. For blind super-resolution, Neural Knitwork core module is utilized wi...
The resulting architecture performs the equivalent operation to a conventional coordinate-based since the network ultimately predicts a single pixel value. However, the intermediate patch-based representation of the proposed architecture forces the model to establish the natural relationship between the encoded coord...
B
In problem (iii) where every context is sampled close to the classification boundary, with a strong tendency towards class 0, a different behaviour is observed. Besides PG-TS, SupLogistic, and CBP-SIDE, each of the algorithms displays a more variable performance, and the traditional variant of PG-IDS incurs noticeabl...
Due to the previously mentioned connection between apple tasting and logistic bandits, state-of-the-art algorithms for those problems are also worth mentioning. Beyond TS approaches, much of the literature (Faury et al.,, 2020; Abeille et al.,, 2021; Faury et al.,, 2022; Lee et al.,, 2023; Zhang and Sugiyama,, 2024) ha...
It can also be shown that subject to a rescaling of the loss matrix (Antos et al.,, 2013; Lienert,, 2013), our framework coincides with a logistic contextual bandit (Filippi et al.,, 2010) with only two actions in each round (one always being a zero vector). The present problem however is a very specific instance of th...
We have shown that the existence of a non-informative action in apple tasting can inhibit the performance of traditional Information Directed Sampling, and it would be of interest to explore further more complex settings (in partial monitoring, reinforcement learning, etc.) where this effect could be even more pronoun...
The phenomenon of a greedy approach sometimes outperforming more complex attempts to balance exploration and exploitation in contextual problems is not unprecedented. Indeed, in the contextual bandit literature, a number of recent works (Bastani et al.,, 2021; Kannan et al.,, 2018; Raghavan et al.,, 2023; Jedor et al.,...
D
In spite of the welcome leap in performance, however, a typical criticism transformer architectures share with most deep learning models is their lack of interpretability. Sure, the attention mechanism [4] could offer cues as to how to interpret the behavior of such models. Nevertheless, whether attention could be mean...
The introduction of smart sampling strategies allows scaling to larger memories. However, the introduced priority-based strategies lack proper conditioning on the input, but rather learn the dataset-level importance of each memory slot. This works well in unfairness detection, where there are few memory slots and they...
Therefore, they seem an ideal candidate architecture for the integration of natural language explanations. However, the main purpose of our extension of transformers is not to improve classification performance, but to generate explanations in the form of grounding to elements of a textual knowledge. Accordingly, the k...
Indeed, with knowledge in the form of sentences in natural language, we cover several scenarios of different nature, spanning from additional contextual information to general task-oriented constraints. For instance, the memory could contain textual descriptions of the task itself and, thus, we could formulate the task...
A way to make deep networks more interpretable could be to inject them with recognizable, readable knowledge elements, as illustrated in Figure 1 [7]. A few interesting proposals in this direction rely on data augmentation, model architectural biases, regularization constraints, and retrofitting. For instance, objectiv...
D
The primary advantage is that STREL is a specification language crafted specifically for keeping a strong connection with intuitive notions of spatial and temporal concepts, allowing to express complex requirements in a compact and understandable way. Note that a dedicated scripting language for STREL is
In addition to the previous requirements that are related to general aspects of the mobile network, for the evaluation of the city in terms of safety and quality of life, it is interesting to look at how the city is performing with respect to the reachability of some key points of interest. For example, in an emergency...
In this section, we propose some informal properties that the crowdedness level in a big city should satisfy to robustly withstand critical events. Let c𝑐citalic_c represent a crowdedness threshold for all the areas of the city. Note, however, that the framework can accommodate for, e.g., area-specific threshold value...
i.e., the crowdedness is above a certain threshold c𝑐citalic_c. Conversely, the formula ¬ϕitalic-ϕ\neg\phi¬ italic_ϕ represents the case where the crowdedness level is below or equal the threshold c𝑐citalic_c. This formula constitutes the basic building block for formalizing our requirements; in fact, the first requ...
We illustrate the approach by building a Bayesian spatio-temporal model for areal crowdedness extracted from aggregated mobile phone data in the city of Milan and by formulating properties that the crowdedness level in the city network should satisfy in order to robustly withstand critical events. We compare various mo...
B
In particular, let S𝑆Sitalic_S be a uniform sample of poly⁡(ϵ−1)polysuperscriptitalic-ϵ1\operatorname{poly}(\epsilon^{-1})roman_poly ( italic_ϵ start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ) points from X𝑋Xitalic_X, and let c^:=1|S|⁢∑x∈Sφ⁢(x)assign^𝑐1𝑆subscript𝑥𝑆𝜑𝑥\hat{c}:=\frac{1}{|S|}\sum_{x\in S}{\varphi(x)...
\varphi}(X,c^{\star})roman_cost start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_X , over^ start_ARG italic_c end_ARG ) ≤ ( 1 + italic_ϵ ) roman_cost start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_X , italic_c start_POSTSUPERSCRIPT ⋆ end_POSTSUPERSCRIPT ) and evaluating costφ⁡(X,c^)superscriptco...
\operatorname{cost}_{z}^{\varphi}(X,C).roman_cost start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_S , italic_C ) ∈ ( 1 ± italic_ϵ ) ⋅ roman_cost start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_X , itali...
costφ⁡(S,C)+D∈(1±ϵ)⋅costφ⁡(X,C)superscriptcost𝜑𝑆𝐶𝐷⋅plus-or-minus1italic-ϵsuperscriptcost𝜑𝑋𝐶\operatorname{cost}^{\varphi}(S,C)+D\in(1\pm\epsilon)\cdot\operatorname{cost}^% {\varphi}(X,C)roman_cost start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_S , italic_C ) + italic_D ∈ ( 1 ± italic_ϵ ) ⋅ roman_cost...
\varphi}(S,C)\in(1\pm\epsilon)\cdot\operatorname{cost}^{\varphi}(X,C)∀ italic_C ⊆ caligraphic_H , | italic_C | = italic_k , roman_cost start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_S , italic_C ) ∈ ( 1 ± italic_ϵ ) ⋅ roman_cost start_POSTSUPERSCRIPT italic_φ end_POSTSUPERSCRIPT ( italic_X , italic_C )
A
For instance, replicability of standard equality in substructural logics has a neat algebraic explanation: such an equality is defined by a left adjoint, as pioneered by Lawvere [Law69, Law70], and, as we will show, predicates defined in this way are always replicable.
how does this (standard) notion of equality relate to our quantitative equality? To answer this question in a precise way, first of all we observe that also elementary R𝑅Ritalic_R-graded doctrines can be organised in a 2-category, and then we compare it with the 2-category of R𝑅Ritalic_R-Lipschitz doctrines.
Theorem 22 shows that the notion of quantitative equality given in this paper is coalgebraic, in the sense that Lipschitz doctrines are the coalgebras of a comonad over the category of graded doctrines. This generalizes a known situation that holds in the non-linear case, where elementary doctrines are the coalgebras ...
This shows that a quantitative equality cannot be given by a left adjoint, however, thanks to the language of doctrines, we manage to compare in a rigorous way quantitative equality with the standard one, proving they share other fundamental structural properties.
This provides us with a universal construction yielding an R𝑅Ritalic_R-Lipschitz doctrine from an R𝑅Ritalic_R-graded one, and we use it to generate semantics for the calculus. In Section 5.2 we relate quantitative equality with the usual one defined by left adoints, formally proving that the former indeed refines the...
C
O⁢(ϵ−2⁢m⁢log5⁡n⁢log⁡1ϵ)𝑂superscriptitalic-ϵ2𝑚superscript5𝑛1italic-ϵO(\epsilon^{-2}m\log^{5}{n}\log{\frac{1}{\epsilon}})italic_O ( italic_ϵ start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT italic_m roman_log start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT italic_n roman_log divide start_ARG 1 end_ARG start_ARG italic_ϵ end_...
Efficient top-k similarity search algorithm : We devise ForestSimSearch for the top-k similarity search. ForestSimSearch can handle a top-k query in O⁢(k)𝑂𝑘O(k)italic_O ( italic_k ) time once the precomputation is finished. Furthermore, we use the fast approximate algorithm to compute the diagonal entries of the for...
In this paper, on the basis of spanning rooted forests, we propose ForestSim, a new node similarity metric. ForestSim uses the average size of the trees rooted at the node u𝑢uitalic_u in spanning rooted forests of the graph, denoted by s⁢(u)𝑠𝑢s(u)italic_s ( italic_u ), to capture its structural properties. Two node...
In this section, we systematically analyze the time and space complexity of RoleSim [5], StructSim [29], and ForestSim in finding top-k similar nodes for a given node. For each role similarity measure, its time complexity includes two parts: precomputation and top-k similarity search. Precomputation of the studied role...
RoleSim [5] and StructSim [29] are state-of-art role similarity metrics. However, these two measures have bottlenecks when finding the top-k similar nodes for a given node in a large network, which is the task that receives more interest from the end-users [30, 6, 31, 32, 33, 34]. For an undirected unweighted graph wit...
C
Aspect-based sentiment classification Pontiki et al. (2014, 2015, 2016) (ABSC) aims to identify sentiments associated with specific aspects within a text, as highlighted in several studies Ma et al. (2017); Fan et al. (2018); Zhang et al. (2019); Yang et al. (2021).
In this section, we delve into a case study to validate the capability of our model in learning local sentiment coherency. We present a series of examples in Table 6, which showcase instances where LSA excels in identifying aspect sentiment coherency.
Modeling sentiment coherency often presents challenges for traditional ABSC methods due to the complexity of aspect sentiment coherency. To efficiently address the aspect sentiment coherency task, we shed light on a simple yet effective approach, namely local sentiment aggregation (LSA).
Table 6: The examples for aspect sentiment coherency found by LSA. The target aspects are denoted in bold and the underlined words indicates the aspects with coherent sentiments. “Pos”, “Neg” and “Neu” represent positive, negative and neutral, respectively.
In this work, we make efforts to address an intriguing problem within ABSC that has been overlooked in existing research, i.e., “aspect sentiment coherency”, which focuses on modeling aspects that share similar sentiments. For instance, in the sentence “This laptop has a lot of storage, and so does the battery capacity...
D
𝐗𝐗\mathbf{X}bold_X. Suppose that there exist orthogonal projections 𝐋:ℝN→𝒫p⁢(𝕊):𝐋→superscriptℝ𝑁superscript𝒫𝑝𝕊\mathbf{L}:\mathbb{R}^{N}\rightarrow{\mathcal{P}}^{p}(\mathbb{S})bold_L : blackboard_R start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT → caligraphic_P start_POSTSUPERSCRIPT italic_p end_POSTSUPERSCR...
The operator 𝐋𝐋\mathbf{L}bold_L and 𝐖𝐖\mathbf{W}bold_W are constructed from a multilevel decomposition of the location of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it sufficient to know what the prop...
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me...
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from (10%/90%) to (90%/10%) of the data. The horizontal axis depic...
gradient estimate of 𝜸𝐖subscript𝜸𝐖\bm{\gamma}_{\mathbf{W}}bold_italic_γ start_POSTSUBSCRIPT bold_W end_POSTSUBSCRIPT, where 𝜸𝐖0superscriptsubscript𝜸𝐖0\bm{\gamma}_{\mathbf{W}}^{0}bold_italic_γ start_POSTSUBSCRIPT bold_W end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT is the initial guess. The main ...
A
The code for construction and noise-aware training of PQC is available at the TorchQuantum library. It is an convenient infrastructure to query noise models from QC providers such as IBMQ, extract noise information, perform training on CPU/GPU and finally deploy on real QC.
We use QNN as the benchmark PQC in this work. Figure 2 shows the QNN architecture. The inputs are classical data such as image pixels, and the outputs are classification results. The QNN consists of multiple blocks. Each has three components: encoder encodes the classical values to quantum states with rotation gates su...
The path to quantum advantage on QML is typically provided by the quantum circuit’s ability to generate and estimate highly complex kernels, which would otherwise be intractable to compute with conventional computers. They have been shown to have potential speed-up over classical counterparts in various tasks, includin...
Quantum Computing (QC) is a new computational paradigm that can be exponentially faster than classical counterparts in various domains. Parameterized Quantum Circuits (PQC) are circuits containing trainable weights and are promising to achieve quantum advantages in current devices. Among various PQCs, Quantum Neural Ne...
PQC is a promising candidate to demonstrate practical quantum advantages over classical approaches. The road to such advantage relies on: (1) the discovery of novel feature embedding that encodes classical data non-linearly, and (2) overcome the impact of quantum noise. This work focuses on the latter and show that a ...
B
As for the competitors, SiamBAN, SiamRPN++, and ATOM also achieve competitive results on some sequences of the ECD dataset. However, they show inferior tracking performance on the sequences with fast motion and low illumination conditions (such as the light_variations and occlusions sequences). They usually lose the t...
We also provide the event trajectory results obtained by the proposed EDA on some representative sequences from the ECD and EED datasets. Furthermore, in order to evaluate the performance of EDA under more tracking scenarios with challenges, we test it on some representative sequences from the Color Event Dataset (CED)...
From the results in Fig. 7, we can see that the proposed EDA has accurately estimated most of the true event trajectories triggered by the camera and object motions through robust model fitting. In particular, it is worth noting that, with the help of the proposed adaptive model selection strategy, the numbers of the ...
The quantitative results are given in Table 1, Table 2, Table 3, and Table 4. We also provide some qualitative results obtained by SiamBAN, EVT, SiamRPN++, ATOM, E-MS, ETD, RMRNet, and our EDA in Fig. 6. From the tables and the figure, we can see that our EDA achieves the best performance on most of the sequences exce...
Figure 7: Event trajectory estimation results obtained by the proposed EDA approach. The event trajectories marked by different colors are triggered by different motions. The names and the corresponding motions of the video sequences are listed in the bottom of the corresponding event trajectory results, respectively. ...
A
As a connected vertex-ordering of H𝐻Hitalic_H can be obtained in linear time using a standard graph traversal algorithm, and a colouring of G′superscript𝐺′G^{\prime}italic_G start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT may be computed in O⁢(m⁢n)𝑂𝑚𝑛O(mn)italic_O ( italic_m italic_n ) time [12, Chapter 5.7], we concl...
Recently, connected greedy edge-colourings (equivalently, connected greedy colourings of line graphs) have been studied in [3], and it was proved that there is no line graph of a bipartite graph that is ugly.444Moreover, a careful analysis of the proof of [3] gives an algorithm running in time O⁢(n4)𝑂superscript𝑛4O(n...
Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time O⁢(m+n)𝑂𝑚𝑛O(m+n)italic_O ( italic_m + italic_n ) for chordal graphs using the LexBFS algorithm [21], for Meyniel graphs this can be done in time O⁢(n2)𝑂superscript𝑛2O(n^{2})italic_O ( italic_n start_POSTSUP...
class of perfect graphs. We also give a simple and constructive proof for comparability graphs (which are perfect). Note that there exist bad graphs in these graph classes, consider for example the fish graph, which is K4subscript𝐾4K_{4}italic_K start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT-minor-free and comparability; see...
We now prove our main result, that there are no ugly perfect graphs. This generalizes the same fact which was previously proved for Meyniel graphs [22] (a class which contains chordal graphs, HHD-free graphs, Gallai graphs, parity graphs, distance-hereditary graphs…) and line graphs of bipartite graphs [3]. Our proof ...
D
Unlike previous DR and GE methods, the proposed GenURL can import extra prior knowledge and is robust to highly redundant data; additionally, different from GE and SSL methods, GenURL is agnostic to network structures and predefined proxy tasks. Extensive experiments conducted on benchmarks of four URL tasks (self-supe...
Adopting the manifold assumption in DR, which assumes data lie on a low-dimensional manifold immersed in the high-dimensional space, most DR methods try to preserve intrinsic geometric properties of data [7, 8, 9, 10, 11, 12]. Another practical branch of DR introduced by t-SNE [13] and UMAP [4] optimizes the pair-wise ...
Firstly, we compare the effects of hyper-parameters in GenURL for DR and SSL tasks. As shown in Figure 12, GenURL prefers smaller νZsubscript𝜈𝑍\nu_{Z}italic_ν start_POSTSUBSCRIPT italic_Z end_POSTSUBSCRIPT, i.e., using ν=0.01𝜈0.01\nu=0.01italic_ν = 0.01 to balance the local and global structures. Figure 5 shows that...
We perform DR experiments on MNIST, FMNIST, and COIL-20 datasets. We compare the current leading methods, including non-parametric methods (t-SNE [13] and UMAP [4]) and parametric methods (P-UMAP [15], GRAE [89], TopoAE [11], and DMT [16]). Besides the linear classification top-1 accuracy (Acc) with logistic regression...
(i) Dimension reduction (DR) and graph embedding (GE) algorithms aim to encode non-Euclidean input data into a latent space Z𝑍Zitalic_Z plainly without any prior knowledge of the related domains, as shown in Figure 1 left and middle. (ii) Complementary to DR and GE, another popular path of URL focuses on data-specific...
A
In this paper, we propose patch-based inference to reduce the memory usage for tiny deep learning by up to 8×\times×, which greatly expands the design space and unlocks powerful vision applications on IoT devices. We jointly optimize the neural architecture and inference scheduling to develop MCUNetV2.
In this paper, we propose patch-based inference to reduce the memory usage for tiny deep learning by up to 8×\times×, which greatly expands the design space and unlocks powerful vision applications on IoT devices. We jointly optimize the neural architecture and inference scheduling to develop MCUNetV2. MCUNetV2 signifi...
MCUNetV2 significantly improves the ImageNet accuracy of tiny deep learning on microcontrollers. Under 256kB SRAM/1MB Flash, MCUNetV2 outperforms the state-of-the-art method [30] by 4.6% at 18% lower peak SRAM. Under 512kB SRAM/2MB Flash, MCUNetV2 achieves a new record ImageNet accuracy of 71.8% on commercial microcont...
MCUNetV2 significantly improves the object detection performance on microcontrollers by 16.9% and achieves a record ImageNet accuracy (71.8%). For the VWW dataset, MCUNetV2 can achieve >90% accuracy under only 32kB SRAM, 4×\times× smaller than existing work. Our study largely addresses the memory bottleneck in tinyML a...
In this paper, we propose patch-based inference to reduce the memory usage for tiny deep learning by up to 8×\times×, which greatly expands the design space and unlocks powerful vision applications on IoT devices. We jointly optimize the neural architecture and inference scheduling to develop MCUNetV2. MCUNetV2 signifi...
C
Main Results Table I shows the results of different models for CECE on two leaderboards. The single model we proposed, overwhelmingly outperforms the baseline in terms of all leaderboards and achieves encouraging 65.9%percent65.965.9\%65.9 %, 77.0%percent77.077.0\%77.0 % improvements in F1-score over the baseline meth...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a ran...
The ICDM 2020 Knowledge Graph Contest is a competition-style event co-located with the leading ICDM conference. This paper describes our solution for the consumer event-cause extraction task, and we won 1st place in the first stage leaderboard and 3rd place in the final stage leaderboard. Extracting causes of consumer...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a...
In the 2020 ICDM Competition 333https://www.biendata.xyz/competition/icdm_2020_kgc/, the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th...
A
In Figure 2, we illustrate the overview of CGCL. Multiple graph encoders process input graphs, yielding embeddings for each graph. Every graph encoder updates its parameters by contrasting its learned embeddings to the outputs from the other graph encoders. Specifically, the graph embeddings learned by Graph Encoder i�...
In this study, we introduce CGCL, a novel collaborative graph contrastive learning framework, designed to address the invariance challenge encountered in current GCL methods. Unlike the conventional practice of constructing augmented graphs by hand, CGCL employs multiple GNN-based encoders to generate multiple contrast...
Here, 𝒉n(l−1)superscriptsubscript𝒉𝑛𝑙1\boldsymbol{h}_{n}^{(l-1)}bold_italic_h start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( italic_l - 1 ) end_POSTSUPERSCRIPT is the representation of node n𝑛nitalic_n at the (l−1)𝑙1(l-1)( italic_l - 1 )-th layer and 𝒉n(0)superscriptsubscript𝒉𝑛0\boldsymb...
Given a set of graphs, CGCL needs to encode them into vectorized representations. GNNs [28, 12, 8] have demonstrated their outstanding ability in encoding graphs. In CGCL, we mainly employ GNNs as graph encoders. GNNs follow the recursive neighborhood aggregation and certain message-passing scheme [32] to encode graph...
Those techniques can be concluded as introducing asymmetry into model architecture. Reflecting on CGCL, diverse GNN-based graph encoders with distinct message-passing schemes are employed to ensure an asymmetric architecture. Essentially, the variance in these schemes introduces the desired asymmetry. Thus, CGCL’s asse...
C
We use two datasets: shapes3d (used in Burgess and Kim, (2018) and included in the TensorFlow datasets package) and a dataset used by Choi et al., (2018)333The dataset is available at https://github.com/benbogin/obverter. We used the code provided in the repository to generate 1000100010001000 images for each color-sha...
permuting the set of (color,shape)colorshape(\text{color},\text{shape})( color , shape ) and factorizing them into new labels (see Appendix D.8). We use a random permutation, so the new labels are abstract and correspond to some joint color-shape concepts. In our experiments, we show that languages, which emerge are co...
We use two datasets: shapes3d (used in Burgess and Kim, (2018) and included in the TensorFlow datasets package) and a dataset used by Choi et al., (2018)333The dataset is available at https://github.com/benbogin/obverter. We used the code provided in the repository to generate 1000100010001000 images for each color-sha...
Each element is a (64,64,3)64643(64,64,3)( 64 , 64 , 3 ) RGB image, and is characterized by multiple features, such as shape or object hue. We choose images with values for both features ranging in {0,1,2,3}0123\{0,1,2,3\}{ 0 , 1 , 2 , 3 }. The obverter dataset contains images of four shapes (box, cylinder, ellipsoid, ...
This experiment was intended to check how much the CNN-backed input is relevant in the compositionality context. We could conjecture that CNN may facilitate shape recognition and therefore be the driving force in the emergence of languages compositional with respect to the canonical shape, color split. To check this we...
C
Safety-critical systems rely on robust control laws that can account for uncertainties in system dynamics and state estimation. For example, consider an autonomous car equipped with noisy sensors that navigates through urban traffic [1]. The state of the car is not exactly known and estimated from output measurements, ...
Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The aut...
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an...
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensio...
Learning CBFs: An open problem is how valid CBFs can be constructed. Indeed, the lack of systematic methods to construct valid CBFs is a main bottleneck. For certain types of mechanical systems under input constraints, analytic CBFs can be constructed [30]. The construction of polynomial barrier functions towards cert...
C
Take g=fρ𝑔subscript𝑓𝜌g=f_{\rho}italic_g = italic_f start_POSTSUBSCRIPT italic_ρ end_POSTSUBSCRIPT if 𝖣⁡(fρ)≤t𝖣subscript𝑓𝜌𝑡\operatorname{\mathsf{D}}(f_{\rho})\leq tsansserif_D ( italic_f start_POSTSUBSCRIPT italic_ρ end_POSTSUBSCRIPT ) ≤ italic_t, and otherwise let g𝑔gitalic_g be the all-zeros function. Then g∈...
With the tools introduced in the previous section, we can state a more intuitive and useful form of our random restriction argument for 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA query algorithms. It states that a random restriction of a 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA query algorithm is close in expectation to a smal...
We conclude with the proof of Theorem 7, showing that 𝖯𝖯𝖯𝖯\mathsf{PP}sansserif_PP is not contained in the 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA hierarchy relative to a random oracle. This is arguably the most technically involved part of this work. Recall that our key contribution, and the most important step of o...
With all of these tools in hand, we prove in our next theorem that under an appropriately chosen random restriction, a circuit composed of 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA query gates simplifies to a function that is close (in expectation) to a function with low deterministic query complexity. The proof amounts to...
To elaborate further, we first take a random restriction that, by Theorem 14, turns all of the bottom-layer 𝖰𝖬𝖠𝖰𝖬𝖠\mathsf{QMA}sansserif_QMA gates into DNF formulas. Next, we apply another random restriction and appeal to the switching lemma to argue that these DNFs reduce to functions of low decision tree comple...
C
In order to prove the upper bound from (4), we represent the image of k⁢[x(⩽ℓ)]/ℐm(∞)𝑘delimited-[]superscript𝑥absentℓsuperscriptsubscriptℐ𝑚k[x^{(\leqslant\ell)}]/\mathcal{I}_{m}^{(\infty)}italic_k [ italic_x start_POSTSUPERSCRIPT ( ⩽ roman_ℓ ) end_POSTSUPERSCRIPT ] / caligraphic_I start_POSTSUBSCRIPT italic_m end_PO...
In this direction, new results have been obtained recently in [1, 4, 7]. In [1], Afsharijoo used computational experiments to conjecture [1, Section 5] the initial ideal of ℐm(∞)superscriptsubscriptℐ𝑚\mathcal{I}_{m}^{(\infty)}caligraphic_I start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( ∞ ) end_...
We used Macaulay2 [19] and, in particular, package Jets [18, 17] to explore possible analogues of our Theorem 3.1 for this more general case. A related Sage implementation for computing the arc space of an affine scheme with respect to a fat point can be found in [37, Section 9] and [36, Section 5.4].
Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]). In particular, they used Hilbert-Poincare series of similar nature to (1) (motivat...
The proofs of the results are given in Section 4. Section 5 describes computational experiments in Macaulay2 we performed to check whether formulas similar to (2) hold for more general fat points in knsuperscript𝑘𝑛k^{n}italic_k start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT.
D
In this section, we apply nDFA and DFA to five real-world weighted networks Karate club weighted network (Karate-weighted for short), Gahuku-Gama subtribes network, the Coauthorships in network science network (CoauthorshipsNet for short), Condensed matter collaborations 1999 (Con-mat-1999 for short) and Condensed mat...
In this section, we apply nDFA and DFA to five real-world weighted networks Karate club weighted network (Karate-weighted for short), Gahuku-Gama subtribes network, the Coauthorships in network science network (CoauthorshipsNet for short), Condensed matter collaborations 1999 (Con-mat-1999 for short) and Condensed mat...
In this section, four real-world un-weighted networks with known labels are studied to investigate nDFA’s empirical performance. Some basic information of the four data are displayed in Table 2, where Karate, Dolphins, Polbooks and Weblogs are short for Zachary’s karate club, Dolphin social network, Books about US pol...
Karate-weighted: This weighted network is collected from a university karate club. In this weighted network, node denotes member, and edge between two nodes indicates the relative strength of the associations. Actually, this network is the weighted version of Karate club network. So, the number of communities is 2 and...
Gahuku-Gama subtribes: This data is the signed social network of tribes of the Gahuku–Gama alliance structure of the Eastern Central Highlands of New Guinea. This network has 16 tribes, and positive or negative link between two tribes means they are allies or enmities, respectively. Meanwhile, there are 3 communities i...
C
Banerjee (2016); Oliehoek and Amato (2016); Foerster et al. (2016) is based on using the centralized information during training. During execution, the agents act using only their respective observations. Following this scheme, Foerster et al. (2016) introduces the RIAL and DIAL algorithms in the context of Q𝑄Qitalic_...
In this work, we take a step towards amending this situation. We propose MA-Trace, a new on-policy actor-critic algorithm, which adheres to the centralized training and decentralized execution paradigm Lowe et al. (2017); Foerster et al. (2018); Rashid et al. (2018). The key component of MA-Trace is the usage of impor...
et al. (2018), a distributed single-agent algorithm. The idea of extending RL algorithms to the multi-agent setting has been successfully executed multiple times. Lowe et al. (2017) propose a multi-agent actor-critic algorithm MADDPG, which is based on the DDPG algorithm Lillicrap et al. (2016). Yu et al. (2020) introd...
Following this idea, Rashid et al. (2018) introduced QMIX, which learns a complex state-dependent decomposition by using monotonic mixing hypernetworks. Extensions of QMIX include MAVEN Mahajan et al. (2019), COMIX de Witt et al. (2020), SMIX(λ𝜆\lambdaitalic_λ) Wen et al. (2020), and QTRAN Son
Reinforcement learning has witnessed impressive development in recent years. Famously, superhuman performance has been achieved in games Go Silver et al. (2016), StarCraft II Vinyals et al. (2019b), Dota 2 Berner et al. (2019) and other applications. These successes are the result of rapid algorithmic development. Rese...
C
The green color in the center of a point indicates that a decision is from RF, while blue is for AB. The outline color reflects the training instances’ class based on a decision’s prediction. The size maps the number of training instances that are classified by a specific decision, and the opacity encodes the impurity ...
The exploration starts with an overview of how 10 RF and 10 AB models performed based on three validation metrics: accuracy, precision, and recall. The models are initially sorted according to the overall score, which is the average sum of the three metrics. This choice guides users to focus mostly on the right-hand s...
The use case, usage scenario, and user study were performed on a MacBook Pro 2019 with a 2.6 GHz (6-Core) Intel Core i7 CPU, an AMD Radeon Pro 5300M 4 GB GPU, 16 GB of DDR4 RAM at 2667 Mhz, running macOS Monterey, and with Chrome (version 99) as the browser. The system can perform interactively after the model training...
This choice was intentional since bagging methods work differently than boosting, as explained in Section Random Forest vs. Adaptive Boosting. Furthermore, each data set is split in a stratified fashion (i.e., keeping the class balance in training/testing split) into 90% of training samples and 10% of testing samples. ...
UMAP is initiated with variable n_neighbors and min_dist fixed to 0.10.10.10.1. To determine the optimal number of clusters to be visualized, DBSCAN Ester1996A is used to compute an estimated number of core clusters from the derived decisions, which is then used to tune the n_neighbors, with a minimum of 2 and a maxi...
D
It is worth emphasizing that estimation of optimal polarization vectors before the hybrid antenna selection stage is inevitable to have full benefit of joint polarization pre-post coding and the corresponding polarization reconfigurable antenna selection in PR-HS-MIMO spatial multiplexing.
The proposed polarization pre-post coding scheme is based on the closed-form derivation for the optimal polarization vectors at one end; whereas, the optimal polarization vectors at both ends of the Tx and Rx are achieved by the iterative methodology. For this reason, the complete analysis to reach the closed-form desc...
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects Ltsubscript𝐿𝑡L_{t}italic_L start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT out of Ntsubscript𝑁𝑡N_{t}italic_N start_POSTSUBSCRIPT italic_t end_POSTSU...
In this paper, we proposed several novel schemes to support PR-HS-MIMO spatial multiplexing whose system is composed of multiple polarization reconfigurable antenna elements at both the Tx and Rx. In the proposed iterative joint polarization pre-post coding, the local optimum usually reached the global optimum of Tx/Rx...
Simulation results in Fig. 12 show the impact of PR-HS-MIMO schemes on the channel capacity. Regardless of the number of selected Tx antenna elements, Ltsubscript𝐿𝑡L_{t}italic_L start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT, the proposed PR-HS-MIMO schemes outperform the conventional HS-MIMO scheme without polariza...
C
Having dealt with the case where we have no extra space, we turn to the setting where the array has length γ⁢n𝛾𝑛\gamma nitalic_γ italic_n for some γ>1𝛾1\gamma>1italic_γ > 1. This is the setting which is important for our reductions to the online packing problems. In the following two sections, we prove lower and up...
Assume for simplicity that γ𝛾\gammaitalic_γ is a constant, e.g., γ=2𝛾2\gamma=2italic_γ = 2 and that we want to prove a lower bound of ΔΔ\Deltaroman_Δ on the total cost for some Δ=(log⁡n)Θ⁢(1)Δsuperscript𝑛Θ1\Delta=(\log n)^{\Theta(1)}roman_Δ = ( roman_log italic_n ) start_POSTSUPERSCRIPT roman_Θ ( 1 ) end_POSTSUPERSC...
In particular, we use the lower bound from Theorem 1 to create an adaptive stream of pieces that will force any packing algorithm to use excessive space. In the reduction, the numbers to be sorted in the online sorting problem correspond to the slopes of the spine segments in the packing problems, and the impossibility...
We then get a strip packing algorithm 𝒜𝒜\mathcal{A}caligraphic_A in S𝑆Sitalic_S in a similar way as in (a), and obtain a lower bound on the asymptotic competitive ratio of 𝒜*superscript𝒜\mathcal{A}^{*}caligraphic_A start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT in a similar way.
Let us remark that the adversarial stream of reals leading to the lower bound is chosen in a deterministic and adaptive way, i.e., depending on where the preceding reals have been placed in the array. Note that a deterministic oblivious adversary cannot lead to a lower bound above 1111 on ΔΔ\Deltaroman_Δ since for any ...
D
It should be noted that excessive key points will not improve performance but increase the computational cost (e.g. 2.5712.5712.5712.571mm for 100 key points and 2.5992.5992.5992.599mm for 150 key points in Table 5), but sparse key points will lead to performance degradation (e.g. MRE increase from 2.5712.5712.5712.571...
It should be noted that excessive key points will not improve performance but increase the computational cost (e.g. 2.5712.5712.5712.571mm for 100 key points and 2.5992.5992.5992.599mm for 150 key points in Table 5), but sparse key points will lead to performance degradation (e.g. MRE increase from 2.5712.5712.5712.571...
To compare with other key point detection methods, we adopt SIFT, SURF, ORB, and random selection as our key point selectors. For SIFT, SURF, and ORB, we detect the key points and then filter out some very close points. For random selection, we randomly select 100 points as the key points. The results are listed on Tab...
Q: How good is the use of SIFT key points as substitutes for landmarks? Figure 5 demonstrate the relationship between landmarks and potential key points from handcraft methods in feature level (Eq. (9)). The similarities calculated by potential key points are positively correlated with those by landmarks to a large ext...
Further, the SIFT key points for different images are not in correspondence, directly applying Eq. (3) is not possible. To address such an issue, we perform the template matching in a reverse order, that is, for an image X𝑋Xitalic_X with its key points QXsuperscript𝑄𝑋Q^{X}italic_Q start_POSTSUPERSCRIPT italic_X end_...
B
\mathrm{~{}permutation~{}matrix}\}}\frac{\|\hat{\Pi}-\Pi\mathcal{P}\|_{F}}{\|% \Pi\|_{F}}.roman_Relative roman_error = roman_min start_POSTSUBSCRIPT caligraphic_P ∈ { italic_K × italic_K roman_permutation roman_matrix } end_POSTSUBSCRIPT divide start_ARG ∥ over^ start_ARG roman_Π end_ARG - roman_Π caligraphic_P ∥ start...
MMDF is a generative model and fuzzy weighted modularity is a general modularity for overlapping weighted networks. We expect that our model MMDF and fuzzy weighted modularity proposed in this paper will have wide applications in learning and understanding the latent structure of overlapping weighted networks, just as ...
(Comparison to LFR benchmark networks) In [44], the authors proposed LFR benchmark graphs for testing community detection algorithms on non-overlapping unweighted networks. In [45], the authors proposed generalizations of LFR benchmark networks for testing community detection methods on overlapping weighted graphs. \ad...
Accuracy rate. For the task of determining the number of communities, similar to [57], we use Accuracy rate to measure the performance of KDFSP and its competitors in our simulation studies, where Accuracy rate is the fraction of times a method correctly estimates K𝐾Kitalic_K.
Table 2 records the estimated number of communities of KDFSP and its competitors for real-world networks used in this paper. The results show that, for networks with known K𝐾Kitalic_K, our KDFSP correctly determines the number of communities for these networks while NB and BHac fail to determine the correct K𝐾Kitali...
C
In addition, we also perform detailed ablation studies on how the effectiveness of CwD is influenced by factors such as the number of classes at the initial CIL phase, the number of exemplars for each class and regularization coefficient of the CwD term.
The contributions of this paper are as follows: 1) We empirically discover that encouraging the CIL learner to mimic the oracle model in the initial phase can boost the CIL performance. 2) We find that compared with naïvely-trained initial-phase model, data representations of each class produced by the oracle model sca...
We are thus motivated to enforce data representations of each class to be more uniformly scattered at the initial phase, which mimics the representations produced by the oracle model. To this end, we first theoretically show that, a group of embeddings will scatter more uniformly in the space if its correlation matrix ...
Specifically, at the initial phase, we regularize the CIL learner to produce similar representations as the model trained with data of all classes (i.e., the oracle model), since the upper bound of CIL is the oracle model. According to our results, this additional regularization drastically improves CIL performance.
Inspired by this, we consider improving CIL from a novel perspective—encouraging the CIL learner to mimic the oracle model in the initial phase. To achieve this, we first need to understand the difference between representations produced by a naïvely-trained initial-phase model and the oracle model.
A
As already mentioned above, MEE and Robustness were initially assessed on the ground truth landmarks in the baseline images. For the inter-rater analysis, follow-up images were used. Kindly note that due to the difference in image sources, the results may not be directly comparable.
In a second analysis, we made use of the distribution (D𝐷Ditalic_D) of the inter-rater annotation variability, as described in section 3.5 and visualized in Fig. 4(a). We therefore computed hit rate curves (Waldmannstetter et al., 2023), by sampling thresholds from D𝐷Ditalic_D using the formula:
In a second analysis, we made use of the distribution (D𝐷Ditalic_D) of the inter-rater annotation variability, as described in section 3.5 and visualized in Fig. 4(a). We therefore computed hit rate curves (Waldmannstetter et al., 2023), by sampling thresholds from D𝐷Ditalic_D using the formula:
For further analysis with respect to the inter-rater annotation variability as described in Section 3.5, we performed two additional analyses, similar to (Waldmannstetter et al., 2023). In the first analysis, we defined a spherical region of interest (ROI) for each reference ground truth landmark (xlB)superscriptsubscr...
For further analysis with respect to the inter-rater annotation variability as described in Section 3.5, we performed two additional analyses, similar to (Waldmannstetter et al., 2023). In the first analysis, we defined a spherical region of interest (ROI) for each reference ground truth landmark (xlB)superscriptsubscr...
A
is the LHS chain of Σ𝑆𝑡𝑎𝑡𝑖𝑜𝑛subscriptΣ𝑆𝑡𝑎𝑡𝑖𝑜𝑛\Sigma_{\mathit{Station}}roman_Σ start_POSTSUBSCRIPT italic_Station end_POSTSUBSCRIPT. In both Q3subscript𝑄3Q_{3}italic_Q start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT and Q4subscript𝑄4Q_{4}italic_Q start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT, all the attributes of th...
origin},\,\,\,\mathit{Schedule}\ :\ \{\mathrm{train},\mathrm{time},\mathrm{% duration}\}\rightarrow\mathrm{destination}italic_Schedule : { roman_train , roman_time } → roman_origin , italic_Schedule : { roman_train , roman_time , roman_duration } → roman_destination
The set of FDs consisting of those given in Example 3 has an LHS chain since {train,time}⊆{train,time,duration}traintimetraintimeduration\{\mathrm{train},\mathrm{time}\}\subseteq\{\mathrm{train},\mathrm{time},% \mathrm{duration}\}{ roman_train , roman_time } ⊆ { roman_train , roman_time , roman_duration } for the FDs o...
origin},\,\,\,\,\,\phi_{2}=\mathit{Schedule}:\{\mathrm{train},\mathrm{time},% \mathrm{duration}\}\rightarrow\mathrm{destination}italic_ϕ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_Schedule : { roman_train , roman_time } → roman_origin , italic_ϕ start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = italic_Schedule : { roman_...
The primary-lhs positions of the atom over 𝑆𝑐ℎ𝑒𝑑𝑢𝑙𝑒𝑆𝑐ℎ𝑒𝑑𝑢𝑙𝑒\mathit{Schedule}italic_Schedule in both queries are the first, fourth, and fifth positions, corresponding to the attributes train,timetraintime\mathrm{train},\mathrm{time}roman_train , roman_time, and durationduration\mathrm{duration}roman_durati...
D
1&1&0\end{pmatrix}\,.italic_A = ( start_ARG start_ROW start_CELL 0 end_CELL start_CELL 1 end_CELL start_CELL 1 end_CELL end_ROW start_ROW start_CELL 0 end_CELL start_CELL 0 end_CELL start_CELL 0 end_CELL end_ROW start_ROW start_CELL 1 end_CELL start_CELL 1 end_CELL start_CELL 0 end_CELL end_ROW end_ARG ) .
We represent an absorbing state of an absorbing random walk on a graph as a node555Throughout our paper, our figures do not show the nodes for the absorbing states that are associated with absorbing graphs. Our figures only show nodes that are associated with transient (i.e., non-absorbing) states of the corresponding ...
Intuitively, if the node-absorption rate of node 2222 is larger than the node-absorption rates of nodes 1111 and 3333, then node 2222 is in a different community than nodes 1111 and 3333 in the effective community structure. We want to check whether or not the partition with the minimum value of L(a)superscript𝐿𝑎L^{...
Figure 6: An example network with four planted cliques and two partitions of its set of nodes. In (a), the node-absorption rates of the large nodes are δi=7subscript𝛿𝑖7\delta_{i}=7italic_δ start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 7 and the node-absorption rates of the small nodes are δi=1subscript𝛿𝑖1\delta_...
Figure 1: Consider an absorbing random walk on the depicted four-node network, and suppose that the absorption rate of node 2 is much larger than the absorption rates of the other nodes. Detecting communities via modularity maximization or the standard InfoMap algorithm produces a partition of the network into a single...
B
(1) The 3/2 Factor; Throttled Trees. As mentioned in §III, the 3/2 factor is an accurate estimate if the corresponding T𝑇Titalic_T’s are equal. However, in the above equation, T⁢[i,k,h−1]𝑇𝑖𝑘ℎ1T[i,k,h-1]italic_T [ italic_i , italic_k , italic_h - 1 ] and T⁢[k,j,h−1]𝑇𝑘𝑗ℎ1T[k,j,h-1]italic_T [ italic_k , italic_j , ...
In our overall methodology, to conserve node and link resources, we post-process or ”throttle” the swapping-tree obtained from the DP algorithm by increasing the generation latencies of some of the non-root nodes such that (i) the latencies of siblings are equalized, and (ii) the parents latency is related to the child...
Consider two siblings (A,B)𝐴𝐵(A,B)( italic_A , italic_B ) and (B,C)𝐵𝐶(B,C)( italic_B , italic_C ) at a depth777Defined as the distance of a node from the root; depth of the root is 0. of i𝑖iitalic_i (i>0𝑖0i>0italic_i > 0) from 𝒯𝒯\mathcal{T}caligraphic_T’s root.
latency (tgsubscript𝑡𝑔t_{g}italic_t start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT) and probability of success (pgsubscript𝑝𝑔p_{g}italic_p start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT). Generation latency is the time between successive attempts by the node to excite the
less555We note that, in our context, the storage time as well as the memory coherence time are statistical quantities due to the underlying statistical mechanisms. However, for the purposes of selecting a swapping tree, we use a fixed decoherence threshold τdsubscript𝜏𝑑\tau_{d}italic_τ start_POSTSUBSCRIPT italic_d en...
A
The rest of the article consists of six sections. Section 2 provides background information and the factors triggering the need for the emergence of XAI in autonomous driving. Section 3 covers the concept of explainability for AVs by analyzing 1) cross-disciplinary perspectives necessitating explanations, 2) the role o...
AVs, also referred to as self-driving vehicles, are intelligent vehicles equipped with advanced sensors, cameras, RADAR, LIDAR, GPS, and sophisticated learning algorithms that enable them to navigate and operate without human intervention [11]. To discern, identify, and distinguish the objects in their operational sur...
Explainable autonomous driving is a self-driving approach powered by a compendium of AI techniques 1) ensuring an acceptable level of safety for a vehicle’s real-time decisions, 2) providing explanatory information on the action decisions in critical traffic scenarios in a timely manner, and 3) obeying all traffic rul...
We have presented a systematic overview of state-of-the-art investigations, emerging paradigms, and a future perspective of XAI approaches for autonomous driving. Insights from these studies reveal the existing gaps, and we have proposed a conceptual framework for explainable autonomous driving by incorporating missin...
Wang et al. [77] propose an approach that enables a human driver to provide scene forecasting to an intelligent driving system using a purposeful gaze. They develop a graphical user interface to understand the effect of human drivers on the prediction and control of an intelligent vehicle. A simulator is used to test a...
A
Table 2: Recall@N of Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD, MobileNetV3-NetVLAD, Ghost-NetVLAD and Ghost-dil-NetVLAD on the Pitts30k test dataset, Tokyo 24/7 and TJU-Location test dataset. We report all results for each of them, including the best, second-best and third-best results.
GhostCNN ensures a lightweight architecture and low computational cost by replacing part of convolution operations with a series of linear transformations to generate ghost feature maps. Though the FLOPs of Ghost-dil-NetVLAD is only 1%percent11\%1 % of that of VGG16-NetVLAD and Patch-NetVLAD, and the parameters is 17%p...
Tokyo 24/7 dataset. The experimental results demonstrate that Patch-NetVLAD achieves the best performance on the Pitts30k test dataset and Tokyo 24/7 dataset, while Ghost-dil-NetVLAD performs the best on TJU-Location test dataset because most Recall@N of Ghost-dil-NetVLAD are greater than those of the remaining models....
Table 2: Recall@N of Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD, MobileNetV3-NetVLAD, Ghost-NetVLAD and Ghost-dil-NetVLAD on the Pitts30k test dataset, Tokyo 24/7 and TJU-Location test dataset. We report all results for each of them, including the best, second-best and third-best results.
In this section, six models including Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD (Considering our limited computational resources, we only use its built-in storage mode. In this paper, we uniformly call this method Patch-NetVLAD.), MobileNetV3-NetVLAD (lightweight CNN + NetVLAD), Ghost-NetVLAD (the Ghost module does n...
B
In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ...
In Chapter 2, we collect all the notations, definitions and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm to solve Boolean equations systems and the algebraic attack to nonlinear filter generators presented in [10].
Other versions of the XL-algorithm can be found in [9]. If t𝑡titalic_t is big enough, we expect to find one solution for the system. In this case, the complexity of XL will be essentially the complexity of one single Gaussian reduction in Step 2. Let N𝑁Nitalic_N be the number of equations generated in XL, and T𝑇Tita...
The XL-Algorithm, introduced by Courtois, Klimov, Patarin, and Shamir in [8], is a computation method for solving a system as the one in (1). Assume that all the fisubscript𝑓𝑖f_{i}italic_f start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT, for i=1,…,t𝑖1…𝑡i=1,\dots,titalic_i = 1 , … , italic_t, have the same positive d...
The core idea of our new algebraic attack is to use many annihilators simultaneously, instead of only one, and provide a good estimate of the number of keystream bits needed to perform the attack, which is strictly related to the number of linearly independent equations after the multiply phase in the XL-Algorithm. Ind...
A
The last comparison is with SES, which performs poorly, and its gain is only slightly above the best Nash equilibrium. Conversely, it is almost not exploitable. Our results are consistent with results in the paper Liu et al. (2022) and are a direct consequence of using the information from the opponent model only to se...
We initialize strategy to uniform, which gives us range in the next public state (12,121212\frac{1}{2},\frac{1}{2}divide start_ARG 1 end_ARG start_ARG 2 end_ARG , divide start_ARG 1 end_ARG start_ARG 2 end_ARG). We give the range to the value function, which returns values as if we played equilibrium in the rest of the...
We compared CDBR with ABR Timbers et al. (2020) on Leduc and different imperfect information Goofspiel. The results are in Table 1, showing that our method is slightly behind in Leduc, even with the highest search depth. In Goofspiel, CDBR1, which looks only one action into the future, is already pretty good, and as s...
We compare LBR and CDBR in Leduc Hold’em. We also compare CDBR with the BR in imperfect information Goofspiel 5, but without LBR, which is poker-specific. We show that CDBR and LBR are very similar with smaller steps, and as we increase the depth limit of CDBR, it starts outperforming LBR. The behavior differs slightly...
In the Nash equilibrium of this game, player 1111 plays heads with probability 2323\frac{2}{3}divide start_ARG 2 end_ARG start_ARG 3 end_ARG and player 2222 chooses his own version of the game with probability 2323\frac{2}{3}divide start_ARG 2 end_ARG start_ARG 3 end_ARG and follows with only heads. Public states in t...
B
given any partition 𝒜𝒜{\mathcal{A}}caligraphic_A of the vertex set, in a linear number of operations (seeing only Gpsubscript𝐺𝑝G_{p}italic_G start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT) the greedy amalgamating algorithm finds a partition 𝒜′superscript𝒜′{\mathcal{A}}^{\prime}caligraphic_A start_POSTSUPERSCRIPT...
In this subsection we show that Theorem 2.1 on the modularity value q∗⁢(Gn,k,p,q)superscript𝑞subscript𝐺𝑛𝑘𝑝𝑞q^{*}(G_{n,k,p,q})italic_q start_POSTSUPERSCRIPT ∗ end_POSTSUPERSCRIPT ( italic_G start_POSTSUBSCRIPT italic_n , italic_k , italic_p , italic_q end_POSTSUBSCRIPT ) of the stochastic block model follows quick...
The proof follows that of the non-weighted case, Theorem 1.1, line by line with the following adaptations. In place of the fattening lemma used on the underlying graph, use the weighted version, Lemma 10.3; and replace instances of G𝐺Gitalic_G and Gpsubscript𝐺𝑝G_{p}italic_G start_POSTSUBSCRIPT italic_p end_POSTSUBSC...
Observe that Theorem 1.1 is the special case of Theorem 10.1 when w𝑤witalic_w is {0,1}01\{0,1\}{ 0 , 1 }-valued, and similarly Theorem 1.2 is a special case of Theorem 10.2. In order to prove Theorems 10.1 and 10.2, we may use almost the same proofs as before. We need a natural minor variant of Lemma 3.1.
We shall prove Theorem 2.1 in Section 10, using a deterministic lemma, Lemma 10.6, together with Theorem 10.5, which is a version of Theorem 1.2 with weighted underlying graph. We note that [2, 3] showed that whp the modularity optimal partition will agree with the planted partition except for o⁢(n)𝑜𝑛o(n)italic_o ( i...
C
The macro literature, in line with most validated theoretical models of migration, also investigates whether the effect is conditioned to income levels of the country of origin of potential migrants (Marchiori et al., , 2012; Beine and Parsons, , 2015, 2017). The role of income in a specific origin country experiencin...
Note: Sub-sample of micro-level studies about migration and environmental factors from Scopus, Web of Science, Google Scholar, IDEAS RePEc, and previous meta-analyses (Hoffmann et al., , 2020; Beine and Jeusette, , 2021) collected, merged, screened and included by the authors.
Note: Sample of academic contributions about migration and environmental factors from Scopus, Web of Science, Google Scholar, IDEAS RePEc, and previous meta-analyses (Hoffmann et al., , 2020; Beine and Jeusette, , 2021) collected, merged, screened and included by the authors.
Note: PRISMA Diagram (Page et al., 2021a, ) of identification, screening, eligibility and inclusion stages of academic contributions. The resulting sample is obtained through a search on Scopus, Web of Science, Google Scholar, IDEAS RePEc, and previous meta-analyses (Hoffmann et al., , 2020; Beine and Jeusette, , 2021)...
Note: Sample of academic contributions about migration and environmental factors from Scopus, Web of Science, Google Scholar, IDEAS RePEc, and previous meta-analyses (Hoffmann et al., , 2020; Beine and Jeusette, , 2021) collected, merged, screened and included by the authors.
A
To tackle the difficulty, our primary idea is to facilitate collaboration between the meta and base levels. Specifically, we aim to leverage negative terms from both levels to handle the positive term. However, it turns out that the positive term cannot be entirely offset by the combined negative terms from meta and b...
The derivation uses a fixed learning rate for the meta-algorithm, which not only simplifies the proof but also more effectively illustrates the collaboration of meta-base two layers in the analysis. The analysis in (46) highlights the crucial role of collaboration between meta and base layers. The positive terms — base...
The above forms the main idea of our proposed collaborative online ensemble framework. The term “collaboration” refers to the interplay between meta and base layers. Indeed, on their own, neither the base level nor the meta level can achieve a gradient-variation base/meta regret; each incurs some additional positive t...
To tackle the difficulty, our primary idea is to facilitate collaboration between the meta and base levels. Specifically, we aim to leverage negative terms from both levels to handle the positive term. However, it turns out that the positive term cannot be entirely offset by the combined negative terms from meta and b...
The right hand side of (25) is a weighted combination of stability of base-learners (hence called mixed stability), and the second one is the stability of the meta-algorithm’s weights (hence called meta stability). We also similarly define ∑t=2T∥𝐱t,i−𝐱t−1,i∥22superscriptsubscript𝑡2𝑇superscriptsubscriptdelimited-∥∥s...
B
Set x=x(1)𝑥superscript𝑥1x=x^{(1)}italic_x = italic_x start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT. For every n≥1𝑛1n\geq 1italic_n ≥ 1, let (kn,x(n+1))subscript𝑘𝑛superscript𝑥𝑛1(k_{n},x^{(n+1)})( italic_k start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT , italic_x start_POSTSUPERSCRIPT ( italic_n + 1 ) end_POSTS...
Since σn⁢(x(n))superscript𝜎𝑛superscript𝑥𝑛\sigma^{n}(x^{(n)})italic_σ start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_x start_POSTSUPERSCRIPT ( italic_n ) end_POSTSUPERSCRIPT ) is a shift of y𝑦yitalic_y, we have for every k,n≥1𝑘𝑛1k,n\geq 1italic_k , italic_n ≥ 1
For n≥m⁢(K)𝑛𝑚𝐾n\geq m(K)italic_n ≥ italic_m ( italic_K ), we have by (8.4), σn⁢(wn)⊂ℒ⁢(y)superscript𝜎𝑛subscript𝑤𝑛ℒ𝑦\sigma^{n}(w_{n})\subset\mathcal{L}(y)italic_σ start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_w start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) ⊂ caligraphic_L ( italic_y ).
For every n≥1𝑛1n\geq 1italic_n ≥ 1, there is some N≥1𝑁1N\geq 1italic_N ≥ 1 and a∈A𝑎𝐴a\in Aitalic_a ∈ italic_A such that x[−n,i)subscript𝑥𝑛𝑖x_{[-n,i)}italic_x start_POSTSUBSCRIPT [ - italic_n , italic_i ) end_POSTSUBSCRIPT is a factor of σN⁢(a)superscript𝜎𝑁𝑎\sigma^{N}(a)italic_σ start_POSTSUPERSCRIPT italic_N ...
assume that σn⁢(u)=σn⁢(t)=εsuperscript𝜎𝑛𝑢superscript𝜎𝑛𝑡𝜀\sigma^{n}(u)=\sigma^{n}(t)=\varepsilonitalic_σ start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_u ) = italic_σ start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT ( italic_t ) = italic_ε. Then, by Lemma 5.9, σnsuperscript𝜎𝑛\sigma^{n}italic_σ st...
A
{n}}{n}+\bigg{(}\frac{C_{n}}{n}\bigg{)}^{\frac{2}{2s+1}}\Bigg{)}.italic_R ( caligraphic_T start_POSTSUBSCRIPT italic_n , italic_d end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT ( italic_C start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ) ) = roman_Ω ( divide start_ARG 1 end_ARG start_ARG italic_n e...
The result in Theorem 4 for s≥1/2𝑠12s\geq 1/2italic_s ≥ 1 / 2 (that is, 2⁢k+2≥d2𝑘2𝑑2k+2\geq d2 italic_k + 2 ≥ italic_d) was already derived in Sadhanala et al. (2017). More precisely, these authors established the third term on the right-hand side in
ℋdk⁢(1)superscriptsubscriptℋ𝑑𝑘1\mathcal{H}_{d}^{k}(1)caligraphic_H start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT ( 1 ). This matches the optimal rate for estimation over Holder classes (see Sadhanala et al. (2017) for a formal statement and proof for
smoothness index k+1𝑘1k+1italic_k + 1 in each coordinate direction, and any third index q≥1𝑞1q\geq 1italic_q ≥ 1, is indeed n−2⁢s/(2⁢s+1)superscript𝑛2𝑠2𝑠1n^{-2s/(2s+1)}italic_n start_POSTSUPERSCRIPT - 2 italic_s / ( 2 italic_s + 1 ) end_POSTSUPERSCRIPT for s>1/2𝑠12s>1/2italic_s > 1 / 2 (or 2⁢k+2>d2𝑘2𝑑2k+2>d2 it...
This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers in Section 1.3). The models of smoothness for f0subscript𝑓0f_{0}italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT that we
A
The paper introduces a new data-driven topological data analysis (TDA) method for studying dynamically changing human functional brain networks obtained from the resting-state functional magnetic resonance imaging (rs-fMRI). Leveraging persistent homology, a multiscale topological approach, we present a framework that ...
The method employs the Wasserstein distance to measure the topological differences between networks and demonstrates greater efficiency and performance than the commonly used k𝑘kitalic_k-means clustering in defining the state spaces of dynamic brain networks.
Figure 9: The estimated state spaces of dynamically changing brain networks. The correlations are averaged over every time point and subject within each state for k𝑘kitalic_k-means clustering (top) and Wasserstein distance based topological clustering (bottom). In k𝑘kitalic_k-means clustering, the connectivity patter...
In this paper, we propose to develop a novel dynamic persistent homology framework for time varying network data. Our coherent scalable framework for the computation is based on the Wasserstein distance between persistent diagrams, which provides the topological profile of data into 2D scatter plots. We directly establ...
The Wasserstein distance or Kantorovich–Rubinstein metric, as originally defined between probability distributions, can be used to measure topological differences (Vallender, 1974; Canas and Rosasco, 2012; Berwald et al., 2018). Due to the connection to the optimal mass transport, which enjoys various optimal propertie...
A
\left\lVert v\right\rVert∥ roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT italic_v ∥ ≥ italic_σ start_POSTSUBSCRIPT italic_m italic_i italic_n end_POSTSUBSCRIPT ( roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT ) ∥ italic_v ∥, where σm⁢i⁢n⁢(eJ⁢τ)subscript𝜎𝑚𝑖𝑛superscripte𝐽𝜏...
τ>0𝜏0\tau>0italic_τ > 0. If A𝐴Aitalic_A is non-diagonalizable with eigenvalue λ∈(0,0.5)𝜆00.5\lambda\in(0,0.5)italic_λ ∈ ( 0 , 0.5 ), σm⁢i⁢n⁢(eJ⁢τ)subscript𝜎𝑚𝑖𝑛superscripte𝐽𝜏\sigma_{min}(\mathrm{e}^{J\tau})italic_σ start_POSTSUBSCRIPT italic_m italic_i italic_n end_POSTSUBSCRIPT ( roman_e start_POSTSUPERSCRIPT ...
\left\lVert v\right\rVert∥ roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT italic_v ∥ ≥ italic_σ start_POSTSUBSCRIPT italic_m italic_i italic_n end_POSTSUBSCRIPT ( roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT ) ∥ italic_v ∥, where σm⁢i⁢n⁢(eJ⁢τ)subscript𝜎𝑚𝑖𝑛superscripte𝐽𝜏...
value of eJ⁢τsuperscripte𝐽𝜏\mathrm{e}^{J\tau}roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT. Thus, ∥R∥≥σm⁢i⁢n⁢(eJ⁢τ)delimited-∥∥𝑅subscript𝜎𝑚𝑖𝑛superscripte𝐽𝜏\left\lVert R\right\rVert\geq\sigma_{min}(\mathrm{e}^{J\tau})∥ italic_R ∥ ≥ italic_σ start_POSTSUBSCRIPT italic_m italic_i italic_n en...
σm⁢i⁢n⁢(eJ⁢τ)subscript𝜎𝑚𝑖𝑛superscripte𝐽𝜏\sigma_{min}(\mathrm{e}^{J\tau})italic_σ start_POSTSUBSCRIPT italic_m italic_i italic_n end_POSTSUBSCRIPT ( roman_e start_POSTSUPERSCRIPT italic_J italic_τ end_POSTSUPERSCRIPT ) is a monotonically increasing function of τ𝜏\tauitalic_τ. Thus, σm⁢i⁢n⁢(eJ⁢τ)>1subscript𝜎𝑚𝑖�...
C
Control of PDE systems has been widely explored over the years [15, 16, 17, 18]. Similar to ODEs, notions of ISSt for PDE systems have garnered a lot of attention recently (see survey paper [19]). For example, PDE ISSt have been explored for reaction-diffusion systems [20], hyperbolic systems [21], [22], parabolic syst...
In light of the aforementioned discussion, the main contributions of this paper is the following: Building upon the existing literature, we extend PDE safety research by designing a feedback based control that satisfies both pISSf and ISSt under disturbances, utilizing pISSf barrier functional characterization and ISS...
Control of PDE systems has been widely explored over the years [15, 16, 17, 18]. Similar to ODEs, notions of ISSt for PDE systems have garnered a lot of attention recently (see survey paper [19]). For example, PDE ISSt have been explored for reaction-diffusion systems [20], hyperbolic systems [21], [22], parabolic syst...
In the subsequent sections, our approach of finding the control gains are as follows. First, in Section 3, we find the conditions on control gains that satisfy the pISSf criterion in (9). Next, in Section 4, we show that the pISSf conditions on control gains additionally guarantee ISSt for the system in the sense of (1...
In this paper, we have explored safe control of a class of linear Parabolic PDEs under disturbances. First, we defined unsafe sets and distance of the system states from such unsafe sets. Next, we constructed both control barrier and Lyapunov functional in order to develop a design framework for the controller under s...
A
Worker’s affect, emotions and mood have also been studied in this domain. For example, Bin Morshed et al. [21] explore mood instability of 603 information workers and find that the sleep and activity duration obtained from a Garmin wearable are negatively correlated with mood instability scores. The authors compute moo...
In another study of 50 hospital workers which also uses PANAS, Nadarajan et al. [23] find that speech activity can explain some variance in predicting positive affect measure. Employees wear a specifically designed audio badge during their work-shift hours. The authors extract several features from the audio to identi...
In yet another study, Booth et al. employ multiple sensing modalities to study stress, anxiety and affect of hospital workers [18]. The ground truth for stress and anxiety are obtained as self-reports on a 5-point Likert scale, whereas PANAS is used for assessing affect. The work investigates how patterns in movement d...
Other health and wellbeing related topics that have been studied using passive sensing among workers include focus and awakeness. Soto et al. utilize biometric data from an arm-wear (viz., physical activity, HR, skin response, skin temperature and respiration) to estimate worker’s stress, focus and awakeness [25]. The...
Other works target efforts to support the design of future interventions in the workplace. Kimani et al. [30] created a conversational agent designed to assist information workers in achieving various work-related objectives, such as task scheduling and prioritization, task switching, providing reminders to take break...
A
Nevertheless, we observe that FedACG outperforms other methods consistently in most cases; the accuracy gap between FedACG and its strongest competitor becomes larger in these more challenging scenarios. The results from the large-scale experiments exhibit the robustness of FedACG to the data heterogeneity and low clie...
We test the accuracy of our algorithm for the Dirichlet (0.3) and i.i.d. splits by varying the values of λ𝜆\lambdaitalic_λ and β𝛽\betaitalic_β, which control the momentum integration of the server model and the weight of the proximal term, respectively.
Benefit of accelerated client gradient. For FedAvgM, FedCM, and FedACG (without local regularization for fair comparisons) on CIFAR10, we visualize (a) global training loss surfaces with three local models as black circles in the parameter space, (b) weight divergence, and (c) layer-wise CKA values. In (c), the x𝑥xita...
For these experiments, we set the number of clients to 2,000, with data splits following [5], and randomly select 5 clients to participate in training during each communication round. We employ a two-layer CNN for FEMNIST and a four-layer CNN for CelebA.
To better understand the effectiveness of the accelerated client gradient, we compare two momentum-based algorithms, FedAvgM and FedCM, by visualizing global loss surfaces, weight divergence, and layer-wise CKA values during training. Figure 2(a) highlights a better generalization of FedACG’s local models to global los...
B
Above, Rjsubscript𝑅𝑗R_{j}italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT is a certain PUR, Ijsubscript𝐼𝑗I_{j}italic_I start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT is the total interference at Rjsubscript𝑅𝑗R_{j}italic_R start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT from other PUs, sjsubscript𝑠𝑗s_{j}it...
The DeepAlloc-Greedy approach extends naturally. For DeepAlloc-NN and DeepAlloc-RNN, we use a simple scheme of uniformly (and randomly) partitioning the given set of SUs into multiple sets and assign a channel to each set; incorporating channel assignment in the model would require much more training, and thus not cons...
Collecting Training Samples. Recall that a sample in PU-Setting is comprised of a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample is comprised of spectrum sensors’ received power readings. The location of entities is available by using a GPS don...
The general spectrum allocation problem is to allocate optimal power to an SU’s request across spatial, frequency, and temporal domains. We focus on the core function approximation problem, which is to determine the optimal power allocation to an SU for a given location, channel, and time instant—since frequency and te...
Determining Labels (Optimal Power Allocated to SU). We essentially do a binary search to estimate the optimal power that can be allocated to SU. To determine whether PU to PUR transmission is incurring any harmful interference from SU, we have PU continuously streaming ASCII messages over the 1 MHz bandwidth channel c...
C
The system Tα⁢α=−c⁢αk⁢Tsubscript𝑇𝛼𝛼𝑐superscript𝛼𝑘𝑇T_{\alpha\alpha}=-c\alpha^{k}Titalic_T start_POSTSUBSCRIPT italic_α italic_α end_POSTSUBSCRIPT = - italic_c italic_α start_POSTSUPERSCRIPT italic_k end_POSTSUPERSCRIPT italic_T consists of two decoupled equations of the type u′′⁢(α)=−c⁢αk⁢u⁢(α)superscript𝑢′′𝛼𝑐...
The affine curvature of 𝒞𝒞\mathcal{C}caligraphic_C at p𝑝pitalic_p is the curvature of the osculating conic 777The osculating conic to 𝒞𝒞\mathcal{C}caligraphic_C at p𝑝pitalic_p passes through p𝑝pitalic_p, and the derivatives of the affine arc-length parameterizations at α=0𝛼0\alpha=0italic_α = 0 (with α=0𝛼0\alp...
This paper is the result of an REU project, which turned out to be of great pedagogical value, as it taught the students to combine the results and methods from various subjects: differential geometry, algebra, analysis and numerical analysis. In addition, this project involved theoretical work and the work of designin...
This work was performed during the REU 2020 program at the North Carolina State University (NCSU) and was supported by the Department of Mathematics at NCSU and the NSA grant H98230-20-1-0259. At the time when the project was performed, Jose Agudelo was an undergraduate student at North Dakota State University, Brooke...
The Euclidean curvature of a circle of radius r𝑟ritalic_r is constant and is equal to 1r1𝑟\frac{1}{r}divide start_ARG 1 end_ARG start_ARG italic_r end_ARG. The Euclidean curvature of 𝒞𝒞\mathcal{C}caligraphic_C at p𝑝pitalic_p equals the curvature of its osculating circle 555The osculating circle to 𝒞𝒞\mathcal{C}...
C
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in T𝑇Titalic_T is proved, where T𝑇Titalic_T is the lengt...
To the best of our knowledge, Coordinate descent [31], as an important class of optimization algorithms, is not sufficiently analyzed by researchers in the online optimization community. In coordinate descent algorithms, most components of the decision variable are fixed during one iteration while the cost function is ...
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in T𝑇Titalic_T is proved, where T𝑇Titalic_T is the lengt...
In this work, we have proposed an online coordinate descent algorithms to deal with optimization problems that may change over time. Three widely used update rules of coordinate descent are considered. Under different assumptions, we have provided different upper bounds on the regrets of these online algorithms. In par...
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to look at possibilities of using coordinate descent methods to solve OCO...
A
Since our implementation of HOTS uses K-means for learning the time surfaces, which requires relatively little data to train, we only use 10% of the training set for the N-MNIST results. Files were randomly selected at each run. However, our testing results are reported based on performance on the entire test set.
Table III compares the test-set classification accuracies on the two datasets for all the pulse settings listed in Tables I and II. These results were calculated over 5 runs on N-MNIST and 10 runs on POKERDVS. We classified with a polynomial support vector machine of order 3. We
To find the parameters of the stochastic model, we repeated the model fit using data from multiple recordings of pulses with different pulse amplitudes (ranging from 1V𝑉Vitalic_V to 4V𝑉Vitalic_V) and durations (ranging from 200μ𝜇\muitalic_μs to 1 ms). Based on these fits, we calculated the distributions of the model...
The computation time for this test was heavily dependent on the number of clusters and the number of files. For this reason, we set the number of clusters to N[1]=32superscript𝑁delimited-[]132N^{[1]}=32italic_N start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT = 32 for layer 1 and N[2]=64superscript𝑁delimited-[]264N^{...
Since our implementation of HOTS uses K-means for learning the time surfaces, which requires relatively little data to train, we only use 10% of the training set for the N-MNIST results. Files were randomly selected at each run. However, our testing results are reported based on performance on the entire test set.
A
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot DeGroot (1974). In this model the o...
In this paper we study the following Hegselmann-Krause system (HKS). We have n𝑛nitalic_n agents and their opinions are modeled by points in d𝑑ditalic_d-dimensional Euclidean space ℝdsuperscriptℝ𝑑\mathbb{R}^{d}blackboard_R start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT, for some d≥1𝑑1d\geq 1italic_d ≥ 1. The age...
Johnsen (1990) extended this by incorporating private opinions. Every agent has a private opinion which does not change and an expressed opinion that changes over time. The expressed opinion of an agent is determined as a function of the expressed opinions of its neighbors and its private opinion.
Johnsen (1990) (extending earlier work by DeGroot DeGroot (1974)) each agent has an innate opinion and strategically selects an expressed opinion that is a compromise of its innate opinion and the opinions of its neighbors. Recently, co-evolutionary and game-theoretic variants were studied Bindel et al. (2015); Bhawalk...
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot DeGroot (1974). In this model the o...
B
We computed several metrics to assess the generalization of our diagnostic models. These metrics include sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), the receiver operating characteristic (ROC) curve, and the F1 score.
Specificity, also known as the true negative ratio, measures the probability that the model correctly predicts a patient as disease-free given that the patient is actually normal. It is computed as TN/(TN+FP), where TN represents the number of correctly predicted negative samples and FP represents the number of incorr...
In particular, the model achieved high sensitivity and accuracy on all conditions, indicating its ability to correctly identify positive cases. However, it is important to note that the positive predictive value (PPV) of the predictions can still be low. For example, for the Pneumonia condition, the sensitivity is 0.6,...
Sensitivity, also known as the true positive rate or recall, measures the probability that the model correctly predicts the presence of a disease given that the patient actually has the disease. It is computed as TP/(TP+FN), where TP represents the number of correctly predicted positive samples and FN represents the nu...
Diagnostically, sensitivity and specificity are not helpful alone. While sensitivity tells the probability that the test results positive given that the person already has the condition, the information of probability that the person has the disease given that the test gives positive is important. Positive predictive v...
C
This paper has considered the BAI problem. We have demonstrated that the Bayes optimal algorithm, which is optimized for the expected performance over the prior, does not have a frequentist rate of simple regret. In some distributions, the Bayes optimal algorithm does not perform well, even when the distributions are ...
This paper considers a Bayes optimal algorithm that minimizes the Bayesian simple regret (Eq. (2)). Our main result demonstrates that the Bayes optimal algorithm does not feature an exponential frequentist simple regret, which is somewhat surprising given its optimality in the Bayesian sense.
Here, L⁢(𝑯⁢(t−1),t+1)𝐿𝑯𝑡1𝑡1L(\bm{H}(t-1),t+1)italic_L ( bold_italic_H ( italic_t - 1 ) , italic_t + 1 ) is the Bayesian simple regret when the algorithm skips one round and explores optimally for the remaining rounds, whereas Li⁢(𝑯⁢(t−1),t)subscript𝐿𝑖𝑯𝑡1𝑡L_{i}(\bm{H}(t-1),t)italic_L start_POSTSUBSCRIPT itali...
This section reviews related studies concerning Bayesian algorithms in the context of sequential decision-making. The multi-armed bandit problem (Robbins, 1952) involves multiple treatment arms and decisions made using samples obtained sequentially via experiments. The aim is to maximize the sum of the rewards, which b...
A high-level implication is that if an approximate Bayesian algorithm exhibits a uniform exponential convergence, this results not from considering a lookahead but from some idea that lends it the robustness preferred by frequentists. A challenge in analyzing a sequential decision-making algorithm is its flexibility. T...
D
Specifically, since the BVC only considers the current positions of all robots for space partition, all future positions in the planned trajectory are limited to this partition. Thus, it often leads to an overly conservative navigation structure with excessive breaking and low efficiency.
Furthermore, the completion time is evaluated for BVC [32] and the proposed method to illustrate the efficiency of MBVC-WB. As provided in Table III, the proposed method has a significant decrease in transition time especially in a more crowded scenario.
In contrast, the heuristic approach of choosing a detour point during deadlock suffers from the livelock problem. Specifically, at time t=3.0𝑡3.0t=3.0italic_t = 3.0s, the robot in the middle chooses a temporary target position and moves away once the deadlock is detected as all robots are static.
This scenario is designed to emphasize that the modified space partition constraint in (III-A) leads to a more accurate separation among the robots and thus a higher utility rate of the workspace. A comparison between the proposed method and the traditional BVC [32] is shown in Fig. 9 for a particular setup.
This yields a much smoother and significantly more efficient navigation strategy. This difference is apparent in Fig. 9, where the robots accomplish the navigation task via the proposed method at t=3.0𝑡3.0t=3.0italic_t = 3.0s while it takes 4.04.04.04.0s for the traditional BVC method.
D
All aforementioned considerations require some sort of supervision to extract invariant representations from data. Unsupervised learning of group invariant representations, despite its potential in the field of representation learning, has been impaired by the fact that the representation of the data in general does no...
We characterize the mathematical conditions of the group action function component and we propose an explicit construction suitable for any group G𝐺Gitalic_G. To the best of our knowledge, this is the first method for unsupervised learning of separated invariant-equivariant representations valid for any group.
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ...
In this work we proposed a novel unsupervised learning strategy to extract representations from data that are separated in a group invariant and equivariant part for any group G𝐺Gitalic_G. We defined the sufficient conditions for the different parts of our proposed framework, namely the encoder, decoder and group func...
All aforementioned considerations require some sort of supervision to extract invariant representations from data. Unsupervised learning of group invariant representations, despite its potential in the field of representation learning, has been impaired by the fact that the representation of the data in general does no...
B
This corresponds to one local component and one regional component contributing roughly equally to the overall variability of NO2 concentrations. At a distance of 100 metres the expected correlation between NO2 concentrations will be close to 1, driven by both local and long-range effects, whereas at a distance of 10 k...
Figure 3: Examples of data from the LAQN data set. The distances are from the most south-westerly monitoring station, in Beech outside Alton in Hampshire. Note that not all sensors are available each day, that the location of the maximum varies and the strong clustering in central London.
Traditional air pollution monitoring uses large and expensive sensors that are typically managed by national or municipal authorities deciding where to locate sensors based on domain knowledge and constraints posed by the bulky nature of the sensors (Carminati, Ferrari, and Sampietro 2017). More recently, low-cost air ...
The results on the London data are given in Figs. 6 and 3. As there are no previously published results for comparison, an additional random baseline is provided – random selection without replacement. As all readings are assumed to be noise-free this leads to more efficient exploration. For the satellite data the diff...
The LAQN provides a pollution map online at their website (Imperial College London 1993). It shows pollution across all of London, but with more concentrated in central London radiating outwards and around Heathrow airport, as well as along major roads.
D
The two main classes of gradient estimators used in machine learning are the pathwise or reparameterization gradient estimators [29, 47, 58] and the REINFORCE or score function estimators [65, 18]. The pathwise estimators have shown great success in training variational autoencoders [29] but are only applicable to cont...
The benefits of using Stein operators to construct discrete CVs are twofold. First, the operator structure permits us to learn CVs with a flexible functional form such as those parameterized by neural networks. Second, since our operators are derived from Markov chains on the discrete support, they naturally incorporat...
We then develop a gradient estimation framework—RODEO—that augments REINFORCE estimators with mean-zero CVs generated from Stein operators. Finally, inspired by Double CV [60], we extend our method to develop CVs for REINFORCE leave-one-out estimators [49, 30] to further reduce the variance.
As we have seen in Section 2, there is a long history of designing CVs for REINFORCE estimators using “baselines” [64, 7, 45, 37]. Recent progress is mostly driven by leave-one-out [49, 38, 30, 48] and sample-dependent baselines [43, 22, 60, 62, 20].
Although the Double CV framework points to a promising new direction for developing better REINFORCE estimators, one only obtains significant reduction in variance when bksubscript𝑏𝑘b_{k}italic_b start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT is strongly correlated with f𝑓fitalic_f.
B
In this work we have proposed and investigated the usage of deterministic access patterns to provide ultra-reliable communication for a group of intermittently active users sharing a pool of resources. The patterns, which are a realization of a Steiner system, aim to control the number of collisions and interference a...
In [21], the authors focus on the combinatorial aspects of a repetition-based 5G GF scheme, namely the probability of collisions, and evaluate achievable reliability and latency levels as a function of the number of UEs, amount of pre-allocated resources and number of replicas. The GF repetition coding, its proactive v...
This feature leads to significant gains in terms of outage probability compared to an approach were the choice of channel resources is fully random. In our evaluations we have considered two different signal models - based on destructive collisions and Full MRC, and two receiver processing techniques - with and without...
The improvement compared to a system without SIC (cf. Fig. 3) is significant and allows to achieve ultra reliability at much lower SNRs. Particularly important is the fact that Random selection exhibits a performance floor even when the mean traffic intensity is as low as 1111 user/frame. This is a consequence of the s...
Then, in Section V we consider different receiver processing techniques and provide their thorough analysis in the context of the access patterns from Section IV. This is complemented by both analytical results and corresponding simulations. In Section VI, we discuss the deficiencies of the Random selection approach an...
B
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant C𝐶Citalic_C independent of the dimension N𝑁Nitalic_N. Variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant C𝐶Citalic_C independent of the dimension N𝑁Nitalic_N. Variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger that Christofides’ 3/2323/23 / 2 ratio. In fact, in our algorithm, the yielded path has length at most (300)9/2⁢log⁡300superscript30092300(300)^{9/2}\log{300}( ...
In lieu of the above discussion, “nearly-optimal” algorithms have been explored. That is, algorithms that produce a path which may not be the optimal one but is comparable (or even arbitrarily close) in length to the optimal one. The nearest insertion algorithm [RSL74] computes in O⁢(n2)𝑂superscript𝑛2O(n^{2})italic_O...
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves...
B
This is a discriminant that characterizes the linear subspaces of dimension n−r𝑛𝑟n-ritalic_n - italic_r that intersect V𝑉Vitalic_V non-transversally. When deg⁡(V)≥2degree𝑉2\deg(V)\geq 2roman_deg ( italic_V ) ≥ 2, these linear spaces form a hypersurface in the corresponding Grassmannian.
As V𝑉Vitalic_V is a complete intersection, if the linear forms are generic, then the resulting system does not have any solution and hence its resultant is not zero. The resultant of the system when we eliminate the variables 𝒙𝒙\bm{x}bold_italic_x, using the
To avoid the case that the denominator is zero in the resultant computations, we can apply the technique of the generalized characteristic polynomial [6], similarly to the projective case. Now, we can apply a symbolic perturbation to all the terms of all the polynomials [49] or only to the terms that appear in the diag...
a system of n+1𝑛1n+1italic_n + 1 polynomials in n+1𝑛1n+1italic_n + 1 variables; we concetrate on the 𝒙𝒙\bm{x}bold_italic_x variables. The polynomial M𝑀Mitalic_M plays the role of the u𝑢uitalic_u-resultant (also appear with the term separating linear form).
The coefficients of the linear forms in this factorization correspond to the solutions of the zero dimensional system. To force (some) of these solutions to have multiplicities we compute the discriminant R2subscript𝑅2R_{2}italic_R start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT of R1subscript𝑅1R_{1}italic_R start_POSTSUBSCR...
A
Emotional elicitation and labeling is a complex task, and sometimes the expected (or targeted) emotions are not the ones the volunteers experienced (or reported). The agreement between the target class and the self-reported discrete emotion annotations by the volunteers in this experiment is shown in the matrix in Fig...
The following conclusions are extracted from analyzing Table 3: (1) all the target emotions show a strong positive dependence with their corresponding reported, so the videos are especially eliciting the desired emotion in the users; (2) all videos originally classified as non-fear have a significant negative dependenc...
As introduced before, only 8888 of the 12121212 emotions initially selected were included in WEMAC (see the Stimuli Section), although the 12121212 emotions were considered for the discrete emotion labeling (see the Measures Section). It means that the number of targeted emotions is smaller than the reported ones in th...
Emotional elicitation and labeling is a complex task, and sometimes the expected (or targeted) emotions are not the ones the volunteers experienced (or reported). The agreement between the target class and the self-reported discrete emotion annotations by the volunteers in this experiment is shown in the matrix in Fig...
Table 4 shows the reliability ICC metrics analyzing the targeted continuous annotations regarding the targeted discrete ones (see the Targeted field). Analyzing this table it is found poor consistency for 4444 out of 8888 of the emotions, which are amusement, anger, disgust, and sadness. It highlights the case of disgu...
B
Figure 3. Goodfellow et al. (Goodfellow et al., 2015) demonstration of adversarial example generated using Fast Gradient Sign attack. The left image is correctly classified by GoogleNet (Szegedy et al., 2015) as a panda. On adding an imperceptible noise, the image on the right, which looks similar to the original imag...
Feinman et al. (Feinman et al., [n. d.]) based their detection of adversarial examples on the assumption that adversarial samples do not lie on the true data manifold and pose three different settings where adversarial samples could exist. As demonstrated in Figure 7, an adversarial sample x∗superscript𝑥x^{*}italic_x ...
Figure 3. Goodfellow et al. (Goodfellow et al., 2015) demonstration of adversarial example generated using Fast Gradient Sign attack. The left image is correctly classified by GoogleNet (Szegedy et al., 2015) as a panda. On adding an imperceptible noise, the image on the right, which looks similar to the original imag...
Adversarial Sample: An adversarial sample is a sample modified by an adversary by adding a perturbation such that the model output is changed from the normal label. Figure 3 demonstrates an example of an adversarial attack and adversarial sample. Adversarial example exhibits transferability property, which states that ...
While ML-based malware classifiers perform more competently with identifying inputs more accurately than even a cybersecurity expert would classify, they perform poorly to subtle yet maliciously crafted data. Malware classifiers are vulnerable to such perturbation, known as adversarial samples (Szegedy et al., [n. d.])...
C
If, e.g., A𝐴Aitalic_A is incorrectly chosen as the deviator, a similar computation to earlier yields that the partial sum of the series (5) up to N𝑁Nitalic_N is Ω⁢(log⁡log⁡N)Ω𝑁\Omega(\log\log N)roman_Ω ( roman_log roman_log italic_N ), which occurs with a probability approaching 00 as N→∞→𝑁N\to\inftyitalic_N → ∞. H...
the game has a value in mixed strategies, and the statistician has an optimal strategy, ξn:Zn→Δ⁢(I):subscript𝜉𝑛→subscript𝑍𝑛Δ𝐼\xi_{n}:Z_{n}\to\Delta(I)italic_ξ start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT : italic_Z start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT → roman_Δ ( italic_I ).
The algorithm above can be adapted to this case. Indeed, supposing that the realization s𝑠sitalic_s is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1.
Roughly speaking, we consider a zero-sum game between an adversary and a statistician, in which the adversary chooses a deviation and the statistician, after observing the realization s𝑠sitalic_s, has to guess the deviator if s∉D𝑠𝐷s\notin Ditalic_s ∉ italic_D. A strategy for the statistican in this game is a blame f...
If s∉D𝑠𝐷s\not\in Ditalic_s ∉ italic_D, the statistician is told the element z∈Z𝑧𝑍z\in Zitalic_z ∈ italic_Z such that s∈[Tz]𝑠delimited-[]subscript𝑇𝑧s\in[T_{z}]italic_s ∈ [ italic_T start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT ]. The statistician has then to select an element j∈I𝑗𝐼j\in Iitalic_j ∈ italic_I.
C
Object texture refers to changing the surface texture (e.g., color) of 2D or 3D objects. It is often used by adversarial attacks to embed the malicious sensing inputs. In attack deployment, this is often fabricated as patches [78, 18, 26], posters [54, 37, 18], and camouflages [56, 60, 63], or displayed by projectors [...
Perception refers to perceiving the surrounding environments and extracting the semantic information for driving, such as road object (e.g. pedestrian, vehicles, obstacles) detection, object tracking, segmentation, lane/traffic light detection. Perception module usually takes diverse sensor data as the inputs, includin...
Laser/IR light refers to injecting/projecting laser or light directly to the sensor rather than the environment. Prior works use such attack vector to project malicious light spots in the camera image such that it can misguide the camera localization [67] or object detection [67, 72]. Moreover, some prior works also us...
Object position refers to placing physical objects at specific locations in the environment. Prior work [83] applies this attack vector by controlling a drone to hold a board at a particular location in the air to fool the LiDAR object detector to misdetect the front vehicle.
Physical-layer attacks by road object manipulation. The prediction component in AD systems predicts obstacle trajectory based on its detected physical properties (e.g., obstacle type, dimension, position, orientation, speed). Therefore, assuming upstream components such as AD perception is functioning correctly, the at...
C
Let x𝑥xitalic_x be in Ω⁢(n7)Ωsuperscript𝑛7\Omega(n^{7})roman_Ω ( italic_n start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT ) and let k𝑘kitalic_k be in Ω⁢(n6)Ωsuperscript𝑛6\Omega(n^{6})roman_Ω ( italic_n start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT ). Next to the definition of each gadget, we add a figure, where this gadg...
For every cubic graph G𝐺Gitalic_G (of size n𝑛nitalic_n), we construct a graph H𝐻Hitalic_H of interval count 2 such that any Maximum Cut partition of H𝐻Hitalic_H corresponds to some Maximum Cut partition of G𝐺Gitalic_G and vice versa. Its top level composition is displayed on Figure 3.
Recall that the Maximum Cut problem asks for a cut of the maximal value. Notice that, for interval graphs, Maximum Cut is equivalent to the problem of finding a coloring of an interval model in two colors, say R𝑅Ritalic_R (Red) and B𝐵Bitalic_B (Blue), where the number of differently colored pairs with non-empty inter...
On each of the figures, this gadget is colored in Red and Blue. After we construct the whole graph H𝐻Hitalic_H of interval count two, we will argue that, for every Maximum Cut partition of H𝐻Hitalic_H, the coloring of each of its gadgets is similar to one displayed on the corresponding figure.
Before we formally describe the reduction graph H𝐻Hitalic_H of interval count two, we present all the gadgets that are used in its construction. Assuming that the size |V⁢(G)|𝑉𝐺|V(G)|| italic_V ( italic_G ) | of the cubic graph G𝐺Gitalic_G is n𝑛nitalic_n, we introduce two parameters that denote the gadget sizes.
C
AutoLaparo & CATARACTS: For state-of-the-art comparisons, we retrain phase recognition models on two additional surgical datasets. AutoLaparo [75] contains 21 long videos (10/4/7 split) ranging from 27 to 112 min (mean ca. 66 min) with 7 coarse phases annotated. CATARACTS [91] contains 50 short videos ranging from 6 to...
We enable end-to-end learning with single-sequence batches by replacing ResNet50 with existing BN-free CNNs. We use a ResNet50 with GroupNorm (GN) [76] for fair comparison as well as the more recent ConvNeXt-T [50], which uses ViT-style LayerNorm [81] and is of comparable size to ResNet50.
Our premise is that end-to-end learning is preferable over multi-stage approaches and thus we propose a simple, intuitive strategy for training CNN-LSTM models on online surgical workflow tasks. We use the LSTM to demonstrate that even simple temporal aggregation can be effective when visual features are improved throu...
We argue that end-to-end learning is preferable over 2-stage approaches but fails when the backbone contains BatchNorm. To demonstrate this, we train CNN-LSTM models with three different backbones (Resnet50-BN, ResNet50-GN and ConvNeXt-T), single-sequence batches of 64 frames (1×641641\times 641 × 64) and the proposed ...
End-to-end learning [36, 37], single-sequence batches [41] and CHT [55] have been used individually in BN-based approaches. However, only methods with BN-free backbones such as AlexNet have been able to employ these [7] or similar [83] strategies in combination. This is because end-to-end training is not compatible wit...
C
FR is one of the most promising computer-vision tasks. In recent times, the combination of the following three factors has contributed to the rapid growth of this technology: 1) introduction to large-scale face datasets[5, 38, 28], 2) development of effective backbone models[13, 7, 25, 9, 29], 3) novel loss functions[...
This paper is based on two insights. First, from a unified perspective, CL and ML have the same purpose of approaching WDFS, except for PG. Second, CL and ML show a mismatch between two similarity distributions of sampled pairs and all negative pairs. Based on these insights, we developed UNPG by combining two PG strat...
Classification Loss. Early indirect optimization methods include softmax loss[1, 21, 30], which uses the similarity between the deep feature and class weight vectors. Softmax loss has been widely applied in classification problems, but it is not appropriate for FR because testing is done by similarity comparison, not ...
Metric Loss. Early direct optimization methods include contrastive loss[3, 6] and triplet loss[23, 8], which use the similarity between pairs or triplets in the feature space. They try to make positive samples close and push negative samples far away, but often suffer from slow convergence and poor local optima because...
In deep feature learning paradigms for pair similarity optimization, loss functions in FR can be categorized based on two approaches: metric loss (ML; e.g., triplet loss[23, 8] and N-pair loss[26]) and classification loss (CL; e.g., softmax loss[1, 21, 30]). The former directly performs the optimization with a pair of...
C
We have made the source code for generating the synthetic images publicly available to facilitate joint research in the field. We have also provided free access through this paper for the online use of the AMD detection model. This will facilitate future work to broaden the scope for detecting the severity of AMD, and ...
i-Challenge datset authors confirm in their publications they received ethics approval guidelines from Sun-Yat Sen University, China and Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University, Shanghai, China. Details on https://amd.grand-challenge.org/Home/
ODIR-2019: Contains colored fundus images from both left and right eyes of 5,00050005,0005 , 000 patients obtained from multiple hospitals/medical centers in China, with varying image resolutions and observations from several specialists. The dataset has was designed to address normal and six diseases: diabetes, glauco...
The iChallenge-AMD dataset can be found in https://ai.baidu.com/broad/introduction?dataset=amd, while ODIR-2019 dataset is available on https://odir2019.grand-challenge.org/dataset/ and RIADD is available on  https://riadd.grand-challenge.org/Home/.
The official code for StyleGAN2-ADA is available at https://github.com/NVlabs/stylegan2-ada-pytorch. Our implementation of Deep Convolutional GAN (DCGAN), Least Squares Generative Adversarial Networks (LSGAN), Wasserstein GAN (WGAN), Wasserstein GAN with Gradient Penalty (WGAN-GP), Deep Regret Analytic Generative Adver...
C
SFEW contains the statistic images selected from the movie clips with spontaneous expressions, where the labels of training set and validation set are given. Therefore, 958 training images are used as the training set and 436 validation images are as the testing set in experiments.
AffectNet contains 450,000 images with 10 categories, where each image is annotated by one volunteer. In experiments, we use 287,401 images with neutral and six basic emotions, where 283,901 images are selected as the training set and 3,500 images are selected from the validation set as the testing set.
RAF-DB contains 29672 facial images downloaded from the Internet. For the RAF-DB dataset, the facial landmarks are manually annotated via the crowdsourcing method with basic or compound expressions. In experiments, we use the basic database including 12,271 training and 3,068 testing images.
For AffectNet dataset, its training set is used to train the model, and its validation set is used as the testing set, since the testing set of AffectNet is not given the annotated labels [25]. For CK+ and MMI datasets, we adopt the five-fold cross-validation scheme to evaluate the recognition performance, in order to ...
SFEW contains the statistic images selected from the movie clips with spontaneous expressions, where the labels of training set and validation set are given. Therefore, 958 training images are used as the training set and 436 validation images are as the testing set in experiments.
A
⟨𝐫′,𝐫⟩=𝐫w−𝐫ℓ<0superscript𝐫′𝐫subscript𝐫𝑤subscript𝐫ℓ0\left\langle{\mathbf{r}}^{\prime},{\mathbf{r}}\right\rangle={\mathbf{r}}_{w}-{% \mathbf{r}}_{\ell}<0⟨ bold_r start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT , bold_r ⟩ = bold_r start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT - bold_r start_POSTSUBSCRIPT roman_ℓ end...
Why is there a jump in the upper bound from f⁢(k)1/3𝑓superscript𝑘13f(k)^{1/3}italic_f ( italic_k ) start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT to f⁢(k/n)𝑓𝑘𝑛f(k/n)italic_f ( italic_k / italic_n ) at n=k1/3𝑛superscript𝑘13n=k^{1/3}italic_n = italic_k start_POSTSUPERSCRIPT 1 / 3 end_POSTSUPERSCRIPT? Can one find...
We essentially show an optimal strategy, when converted to a ordered valid path, does not make use of upsets. The precise statement is given in Lemma 4.2. The intuition here is that upsets bring the players’ ratings closer together, whereas in order to make one rating large you need the ratings to be spread out. The pr...
Finally note the sign patterns of (+,+,+)(+,+,+)( + , + , + ) and (−,−,−)(-,-,-)( - , - , - ) are not possible since the sum of all ratings is invariant. Since in all possible cases we have a decrease of S𝑆Sitalic_S without an increase of length, this process terminates in a path that isn’t longer and doesn’t have any...
Each valid path corresponds to an ordered valid path of the same length by doing the following: each time 𝐫𝐫{\mathbf{r}}bold_r intersects a hyperplane of the form xw=xℓsubscript𝑥𝑤subscript𝑥ℓx_{w}=x_{\ell}italic_x start_POSTSUBSCRIPT italic_w end_POSTSUBSCRIPT = italic_x start_POSTSUBSCRIPT roman_ℓ end_POSTSUBSCRIP...
B
Figure 1: Undersampling and oversampling certain data types with HardVis: (a) the panel with many tunable parameters for UMAP, undersampling, and oversampling; (b) box plots for comparing the values of all points against the algorithm’s suggestion in each feature; (c) a stacked bar chart showing the base vs. the new d...
Figure 6: The examination of diverse structures of data types and alternative suggestions while undersampling with OSS. View (a) shows the selection of the number of neighbors value of 13, which is also used as input to KNN for sampling similarly in the high-dimensional space. This decision was made after a careful re...
After selecting the projection (which results in a specific distribution of data types), we decide to apply the OSS undersampling algorithm. Nevertheless, the default settings cause the van class to disappear completely, thus the predictive performance gets extremely penalized (see step 1 in Figure 1(h), right). We pic...
T1: Identify the various types of instances. As the decrease in predictive performance is connected to data distribution-related factors, such as the presence of many rare subgroups obscuring the classification [WH00, Jap01], the consequences from the overlap between the classes [PBM04, GSM07], or the existence of seve...
In machine learning (ML), easy to classify instances are those for which ML models have a high probability of predicting the correct class label, whereas the opposite is true for the difficult to classify instances [YLW∗21]. The assessment of instance hardness can reveal useful information about the boundaries of ML ca...
C
We propose a general-purpose solution to the frontrunning problem using cryptographic protocols such as verifiable delay functions (VDFs) and aggregate signatures [17, 19] whose outputs are publicly verifiable. Slowswap [11] utilizes VDFs to introduce delays for transactions related to AMMs only. However, the current i...
A frontrunner can perform attacks with highly predictable results due to deterministic pricing mechanism as well as the transparency of liquidity amounts of decentralized exchanges. In this context, Qin et al. estimated a profit of 1.51 Million USD made by frontrunners [42]. Other domains that are affected by frontrunn...
a) We propose FrontrunnIng Resistant Smart ConTracts (FIRST), a framework that significantly curtails frontrunning attacks in EVM-based blockchains without requiring any changes in the underlying blockchain infrastructure. Our framework is not application-specific, and hence can be easily adopted by any dApps.
The third category of solutions is built upon the order-fairness property, which ensures that the order of the transactions in the finalized block is preserved in the same order as the users submitted [33, 35, 32]. However, these solutions cannot be adopted directly and requires drastic changes to the consensus layer,...
In this paper, we proposed a decentralized framework, FIRST for mitigating frontrunning attacks on EVM-based smart contracts without modifying the consensus layer of blockchain. FIRST is not an application-specific solution and hence is more accessible for implementation in various dApps. We experimentally show that w...
B
Although many recent works focus on learning in the presence of strategic behavior, learning in the presence of capacity constraints and strategic behavior has not previously been studied in depth. Many motivating applications for learning with strategic behavior, such as college admissions and hiring, are precisely s...
To the best of our knowledge, Liu et al. (2022) is the only existing work that studies capacity-constrained allocation in the presence of strategic behavior. Liu et al. (2022) introduces the problem of strategic ranking, where agents’ rewards depend on their ranks after investing effort in modifying their covariates. T...
However, in many other applications, human agents may change their observed characteristics in response to the policy, violating the exogeneity assumption. For example, in college admissions, applicants, with knowledge that high test scores will improve their chances of admission, may enroll in test preparation service...
We adopt a flexible model where agents are heterogenous in their raw covariates and their ability to modify them. Depending on the context, strategic behavior may be harmful, beneficial, or neutral for the decision maker. In some applications, strategic behavior may be a form of “gaming the system,” e.g. cheating on ex...
We describe some of the extensions of our model and learning procedure. First, our model assumes that the decision maker’s policy is fixed over time. Dynamic treatment rules, where the policy is time-varying, would extend this work and would likely require new equilibrium definitions. Second, we consider linear polici...
C
To gauge the robustness of models, it is important to examine their behaviors across varying levels of biases. For this, we present the unbiased accuracies obtained by training separate models on training sets with pb⁢i⁢a⁢s∈{0.75,0.9,0.95,0.99}subscript𝑝𝑏𝑖𝑎𝑠0.750.90.950.99p_{bias}\in\{0.75,0.9,0.95,0.99\}italic_p...
Ablations. To study the importance of the proposed inductive biases, we perform ablations on Biased MNISTv2 and COCO-on-Places. First, to examine if the multi-exit setup is helpful, we train networks with single exit attached to the end of the network. This caused accuracy drops of 29.1% on Biased MNISTv2 and 8.4% on ...
To examine if the proposed inductive biases improve bias-resilience in other architectures too, we created OccamEfficientNet-B2 and OccamMobileNet-v3 by modifying EfficientNet-B2 [65] and MobileNet-v3 [28, 29]. OccamNet variants outperform standard architectures on both Biased MNISTv2 (OccamEfficientNet-B2: 59.2 vs. E...
Apart from ResNet, we also tested the proposed inductive biases on EfficientNet and MobileNet. The results are presented in Table A13. For both Biased MNISTv2 and COCO-on-Places, Occam variants outperform the standard architectures, showing the efficacy of the proposed modifications.
Modifications for COCO-on-Places. For COCO-on-Places, the images are small (64×64646464\times 6464 × 64), so for ResNet-18 and OccamResNet-18, we replace the first convolutional layer (kernel size=7777, padding=3333, stride=2222), with a smaller layer (kernel size=3333, padding=1111 and stride=1111) and also remove the...
C
Vision transformer, a strong competitor of convolutional neural networks (CNNs), has been widely adopted in various vision tasks [45, 48, 84, 85, 86, 87, 88, 89, 90, 91, 92], due to its powerful ability of modeling global connection within all the input tokens. Specifically, ViT [45] splits an image into patches to con...
Vision transformer, a strong competitor of convolutional neural networks (CNNs), has been widely adopted in various vision tasks [45, 48, 84, 85, 86, 87, 88, 89, 90, 91, 92], due to its powerful ability of modeling global connection within all the input tokens. Specifically, ViT [45] splits an image into patches to con...
To improve segmentation using transformers, some methods [89, 97, 52, 98, 99, 100, 101] have been developed. SETR [89] and Panoptic SegFormer [99] are the first transformer-based models for image and panoptic semantic segmentation, respectively. Generally, these works use transformers to generate global-context-aware f...
In terms of designing vision transformers to use contextual information, Focal Transformer [85] introduces both fine-grained and coarse-grained attention in architecture design to explore local and global contexts in the image. Though our proposed methods also focus on learning contexts, there are significant differenc...
For global temporal contexts, few VSS methods [17, 53] have exploited the contexts from the whole video. The modeling of global temporal contexts is usually achieved by a memory module in the form of a memory bank [17] or a tiny network [53] which is updated during inference. Although promising results have been achiev...
B
Our Approach: Hotline partitions each mini-batch into two micro-batches (μ𝜇\muitalic_μ-batches). The inputs in a μ𝜇\muitalic_μ-batch either access only frequently-accessed embeddings or any arbitrary embeddings. First, Hotline schedules the μ𝜇\muitalic_μ-batches that access only frequently-accessed embeddings on the...
As the mini-batch size increases, the processing overhead and latency for CPU-based segregation also increase proportionately. CPUs cannot actively segregate and schedule popular and non-popular μ𝜇\muitalic_μ-batches before the GPUs finish their execution. Consequently, our experiments demonstrate that GPUs remain id...
Figure 23 compares Hotline to a multi-process CPU-based segregator and scheduler. Using the CPU for mini-batch segregation and working parameter gathering results in GPU stalls as the CPU cannot hide the latency behind popular μ𝜇\muitalic_μ-batch execution. Hotline outperforms this alternative approach, providing sign...
Achieving efficient mini-batch segregation and parameter gathering can be accomplished using CPUs and GPUs instead of hardware accelerators. However, GPUs are not optimized for fine-grained mini-batch segregation. To address this, CPU-based multi-processing can be employed for mini-batch segregation, parameter gatherin...
Figure 7: CPU segregation and scheduling time for a mini-batch using Intel Xeon CPU while the V100 GPU(s) training on a mini-batch. We use mini-batches of 1K, 2K, and 4K inputs for 1, 2, and 4-GPU execution, respectively. Each mini-batch contains two μ𝜇\muitalic_μ-batches (popular and non-popular).
C
\leq\frac{1}{2}&\mbox{if $n_{1}\leq 2n+1$}.\end{array}\right.{ start_ARRAY start_ROW start_CELL 0 end_CELL start_CELL if italic_n start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ≤ italic_n end_CELL end_ROW start_ROW start_CELL ≤ divide start_ARG 1 end_ARG start_ARG 2 end_ARG end_CELL start_CELL if italic_n start_POSTSUBSCRIPT ...
Additionally, note that for architectures whose hidden layers all have the same dimension (a common choice), the upper bound given by Theorem 2 is significantly lower for deep networks than for shallow networks of the same total dimension. For ReLU neural networks with >1absent1>1> 1 hidden layer, we expect the true pr...
It is immediate that if C𝐶Citalic_C is a cell for which the components of θC(ℓ)subscriptsuperscript𝜃ℓ𝐶\theta^{(\ell)}_{C}italic_θ start_POSTSUPERSCRIPT ( roman_ℓ ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT are all non-positive for some ℓℓ\ellroman_ℓ, then C𝐶Citalic_C is flat. The following...
We will now prove that the above expression gives an upper bound on the probability that F:ℝn→ℝ:𝐹→superscriptℝ𝑛ℝF:\mathbb{R}^{n}\rightarrow\mathbb{R}italic_F : blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → blackboard_R is PL Morse in the case ℓ>1ℓ1\ell>1roman_ℓ > 1. By the discussion above and bef...
Let F:ℝn→ℝ:𝐹→superscriptℝ𝑛ℝF:\mathbb{R}^{n}\rightarrow\mathbb{R}italic_F : blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT → blackboard_R be a ReLU neural network map with hidden layers of dimensions (n1,…,nℓ)subscript𝑛1…subscript𝑛ℓ(n_{1},\ldots,n_{\ell})( italic_n start_POSTSUBSCRIPT 1 end_POSTSUBS...
A
The recent state-of-the-art neural networks have been shown to provide high efficient representations of such complex states, making the overwhelming complexity computationally tractable [6, 7]. Except for the success in the industrial applications, such as the image and speech recognitions [8], the autonomous driving,...
One of the most challenging problems in modern physics is the so-called many-body problem. In its quantum version – quantum many-body physics, the exponential complexity of the states in the Hilbert space makes the strongly correlated systems difficult to deal with [1]. Only limited analytical solutions are amenable to...
Among these successful applications in physical sciences, the more challenging task is to use neural networks to study nonequilibrium problems. Recently, an algorithm of artificial neural networks was proposed to solve the unitary time evolutions in a quantum many-body system [15]. Later developments in this direction...
We have realized the time evolutions of the energy expectation value, the universal statistics of the topological defects numbers and the kink-kink correlations in a quantum phase transition of a TFQIM by virtue of the neural networks. The results were found to satisfy theoretical predictions. Thus, it numerically ver...
In [15] the machine learning methods was merely applied in the unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition is more complex and has richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic...
B
Further more, 3.14 presents that ∫∇p⋅𝝎∇⋅𝑝𝝎\int\nabla p\cdot\bm{\omega}∫ ∇ italic_p ⋅ bold_italic_ω is the key that ωN⁢Nsubscript𝜔𝑁𝑁\omega_{NN}italic_ω start_POSTSUBSCRIPT italic_N italic_N end_POSTSUBSCRIPT Network fails to preserve the helicity. It is shown as Figure 7, which indicates that the ∫∇p⋅𝝎∇⋅𝑝𝝎\int\...
The goal of this paper is to construct the neural network model that can preserve the fluids helicity. Unlike the standard finite element methods based on the weak formulation of the PDE models, Physics-informed neural networks (PINN) model [27] is based on the strong PDE and thus, conservation can be shown to be made...
We now report a couple of numerical tests. One result is the error analysis with analytic solutions. The other is to show that the proposed scheme can conserve the helicity. The numerical experiments are performed on a workstation with 1 10 Core Intel(R) Xeon(R) Silver 4210R CPU, 1 RTX A5000 GPU, 128GB RAM, and a Ubun...
For the pressure p=p~+12⁢|𝒖|2𝑝~𝑝12superscript𝒖2p=\widetilde{p}+\frac{1}{2}|\bm{u}|^{2}italic_p = over~ start_ARG italic_p end_ARG + divide start_ARG 1 end_ARG start_ARG 2 end_ARG | bold_italic_u | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, we impose the zero boundary condition. Besides, 𝒇=𝟎𝒇0\bm{f=0}bold_itali...
Experiments show that our scheme preserves the helicity orders of magnitude better with a simple modification in the definition of the vorticity. Figure 8 show that the helicity of our model conserves better. Besides, when the interval of mesh becomes larger, the helicity fails to conserve as well as the refining one.
D
Out-of-distribution generalizability is another related topic from the ML community that quantifies the degree to which a query point is an outlier in the underlying distribution. Specifically, carlini2019distribution proposes five metrics for identifying well-represented examples.
Considering euclidean333Please note that while we use euclidean distance for the explanation and examples in the paper, our metrics and algorithms are agnostic to the choice of the distance measure, and those equally work for other ones. We evaluate the impact of the choice of the distance measure (using multiple well-...
The proposed measures can be extended to different data types and are independent of the model and prediction task (classification and regression). The measures are also agnostic to the choice of metric or approach for computing the two components. Proposing quantitative probabilistic outcomes, our measures are interpr...
These metrics are shown to be highly correlated, stable, and model-agnostic. The metrics rank examples based on different measures within ensembles, distance to the decision boundary, or prediction difference of two models for the same query point (holdout retraining). It is important to note that these techniques are ...
Related work also includes blundell2015 ; pakdaman2015 ; zhang2020 ; abdar2021 that aim to estimate and quantify uncertainty in AI models, however, they have a different perspective on the issue as they extract the uncertainty from models, while our measures are data-centric.
C