| context (string, 250–4.36k chars) | A (string, 250–4.85k chars) | B (string, 250–4.12k chars) | C (string, 250–3.69k chars) | D (string, 250–8.2k chars) | label (4 classes) |
|---|---|---|---|---|---|
In particular, VEEGAN [29] autoencodes the latent vectors to learn the inverse function of the generator, and maps both the true and generated data to the latent distribution, e.g., a Gaussian distribution.
Inclusive GAN [30] learns a generator by matching between real and fake examples in the feature space. | Then the loss is computed by the KL divergence of the probability distribution of discriminators for being selected as experts from $\bm{\mu}$.
To obtain the probability for discriminator selection, we apply the $\mathtt{softmax}$ function to the vector of $M$... | In particular, VEEGAN [29] autoencodes the latent vectors to learn the inverse function of the generator, and maps both the true and generated data to the latent distribution, e.g., a Gaussian distribution.
Inclusive GAN [30] learns a generator by matching between real and fake examples in the feature space. | D2GAN [9] conducts a three-player minimax game, where two discriminators are trained for the completely opposite objectives, minimizing the Kullback-Leibler (KL) divergence and the inverse KL divergence between the true and generated data distributions.
The balancing of the two losses plays a role in seeking desirable ... | To analyze the semantic quality of generated images, we present their classification results given by the pretrained classifiers in Table 3.
We measure how much the predicted label distribution in each tested dataset deviates from the true (uniform) distribution using the KL divergence. | C |
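The two operations this row leans on, a softmax over $M$ discriminator scores and the KL divergence of a predicted label distribution from the uniform one, can be sketched as follows (a minimal illustration, not code from any of the quoted papers; the score vector is hypothetical):

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over a vector of M discriminator scores.
    z = scores - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

def kl_from_uniform(p, eps=1e-12):
    # KL(p || u), where u is the uniform distribution over len(p) classes;
    # zero iff p is exactly uniform, positive otherwise.
    p = np.asarray(p, dtype=float)
    u = np.full_like(p, 1.0 / len(p))
    return float(np.sum(p * np.log((p + eps) / u)))

probs = softmax(np.array([2.0, 1.0, 0.5, 0.5]))
deviation = kl_from_uniform(probs)  # how far predictions stray from uniform
```

The same `kl_from_uniform` shape matches the deviation measurement quoted in the row, with the predicted label histogram in place of `probs`.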
The communication systems utilizing DL techniques are typically designed to transmit digital bit sequences and optimized by minimizing the bit-error rate (BER) or symbol-error rate (SER), which achieves the first level of communications according to the categorization by Shannon and Weaver [6]. Inspired by potentially high... | Regarding the semantic communications for speech information, our previous work developed an attention mechanism-based semantic communication system to restore the source message, i.e., reconstruct the speech signals [18]. However, in this paper, we consider an intelligent task at the receiver to recover the text informat... |
Based on the spectrum and transcription of the original speech sample sequence, the proposed system model is shown in Fig. 1. From the figure, the transmitter consists of two individual components: the semantic encoder and the channel encoder, each component is implemented by an independent NN. At the transmitter, the... |
The semantic communication system for speech recognition aims to transmit and recover the information-related semantic features. In this section, we introduce the details of the considered system model and present the adopted performance metrics. |
Semantic information is relevant to the transmission goal at the receiver, which could be either source message recovery or more intelligent tasks. In the cases of intelligent tasks, the semantic information only contains the task-related features while the other irrelevant features will not be extracted or transmitte... | D
The existing 3D WSSS methods formulate the problem in different directions. [10] utilize dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. However, each 3D sample is projected to 2D in several views and each projected 2D image needs pixel-lev... | Performance of different branches: Table IV compares the segmentation performance of different decoder branches in the two-stage settings. In both settings, the cross branches produce the poorest segmentation results because the features of this branch are propagated from the other sample. However, the cross-branch can still pr... |
As shown in Figure 2, we use a two-stage training strategy to avoid interference between the two modules during training. In stage one, we train the basic segmentation network with the cross-sample feature reallocating module. In this stage, for each sample, the network learns from the weak labels of this sample and th... | As depicted in Figure 1, the first stage of our training process draws inspiration from [17, 18, 19]. Here, we select two samples with at least one overlapping class to serve as an input pair. The CSFR module is designed to facilitate the transfer of analogous features between these two samples. Unlike methods in [17, 1...
In the subsequent training stage, our ISFR module facilitates the transmission of supervision from labeled to unlabeled points within each individual sample, once again utilizing feature reallocation based on point correlation. This ensures that supervision is densely transmitted from labeled to unlabeled points withi... | C |
Table 3:
Monocular 3D object detection results on the KITTI val set for the car category with the evaluation metric of $\mathrm{AP}_{40}$. The results of the previous works are from [9]. Our approach significantly outperforms the previous state-of-the-arts on... | Extensive experiments conducted on the challenging KITTI [11] dataset clearly demonstrate the effectiveness of the proposed approach and show that our method achieves 13.81% in terms of the $\mathrm{AP}_{40}$ metric, which is 2.80% absolute $\mathrm{AP}_{40}$... | We report the enhanced baseline results of 3D monocular object detection in Table 4.
Overall, the baseline significantly increases the $\mathrm{AP}_{40}$ performance upon the original one by 3.76%, 3.54%, 2.88% on easy, moderate and hard difficulty levels, re... | Table 2: Monocular 2D object detection results on the KITTI test set for the All categories with the evaluation metric of $\mathrm{AP}_{40}$. The metric $\mathrm{AP}_{40}$ is used for detection evaluat... | This guarantees the consistency between 2D and 3D boxes from the projection relationships in the proposed geometric formula, and ensures robust learning with the formula.
The enhanced baseline achieves 16.54%, 13.37%, 11.15% on easy, moderate and hard difficulty levels, respectively. | B |
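For readers unfamiliar with the metric quoted throughout this row: $\mathrm{AP}_{40}$ averages interpolated precision over 40 equally spaced recall thresholds. A rough, self-contained sketch (the actual KITTI evaluation also involves IoU matching and difficulty filtering, omitted here; the precision-recall points passed in are hypothetical):

```python
import numpy as np

def ap_40(recalls, precisions):
    """AP_40: mean of interpolated precision sampled at the 40 recall
    thresholds 1/40, 2/40, ..., 40/40. `recalls` must be sorted ascending;
    interpolated precision at r is the maximum precision among all
    operating points whose recall is >= r."""
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    samples = np.arange(1, 41) / 40.0  # exact 1/40 ... 40/40 grid
    ap = 0.0
    for r in samples:
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 40
```

A detector with precision 1.0 at recall 0.5 that degrades to precision 0.5 at recall 1.0 scores 0.75 under this definition.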
Different from [10] requiring a weakly supervised instance segmentation to predict the mask and class of each character, our method focuses on classifying the types of text segments without extra segmentation.
Fig. 8 qualitatively compares the visual detection results of our proposed method with results obtained by... | and can achieve SOTA results with only a VGG16 backbone without introducing deformable convolutional blocks to increase the backbone network’s feature extraction ability as in [8, 35, 20, 36, 37].
Compared with the SOTA top-down methods [34, 20], our method shows improvements of 0.9% on CTW1500 and 0.7% on Total-Text r... | However, the reality is often the opposite, since bottom-up methods are also prone to accumulating intermediate errors (false positives/negatives).
For example, in the challenging curved text datasets CTW1500 [18] and Total-Text [19], the overall accuracy of the best-performing bottom-up methods is lower than those of ... | we have proposed false positive/negative suppression strategies that take visual-relational feature maps into account to infer grouping of densely designed text segments with regard to GCN’s node classification and relational reasoning ability.
We have also proposed a simple but effective shape-approximation method to ... | The SOTA results obtained on these two datasets support our claim that the relative underachievements of existing bottom-up methods are not caused by the limited feature capturing ability of the text proposal backbones or GCN.
Also, bottom-up methods can be superior to top-down methods when adopting our proposed false-... | D |
When $\rho\in U({\cal V},{\cal E})$, we obtain that
$\varrho=\pi_{{\cal S}({\cal V}\times{\cal V})}(\rho)$... | ...$({\cal V},{\cal E})\;.$ $\rho^{\prime}\ltimes\rho^{\prime\prime}=\pi^{-1}((\pi_{{\cal S}({\cal V}\times{\cal V})}(\varrho^{\prime}$... | a sequence $o\in{\cal O}^{\lvert\varrho\rvert}$ of orientations of the same size, we denote by
$\rho=\pi^{-1}(\varrho,o)$... | $\pi^{-1}:(\varrho,o)\in\operatorname{Im}\pi\mapsto\rho$ with $\rho_{i}=(\varrho_{i},o_{i})\,,\ \forall i\in\llbracket 1,\lvert\varrho\rvert\rrbracket$. | $\forall\varrho\in P({\cal V},{\cal E})\,,\ \varpi(\varrho)=\varpi^{\lvert\varrho\rvert}(\varrho)\in{\cal V}\times{\cal V}$. | B
The statistics collection algorithm should be stable, effective, and efficient for large-scale records. To overcome the disadvantages of general statistics collection methods, a number of parallel techniques have been developed for large-scale records by optimizing the efficiency and complexity. For example, these alg... |
We formally present a storage strategy for IP addresses that consists of two layers, each made up of a limited number of memory blocks. The first layer contains $256\times 256$ memory blocks. The first three parts of the IP address can be mapped into the corresponding position of the element in a pa... | We traverse all elements of the memory blocks of the second layer to obtain the maximum number of occurrences of elements if $k=1$. Otherwise, we construct a minimum heap of size $k$. The statistical results of the first $k$ IP addresses are saved in the heap, which is a special binary... | In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu... |
The collection of the statistics of large-scale IP address data is one of the most fundamental problems in network traffic measurement. In this paper, we addressed this problem. Specifically, the two proposed methods present two different relationship mapping mechanisms between memory blocks and IP addresses to strike... | C |
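A toy sketch of the scheme this row describes: octets of an IP address index a two-layer block structure, and a size-$k$ min-heap then extracts the most frequent addresses. The layout here is a simplifying assumption (dictionaries stand in for the fixed memory blocks of the paper):

```python
import heapq
from collections import defaultdict

def top_k_ips(ips, k):
    """Count IP occurrences in a two-layer structure, then return the k most
    frequent addresses as (count, ip) pairs, sorted by decreasing count."""
    # Layer 1: blocks keyed by the first two octets (a 256x256 grid of blocks).
    blocks = defaultdict(lambda: defaultdict(int))
    for ip in ips:
        a, b, c, d = ip.split(".")
        blocks[(a, b)][(c, d)] += 1  # layer 2: per-block counters
    # Maintain a min-heap of size k over (count, ip); the heap root is the
    # smallest retained count, so cheaper items are rejected in O(1).
    heap = []
    for (a, b), counters in blocks.items():
        for (c, d), n in counters.items():
            item = (n, f"{a}.{b}.{c}.{d}")
            if len(heap) < k:
                heapq.heappush(heap, item)
            elif item > heap[0]:
                heapq.heapreplace(heap, item)
    return sorted(heap, reverse=True)
```

The heap keeps memory bounded at $k$ entries regardless of how many distinct addresses appear, which is the point of the min-heap step quoted above.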
In this work, we only assume that $A_{1}$ is invertible and the global system matrix $\mathcal{A}$ is invertible. Many special cases can be cast into the
above forms of twofold saddle point systems. For example, (a) $A_{2}=0$... | The outline of the remainder of this paper is as follows. In section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of its Schur complement; we then construct and analyze the block-triangular and block-diagonal preconditioners base... | Commencing with a twofold saddle point problem, we generalize our theory to $n$-tuple block tridiagonal saddle point problems. Our study demonstrates that judiciously selecting signs in front of Schur complements in preconditioners results in a positively stable preconditioned system [16]. By using the Routh–H... | In this study, we explore two methodologies for designing preconditioners tailored for 3-by-3 block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ... | The above 3-by-3 block linear problems (1) and (2) can be naturally extended to the $n$-tuple cases. For example, when the system matrix in (1) is extended to the $n$-tuple case, it is the block tridiagonal systems discussed in [37]. When
the system matrix in (2) is extended to the $n$-... | D
Several works have focused on distributed training with vertical partitions in a federated setting. The authors in works (Chen et al., 2020; Hardy et al., 2017; Yang et al., 2019b; Wu et al., 2020; Feng and Yu, 2020; Kang et al., 2020) propose vertical federated learning algorithms for single-tier communication network... | While this approach is similar to the multiple local iterations in horizontal federated learning, an important distinction is that in vertical federated learning, each silo updates only on its own subset of coordinates in contrast to updating all the coordinates in the case of horizontal federated learning.
| In this work, we consider vertical and horizontal partitions of the dataset simultaneously in the two tiers.
TDCD performs model training in such a multi-tiered system architecture by fusing horizontal and vertical learning approaches in a novel manner. | TDCD is novel since it interleaves both the horizontal and vertical federated learning paradigms and thus has to account for both the perturbed gradients from the horizontal federated learning component and the stale information from the vertical federated learning component. This combination leads to a different conve... | In all figures, $N$ represents the number of silos (vertical partitions of the dataset), and $K_{j}$ represents the number of clients (horizontal partitions) in each silo.
For simplicity in our experiments, we consider that all the silos have ... | B |
...$O\;\;M_{2}\bigr)$, $I_{mn}^{-1}(F_{n}^{\mathrm{H}}\otimes I_{m})\,\mathrm{bcirc}$... | Because many scholars have focused their attention on matrix perturbation analysis Bauer-Fike ; 1986Generalization ; Rellich1969 ; shi2012sharp ; Sun1987 ; trefethen2005spectra , a wealth of results has been developed up to now. These include the Gershgorin disc theorem, the Bauer-Fike theorem, and the Kahan theor... |
The Gershgorin discs, defined by (9), are depicted in the left picture of Fig. 1 as red dash-dot circles. Notably, these red dash-dot circles are contained within the blue solid circles, signifying a tighter bound compared to the result presented in cao2021tensor . | Figure 1:
The Gershgorin discs (represented by the blue solid lines), obtained by Theorem 5.2 given in cao2021tensor , are compared with the Gershgorin discs derived from Theorem 3.1 presented in this paper (represented by the red dash-dot lines) under two similarity transformations for Example 1. | In this case, $F_{\text{new}}$ is an upper quasi-triangular matrix that contains more zero entries than $F$. By applying the criterion (9) once again, we establish new Gershgorin circles, displayed in the right picture of Fig. 1. Evidently, the ... | B
Our study also makes use of image structural information and develops a different but more effective two-stream network, where structure-constrained texture synthesis and texture-guided structure reconstruction are jointly considered. The two subtasks better facilitate each other, leading to more convincing texture... | Motivated by global and local GANs [7], Gated Convolution [36] and Markovian GANs [9], we develop a two-stream discriminator to distinguish genuine images from the generated ones by estimating the feature statistics of both texture and structure. The discriminator is shown in Figure 2 (b). The texture branch includes t... | As illustrated in Figure 2, the proposed method is implemented as a generative adversarial network, where the two-stream generator jointly synthesizes image textures and structures, and the discriminator judges their quality and consistency. In this section, we describe in detail the generator, the discriminator, and ... | On Two-stream Network Architecture. To further highlight the two-stream dual generation architecture, we compare it with a multi-task single-stream network, which is tailed by two branches to model the image structure and texture simultaneously. We enlarge its channels to make it have the same amount of parameters as t... |
Different from the case in the texture branch, it is intractable to optimize the adversarial loss of the structure branch only with the detected edge map, mainly due to the sparse nature of the edge. We therefore adopt the gray-scale image as an additional condition and feed the paired data as the input in the structu... | B |
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\varepsilon$. The concept of BEC was first introduced by Elias in 1955 InfThe . Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ... |
First recall that the error exponents of the average decoding error probability of the ensemble $\mathcal{R}_{(1-R)n,n}$ over the erasure channel under the three decoding principles are defined by |
The problem of decoding linear codes over the erasure channel has received renewed attention in recent years due to their wide application in the internet and distributed storage systems in analyzing random packet losses Byers ; Luby ; Lun . Three important decoding principles, namely unambiguous decoding, maximum ... | In this paper we carried out an in-depth study on the average decoding error probabilities of the random parity-check matrix ensemble $\mathcal{R}_{m,n}$ over the erasure channel under three decoding principles, namely unambiguous de... | In particular in FFW , upon improving previous results, the authors provided a detailed study on the decoding error probabilities of a general $q$-ary linear code over the erasure channel under the three decoding principles. Via the notion of $q^{\ell}$... | B
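As a quick illustration of the channel model discussed in this row (a hypothetical simulation, not code from the paper): each transmitted bit is erased with probability $\varepsilon$ and otherwise received intact, so a decoder never sees flipped bits, only gaps:

```python
import random

def bec(bits, eps, rng=None):
    """Pass bits through a binary erasure channel: each symbol is erased
    (returned as None) with probability eps, otherwise received correctly."""
    rng = rng or random.Random(0)  # seeded by default for reproducibility
    return [None if rng.random() < eps else b for b in bits]

received = bec([0, 1, 1, 0, 1] * 2000, eps=0.3)
erasure_rate = sum(r is None for r in received) / len(received)
# Non-erased symbols always match the input; erasure_rate concentrates near eps.
```

This "correct or missing" property is what distinguishes the BEC from the BSC mentioned alongside it, where received symbols can be silently wrong.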
We propose the Subgoal Search method with two implementations: MCTS-kSubS and BF-kSubS. We demonstrate that our approach requires relatively little search or, equivalently, is able to handle bigger problems. We also observe evidence of out-of-distribution generalization. |
Classical planning methods. For many search problems, the state space can be represented in a factored fashion (or such a representation can be learned [3]). In such cases, the search can be greatly improved with width-based methods [25, 12]. It is an interesting research direction to combine kSubS with such methods. | The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals.
In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar... |
In classical AI, reasoning is often achieved by search ([39]). Search rarely can be exhaustive, and a large body of algorithms and heuristics has been developed over the years, [39, Section 3.5]. It is hypothesized that progress can be achieved by combining search with learning [4]. Among notable successful examples o... | Reasoning is often regarded as a defining property of advanced intelligence [39, 18]. When confronted with a complicated task, humans’ thinking process often moves from one idea to a related idea, and the progress is made through milestones, or subgoals, rather than through atomic actions that are necessary to transiti... | C |
We propose the ‘Trans-pinyin’ system to represent character pronunciation, in which auxiliaries and vowels are transformed to standard forms and keep the tone in the ‘Pinyin’ system. After transformation, ‘c’ becomes ‘$ts$’ and ‘z’ becomes ‘$ts^{\prime}$... | We propose the ‘Trans-pinyin’ system to represent character pronunciation, in which auxiliaries and vowels are transformed to standard forms and keep the tone in the ‘Pinyin’ system. After transformation, ‘c’ becomes ‘$ts$’ and ‘z’ becomes ‘$ts^{\prime}$... | In this paper, we propose to use ‘Five-strokes’, a famous structure-based encoding method for Chinese characters, to get our glyph embedding. ‘Five-Strokes’ was put forward by Yongmin Wang in 1983. This special encoding method for Chinese characters is based on their structures. ‘Five-Strokes’ holds the opinion that Ch... | For auxiliaries, they will be mapped to standard forms, which have at most two English characters and a phonetic weight. We apply one-hot encoding to them so that we get two one-hot vectors and a one-dimension phonetic weight. Then we add up the two English characters’ one-hot vectors and the phonetic weight here will ... | For vowels, they are also mapped to standard forms. However, it is a little different here. We have two different kinds of plural vowels. One is purely made up of single vowels, such as ‘$au$’, ‘$eu$’ and ‘$ai$’. The other kind is like ‘$an$... | C
Specifically, we model the holographic image formation in a fully differentiable manner following Fourier optics. We relate the displayed holographic image $I$ to the wavefront modulation of the neural étendue expander $\mathcal{E}$ as
| where $N$ is the pixel count of the neural étendue expander. Please see Supplementary Note 3 for further details of how this upper bound is found.
Therefore, obtaining the optimal neural étendue expander, which minimizes the reconstruction loss $\mathcal{L}_{T}$... | where $\mathcal{F}^{-1}$ is the inverse 2D Fourier transform, $w$ is the spatial frequency, and $c$ is the cutoff frequency. In order to set the cutoff frequency to be beyond human perceptibility, it suffices to set $c$ ... | The differentiability of Eq. (2) with respect to the modulation variables $\mathcal{E}$ and $\mathcal{S}$ allows us to learn the optimal wavefront modulation of the neural étendue expander $\mathcal{E}$ by jointly optimizing the static neural étendue expander in conjunction with... |
where $\mathcal{F}$ is the 2D Fourier transform, $\mathcal{S}$ is the SLM modulation, $U(\cdot)$ is the zeroth-order upsampling operator from the low-resolution SLM to the high-resolution neural étendue expander, and $\odot$ is the Hadamard product. | D
There are several directions worth further investigation for future studies. Firstly, given multiple NLP tasks, how to find a set of tasks that could take advantage of MTL remains a challenge. Besides improving performance of MTL models, a deeper understanding of task relatedness could also help expand the applicat... | Besides well-studied NLP tasks, joint MTL is also widely applied in various downstream tasks. One major problem of such tasks is the lack of sufficient labeled data. Through joint MTL, one could take advantage of data-rich domains via implicit knowledge sharing. In addition, abundant unlabeled data could be utilized vi... | This paper reviews the application of MTL in recent NLP research. We focus on the ways in which researchers apply MTL to downstream NLP tasks, including model architectures, training processes, and data sources. While most pre-trained language models take advantage of MTL during pre-... |
Secondly, current NLP models often rely on a large or even huge amount of labeled data. However, in many real-world applications, where large-scale data annotation is costly, this requirement cannot be easily satisfied. In this case, we may consider leveraging abundant unlabeled data in MTL by using self-supervised o... | Murray, 2019; Pfeiffer et al., 2020) transfer large pre-trained models to new tasks and languages by adding a modest amount of task-specific parameters. In this way, the costly fine-tuning of the entire model is avoided, which is important for real-world applications such as mobile computing and latency-sensitive servi... |
Be sure to use the \IEEEmembership command to identify IEEE membership status.
Please see the “IEEEtran_HOWTO.pdf” for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This w... | The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... | For Transactions and Journals papers, this is not necessary to use at the submission stage of your paper. The IEEE production process will add the appropriate copyright line. If you are writing a conference paper, please see the “IEEEtran_HOWTO.pdf” for specific information on how to code ”Publication ID Marks”.
| There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends composing your article in the base 2-column format to make sure all your equations, tables and graphics will fit the final 2-column format. Please refer to the document “IEEEtran_HOWTO... | Be sure to use the \IEEEmembership command to identify IEEE membership status.
Please see the “IEEEtran_HOWTO.pdf” for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This w... | B |
For the one direction, if $G,\ell,\mathcal{C}\models\phi$, from the definition of the semantics of $\phi$, then there exists $S\subseteq V(G)$ such that $G,\ell,\mathcal{C}^{\prime}\models\phi[X_{i}\setminus D_{i}]$... |
As previously, $S_{1}$ partitions $C_{2},\ldots,C_{q^{\prime}}$... | Since each of the vertex sets $C_{1},C_{2},\ldots,C_{q^{\prime}}$... | $C_{1},C_{2},\ldots,C_{q^{\prime}}$ having the same type... | Since each component has at most $k$ vertices and each vertex has at most $2^{k}$
different types of neighborhoods $N_{j}$, we can have at most $2^{k^{2}}$... | B
Our study naturally contributes to three important and active bodies of literature in economics. We briefly review these connections—to network public goods games, to structural estimation of network formation data, and to social preferences for trust and reciprocity—in the remainder of this section. | Our model departs from the existing literature on public goods in endogenous networks in a number of ways. Primarily, we model a situation in which individuals choose others with whom they would like to share the externalities generated by their resources. This is the reverse of the situations studied in the previous l... | Prior extensions of public goods provision to environments with endogenous linking include Galeotti and Goyal (2010), which furthers the specialization result of Bramoullé et al. (2007). These papers emphasize the prevalence of core-periphery architectures as equilibrium networks, but in a setting where players choose ... | Our other main contributions are more conceptual in nature. The voluntary sharing environment we study is novel, but it exhibits some similarities with other well-studied strategic decision settings, including dictator giving, the provision of public goods or club goods, and public goods games played on networks. As su... |
There has been prior work examining both the extension of public goods to static (exogenous) networks, and the provision of public goods on endogenous networks. In particular, Bramoullé et al. (2007) launched research into this environment, by showing that given a network shape, specialized Nash equilibria (in which a... | D |
where $\hat{Y}_{u,v}$ represents the spectrum of the recovered image, and $Y_{u,v}$ represents the spectrum of the ground truth... |
In the SISR task, the loss function is used to guide the iterative optimization process of the model by computing some kind of error. Meanwhile, compared with a single loss function, researchers find that combining multiple loss functions can better reflect the situation of image restoration. In this section, we brief... | Mixed Loss: In SISR, there are also some classic combinations of loss functions that are widely used to guide the network towards generating high-quality HR images. These combinations aim to balance the quality, details, and visual perception of the generated image. Here are some commonly used classic combinations of l... |
In the past, most SISR models relied on L1 loss or MSE loss. Although some other new loss functions like content loss, texture loss, and adversarial loss have been proposed, they still cannot achieve a good balance between reconstruction accuracy and perceptual quality. Therefore, it remains an important research topi... |
The choice of loss function combinations depends on the specific requirements of the SISR task, such as the desired balance between perceptual quality and computational efficiency. In practical applications, researchers may adjust the weights of the loss functions based on experimental results to find the combination ... | B |
The reconstruction quality of the whole image is comparable for the three tested methods. However, when the inpainted region is concerned, we observe a significant improvement of over 4 dB for the Neural Knitwork compared to the conventional coordinate and 2 dB less than the -based technique. For some of the results, the ... | The reconstruction quality of the whole image is comparable for the three tested methods. However, when the inpainted region is concerned, we observe a significant improvement of over 4 dB for the Neural Knitwork compared to the conventional coordinate and 2 dB less than the -based technique. For some of the results, the ... | Table 1: Comparison of inpainting performance for different fill ratios. The three approaches achieve comparable PSNR (↑) and SSIM (↑) for whole images. For the inpainted region, the Neural Knitwork comes close to the level of performance of , while conventional is inferior.
|
Table 2: We compare the blind super-resolution performance achieved by a conventional coordinate , a -based internal learning framework of SinGAN and our method. We compute PSNR (↑) and SSIM (↑) for a number of upscaling factors and downsampling kernels. |
Figure 7 contains results for a diagonal kernel and upscaling factor of 4, for the proposed Neural Knitwork, the conventional and SinGAN, another image super-resolution method based on internal learning. The results show that SinGAN has the lowest performance in terms of PSNR but it also creates distinguishable artif... | B |
The phenomenon of a greedy approach sometimes outperforming more complex attempts to balance exploration and exploitation in contextual problems is not unprecedented. Indeed, in the contextual bandit literature, a number of recent works (Bastani et al.,, 2021; Kannan et al.,, 2018; Raghavan et al.,, 2023; Jedor et al.,... |
In problems (i) and (ii) most of the other approaches have well balanced precision and recall, suggesting that misclassifications of both types occur with roughly similar frequency. In problem (iii), however, the traditional variant of PG-IDS shows a similar behaviour to CBP-SIDE, in that its recall is much larger tha... |
Gentile and Orabona, (2014) consider a multi-class label prediction problem where the learner chooses a subset of possible labels, in each round, and only observes true labels if they are part of their subset. While this also lies at the intersection of classification and partial monitoring, when the number of classes... |
The regret is presumed to be the true measure of interest in the apple tasting problem, and to appropriately weight the impact of false positives and false negatives. Nevertheless it is informative to see in what proportion the algorithms make the two classes of error. To this end, in Tables 3, 3, and 3 we report the ... |
In all aforementioned settings (adversarial and stochastic, contextual and non-contextual), the apple tasting problem is shown to belong to the class of ‘easy’ problems, with $\Theta(\sqrt{T})$ minimax regret.[4] (Footnote 4: Note that these bounds are in the frequentist setti... | C |
$\mathcal{L}_{CE}=-\sum_{i=1}^{N}\sum_{c=1}^{C}p(y_{i}=c)\log p_{\theta}(y_{i}=c)$... | This is also reflected at training time, where, in each batch, the model sees a different sampled memory $\bar{M}$, thus avoiding always focusing on specific slots, which could possibly lead to overfitting.
Nonetheless, sampling inherently introduces variance; therefore, compatibility ... |
We notice that the impact of SS regularization is more significant when the memory is limited in size and each memory slot is associated with several input samples. This is not the case with claim detection, where memory slots are in the hundreds and memory/claim associations are few. For instance, consider the follow... |
Note that this strategy is purely similarity-based and does not consider each memory slot’s contribution to the task-specific objective. In other words, it operates under the assumption that each knowledge descriptive concept contained in the memory should have high semantic and syntactic similarity with its associate... | The concept of WS is important since memory slots cannot always be explicitly associated with individual training examples (e.g., this might be costly in terms of the time required from an expert, but also intrinsically challenging). This is a very general setting. Yet, if such annotation information is provided, w... | D |
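The cross-entropy objective appearing in the row above, $\mathcal{L}_{CE}=-\sum_{i}\sum_{c}p(y_{i}=c)\log p_{\theta}(y_{i}=c)$, can be sketched in NumPy; the one-hot targets and predicted distributions below are invented for illustration:

```python
import numpy as np

def cross_entropy(p, p_theta, eps=1e-12):
    # L_CE = - sum_i sum_c p(y_i = c) * log p_theta(y_i = c)
    # eps guards against log(0) for zero-probability predictions.
    return -np.sum(p * np.log(p_theta + eps))

# N = 2 samples, C = 3 classes; one-hot "true" distributions p(y_i = c).
p = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
p_theta = np.array([[0.7, 0.2, 0.1],   # model's predicted distributions
                    [0.1, 0.8, 0.1]])
print(cross_entropy(p, p_theta))       # -(log 0.7 + log 0.8) ~= 0.58
```

With one-hot targets only the log-probability of the correct class contributes, which is why the printed value reduces to $-(\log 0.7 + \log 0.8)$.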
In our application, we consider a grid of $i=1,\ldots,I$ areas, which covers the center of the city of Milan, where a measurement of crowdedness $y$ is collected over time at $t=1,\ldots,T$, where one time-point represents a time inte... |
In our analysis, we employ several Bayesian models to model the behavior of city crowdedness observed at regular time intervals on a fixed grid area. Let $y_{i,t}$ denote the crowdedness measure in area $i$ at time $t$ ... |
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poste... | While the framework is rather general, we are primarily interested in
the properties in a predictive context. Thus, we will formulate and consider requirements on the future crowdedness values up to $h$-steps ahead of a given time $t$. Each requirement that we formulate can be checked for every areal u... |
Our application aims to study the behavior of city crowdedness observed at regular time intervals on a fixed grid area. Again, let $y_{i,j}$ denote the crowdedness measure in area $i$ at time $j$ for $i=1,\ldots I$... | C |
$=\|x^{\|}-c^{\|}\|^{2}+\|x^{\perp}-c^{\perp}\|^{2}$ ... | Thus, $\|c\|^{2}=\|c^{\|}\|^{2}+\|c^{\perp}\|^{2}$ ... | $=\|x^{\|}-c^{\|}\|^{2}+\|x^{\perp}-c^{\perp}\|^{2}$ ... | $c=c^{\|}+c^{\perp}$,
where $c^{\|}\in\mathcal{S}$ and $c^{\perp}$ ... | $=\|x^{\|}-c^{\|}\|^{2}+\|c^{\perp}\|^{2}$ ... | D |
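The orthogonal decomposition appearing in the row above, $\|x-c\|^{2}=\|x^{\|}-c^{\|}\|^{2}+\|x^{\perp}-c^{\perp}\|^{2}$, can be checked numerically. Here the subspace $\mathcal{S}$ is taken, purely for illustration, to be the span of the first coordinate axes, so the projections reduce to slicing:

```python
import numpy as np

def split(v, d):
    # Decompose v into its component in S = span(e_1, ..., e_d) and the
    # orthogonal complement; with this choice of S the projection is a slice.
    par = np.concatenate([v[:d], np.zeros(v.size - d)])
    return par, v - par

x = np.array([3.0, 1.0, 2.0, -1.0])
c = np.array([0.5, -2.0, 4.0, 1.0])
x_par, x_perp = split(x, 2)
c_par, c_perp = split(c, 2)

lhs = np.sum((x - c) ** 2)                                   # ||x - c||^2
rhs = np.sum((x_par - c_par) ** 2) + np.sum((x_perp - c_perp) ** 2)
print(lhs, rhs)  # the two sides agree
```

When $x\in\mathcal{S}$ (so $x^{\perp}=0$), the right-hand side reduces to $\|x^{\|}-c^{\|}\|^{2}+\|c^{\perp}\|^{2}$, matching the last expression in the row.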
For instance, replicability of standard equality in substructural logics has a neat algebraic explanation:
such an equality is defined by a left adjoint, as pioneered by Lawvere [Law69, Law70], and, as we will show, predicates defined in this way are always replicable. |
Theorem 22 shows that the notion of quantitative equality given in this paper is coalgebraic, in the sense that Lipschitz doctrines are the coalgebras of a comonad over the category of graded doctrines. This generalizes a known situation that holds in the non-linear case, where elementary doctrines are the coalgebras ... | how does this (standard) notion of equality relate to our quantitative equality?
To answer this question in a precise way, first of all we observe that elementary $R$-graded doctrines can also be organised in a 2-category, and then we compare it with the 2-category of $R$-Lipschitz doctrines. | This provides us with a universal construction yielding an $R$-Lipschitz doctrine from an $R$-graded one, and we use it to generate semantics for the calculus.
In Section 5.2 we relate quantitative equality with the usual one defined by left adjoints, formally proving that the former indeed refines the... | This shows that a quantitative equality cannot be given by a left adjoint,
however, thanks to the language of doctrines, we manage to compare in a rigorous way quantitative equality with the standard one, proving they share other fundamental structural properties. | D |
$\mathrm{Average\ Precision}@K:=\frac{1}{n}\sum_{u\in V}\frac{k_{u}^{+}}{k}.$ |
Figure 5 shows the Average Precision@$K$ of four studied measures on six labeled networks. Note that the difference between ForestSim-EX and ForestSim-AP is marginal, though the latter requires less time and space. Moreover, ForestSim achieves comparable performance to RoleSim, the state-of-the-art role similarity... |
Note that for large networks, RoleSim, StructSim, and ForestSim-EX cannot finish the computation due to their high time and memory cost, while ForestSim-AP works well on all these real networks, which shows the significant efficiency advantages of ForestSim-AP. |
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-k similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node $u$ in spanni... |
Extensive experimental studies: We test the effectiveness of the studied role similarity metrics on six labeled networks and estimate their efficiency on 20 real-world networks, including several large networks. For effectiveness, ForestSim achieves comparable performance to RoleSim and better results than StructSim. For... | A |
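The Average Precision@$K$ definition from the row above can be sketched directly: $k_{u}^{+}$ counts how many of node $u$'s top-$k$ most similar nodes share $u$'s ground-truth role. The toy labels and similarity lists below are invented:

```python
def average_precision_at_k(topk, labels, k):
    # AP@K = (1/n) * sum_u (k_u^+ / k), where k_u^+ counts nodes among u's
    # top-k most similar that share u's ground-truth role label.
    total = 0.0
    for u, neighbours in topk.items():
        k_plus = sum(labels[v] == labels[u] for v in neighbours[:k])
        total += k_plus / k
    return total / len(topk)

# Invented toy data: 4 nodes, two role labels, top-2 similarity lists.
labels = {0: "hub", 1: "hub", 2: "leaf", 3: "leaf"}
topk = {0: [1, 2], 1: [0, 3], 2: [3, 0], 3: [2, 1]}
print(average_precision_at_k(topk, labels, 2))  # each node gets 1 of 2 right -> 0.5
```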
One of the primary concerns associated with LSA is its occasional inability to outperform certain baselines based on the BERT model. We attribute this observation to two main reasons.
Firstly, LSA is a quite simple mechanism and relies on relatively basic aspect features to construct sentiment aggregation windows, whic... | We utilize LSA to classify aspect sentiments and aggregate the sentiment clusters.
The cluster prediction performance in Table 3 shows that our models consistently outperform the baseline models on all datasets. The performance of LSA is dependent on the base model. | Another limitation is that LSA is a quite simple mechanism and relies on relatively basic aspect features to construct sentiment aggregation windows, which may not be as competitive as state-of-the-art methods that employ more complex features.
Besides, the current sentiment aggregation window is intuitive but may not ... | Secondly, the current sentiment aggregation window, although intuitive, may not be perfect and could potentially lead to the loss of some sentiment information.
Nevertheless, although the performance of the three LSA variants may not consistently surpass some baselines, our models offer notable advantages in terms of efficienc... | When it comes to sentiment classification performance, the results in Table 4 clearly demonstrate the superiority of our models over significant baselines, particularly in the case of the LSAE model.
The experimental results are as expected and show the proficiency of LSA. | C |
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L}:\mathbb{R}^{N}\rightarrow\mathcal{P}^{p}(\mathbb{S})$... | In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic... |
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me... | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the location of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... | gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial
guess. The main ... | C |
We use IBMQ quantum computers via Qiskit (IBM, [n. d.]) APIs. We study 6 devices, with #qubits from 5 to 15 and Quantum Volume from 8 to 32. We also employ Qiskit for compilation. The optimization level is set to 2 for all experiments. All experiments run 8192 shots. The noise models we used are off-the-shelf ones upd... | the noise-injected training to real QC using techniques such as parameter shift (Crooks, 2019). In this case, the training cost is linearly scaled with qubit number. Post-measurement normalization and quantization are also linearly scalable because they are performed on the measurement outcomes. Gradients obtained with... | For measurement, we measure the expectation values on Pauli-Z basis and obtain a value [-1, 1] from each qubit. The measurement outcome goes through post-measurement normalization and quantization and is used as rotation angles for RY gates in the next block’s encoder.
After the last block, for two-class classification, we ... | We use QNN as the benchmark PQC in this work. Figure 2 shows the QNN architecture. The inputs are classical data such as image pixels, and the outputs are classification results. The QNN consists of multiple blocks. Each has three components: the encoder encodes the classical values to quantum states with rotation gates su... | QNN results. We experiment with four different QNN architectures on 8 tasks running on 5 quantum devices to demonstrate QuantumNAT’s effectiveness. For each benchmark, we experiment with noise factor $T=\{0.1,0.5,1,1.5\}$ and quantization level among {3, 4, 5... | D |
Figure 1: An illustration of event trajectories triggered by a flying drone. (a) The triggered retinal events on the image plane. The retinal events marked by the blue rectangle are triggered by the moving drone. (b) The triggered retinal events in the spatio-temporal domain. (c) The event trajectories triggered by th... |
However, in the current event-based studies, most methods usually handle the fundamental event-based data association problem in implicit ways, which are designed for their specific tasks. As a result, event-based data association has not been effectively solved by the current event-based works. There are relatively f... |
In this paper, we propose a novel unifying event data association (EDA) approach to effectively and explicitly handle the essential event data association and event information fusion problem. The proposed EDA performs a model fitting on event data, which can asynchronously associate and fuse the event data over time ... |
However, [gallego2018unifying] and other event-based data association methods [zhu2017event; gallego2019focus; peng2022globally] show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a... | Event-based methods have achieved promising performance on various tasks [gallego2022event]. However, the study of the fundamental event data association problem is still challenging and in its infancy. Unlike a traditional camera, an event camera only sparsely emits binary (i.e., On and Off) retinal events at the edges... | D |
A good connected ordering of any connected perfect graph on $n$ vertices can be computed in time $O(n^{c+4})$ provided that an optimal colouring of a perfect graph can be obtained in $O(n^{c})$... | This is formalized by Algorithm 1.
The size of the maximum clique of Line 1 is computed in $O(n^{c})$ time using the algorithm in [13], bringing the total time complexity of Algorithm 1 to $O(n^{c+2})$... | Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time $O(m+n)$ for chordal graphs using the LexBFS algorithm [21], for Meyniel graphs this can be done in time $O(n^{2})$... | As a connected vertex-ordering of $H$ can be obtained in linear time using a standard graph traversal algorithm, and a colouring of $G'$ may be computed in $O(mn)$ time [12, Chapter 5.7], we concl... | Note that the time bound of the above corollary relies to date on the complexity of the polynomial-time algorithm from [13], whose precise exponent has not been made explicit by the authors and which is most probably large.
This is in contrast to the algorithm for comparability graphs given by Theorem 7 which runs in O... | D |
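The claim in the row above that a connected vertex-ordering can be obtained in linear time by a standard graph traversal can be sketched with BFS; the adjacency list is an invented toy example, and this is only the ordering step, not the perfect-graph colouring algorithm itself:

```python
from collections import deque

def connected_ordering(adj, start):
    # BFS from `start`: every vertex after the first is adjacent to some
    # previously output vertex, so the resulting order is connected.
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# Invented toy graph (adjacency lists).
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(connected_ordering(adj, 0))  # [0, 1, 2, 3]
```

BFS (or DFS) visits vertices only through edges from already-visited vertices, which is exactly the connectedness property the ordering needs, in $O(m+n)$ time.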
More recently, with the introduction of the Vision Transformers [53, 54], masked image modeling (MIM) [55, 56, 57, 58] achieved state-of-the-art performance based on Transformers, which randomly masks out patches in the input image and predicts the masked patches with a decoder.
While SSL can extract highly redundant data ... |
In KD tasks, GenURL follows the settings of the recently proposed contrastive-based KD method SEED [67], which adopts the non-linear projector network and data augmentations used in MoCo.v2. Note that MoCo.v2 pre-trained ResNet-50 is adopted as the teacher model. Similar to the SSL task, GenURL uses the BCE loss with ν... | We evaluate the KD tasks based on self-supervised learning on the STL-10 dataset. In this experiment, we adopt MoCo.v2 with ResNet-50 under 1600-epoch pre-training. We choose multiple smaller networks with fewer parameters as the student network: ResNet-18 [70], MobileNet.v2 [86], ShuffleNet.v1 [87]. Similar to the pre-tra... |
Then, we compare how GenURL deals with the negative samples in SSL and KD tasks. In Figure 7, we find that GenURL prefers similar $\nu_{Z}$ and $\sigma$ for both SSL and KD tasks, which indicates using $\nu_{Z}=100$... |
KD was first proposed by [59], which aims to transfer knowledge from trained neural networks to a smaller one without losing too much generalization power. There are three types of existing KD methods: response-based [59, 60, 61] and feature-based [62] methods require labels to utilize intermediate-level supervision f... | D |
We show the object detection results on Pascal VOC trained with YOLOv3 [41] in Table 3. We provide MCUNetV2 results for M4 MCU with 256kB SRAM and H7 MCU with 512kB SRAM. On H7 MCU, MCUNetV2-H7 improves the mAP by 16.7% compared to the state-of-the-art method MCUNet [30]. It can also scale down to fit a cheaper commod... | As shown in [30], the search space configuration (i.e., the global width multiplier $w$ and input resolution $r$) is crucial to the final NAS performance.
We argue that the best search space configuration is not only hardware-aware but also task-aware: for example, some tasks may prefer a higher resol... | Figure 7:
Left: MCUNetV2 has better visual wake word (VWW) accuracy vs. peak SRAM trade-off. Compared to MCUNet [30], MCUNetV2 achieves better accuracy at 4.0× smaller peak memory. It achieves >90% accuracy under <32kB memory, facilitating deployment on extremely small hardware. | Note that MCUNetV2-M4 shares a similar computation with MCUNet (172M vs. 168M) but achieves a much better mAP. This is because the expanded search space from patch-based inference allows us to choose a better configuration of larger input resolution and smaller models.
|
Visual wake word (VWW) reflects the low-energy application of tinyML. MCUNetV2 allows us to run a VWW model with a modest memory requirement. As in Figure 7, MCUNetV2 outperforms state-of-the-art method [30] for both accuracy vs. peak memory and accuracy vs. latency trade-off. We perform neural architecture search und... | C |
In the 2020 ICDM Competition 333https://www.biendata.xyz/competition/icdm_2020_kgc/,
the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th... | In this way, we can use the softmax function to decode each label independently, and obtain the set of all possible event types and event-causes pairs. Inspired by sequence tagging tasks, it is beneficial to consider the correlations between labels in neighborhoods and jointly decode the best chain of labels for a give... | In the 2020 ICDM Competition 333https://www.biendata.xyz/competition/icdm_2020_kgc/,
the task adds judgments on multiple event types, which is difficult to solve with a reading comprehension framework. The goal of the competition is to extract multiple event types and event-causes for each text and brand/product. To th... |
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and event-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a... | Besides, we modeled label sequences jointly using a CRF to improve the performance of our method. The model's performance was not improved when the learning rate of the CRF was relatively small. To explore the effect of the CRF on BERT, we tried to continuously increase the learning rate of the CRF. In the end, we concluded that the learnin... | A |
Specifically, removing a critical edge is enough to change a graph from a connected graph to a disconnected one, making the augmented graph and the original graph have little learnable invariance.
In Figure 1, we provide an illustration to show the unstable invariance between the original graph and augmented graphs und... |
Figure 1: An illustration of the unstable invariance of three graph augmentation strategies. The value of each augmented graph is its similarity to the original graph. The upper part of each augmentation strategy shows augmented graphs preserving high invariance, while the lower part’s augmentations bring low invarian... | To remedy the issue of unstable invariance from inappropriate data augmentations, we propose a novel graph-level contrastive learning framework named CGCL, where no handcrafted graph augmentation is needed. CGCL uses multiple GNN-based graph encoders to enforce contrastive learning in a collaborative way, remedying the... | Specifically, removing a critical edge is enough to change a graph from a connected graph to a disconnected one, making the augmented graph and the original graph have little learnable invariance.
In Figure 1, we provide an illustration to show the unstable invariance between the original graph and augmented graphs und... | To measure the destruction of invariance brought by those strategies quantitatively, we annotate the cosine similarity between the embeddings of the original graph and the augmented ones. The graph embeddings are extracted by Node2vec [7].
The upper part of each augmentation strategy shows an augmented graph preserving high invari... | D |
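The invariance annotation described in the row above reduces to a cosine similarity between embedding vectors. In the sketch below, the vectors are random stand-ins for Node2vec outputs, not real graph embeddings; the "broken" vector mimics an augmentation (e.g., removing a critical edge) that destroys invariance:

```python
import numpy as np

def cosine_similarity(a, b):
    # cos(a, b) = <a, b> / (||a|| * ||b||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
g_orig = rng.normal(size=64)                # stand-in for a Node2vec embedding
g_aug = g_orig + 0.1 * rng.normal(size=64)  # mild augmentation: high invariance
g_broken = rng.normal(size=64)              # drastic change: unrelated embedding

print(cosine_similarity(g_orig, g_aug))     # close to 1
print(cosine_similarity(g_orig, g_broken))  # close to 0
```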
Different number of symbols
Here we study the impact of a different number of symbols on compositionality. In our experiments we used the communication channel with $|\mathcal{A}_{s}|=5$, giving a total of $25=(5\times 5)$... | It turns out that allowing for only 16 messages makes the training less stable.
For topographic similarity, see Figure 5(LABEL:fig:grid:4x4): small to medium values of noise exhibit wide confidence intervals and it is statistically hard to distinguish between the metric values (this might be attributed to a bimo... | Interestingly, the topographic similarity values for the small noise regime (up to 0.08) do not improve over the baseline value (0.79), see Figure 5(LABEL:fig:grid:8x8). This behavior changes for medium to large values of noise, where we can observe a visible increase in topo, peaking at 0.88... | The accuracy drops down with an increase of the noise level, as expected; however, the speed of the decline increases. This shows that there is an interesting compositionality-accuracy trade-off.
The bottom panel of Figure 4 complements the overall picture with a visualization of metrics’ distribution. We see interestin... | The results for topographic similarity are presented in
Figures 5(LABEL:fig:grid:variable_noise_0.1)-(LABEL:fig:grid:variable_noise_0.15), where $\epsilon_{0}=0$ and $\epsilon_{T}=0.1$... | A |
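Topographic similarity, the metric analysed in the row above, is commonly computed as the Spearman correlation between pairwise distances in meaning space and in message space. This sketch assumes distinct distance values (no tie handling) and invented toy data:

```python
import numpy as np

def _ranks(x):
    # Rank positions (0..n-1); assumes distinct values, so no tie averaging.
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(len(x))
    return ranks

def topographic_similarity(meaning_dists, message_dists):
    # Spearman correlation between the two vectors of pairwise distances.
    rx = _ranks(np.asarray(meaning_dists))
    ry = _ranks(np.asarray(message_dists))
    return float(np.corrcoef(rx, ry)[0, 1])

meaning = [0.4, 1.0, 2.1, 3.3]   # invented pairwise distances in meaning space
message = [0.1, 0.9, 2.2, 2.9]   # corresponding distances between messages
print(topographic_similarity(meaning, message))  # monotone relation -> 1.0
```

A value near 1 means nearby meanings map to nearby messages, which is the intuition behind reading topo as a compositionality signal.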
We further performed a comparison with our prior work [53] where we learn CBFs, which corresponds to the setting when $\Delta_{F}=\Delta_{G}=\Delta_{X}=0$... | Safety-critical systems rely on robust control laws that can account for uncertainties in system dynamics and state estimation. For example, consider an autonomous car equipped with noisy sensors that navigates through urban traffic [1]. The state of the car is not exactly known and is estimated from output measurements, ... | In this paper, we learn safe output feedback control laws for unknown systems. We first present robust output control barrier functions (ROCBFs) to establish safety under system dynamics and state estimation uncertainties. We then formulate a constrained optimization problem for constructing ROCBFs from safe expert dem... |
We construct a safe ROCBF within the autonomous driving simulator CARLA [56] for a car driving on a road by using camera images, see Fig. 4. In particular, our goal is to learn a ROCBF for the lateral control of the car, i.e., a lane keeping controller, while we use a built-in controller for longitudinal control. Lane... |
In this paper, we have shown how safe control laws can be learned from expert demonstrations under system model and measurement map uncertainties. We first presented robust output control barrier functions (ROCBFs) as a means to enforce safety, which is here defined as the ability of a system to remain within a safe s... | D |
There exists an oracle relative to which $\mathsf{NP}^{\mathsf{BQP}}\not\subset\mathsf{BQP}^{\mathsf{NP}}$,... | Given the experience of classical complexity theory, it would be
reasonable to hope for a theorem showing that, if $\mathsf{NP}\subseteq\mathsf{BQP}$, then $\mathsf{PH}$ collapses—analogous to the Karp-Lipton Theorem [KL80], that if $\mathsf{NP}\subset\mathsf{P/poly}$... | As mentioned earlier, Theorem 3 resolves an open problem of Fortnow [For05], and demonstrates a clear difference between $\mathsf{BPP}$ and $\mathsf{BQP}$ that exemplifies the impossibility of pulling the randomness out of a quantum algorithm. Indeed, Theorem 3 shows that t... | Theorem 10 says, in effect, that there is no relativizing obstruction to $\mathsf{BQP}$ being inordinately powerful even while $\mathsf{NP}$ is inordinately weak. It substantially extends the Raz-Tal Theorem, that there is an oracle relative to which $\mathsf{BQP}\not\subset\mathsf{PH}$...
So what is it that distinguishes $\mathsf{BPP}$ from $\mathsf{BQP}$ in these cases? In all of the above examples, the answer turns out to be one of the fundamental properties of classical randomized algorithms: namely, that one can always “pull the randomness out” from suc... | B |
In order to prove the upper bound from (4), we represent the image of $k[x^{(\leqslant\ell)}]/\mathcal{I}_{m}^{(\infty)}$... | We used Macaulay2 [19] and, in particular, package Jets [18, 17] to explore possible analogues of our Theorem 3.1 for this more general case.
A related Sage implementation for computing the arc space of an affine scheme with respect to a fat point can be found in [37, Section 9] and [36, Section 5.4]. | In this direction, new results have been obtained recently in [1, 4, 7].
In [1], Afsharijoo used computational experiments to conjecture [1, Section 5] the initial ideal of $\mathcal{I}_{m}^{(\infty)}$... | The proofs of the results are given in Section 4.
Section 5 describes computational experiments in Macaulay2 that we performed to check whether formulas similar to (2) hold for more general fat points in $k^{n}$. | Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]).
In particular, they used Hilbert-Poincaré series of a similar nature to (1) (motivat... | C |
Karate-weighted: This weighted network is collected from a university karate club. In this weighted network, a node denotes a member, and the edge weight between two nodes indicates the relative strength of their association. Actually, this network is the weighted version of the Karate club network. So, the number of communities is 2 and... |
where this definition of embeddedness extends that of [38, 24] from unweighted networks to weighted networks whose adjacency matrix is connected and has nonnegative entries. Extending the definition of embeddedness to adjacency matrices in which there may exist negative elements is an interesting problem, and we leave i... | Gahuku-Gama subtribes: This data is the signed social network of tribes of the Gahuku–Gama alliance structure of the Eastern Central Highlands of New Guinea. This network has 16 tribes, and a positive or negative link between two tribes means
they are allies or enemies, respectively. Meanwhile, there are 3 communities i... | In CoauthorshipsNet, a node means a scientist and weights mean coauthorship, where weights are assigned by the original papers. For this network, there is no ground truth about node labels, and the number of communities is unknown. The CoauthorshipsNet has 1589 nodes; however, its adjacency matrix is disconnected. Among ... |
Karate-weighted: This weighted network is collected from a university karate club. In this weighted network, a node denotes a member, and an edge between two nodes indicates the relative strength of their association. Actually, this network is the weighted version of the Karate club network. So, the number of communities is 2 and... | B
Table 1. Median win rate of MA-Trace (obs) compared with other algorithms. In 3s_vs_5z, our agent discovers that keeping the opponents alive leads to higher rewards than killing them. This strategy, however, yields a low win rate. See Appendix F.1 for a detailed study. |
We use standard feed-forward networks for the actor and critic networks with two hidden layers of 64 neurons and ReLU activations. The critic network of MA-Trace (obs) takes stacked observations of agents as input, while MA-Trace (full) utilizes the full state provided by SMAC. DecMA-Trace has a critic using si... | We found that MA-Trace (full) performs slightly worse than MA-Trace (obs). Usually the differences are small. However, in two harder tasks, corridor and 6h_vs_8z, MA-Trace (full) learns much more slowly and often fails. This is perhaps surprising, as the full state contains additional information (e.g., about invisible oppo... | We study two versions of MA-Trace: with the critic $V:\mathcal{S}\to\mathbb{R}$ taking as input full states, and with the critic $V:\mathcal{Z}\to\mathbb{R}$ taking as input the joint observation of all agents, denoted ... | In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the... | A
Figure 3: The VisRuler cooperation diagram illustrates how synchronous co-located collaboration typically happens between the ML expert and the domain expert. Three phases and five panels support their teamwork in a single-page tool, with the ML expert being more active (orange color) than the domain expert who receiv... |
The rest of this paper is organized as follows. In Section Related Work, we discuss relevant techniques for visualizing bagging and boosting decision trees, along with tree- and rule-based models and a body of relevant work on visual analytics systems for multi-model comparison. Section Random Forest vs. Adaptive Boo... | This choice was intentional since bagging methods work differently than boosting, as explained in Section Random Forest vs. Adaptive Boosting.
Furthermore, each data set is split in a stratified fashion (i.e., keeping the class balance in training/testing split) into 90% of training samples and 10% of testing samples. ... |
From the analyses and the overall score of the RF and AB models, we observe that the most performant models for RF consider only 2 features when splitting the nodes (i.e., max_features hyperparameter). The PCPs in Figure 7(d) enable us to scan the internal regions of the hyperparameters’ solution space for RF. As for ... |
UMAP is initiated with variable n_neighbors and min_dist fixed to 0.1. To determine the optimal number of clusters to be visualized, DBSCAN [Ester1996A] is used to compute an estimated number of core clusters from the derived decisions, which is then used to tune the n_neighbors, with a minimum of 2 and a maxi... | B
It has been demonstrated that utilizing the polarization domain may increase channel capacity and spectral efficiency and improve the symbol error rate (SER) [15, 16, 17, 1, 18]. For this reason, the impact of polarization on wireless communication systems has been regarded as a promising research topic [1, 19, 20, 21, 22... |
In particular, several recent research works present the benefit of utilizing the polarization domain in recently proposed communication schemes including, but not limited to, MIMO spatial multiplexing [1]; spatial modulation (SM) [2, 3, 4]; non-orthogonal multiple access (NOMA) [5]; and beamforming [6, 7]. It is vali... | On the other hand, polarization diversity is not taken into account in the majority of previous research works on antenna selection. Although there are previous reports that consider polarization diversity with antenna selection, they consider fixed antenna polarization such as dual-polarized antennas in [44] or tri-pol... | Various other aspects of polarization in MIMO systems have been investigated as well. Ref. [16] showed that space-time block coding (STBC) with single polarization outperforms STBC with dual polarization in Rayleigh and Ricean fading channels. A MIMO system with dual-polarized antenna elements can have lower spatial di... | It has been demonstrated that utilizing the polarization domain may increase channel capacity and spectral efficiency and improve the symbol error rate (SER) [15, 16, 17, 1, 18]. For this reason, the impact of polarization on wireless communication systems has been regarded as a promising research topic [1, 19, 20, 21, 22... | A
Each request $\sigma_{i}=(t_{i},r_{i})$ is revealed at some time $t_{i}$... | Fiat and Woeginger [27] studied a scheduling problem following the online-list paradigm that seems particularly related to online sorting:
The goal is to minimize the average job completion time in a system with $n$ jobs and a single machine. | Theorem 3 can be seen as an asymptotically tight analysis of the traveling salesperson problem (TSP) on the real line, following the online-list paradigm:
We want to visit $n$ cities in the unit interval $[0,1]$ over the course of $n$ days, one city per day. The positions of the cities a... | Fekete and Hoffmann [25] studied online packing axis-parallel squares so as to minimize the area of their bounding square, and gave an $8$-competitive algorithm for the problem.
Abrahamsen and Beretta [1] gave a $6$-competitive algorithm for the same problem and studied the more general case where the pieces are axis... | In every step, a single new job arrives and must be scheduled to its time slot immediately and irrevocably, without knowledge of the jobs that arrive in later steps.
The offline optimum is to schedule the jobs according to their processing times in sorted order. | A |
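The row above states that the offline optimum for this online-list scheduling problem orders jobs by processing time (the shortest-processing-time rule). A minimal stdlib sketch, with toy job values and a hypothetical function name, illustrates why sorted order minimizes total (equivalently, average) completion time:

```python
def total_completion_time(order):
    """Total completion time on a single machine: the i-th scheduled job
    finishes once all jobs scheduled up to and including it have run."""
    elapsed, total = 0, 0
    for p in order:
        elapsed += p
        total += elapsed
    return total

jobs = [3, 1, 2]                                    # arrival (online) order
online_cost = total_completion_time(jobs)           # 3 + 4 + 6 = 13
offline_cost = total_completion_time(sorted(jobs))  # SPT order: 1 + 3 + 6 = 10
```

A standard exchange argument shows the sorted (SPT) order is optimal: swapping an adjacent out-of-order pair never increases the total.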
Based on the above discussion, our pipeline is summarized as in Figure 3, assuming the availability of the feature extractor $F_{\theta}$, which is learned using the self-supervised pixel-wise matching task. First, we extract features from all images Ω... | For medical landmark detection, some researchers treat the few labeled samples as templates. They start with a pre-trained model from other unlabeled data and make the predictions by matching target instances and templates. Yao et al. [42] introduce a pixel-wise multi-layer proxy task for self-supervision, which provid...
Figure 1: The distribution of the mean radial error (MRE) when choosing a different image as a template in one-shot medical landmark detection task. The x-axis refers to MRE and the y-axis refers to the percentage of MRE lying in the corresponding ranges. Evidently, the choice of template affects the performance signi... | Finally, we can find out the subset of images with the highest similarities as the candidate templates to be labeled, from which a model is learned for few-shot landmark detection. With the help of SCP, we improve the MRE performance of one-shot medical landmark detection from 3.595mm (with a random template) to 3.083m... | While landmark detection is implemented as template matching in Section 3.1, its detection performance is still limited as its feature detector is geared for all pixels not specifically for the landmarks. We further follow [42] to improve the detection of landmarks via semi-supervised learning. Another landmark detecti... | D |
When all nodes in $\hat{\Pi}$ are pure and $A$ has both positive and negative elements (i.e., $m^{+}>0$ and $m^{-}>0$), our modularity re... | MMDF is a generative model and fuzzy weighted modularity is a general modularity for overlapping weighted networks. We expect that our model MMDF and fuzzy weighted modularity proposed in this paper will have wide applications in learning and understanding the latent structure of overlapping weighted networks, just as ... | To determine the number of communities, we follow the strategy provided in [50]. In detail, we iteratively increase $k$ and choose the one maximizing our fuzzy weighted modularity computed via Equation (6) using method $\mathcal{M}$.
|
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does ... |
Our fuzzy weighted modularity computed using Equation (6) measures the quality of an overlapping community partition. Similar to the Newman-Girvan modularity [47, 48], a larger fuzzy weighted modularity $Q_{\mathcal{M}}(k)$ indicat... | B
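For intuition, here is a stdlib sketch of the classical Newman-Girvan modularity for a weighted network with a hard partition; the fuzzy weighted modularity $Q_{\mathcal{M}}(k)$ discussed above generalizes this by weighting node pairs with soft community memberships (Equation (6) of the paper, not reproduced here). The matrix and labels below are toy values:

```python
def weighted_modularity(A, labels):
    """Newman-Girvan modularity Q = (1/2w) * sum_ij (A_ij - d_i*d_j/2w) * [c_i == c_j]
    for a symmetric nonnegative weight matrix A, where d_i is the weighted degree
    of node i and 2w is the total weight.  A hard-partition stand-in for the
    paper's fuzzy weighted variant."""
    n = len(A)
    deg = [sum(row) for row in A]
    two_w = sum(deg)
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += A[i][j] - deg[i] * deg[j] / two_w
    return q / two_w

# two disconnected weighted triangles -> a clear two-community structure
A = [[0, 2, 2, 0, 0, 0], [2, 0, 2, 0, 0, 0], [2, 2, 0, 0, 0, 0],
     [0, 0, 0, 0, 2, 2], [0, 0, 0, 2, 0, 2], [0, 0, 0, 2, 2, 0]]
Q_good = weighted_modularity(A, [0, 0, 0, 1, 1, 1])  # correct split
Q_bad = weighted_modularity(A, [0, 1, 0, 1, 0, 1])   # mixed split
```

The correct split scores 0.5 on this toy matrix while the mixed split scores lower, matching the rule of choosing the partition (and the number of communities $k$) that maximizes modularity.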
In this work, we study Class Incremental Learning (CIL) from a previously underexplored viewpoint — improving CIL by mimicking the oracle model representation at the initial phase. Through extensive experimental analysis, we show the tremendous potential of this viewpoint. We propose a novel CwD regularization term fo... | Although we have developed our method through observations on the differences between the oracle model and the naïvely-trained initial-phase model, the underlying reason why more uniformly scattered representations for each class benefit CIL is still yet to be explored.
Some analyses of why CwD improves CIL are provide... | Specifically, at the initial phase, we regularize the CIL learner to produce representations similar to those of the model trained with data of all classes (i.e., the oracle model), since the upper bound of CIL is the oracle model.
According to our results, this additional regularization drastically improves CIL performance. | Inspired by this, we consider improving CIL from a novel perspective—encouraging the CIL learner to mimic the oracle model in the initial phase.
To achieve this, we first need to understand the difference between representations produced by a naïvely-trained initial-phase model and the oracle model. | The contributions of this paper are as follows: 1) We empirically discover that encouraging the CIL learner to mimic the oracle model in the initial phase can boost the CIL performance. 2) We find that compared with naïvely-trained initial-phase model, data representations of each class produced by the oracle model sca... | A |
(a) gives an overview of the hit rate per team using a threshold corresponding to the landmark-specific inter-rater annotation variability.
(b) shows the hit rate per team for increasing thresholds. Percentages in between evaluated thresholds are interpolated. Dotted lines indicate the mean and median inter-rater annot... | We further evaluated the algorithms’ results using hit rates, as described in Section 5.4.
We compared the performance of all participating algorithms by calculating the hit rates when being evaluated against the respective landmark-wise annotation variability. | We further evaluated the algorithms’ results using hit rates, as described in Section 5.4.
We compared the performance of all participating algorithms by calculating the hit rates when being evaluated against the respective landmark-wise annotation variability. | We compared the performance of all participating algorithms by calculating the hit rates when being evaluated against the respective landmark-wise annotation variability.
Since inter-rater analysis was performed on voxels with 1 mm resolution, only landmarks with an AV of not less than 1 mm... | In a second analysis, we made use of the distribution ($D$) of the inter-rater annotation variability, as described in Section 3.5 and visualized in Fig. 4(a).
We therefore computed hit rate curves (Waldmannstetter et al., 2023) by sampling thresholds from $D$ using the formula: | A
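A hit-rate curve of the kind described above is just the fraction of predictions whose localization error falls under each sampled threshold. A small stdlib sketch with toy error values (not challenge data), all names hypothetical:

```python
def hit_rate(errors, threshold):
    """Fraction of landmark predictions whose localization error (e.g. in mm)
    does not exceed the given threshold."""
    return sum(e <= threshold for e in errors) / len(errors)

def hit_rate_curve(errors, thresholds):
    """Hit rate evaluated at each sampled threshold, e.g. thresholds drawn
    from the inter-rater variability distribution D."""
    return [hit_rate(errors, t) for t in sorted(thresholds)]

errors = [0.5, 1.2, 2.0, 3.5]
curve = hit_rate_curve(errors, [1.0, 2.0, 4.0])  # -> [0.25, 0.75, 1.0]
```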
asking whether train 16 departs from the BBY station at 10:30. Note that the single atom of $Q_{5}$ also occurs in $Q_{3}$, and, as mentioned in Example 10, the only non-primary-lhs position of... | $\mathsf{Eval}$ exploits three auxiliary technical lemmas, which we present next (their proofs are deferred to the appendix), that essentially tell us how the relative frequency of an SJFCQ can be computed via the relative frequency of a simpler query obtained by applying one of the three si... | We now proceed to introduce the central notion of safety for an SJFCQ $Q$ w.r.t. a set $\Sigma$ of FDs with an LHS chain.
This notion essentially tells us that after simplifying $Q$ into basic subqueries by recursively applying three simplification steps, each such basic subquery has an empty c... |
To establish the above dichotomy result, we first introduce the central notion of safety for an SJFCQ $Q$ w.r.t. a set $\Sigma$ of FDs with an LHS chain, inspired by a similar notion for the case of primary keys from [25], and show that the problem of deciding whether $Q$ is safe w.r.t. $\Sigma$... | Consider a set $\Sigma$ of FDs with an LHS chain, and an SJFCQ $Q$ such that $(\Sigma,Q)$ is final. No atom of $Q$ associates three non-primary-lhs positions with the same liaison variable $x$.
| B |
In the present paper, we examine absorbing random walks on graphs in which different nodes can have different absorption rates, inducing an “effective” network structure that is reflected only partially by the edge weights of a network. Many notions of network community structure arise from the analysis of random walk... | Our adaptation of InfoMap to absorbing random walks involves a family of absorption-scaled graphs $\tilde{G}(D_{\delta},H)$, where $H$ is a scaling matrix that controls the relative i... |
In our adaptations of InfoMap to absorbing random walks, we introduce a family of associated absorption-scaled graphs and then apply Markov time sweeping to these absorption-scaled graphs. To illustrate how the node-absorption rates impact the communities that we detect, consider the matrix $P_{l}$... | We develop community-detection algorithms that account for node-absorption rates. We adapt the widely-used community-detection algorithm InfoMap [35, 36, 41] to absorbing random walks and thereby account for heterogeneous node-absorption rates in the detected communities. In our adaptation, we apply InfoMap to absorpti... |
The community-detection algorithm InfoMap is based on random walks, so it is natural to adapt it to absorbing random walks. However, there are numerous approaches to community detection [12, 33], and it is worthwhile to adapt other approaches, such as modularity maximization [29] and statistical influence using stocha... | C |
In this section, we consider a special case of the QNR problem, viz., the case wherein there is a single source-destination $(s,d)$ pair and the goal is to select a single swapping tree for the $(s,d)$ pair.
For this special case, we design an optimal algorith... | in that both
consider only balanced trees; however, we use a heuristic metric that facilitates a polynomial-time Dijkstra-like heuristic to select the optimal path, while their recursive metric\footnote{We note that their formula (Eqn. 10 in [18]) is incorrect as it either ignores the 3/2 factor or assumes the EP generations...} | First, we note that a Dijkstra-like shortest path approach which builds a shortest-path tree greedily doesn’t work for the QNR-SP problem—mainly, because the task is to find an optimal tree rather than an optimal path. As noted before, a routing path can have exponentially many swapping trees over it, with different ge... | Now, to compute the optimal path for each path-length, we can use a simple dynamic
programming approach that runs in $O(m\tau_{l})$ time, where $m$ is the number of edges | may be applicable for the QNR-SP problem. However, we need to “combine” trees
rather than paths in the recursive step of a DP approach. Consequently, we were unable to design a DP approach based on the Floyd-Warshall approach, but are able to extend the Bellman-Ford approach for the QNR-SP problem after addressing a... | B
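The Bellman-Ford extension mentioned above relaxes edges once per hop count, yielding a best-cost table indexed by path length. As a rough stdlib sketch, here is the standard hop-indexed (path) version; the QNR-SP variant would replace the `min`/`+` relaxation with the tree-combining cost, which is not reproduced here, and all names are illustrative:

```python
def hop_bounded_bellman_ford(n, edges, src, max_hops):
    """dist[h][v]: minimum cost of a path from src to v using at most h edges.
    This is the per-hop Bellman-Ford recursion; a tree-based variant would
    swap the scalar relaxation for a cost that combines subtrees."""
    INF = float('inf')
    dist = [[INF] * n for _ in range(max_hops + 1)]
    dist[0][src] = 0
    for h in range(1, max_hops + 1):
        dist[h] = dist[h - 1][:]          # using fewer hops is always allowed
        for u, v, w in edges:
            if dist[h - 1][u] + w < dist[h][v]:
                dist[h][v] = dist[h - 1][u] + w
    return dist

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0)]
dist = hop_bounded_bellman_ford(3, edges, 0, 2)
# dist[1][2] == 5.0 (direct edge); dist[2][2] == 2.0 (two-hop path)
```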
There are two main approaches to building autonomous driving systems in terms of their AI-based learning architecture: modular and end-to-end pipelines [5, 16]. The modular pipeline consists of four primary and interconnected modules categorized as perception, localization, planning, and control (Figure 2, a). The modu... |
The issues and growing concerns caused by AI systems create the need to scrutinize the regulation of this technology. As a result, public institutions have initiated the development of regulatory frameworks to monitor the activities of data-driven systems at both a country level and internationally. The focal points o... |
AI approaches, which are currently predominated by deep learning algorithms, have brought considerable improvements to many essential components of autonomous driving technology, including advances in perception, object detection, and planning. As the AI-powered driving systems of vehicles advance, the number of AVs d... | While the above subsections primarily describe the potential XAI approaches from the perspective of specific components, we also need to consider AVs’ learning software as a holistic driving system. Three decades of research, starting with ALVINN in 1988 [14] and further succeeding with the DARPA Grand Challenge [199],... | to developing global, socially acceptable principles for machine ethics.” However, further discussion on this issue condemns this opinion and draws attention to the lack of safety principles [27], which force deeper consideration of such dilemmas [28].
Burton et al. [29] have identified three open problems in the state... | B |
The remainder of this paper is organized as follows: related works are presented in Section II. In Section III, we propose a lightweight model named Ghost-dil-NetVLAD. Section IV contains the experimental results and ablation experiments on different fusion methods. The last section concludes the paper. | Feature extraction is the first step in the VPR mission. Traditional feature representations include the scale-invariant feature transform (SIFT), FAB-MAP, and Cross-Region-BoW, which improve robustness against appearance or illumination changes. However, these methods always lead to a large amount of calculation compar... |
Further, the critical point for improving VPR performance after feature extraction is to form a compact global image representation that is robust to various image transformations (second step) [8, 9]. The VPR mission aggregates the extracted local feature descriptors into global feature representations. The global des... | The aggregated representation layers can be seen as mapping an $H\times W$ grid of $C$-dimensional feature descriptors to a vector representing the original image, aggregating local features into a representative global feature. Then, a similarity function (e.g., Euclidean distance or cosin... |
Currently, the most effective feature extraction method (first step) in VPR is to use deep learning techniques. The original design goal of a convolutional neural network (CNN) is to create a network in which neurons in the first layer extract local visual features, and neurons in the last layer fuse these features t... | A
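As a toy illustration of the two-step pipeline discussed in this row (local feature extraction, then aggregation into a global descriptor compared by a similarity function), the sketch below sum-pools local descriptors and scores two images with cosine similarity. Sum pooling is a crude stand-in for the VLAD/NetVLAD-style aggregation the papers use, and all names and numbers are hypothetical:

```python
import math

def aggregate(local_descriptors):
    """Sum-pool local C-dimensional descriptors into one global vector --
    a simplistic stand-in for VLAD/NetVLAD aggregation."""
    dim = len(local_descriptors[0])
    return [sum(d[c] for d in local_descriptors) for c in range(dim)]

def cosine_similarity(a, b):
    """Similarity function used to match the query against reference images."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

query = aggregate([[1.0, 0.0], [0.8, 0.2]])
ref = aggregate([[0.9, 0.1], [1.0, 0.0]])
score = cosine_similarity(query, ref)  # close to 1 for similar places
```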
In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ... | Other versions of the XL-algorithm can be found in [9]. If $t$ is big enough, we expect to find one solution for the system. In this case, the complexity of XL will be essentially the complexity of one single Gaussian reduction in Step 2.
Let $N$ be the number of equations generated in XL, and $T$... | The XL-Algorithm, introduced by Courtois, Klimov, Patarin, and Shamir in [8], is a computation method for solving a system as the one in (1). Assume that all the $f_{i}$, for $i=1,\dots,t$, have the same positive d... | The core idea of our new algebraic attack is to use many annihilators simultaneously, instead of only one, and provide a good estimate of the number of keystream bits needed to perform the attack, which is strictly related to the number of linearly independent equations after the multiply phase in the XL-Algorithm. Ind...
In Chapter 2, we collect all the notations, definitions, and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm for solving Boolean equation systems and the algebraic attack on nonlinear filter generators presented in [10].
The following examples show that the requirement is not satisfied for common resolving gadgets. We tried to construct a gadget that would satisfy the condition, but in the end, we kept the previously resolved parts of the game $G^{\prime}$ followed by th... | In equilibrium, player 1 plays action $Q$, and player 2 can mix actions up to the point where the utility for $P$ is at most 0. This gives the value of the game 0, and counterfactual values in all inner nodes are also 0. Assuming the modified RNR game $G^{M}$... | Figure 2 shows a simple game illustrating depth-limited solving. The game starts with player 2 choosing to either play standard biased matching pennies ($p$) or playing his own version of the game ($q$). In the next round, player 1 does not know which game player 2 chose, and he chooses head ... |
We show that commonly used resolving gadgets either overestimate or underestimate the values from Definition 5 on an example game in Figure 3. In the game, we first randomly pick a red or green coin. Player 2 observes this and decides to place the coin heads up (RH, GH) or tails up (RT, GT). Player 1 can... | et al. (2014)
The resolving gadget constructs a game that allows the opponent to choose whether to play in the subgame we created or to terminate. This is done by inserting nodes above the roots of the subgame, and the opponent has two actions before each root: either to follow and play the game, or to terminate and rec... | C
Using existing spectral results from [11, 27] we may deduce that the lower bound in (2.1) is tight for some values of $p,q=\Theta(n^{-1}\log n)$. (We suppress the details here, ... |
The bootstrapping example involves $G_{n,2,p,q}$. Write $q^{*}_{\leq 2}$ for the maximum ... |
It will be straightforward to show that the planted partition $\mathcal{A}_{0}$ has modularity score as claimed: the main part of the proof is to show that $q^{*}(w)$ ... | The modularity of a graph is a measure of the extent to which the graph breaks into separate communities. For a given graph $G$, each partition $\mathcal{A}$ of the vertices has a modularity score $q_{\mathcal{A}}(G)$... |
The tightness of (2.1) tells us that whp the planted partition has asymptotically maximal modularity value over all bipartitions. We now consider the modularity value for the $k$-block model, and see that the planted partition is asymptotically optimal over all partitions. | D
An evident gap in the literature emerges in Figure 4: European countries have rarely been the object of studies of the impact of environmental factors on mobility. This might be motivated by the fact that the European continent is mostly seen as a destination for migrants rather than an origin. It should not be surprising that the t... | We intend to identify the existence of communities in our network. The assumption is that papers citing the same references aggregate into a group that shares certain features, which could be methodological approach, level of analysis, specific sub-topics of the literature, and outcomes. The extreme heterogeneity of ou... |
Our quantitative approach aims at analyzing the connectivity that exists among papers according to a citation-based approach and detecting the existence of communities or clusters. Since our target literature is characterized by a high heterogeneity of results, both in the direction and magnitude of the impact, we try... |
Second, to investigate the determinants of this extreme heterogeneity of outcomes, we hypothesize that the inter-connectivity of papers may play a role in shaping such different conclusions. Considering the ensemble of papers referenced by each contribution included in the sample, as a second step, we bui... |
The procedure identifies four main clusters. Because our network is relatively small, we can analyze the main characteristics of each cluster. Following the full-text screening made in the first step of our threefold approach, we summarized some meaningful indicators about the analysis (such as type - quantitative, qualit... | B
$\bar{V}_{t}=\sum_{s=2}^{t}\lVert\nabla f_{s}(\mathbf{x}_{s})-\nabla f_{s-1}(\mathbf{x}_{s-1})\rVert_{2}^{2}$ |
So far, we have designed an online algorithm (Sword) with the gradient-variation dynamic regret. While it achieves a favorable regret guarantee, one caveat is that Sword runs $N=\mathcal{O}(\log T)$ base-learners simultaneously and each base-learner requir... | In Section 4.2, we develop the Sword algorithm, which achieves the gradient-variation dynamic regret under the multi-gradient feedback model. In Section 4.3, we present an improved algorithm called Sword++ that can achieve the same dynamic regret guarantee (up to constants) under the more challenging one-gradient feedb... |
Up to now, we have shown that it is possible to design online methods that achieve stronger guarantees than static methods under the challenging one-gradient feedback online learning, while suffering no computational overhead in terms of the gradient query complexity. | In addition to the regret measure, we further consider the gradient query complexity. Note that algorithms designed for the multi-gradient feedback model may query the gradients multiple times at each round. However, most algorithms designed for static regret minimization only require one gradient per iteration... | C
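The gradient variation that these dynamic regret bounds scale with, $\bar{V}_t=\sum_{s=2}^{t}\lVert\nabla f_s(\mathbf{x}_s)-\nabla f_{s-1}(\mathbf{x}_{s-1})\rVert_2^2$, is straightforward to compute from the observed gradients; the stdlib sketch below uses toy gradient values and hypothetical names:

```python
def gradient_variation(grads):
    """V_t = sum_{s=2}^t || grad_s - grad_{s-1} ||_2^2, where grads[s] is the
    gradient observed at the point played in round s+1 (0-indexed)."""
    total = 0.0
    for g_prev, g_cur in zip(grads, grads[1:]):
        total += sum((a - b) ** 2 for a, b in zip(g_cur, g_prev))
    return total

# gradients observed over 4 rounds (toy numbers)
grads = [[1.0, 0.0], [1.0, 0.5], [0.5, 0.5], [0.5, 0.5]]
V = gradient_variation(grads)  # 0.25 + 0.25 + 0.0 = 0.5
```

In slowly drifting environments consecutive gradients change little, so this quantity, and hence the gradient-variation regret bound, stays small.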
$\sigma^{k\operatorname{mex}(\sigma)}(u)=prat\,\sigma^{k\operatorname{mex}(\sigma)}(q)$. | $x_{[0,\infty)}=\sigma^{r}(u^{\prime}v^{\prime\prime}v^{\prime}\ldots\sigma^{p^{\prime}}(c)^{\omega})$... | $u=praq$ with $\sigma(p)=p$, the word $r$ erasable, $\sigma^{n}$
right-prolongable on $ra$ and $x=\sigma^{n\omega}(pra)$... | and $\sigma^{k\omega}(u)=\sigma^{k\omega}(pra)$.
Moreover, if $u\in\mathcal{L}(\sigma)$... |
Thus $s=pr$ and $\sigma^{k\omega}(u)=\sigma^{k\omega}(pra)$... | D
$R\big(\mathcal{T}_{n,d}^{k}(C_{n})\big)=\Omega\big(\frac{1}{n}\ldots+\frac{C_{n}}{n}+\big(\frac{C_{n}}{n}\big)^{\frac{2}{2s+1}}\big).$ | smoothness index $k+1$ in each coordinate direction, and any third index $q\geq 1$, is indeed $n^{-2s/(2s+1)}$ for $s>1/2$ (or $2k+2>d$)... | $\mathcal{H}_{d}^{k}(1)$. This matches the optimal rate for estimation over Hölder
classes (see Sadhanala et al. (2017) for a formal statement and proof for | The result in Theorem 4 for s ≥ 1/2 (that is, 2k+2 ≥ d) was already derived in Sadhanala et al. (2017). More precisely,
these authors established the third term on the right-hand side in | This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers
in Section 1.3). The models of smoothness for f_0 that we | C
Figure 14: The normalized histogram of the Wasserstein distance between average MZ- and DZ-twin correlations within each state over 1000 derangements. Since the generated null data has no genetic signal, we are basically computing the Wasserstein distance between two connectivity matrices with random noises. In compar... | In this study, we proposed the topological clustering method for the estimation and quantification of dynamic state changes in time-varying brain networks. A coherent statistical theory, grounded in persistent homology, was developed, and we demonstrated the application of this method to resting-state fMRI data. Restin... | However, the method has been mainly used on static networks or a static summary of time-varying networks. The dynamic pattern of persistent homology for time-varying brain networks was rarely investigated, with a few exceptions (Yoo et al., 2016; Santos et al., 2019; Songdechakraiwut and Chung, 2020; Giusti et al., 201... |
The predominant method for computing time-varying correlation in time series data, particularly in neuroimaging studies, involves Sliding Windows (SW). This technique entails computing correlations between brain regions across various time windows (Allen et al., 2014; Hutchison et al., 2013; Shakil et al., 2016; Mokht... |
In contrast to previous studies that reported relatively low heritability in functional brain networks (Glahn et al., 2010; Xu et al., 2017; Korgaonkar et al., 2014; Wan et al., 2022), our findings indicate significantly higher heritability across various regions of the brain network. This discovery not only challenges ... | A
x(t) = G(τ)x(t_k), ∀t ∈ [t_k, t_{k+1}), | ...G(τ) := e^{Aτ} + ∫_0^τ e^{A(τ−s)} ds (A_c − A).
AL(τ) = (1−α)A + (e^{Aτ} − I)A_c. ... |
G(τ) = I + A^{−1}(e^{Aτ} − I)A_c. ... |
R := S^{−1}[I − (1−α)AA_c^{−1}]S. ... | A
The temperature of the battery module from the two boundaries and mid-section, as shown in Fig. 2, confirms the safety and stability of both strategies. In Fig. 2, the coolant temperature control at the boundaries shows that the transient control action for StSf-C is greater than St-C. However, in steady state both cont... | Next, we present the temperature response and control variables under the two control strategies, as shown in Figs. 5-7. The spatiotemporal temperature distribution of the battery module in Fig. 5 shows an increase in temperature at 700 s for both strategies after the disturbance injection by t...
Next, we present a test case to illustrate the performance of the proposed approach under disturbance. We consider a scenario where an adversary injects a cyberattack in the form of a disturbance to the battery module to induce overdischarge. The disturbance is injected at 700 s as current drain f... | The temperature of the battery module from the two boundaries and mid-section, as shown in Fig. 2, confirms the safety and stability of both strategies. In Fig. 2, the coolant temperature control at the boundaries shows that the transient control action for StSf-C is greater than St-C. However, in steady state both cont... | We consider a battery module as a case study in this work. In the context of battery module, the one-dimensional PDE given by (1) captures the spatio-temporal thermal dynamics (please refer to [31] and [32] for details of the modeling). Here T(x,t) is the distributed tempera... | B
With the growing adoption of technologies in the workplace, research on the “Future of Work” is intensively exploring ways to intelligently redesign workspaces. The goal is to enhance the talents of the workforce, both those recorded on and off the balance sheet. Among the rapidly growing technologies, passive sensors... | We present a survey of existing research on the use of passive sensing technologies in the workplace to assess and promote wellbeing and productivity of the workforce. In this work, we consider “workplace” as the setting or place of employment where individuals perform tasks for their employer without regards to whethe... |
With the growing adoption of technologies in the workplace, research on the “Future of Work” is intensively exploring ways to intelligently redesign workspaces. The goal is to enhance the talents of the workforce, both those recorded on and off the balance sheet. Among the rapidly growing technologies, passive sensors... |
Passive sensing is increasingly involved in various aspects of our daily lives. Within the workplace, it has been used to monitor physiological factors of workers, promote work safety, enhance efficiency among other things [8]. Recently, the Tesserae [9] project involved over 700 information workers to investigate how... | Again, this field is evolving and most of the extant research is formative in revealing the feasibility and efficacy of these technologies. However, real-world and prospective adaptations of these approaches may bring in new challenges. For instance, the accuracy metrics tend to reflect the model performance at average... | A |
A noticeable observation is that the overall performance is lower than the case with a moderate number of clients.
This is because the number of training data for each client decreases and each client suffers more from heterogeneous data distributions. | Nevertheless, we observe that FedACG outperforms other methods consistently in most cases; the accuracy gap between FedACG and its strongest competitor becomes larger in these more challenging scenarios.
The results from the large-scale experiments exhibit the robustness of FedACG to the data heterogeneity and low clie... | Since the numbers of local epochs and iterations are set to 5 and 50, respectively, each client has little training opportunity with few training examples and client heterogeneity increases significantly.
As shown in Table 2, FedACG outperforms the other methods in most cases, with the performance gap between FedACG an... | One of the critical challenges in federated learning is the partial participation of clients, which can slow down the convergence of the global model.
To verify the robustness of FedACG to low client participation rates, we conduct experiments with 500 clients and a participation rate as low as 1%. | Figures 2(b) and 2(c) reveal that the local models of FedACG exhibit less divergence in the parameter space and more consistent feature representations, respectively.
These findings demonstrate that the accelerated client gradient in FedACG effectively mitigates client drift stemming from data heterogeneity. | A |
Irrespective, the fundamental techniques developed in our work are largely independent of the formulation or algorithm used to determine the optimal power allocation—since learning models and techniques are solely based on training examples. In §IV, we use the above formulation to generate the training examples for the... | The general spectrum allocation problem is to allocate optimal power to an SU’s request across spatial, frequency, and temporal domains. We focus on the core function approximation problem, which is to determine the optimal power allocation to an SU for a given location, channel, and time instant—since frequency and te... | has a similar trend. The red plot is a conservative spectrum allocation, based on a
similar conservative path-loss function. (b) Modifying labels of training samples to drive the model towards learning a simpler and more conservative spectrum allocation function. | Allocation based on SSs parameters is implicitly based on real-time channel conditions, which is important for accurate and optimized spectrum allocation as the conditions affecting signal attenuation (e.g., air, rain, vehicular traffic) may change over time.
| SSs can be represented similarly to PUs with the size of their disk proportional to the received (rather than transmit) power. However, unlike PUs, we place SSs among the sheets based on their location rather than received powers, e.g., SSs from certain grids were always placed on the first sheet irrespective of their ... | C |
... = b_1 Γ(1/K) ∑_{i=1}^{∞} (1/Γ(1/K + i + 1)) · ((−cα^K)^i α)/(i! K^{2i+1}). | The power series for the affine arc-length parameterization γ(α) is obtained by integrating the series T(α).
See Figures 16 and 16 for reconstructions of curves with curvatures μ(α) = α and ... | The system T_{αα} = −cα^k T consists of two decoupled equations of the type u″(α) = −cα^k u(α)... | The difference between the uniform and point-wise convergence is that one can choose n_ε which “works” for all p ∈ P. If P is an interval [0, L], then uniform convergence of {f_n... | Convergence of series (95) for all α follows from a general known result (Theorem 39.22, p. 560 [22]).
Directly, absolute convergence of sub-series (96) and (97) can be verified by the ratio test, implying absolute convergence of series (95). | D |
As discussed in Remark 4.2, the stepsize choice α_t = √(C_T/T) can be ma... | Now, we consider the dynamic regret of the online random coordinate descent algorithm for strongly convex functions. As before, we consider μ-strongly convex functions. In addition, we will make the following assumption commonly seen in online optimization literature [18, 23].
|
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in T is proved, where T is the lengt... |
Note that when the cost functions are strongly convex and constant stepsizes are used, the result in Proposition 5.1 only gives a dynamic regret bound that is linear in T. Therefore, we aim to establish better dynamic regret bounds for Algorithms 2 and 3. In this subsection, we will make the following assum... |
By using Proposition 5.1, we can establish regret bounds for Algorithms 2 and 3 using known regret bounds for online gradient descent algorithms. Moreover, if the regret bounds for online gradient descent are sublinear in T and ∑_{t=1}^T α_t... | C
Advancements in technology offer a broader range of materials that could potentially facilitate the design of improved silicon-based brains. Architectures using this device challenge the conventional choices for abstraction level and partitioning in mixed-signal neuromorphic processors. These designs can employ more ad... |
Hybrid substrates present an intriguing design space that can leverage the advantages of both analog and digital domains [61]. The most prevalent hybrid model utilizes analog circuits for computation and digital circuits for communication [62, 63, 64]. This approach recognizes that digital circuits operate much faster... |
The commonly used level of abstraction, which closely resembles direct biological replication, models neural computation using coupled ordinary differential equations. The temporal dynamics of this model enable information integration over time, while the coupling across state variables models spatial information inte... |
The last section (VI.C) explored the computational properties of STP and the double exponential dynamics. Both STP and double exponential decay dynamics increased the accuracy of the network compared to the original HOTS network with single exponentials and no-STP. STP contributes to reducing “Noise” in the network. M... | Advancements in technology offer a broader range of materials that could potentially facilitate the design of improved silicon-based brains. Architectures using this device challenge the conventional choices for abstraction level and partitioning in mixed-signal neuromorphic processors. These designs can employ more ad... | B
This theoretical lower bound on the expected convergence time is the first proven non-trivial lower bound for asynchronous opinion updates.
A challenging open problem is to improve this lower bound by finding a graph class with Φ(S_0) ∈ Ω(n²ε²)... |
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot DeGroot (1974). In this model the o... | Moreover, another direction for future work is to consider social networks with directed and possibly weighted edges.
This would more closely mimic the structure of real-world neighborhood influences, allowing us to study asymmetric influence settings found in online social networks like Twitter. | Our opinions are not static. On the contrary, opinions are susceptible to dynamic changes, and this is heavily exploited by (social) media, influencers, politicians, and professionals for public relations campaigns and advertising. The way we form our opinions is not a solitary act that simply combines our personal exp... | Researchers have investigated the convergence to stable states and the corresponding convergence speed in many variants of the Hegselmann-Krause model. The existing work can be categorized along two dimensions: complete or arbitrary social network and synchronous or asynchronous updates of the opinions.
Synchronous opi... | B |
Deep learning models have shown notable improvements in diagnosis accuracy across various medical imaging applications, including detecting diabetic retinopathy Tymchenko et al. (2020) and skin cancer Li and Shen (2018). Given the substantial impact of thoracic diseases on public health and the widespread use of chest... | In 2015, Bar et al. (2015) used a pre-trained image classifier for classifying pathologies in chest radiographs, demonstrating the feasibility of detecting X-ray pathology Donahue et al. (2014). In 2017, Cicero et al. (2017) presented a similar CNN classifier that achieved an AUC of 0.964 using a medium-sized dataset of...
In Table 3 and Table 4, we compare the performance of our proposed model against single and multi-label prediction models for selected pathologies. Table 3 shows that our proposed multi-label approach was able to outperform single-label models. In Table 4, the results indicate that our proposed architecture outperform... |
Convolutional Neural Networks (CNNs) Fukushima and Wake (1991) are widely used neural network architectures in image classification tasks. They efficiently extract and learn image features through convolution and pooling layers. The pioneering CNN architecture, AlexNet Krizhevsky et al. (2012), employed multiple convo... | Deep learning has achieved remarkable performance in various image classification tasks largely due to the availability of labeled datasets (Krizhevsky et al. (2012); Ren et al. (2015); Simonyan and Zisserman (2015); Szegedy et al. (2017); He et al. (2016)). Deep learning has also shown immense potential in health anal... | A |
Elsewhere, while the results of Komiyama et al. (2021) also apply specifically to Bernoulli arms, the Θ(1/T) Bayesian simple regret can also be derived for Gaussian arms, provided we are not interested in the magnitude of the constant.
| The simple regret formulated by Eq. (1) is frequentist in the sense that it assumes that μ is fixed (but unknown). In contrast, a Bayesian approach considers a known distribution of μ ∈ ℝ^K... | This paper considers the Bayes optimal algorithm in the context of fixed-budget identification.
In the field of statistical decision theory, the Bayes optimal algorithm is deeply connected to a minimax estimator that maximizes the worst-case performance. |
This paper has considered the BAI problem. We have demonstrated that the Bayes optimal algorithm, which is optimized for the expected performance over the prior, does not have a frequentist rate of simple regret. In some distributions, the Bayes optimal algorithm does not perform well, even when the distributions are ... | This paper considers a Bayes optimal algorithm that minimizes the Bayesian simple regret (Eq. (2)).
Our main result demonstrates that the Bayes optimal algorithm does not feature an exponential frequentist simple regret, which is somewhat surprising given its optimality in the Bayesian sense. | D |
The constrained optimization (1) is called recursive feasible, if it is feasible at time step t, ∀t > t_0, for each robot i ∈ 𝒩, then the new optimization at time step t+h... | In addition, the above methods in general introduce an instantaneous change of the control inputs whenever deadlocks happen,
while the proposed scheme adopts a smooth and gradual adaptation of repulsive forces, before the potential deadlocks actually happen. | The general MATG problem is to design control inputs u^i for each robot i ∈ 𝒩 such that it reaches the target position p^i_{target} ∈ ℝ^d... | Recursive feasibility ensures the safety of the resulting trajectories, namely, no inter-robot collisions would happen. However, for certain cases such as symmetric and crowded scenarios, a number of robots block each other and cannot move towards their target positions, also known as deadlocks [27, 29, 37].
Although n... | More importantly, another well-known issue in MATG is that deadlocks often occur during navigation in multi-robot systems.
Formally, a deadlock happens when multiple robots are blocked by each other indefinitely and cannot make any progress towards their targets [29]. | C |
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ... | We characterize the mathematical conditions of the group action function component and
we propose an explicit construction suitable for any group G. To the best of our knowledge, this is the first method for unsupervised learning of separated invariant-equivariant representations valid for any group. | The first consists in learning an approximate group action in order to match the input and the reconstructed data.
For instance, Mehr et al. (2018b) propose to encode the input in quotient space, and train the model with a loss that is defined by taking the infimum over the group G. While this is feasible fo...
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding in a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ... | In this work we proposed a novel unsupervised learning strategy to extract representations from data that are separated in a group invariant and equivariant part for any group G. We defined the sufficient conditions for the different parts of our proposed framework, namely the encoder, decoder and group func... | A
2.5 µm,
PM2.5, causing 3-4 million deaths each year (Cohen et al. 2017; Lelieveld et al. 2015). | This corresponds to one local component and one regional component contributing roughly equally to the overall variability of NO2 concentrations.
At a distance of 100 metres the expected correlation between NO2 concentrations will be close to 1, driven by both local and long-range effects, whereas at a distance of 10 k... | One commonly monitored pollutant is nitrogen dioxide, NO2, as it is used to indicate the presence of traffic-generated air pollution (Katsouyanni 2003).
NO2 is also part of an ozone-generating process, and short-term increases in ozone concentration are followed by short-term increases in mortality (Katsouyanni 2003). ... | This is advantageous in applications like pollution monitoring where there are plentiful observations for some contexts but not others, e.g. cities with and without extensive pollution monitoring programs, and sample efficiency is paramount as each sample is expensive to collect. Hierarchical Bayesian modelling has bee... | Traditional air pollution monitoring uses large and expensive sensors that are typically managed by national or municipal authorities deciding where to locate sensors based on domain knowledge and constraints posed by the bulky nature of the sensors (Carminati, Ferrari, and Sampietro 2017).
More recently, low-cost air ... | B |
To investigate the performance of RODEO when scaling to hierarchical discrete latent variable models, we follow DisARM [14, 68] to train VAEs with 2/3/4 stochastic layers, each of which consists of 200 Bernoulli variables.
We set K = 2 and compare our estimator with DisARM and Double CV on dyna... | In this experiment we replace the two-layer MLP-based VAE with a ResNet VAE, where the cost of f is significantly higher than the single-layer MLP of H, H^*.
In this case, RODEO and RLOO have very close per-iteration time (0.0... | Figure 4(a) explores the impact of RODEO Stein operator choice on non-binarized MNIST.
As expected, the less stable difference (8) and MPF (6) operators lead to significantly higher gradient variances and worse training ELBOs. In fact, the same operators led to divergent training on binarized MNIST. | RODEO generally achieves the best training ELBOs.
One difference we noticed when comparing the variance results with those obtained from single-stochastic-layer VAEs is that these estimators have very different behaviors towards the beginning and the end of training. | generally leads to the lowest variance.
Although RELAX was often observed to have very strong performance in prior work [14, 60], our results in Figure 1 suggest that, for dynamically binarized datasets, much larger gains can be achieved by using the same number of function evaluations in other estimators. | C |
The improvement compared to a system without SIC (cf. Fig. 3) is significant and allows achieving ultra reliability at much lower SNRs. Particularly important is the fact that Random selection exhibits a performance floor even when the mean traffic intensity is as low as 1 user/frame. This is a consequence of the s... | Lastly, we consider Steiner systems with different configurations of the frame length M and number of repetitions K. In Table I we provide the relevant parameters for the systems used in this work, namely the number of patterns C, maximum number of interferers per slot D, ord... | In practice, for some Steiner systems that number is even higher, e.g. in the used S(2,4,25) no subset of size < 7 exists that would form a stopping set.
Those issues, as well as the results for other systems, are further discussed in Section VII. | In Fig. 2 we compare a S(2,4,25) Steiner system (solid line) and a corresponding Random selection (dashed) with the same frame length and number of repetitions.
For this Steiner system, we can have at most C = 50 users, and at most D = 8 collisio... | (S(2,5,25) vs S(2,5,41) or S(2,4,25) vs S(2,4,37)).
Furthermore, even though higher K itself is generally beneficial, the system with high M ... | B
The classification of sets in ℝ^N for which the ATSP can be solved was done by Jones [Jon90] (for ℝ²) and by Okikiolu [Oki92] (for higher dimensio... |
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves... | The classification of sets in ℝ^N for which the ATSP can be solved was done by Jones [Jon90] (for ℝ²) and by Okikiolu [Oki92] (for higher dimensio...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant C independent of the dimension N. Variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper... | We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides’ 3/2 ratio. In fact, in our algorithm, the yielded path has length at
most (300)^{9/2} log 300 (... | C
Finally, we compute the gcd of N = 𝒪(m) Chow forms. As each Chow form has (r+1)(n+1) variables, bitsize 𝒪̃(n d^{r−1}(τ+n))... | The coefficients of the linear forms in this factorization correspond to the solutions of the zero dimensional system. To force (some) of these solutions to have multiplicities
we compute the discriminant R_2 of R_1... | Our contribution. There is an extensive literature on the complexity of elimination theory procedures in general, and complexity of polynomial system solving in particular; we provide a small sample here [23, 31, 30, 8, 25]. Despite the strong literature on the subject, we were not able to locate any results on the bit ... | In this section, we provide algorithms to compute supp(V) by the means of Theorem 4.1. The idea is to compute dim π_I(V) for every I ⊂ [l]... | We have assumed in Alg. 2 that r = dim V is part of the input. We could also compute r using the algorithms in [41, 11], without changing the single exponential nature of the complexity of the algorithm.
| D |
The BioSignalPlux (https://biosignalsplux.com/products/kits/researcher.html) research toolkit system. It is a commonly used device to acquire different physiological signals in the literature [23, 24, 25, 26]. More specifically, we capture finger Blood Volume Pulse (BVP), ventral wrist Galvanic Skin Response (GSR), for...
To fill this gap, UC3M4Safety generated the UC3M4Safety Database [13], which includes the Women and Emotion Multi-modal Affective Computing (WEMAC) and audiovisual stimuli datasets, as listed in Table 1. This paper presents WEMAC, a collection of physiological and audio data captured in a virtual reality set-up where ... | Note that the synchronization of all the different sensors acquisition together with the stages of the experiment is performed using a laptop (MSI GE75 Raider 8SE-034ES) running a Unity® framework-based program. On the one hand, the BiosignalPlux device connection is configured using the OpenSignals (r)evolution softwa... | The preprocessing is as follows. A low pass filter at 8 kHz is applied, where most of the energy is concentrated. A high pass filter with a cut-off frequency of 50 Hz is also applied to remove the electrica... | The Oculus Rift® S VR Headset (https://www.oculus.com/rift-s/). Its embedded microphone captures the speech signal produced during the speech-based labeling at a sampling rate of 48 kHz mono and a depth of 16 bits. This device also guides volunteers... | D
Labaca-Castro et al. (Labaca-Castro et al., 2021) propose an adversarial training-based defense using universal problem-space attacks and demonstrate that universal perturbations in the problem space capture adversarial vulnerabilities that harden a classifier more effectively with adversarial training. Li et al. (Li et al., 202... |
Smutz and Stavrou (Smutz and Stavrou, 2016) propose an ensemble method called ensemble classifier mutual agreement analysis, where the goal is to identify uncertainty in classification decisions by introspecting the decisions given by the individual classifiers in an ensemble. If each classifier in an ensemble has a similar pred... |
Yang et al. (Yang et al., 2017) also evaluate different defense mechanisms to improve the robustness of classifiers. Using adversarial training, they were able to improve robustness. They also evaluate weight bounding, similar to Demontis et al. (Demontis et al., 2017), to improve the robustness of ... | Li and Li (Li and Li, 2020) combine an ensemble network with an adversarial training technique. The hardened model was able to defend against a broad range of attacks but was vulnerable to mimicry attacks. In 2021, Li et al. (Li et al., 2021b) again propose an ensemble framework for defense using a mixture of classifiers. Ea... | Adversarial Attack: An attack by an adversary that aims to fool the classifier is called an adversarial attack. Adversarial attacks can occur at either training or testing time. An evasion attack, which occurs at test time, involves modifying the test samples in such a way that they evade the detection of the class... | A |
A goal is a pair $(\sigma^{\ast},D)$ where $\sigma^{\ast}\in\prod_{i}\Sigma_{i}$... | A goal is a pair $(\sigma^{\ast},D)$ where $\sigma^{\ast}\in\prod_{i}\Sigma_{i}$... | We are interested in cases in which the probability $\mathbf{P}_{\sigma^{*}}(D)$ that the prescribed strategy profile attains the target set is 1 or close to 1. | The strategy profile $\sigma^{*}$ is a prescribed way for the players to play. The target set $D$ is a set of realizations that they are supposed to reach if they follow through on their prescribed strategy. | In our opening example, the target set is the set of realizations with long-run frequency $1/2$ of Heads, and in the second example the target set is the set of all realizations where the induced random walk crosses the origin infinitely often. | C |
As described in §IV-A, to effectively fill this gap at the research community level, a common system-level evaluation infrastructure is highly desired. To foster this, we take the initiative to build a system-driven evaluation platform that unifies the common system-level instrumentations needed for AD AI security eva... | This platform will be fully open-sourced so that researchers can collectively develop new APIs to fit future attack/defense evaluation needs, and contribute attack and defense implementations to form a semantic AD AI security benchmark, which can improve comparability and reproducibility, and also encourage open-sour... | The open-sourcing status of AD AI security works from the security community lags far behind related communities/domains (e.g., CV, ML/AI, ASR/SI), which harms reproducibility and comparability for this emerging area in the security community.
| Implementation. We have implemented several variations for each module in the platform. The AD model supports modular AD pipelines for STOP sign attack evaluation (Appendix -C) and full-stack AD systems (Apollo [101] and Autoware [102]). Our plant model includes an industry-grade AD simulator [134] and real L4-capable ... |
In this paper, we thus take the initiative to address this critical scientific methodology-level gap by developing a uniform and extensible system-driven evaluation platform, named PASS (Platform for Autonomous driving Safety and Security), for the semantic AD AI security research community (§V). We choose a simulatio... | A |
$\mathcal{B}_{2}^{\text{bot}},\mathcal{B}_{4}^{\text{bot}},\mathcal{B}_{6}^{\text{bot}}$, an... | We are going to compute the Maximum Cut for the general case and compare it to $\textrm{Cut}_{\text{alter}}$ in order to find out when it can be maximal. We are going to split the computation into several parts. | Our goal is to prove that $f=\textrm{Cut}_{\text{inside}}+\textrm{Cut}_{y}+\textrm{Cut}_{z}+\textrm{Cut}_{\text{inter}}-\textrm{Cut}_{\text{alter}}$... | The $\textrm{Cut}_{y}$ and $\textrm{Cut}_{z}$ parts count the cut-edges between different blocks that are both on the same level. There are two levels: the top $y$ and the bottom ... | Suppose w.l.o.g. that all the variables except for $y_{1},y_{4}$ are equal to $0$. Then $\textrm{Cut}-\textrm{Cut}_{\text{alter}}$... | A |
The focus has shifted towards developing sophisticated temporal models to operate on extracted image features similar to the natural-video domain, replacing end-to-end learning with complex multi-stage training procedures, where each component (CNN, LSTM [30], TCN [19], Transformer [73], etc.) is trained individually [... | In natural-video tasks, CNNs are only used to extract image- or clip-wise features using pretrained CNNs (e.g. [11]) off-the-shelf. Only the temporal model, which typically does not contain BN, is trained to aggregate features over time [1, 19, 32, 35, 42, 60, 74, 84]. However, in specialized small-data domains such as... | Further, our experiments focus on 2D backbones, due to limitations in the surgical domain, and LSTMs, to show the effectiveness of simple, state-based end-to-end models for online tasks.
While the incompatibility of BN and single-sequence batches is general to the video setting, investigating the feasibility and effect... | We choose these for two reasons:
The lack of well-pretrained feature extractors and the ineffectiveness of 3D CNNs signify the need for end-to-end approaches in SWA and thus make BN issues most relevant here. Further, SWA is one of the most active research areas for online video understanding, | We believe a comprehensive and deeper understanding of BN’s limitations in video tasks is crucial for future research in surgical workflow analysis.
End-to-end learning with simple models appears favorable over complex, multi-stage approaches but we hypothesize that BN issues have silently hidden its advantages. Moving... | C |
We observed why sufficiently trained FR models fail to approach WDFS. This stems from the mismatch of similarity distributions between the sampled pairs and all pairs. Fig. 1 (b) shows an example of two similarity sets $\hat{\mathcal{S}}^{p}$... | Effect of Noise-negative Pair Filtering. To approximate WDFS, $\mathcal{N}^{ml}$ was assumed to include extremely hard negative pairs because it can produce similarity scores similar to $\sup\mathcal{S}^{n}$... | This paper is based on two insights. First, from a unified perspective, CL and ML have the same purpose of approaching WDFS, except for PG. Second, CL and ML show a mismatch between two similarity distributions of sampled pairs and all negative pairs. Based on these insights, we developed UNPG by combining two PG strat... | This paper proposes unified negative pair generation (UNPG) by combining two PG strategies (i.e., MLPG and CLPG) from a unified perspective to alleviate the mismatch. Moreover, it includes filtering noise-negative pairs, such as too-easy/hard negative pairs, in order to guarantee reliable convergence and improve perfo... | Noise Negative Pair Filtering. According to our preliminary experiments, directly utilizing $\mathcal{N}^{ml}$ produced by MLPG causes performance degradation and divergence of a loss because many noise-negative pairs cause a s... | C |
Table 1 presents the FID values for each GAN-based architecture. StyleGAN2-ADA achieved the lowest FID value of 166, while EBGAN placed last with the highest FID value of 380. The smaller the FID value, the better the quality of the generated image. Therefore, all further experiments... | There were four stages of the experiments: (i) comparison of ten different GAN architectures, (ii) evaluation of synthetic images based on human experts’ ability to distinguish between real and synthetic images, (iii) evaluation of data augmentation with synthetic images in three different deep learning networks, and (... | Figure 4 shows a comparison between (a) real images from the training dataset and (b) synthetic images produced by StyleGAN2-ADA. The trained model yields realistic-looking images both with and without AMD, conditioned by sampling from latent representations. Visual inspection shows that the generated images are s... | Burlina et al. [8, 10] have successfully developed a method for generating synthetic images. Their method is based on GANs, and they tested it by showing that experts were unable to identify the synthetic images. However, their method has been patented and is hence not available for use by others. We h... | The second experiment determines whether clinical experts, who are very experienced with the analysis of eye fundus images, can distinguish between synthetic and real images. Such a step is essential to evaluate the effectiveness of StyleGAN2-ADA for generating synthetic eye fundus images. The experts were provided wit... | B |
Through the joint optimization of facial non-local and local feature vectors, LNLAttenNet can adaptively enhance more crucial regions in the process of deep feature learning. Specifically, an ensemble of multiple networks corresponding to local regions is constructed to integrate the local feature with the non-local weights... | In fact, one purpose of our work is to explore how to automatically enhance the significance of local crucial regions in deep FER when no landmarks are given as prior information about facial crucial regions. Thus, to validate this, we analyze the weights of the 16 local regions obtained by our... | Patches that are visually more discriminative are highlighted with higher weights, while patches located in non-crucial regions are suppressed with smaller weights. In summary, the analyses of the non-local weights demonstrate that the proposed method can automatically and effectively enhance the significance of facial cru... | In this paper, we propose the LNLAttenNet method to effectively explore the significance of facial crucial regions in feature learning for FER, without any landmark information. In LNLAttenNet, the global information of the facial expression is utilized to construct the non-local attention network, and meanwhile the lo... | Experimental results also demonstrate that some local crucial regions can be effectively enhanced in feature learning by LNLAttenNet even though no landmark information is given to the training model. Moreover, the proposed method focuses on enhancing facial crucial regions in FER without any landmark informa... | D |
$\mathbb{E}|\eta|^{1-\varepsilon}=\int\Pr\left(|\eta|^{1-\varepsilon}\geq t\right)\textnormal{d}t=2\int\sigma\left(-t^{1/(1-\varepsilon)}\right)\textnormal{d}t\leq 2\int t^{-1/(1-\varepsilon)}\,\textnormal{d}t<\infty$ | This could occur by finding a better strategy than the ones in Section 3, for instance by slowly increasing $\delta$ as games are played, or by tightening the analysis in Theorem 4.5. It is possible that for heavy-tailed $\sigma$ the lower bound is too loose, but for light-tailed $\sigma$ the up... | This establishes the lower bound. For the upper bound, we cannot assume that the optimal strategy is for $H$ to win every game. In particular, the function $z\mapsto z+\sigma(-2z)$ need not be monotone, so we cannot exclude the possibili... | This bound is stronger the heavier the tail of $\sigma$. Consider, for instance, $\sigma(z)=\frac{1}{1+e^{-cz}}$ | Intuitively, the faster $\sigma(-z)$ decays to 0 as $z\to\infty$, the slower the rate. Indeed, as the next theorem makes explicit, there is a simple expression for the rate in terms of the $f$ defined in (4). Note $f$ is only well defined for $\{x:\sigma(-x)>0\}$... | A |
The table heatmap view in Figure 3(g) is a more detailed view of the aggregated results present in the box plots. It normalizes the values from 0 to 1, visible as colors from dark brown to dark teal, and it shows for each feature the current value in each instance. The features are sorted as in the box plots. Moreover, the ... | The inverse polar chart in Figure 3(i) is deliberately designed to provide more space to instances that lie on the border between two classes or are completely misclassified. The predicted probability for the ground truth class is used for the 100-to-0 axis, and the angle/orientation is computed as the difference... | The undersampling phase is perhaps the most crucial, since removing unsafe instances without justifying one’s action could severely harm the ML model. We choose to activate the de-facto NCR algorithm without any tweaks to check the suggestions (Figure 3(c) and (d)). The distribution of instances changes according... | The size of each piece is calculated from the number of training instances that belong to a particular class, with extra space being provided to larger classes (i.e., consisting of more points). The same symbols as in the projection are also retained here. This approach can easily work for two or three classes but beco... | Figure 3: First, a comparison of projections of different data types and then two consecutive undersampling phases with the NCR algorithm are shown in this arrangement of screenshots. The default value for the number of neighbors is 5 (see (a)), which is used as input for computing the type of each instance with KNN. ... | A |
Mallory: We assume Mallory is an adversary who is computationally bounded and economically rational. Mallory is observing the pending transaction pool for Alice’s transaction, $tx_{A}$, on the Ethereum network. Mallory will attempt a front... | Verifiers: One might question the need for a VDF in the presence of an honest-majority committee which can be used to verify delay. However, a solution using verifiers and not a VDF mandates verifiers to keep a timer per request to track waiting time, and apart from being unscalable is also not independently verifiable. On... | In this section, we describe in detail the seven protocols that constitute FIRST. We start by giving a preamble of each protocol. The initial protocol consists of deploying a smart contract, SC, and generating key pairs for all members of $\mathbb{V}$ (Protocol 1). The second protocol is the generation of... | The goal of the aggregate signature scheme in FIRST is to cut down the cost of verifying each $V_{i}$’s signature individually. Moreover, we can obtain aggregate signatures from all members of $\mathbb{V}$ without requiring any trust assu... | Verifiers: Verifiers $\mathbb{V}$ are a set of entities neither controlled nor owned by the dAC. The only assumption we have on $\mathbb{V}$ is an honest majority. The trust assumption in the verifiers ensures that transactions are not unjustly censored or subjected to unnecessary delays from th... | D |
The cost function $c_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is twice continuously differentiable, $\alpha_{i}$... | We construct a simulated distribution over agent unobservables using the NELS data. Due to computational constraints of the data generation, we construct a distribution over $K=8$ representative agent types instead of using the empirical distribution over all $m$ unobservables. Additional det... | In the context of college admissions, Assumption 1 implies that a student must invest effort to deviate from their baseline test scores and grades. The amount of effort a student must invest to alter their test scores and grades changes smoothly with respect to the difference between modified and raw covariates. There... | To the best of our knowledge, Liu et al. (2022) is the only existing work that studies capacity-constrained allocation in the presence of strategic behavior. Liu et al. (2022) introduces the problem of strategic ranking, where agents’ rewards depend on their ranks after investing effort in modifying their covariates. T... | We consider college admissions as a running example for capacity-constrained treatment assignment in the presence of strategic behavior. The agents are a population of students who are heterogeneous in their baseline test scores and grades and in their ability to invest effort to change them. The decision maker is a co... | B |
We find that gDRO on Biased MNISTv2 and ResNet+gDRO on BAR obtain accuracies lower than ResNet+ERM. To alleviate this issue, we tried to tune the hyperparameters by lowering the learning rates to $\{10^{-4},10^{-5}\}$... | To examine the efficiency and robustness of each exit for all of the datasets, we present the exit %, accuracy on the exited samples and overall exit-wise accuracies on all the samples for OccamResNet-18 in Table A9. For Biased MNISTv2, the earliest exits $E_{1}$... | Ablations. To study the importance of the proposed inductive biases, we perform ablations on Biased MNISTv2 and COCO-on-Places. First, to examine if the multi-exit setup is helpful, we train networks with a single exit attached to the end of the network. This caused accuracy drops of 29.1% on Biased MNISTv2 and 8.4% on ... | Analysis of Early Exits. OccamResNet has four exits; the first exit is used for bias amplification and the rest are used to potentially exit early during inference. To analyze the usage of the earlier exits, we plot the percentage of samples that exited from each exit in Fig. 4. For the Biased MNISTv2 dataset, a large por... | In Fig. A8 and A9 we show the reliability diagrams for the ERM model (leftmost column) and for each exit ($E_{1}$–$E_{3}$) for OccamResNet on COCO-on-Places and Biased MNISTv2, respectively. In terms of model calibration, OccamNet reduces the expected calibration error (ECE) to some extent, yet t... | D |
Vision transformer, a strong competitor of convolutional neural networks (CNNs), has been widely adopted in various vision tasks [45, 48, 84, 85, 86, 87, 88, 89, 90, 91, 92], due to its powerful ability to model global connections among all the input tokens. Specifically, ViT [45] splits an image into patches to con... | In terms of designing vision transformers to use contextual information, Focal Transformer [85] introduces both fine-grained and coarse-grained attention in architecture design to explore local and global contexts in the image. Though our proposed methods also focus on learning contexts, there are significant differenc... | To improve segmentation using transformers, some methods [89, 97, 52, 98, 99, 100, 101] have been developed. SETR [89] and Panoptic SegFormer [99] are the first transformer-based models for image and panoptic semantic segmentation, respectively. Generally, these works use transformers to generate global-context-aware f... | Vision transformer, a strong competitor of convolutional neural networks (CNNs), has been widely adopted in various vision tasks [45, 48, 84, 85, 86, 87, 88, 89, 90, 91, 92], due to its powerful ability to model global connections among all the input tokens. Specifically, ViT [45] splits an image into patches to con... | For global temporal contexts, few VSS methods [17, 53] have exploited the contexts from the whole video. The modeling of global temporal contexts is usually achieved by a memory module in the form of a memory bank [17] or a tiny network [53] which is updated during inference. Although promising results have been achiev... | B |
The non-popular μ-batch accesses arbitrary embeddings. The memory controller sends a direct memory access (DMA) request to the DMA engine for not-frequently-accessed embeddings and initiates a GPU read memory request for frequently-accessed embeddings. The Reducer block processes working parameters from CP... | Figure 20 demonstrates the latency breakdown of Hotline and three hybrid baselines. The Criteo Kaggle and Terabyte datasets, which are more embedding and memory intensive, comprise high CPU–GPU communication time. Hotline eliminates the CPU–GPU communication time for the popular μ-batch, which is completely execu... | Our Approach: Hotline introduces a novel accelerator that utilizes parallel lookup engines capable of performing fine-grained tasks. These tasks include determining whether an input is a high-access or low-access value, fragmenting the mini-batches to form new μ-batches, and enabling efficient parameter ga... | Our Approach: Hotline partitions each mini-batch into two micro-batches (μ-batches). The inputs in a μ-batch either access only frequently-accessed embeddings or any arbitrary embeddings. First, Hotline schedules the μ-batches that access only frequently-accessed embeddings on the... | Figure 12: The execution pipeline of Hotline involves the accelerator actively classifying a mini-batch into popular and non-popular μ-batches, then scheduling the popular μ-batch onto the GPU(s). Simultaneously, the accelerator gathers the working parameters for the non-popular μ... | D |
Because the parameters for $F$ are selected i.i.d. from a distribution which is symmetric about the origin, the following construction of the network $F^{\prime}$ has the same distribution as $F$: Selecting a network from the same distr... | Next, we may investigate how the sublevel sets $F_{\leq t}$ (and superlevel sets $F_{\geq t}$) change as we vary $t\in[-M,M]$... | The idea of using Betti numbers of sublevel sets as a measure of complexity for neural network functions was initiated in [3]. Recall that the Betti numbers of a topological space are the ranks of its homology groups (see e.g. [10]), and they are invariants of the space up to homeomorphism (in fact, up to the weaker n... | Indeed, we believe that the probability that a given ReLU neural network function is non-PL Morse grows with depth. Thus, studying the topology of the sublevel sets of ReLU neural network functions requires establishing more flexible results governing how the sublevel set varies with the threshold. We will introduce su... | The purpose of this section is to define and study algebro-topological invariants of the level sets and sublevel sets of ReLU neural network functions. We extract local and global notions of topological complexity of ReLU neural network functions by studying how the homology of their level sets and sublevel sets changes ... | D |
In this paper, we need to evolve the system from an initial ground state with a strong transverse magnetic field to a state without any transverse magnetic field. In order to obtain this initial state, we have to train an initial wave function whose parameters are random complex numbers with machine learning algorithms,... | These universal power-laws beyond KZM can be achieved by computing the higher order cumulants of the kink numbers. Since we adopt the PBC in the spin chain, the outcomes of the kink numbers are all even [45, 46]. Thus, in practice we compute the higher cumulants for the numbers of kink pairs, i.e., $\hat{\mathcal{N}}_{P}=\hat{\mathcal{N}}/2$... | Utilizing the machine learning method (see Section IV “Materials and Methods”), we consider the time evolution of a one-dimensional TFQIM (1) under a linear quench (2) through the critical point with various quench rates. We impose PBC on the spin chains, so they satisfy even parity [40]. The coupling strengths bet... | In [15] the machine learning method was merely applied to unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and has richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic... | After preparing the initial ground state, the system will evolve according to the quench profile (2). Therefore, the wave function should also depend on time. To this end, we render the neural network parameters to be functions of time. Then, the parameters will be computed at every time step with the time-dependent VM... | D |
Table 2: Error analysis for different $h$ with $\Delta t=10^{-3}$. The time level at which the error is computed is $T=1$; $R_{e}=10^{4}$... | Then we choose different time intervals $\Delta t$ to observe the performance of our model. Table 3 shows the results for different $\Delta t$. We can conclude that the space interval has more influence on the error than the time interval. | In order to show the effect of different space intervals and time intervals, two groups of comparison tests have been done. First, we choose different space intervals $h$ to observe the performance of our model. Table 2 shows the results for different $h$. | We report the numerical error analysis obtained by solving the Navier–Stokes equation with $R_{e}=10^{4}$ for the model problem. The interval of the mesh $h=1/16$... | Table 3: Error analysis for different $\Delta t$ with $h=1/16$. The time level at which the error is computed is $T=1$; $R_{e}=10^{4}$... | A |
Being a model-agnostic technique, to realize what parts of the input are involved in the prediction, LIME perturbs the query point by creating samples around its neighborhood and observes how the model performs for the perturbed samples. Next, the samples are weighted with regard to their proximity to the original quer... | SHAP (Shapley Additive Explanations) [lundberg2017unified] is another model-agnostic framework for explaining individual predictions made by machine learning models. SHAP values are based on cooperative game theory concepts, specifically the Shapley value, which allocates a fair contribution to each feature in the pred... | Similar to SHAP, QII (Quantitative Input Influence) [datta2016algorithmic] also uses Shapley values to explain individual predictions, yet, instead of adopting the conditional approach used in SHAP, QII draws ideas from causal inference and follows an interventional approach. The QII method addresses feature correla... | Another related topic is the body of work on local interpretation methods for explaining individual predictions [molnar2020interpretable]. LIME provides local explanations for a model’s prediction behavior on query points by substituting the original complex model with a locally interpretable surrogate model. | Therefore, the data science company would like to provide additional means alongside the model itself to help with the reliability question regarding individual predictions, i.e., although the model appears to be accurate on average, is it reliable for this individual prediction as well? Furthermore, | A |
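The Shapley-value idea behind the SHAP and QII cells in the row above can be illustrated with a small, self-contained sketch. This is not the SHAP library's API: `exact_shapley`, the toy `model`, and the convention of replacing an "absent" feature with a baseline value are all assumptions made here for illustration.

```python
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Exact Shapley values for a black-box `predict`:
    each feature's value is its marginal contribution averaged over all
    subsets, with absent features replaced by baseline values."""
    n = len(x)

    def value(subset):
        # Evaluate the model with features in `subset` taken from x,
        # and all other features taken from the baseline.
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return predict(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# Toy linear model: for linear models, phi_i = w_i * (x_i - baseline_i).
model = lambda z: 3 * z[0] + 2 * z[1] - z[2]
phi = exact_shapley(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# phi == [3.0, 2.0, -1.0], and sum(phi) == model(x) - model(baseline)
```

The exhaustive loop over subsets is exponential in the number of features, which is why practical tools approximate these values by sampling; the sketch only shows the quantity being approximated.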