| context (string, 250–7.19k chars) | A (string, 250–4.62k chars) | B (string, 250–8.2k chars) | C (string, 250–3.89k chars) | D (string, 250–4.12k chars) | label (string, 4 classes) |
|---|---|---|---|---|---|
Table 7: Comparison with other discriminator assignment strategies such as Minimum, Random, and GT-Assign.
GT-Assign links an expert discriminator with real samples using the ground-truth class labels under our multi-discriminator framework but is unrealistic due to the requirement of the labels. | Then the loss is computed as the KL divergence of the probability distribution of discriminators being selected as experts from $\bm{\mu}$.
To obtain the probability for discriminator selection, we apply the $\mathtt{softmax}$ function to the vector of $M$... | To achieve these goals, we employ a Multiple Choice Learning (MCL) [8] framework to learn multiple discriminators and update the generator via a set of expert discriminators, where each discriminator is associated with a subset of the true and generated examples.
Our approach, based on a single generator and multiple d... | In practice, our discriminators specialized to subsets of the training data outperform independent training of discriminators on the whole dataset (as in GMAN) and other discriminator assignment strategies such as minimum-score discriminator selection (the opposite of MCL-GAN's choice) and random selection.
Also, the performa... | Table 7: Comparison with other discriminator assignment strategies such as Minimum, Random, and GT-Assign.
GT-Assign links an expert discriminator with real samples using the ground-truth class labels under our multi-discriminator framework but is unrealistic due to the requirement of the labels. | C |
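The expert-selection mechanism quoted in this row (a softmax over the $M$ discriminator scores, then a KL loss against $\bm{\mu}$) can be sketched in a few lines. This is a minimal illustration rather than MCL-GAN's actual implementation: the tensor shapes, the KL direction, and the name `expert_selection_loss` are assumptions.

```python
import torch
import torch.nn.functional as F

def expert_selection_loss(scores: torch.Tensor, mu: torch.Tensor) -> torch.Tensor:
    """Softmax over M discriminator scores gives each discriminator's
    probability of being selected as the expert; the loss compares that
    distribution with mu via KL divergence (direction KL(mu || p) assumed)."""
    p = F.softmax(scores, dim=-1)                  # selection probabilities over M discriminators
    return F.kl_div(p.log(), mu, reduction="sum")  # sum_i mu_i * (log mu_i - log p_i)
```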
In the proposed DeepSC-S, the numbers of CNN modules and BRNN modules in the semantic encoder are 2 and 6, respectively. For each CNN module, the number of filters is 32, and for each BRNN module, the number of GRU units is 800. Moreover, two dense layers are utilized in the channel encoder with units 40 and 40, respec... |
In this section, we compare the performance of the proposed DeepSC-SR with that of the traditional communication systems under AWGN channels and Rayleigh channels, where accurate CSI is assumed at the receiver. The traditional communication systems include benchmark 1 and benchmark 2, which are introduced i... |
The second benchmark is a traditional communication system for transmitting text signals, named the text transceiver. Particularly, the speech signals are converted into text signals by the ASR model before being fed into the traditional system, and the text transcription is recovered at the receiver. For the system, the Huff... |
The first benchmark is a traditional communication system for transmitting speech signals, named the speech transceiver. Particularly, the input of the system is the speech signals, which are restored at the receiver. Moreover, the transcription is obtained from the recovered speech signals after passing through an automatic sp... | Regarding semantic communications for speech information, our previous work developed an attention mechanism-based semantic communication system to restore the source message, i.e., reconstruct the speech signals [18]. However, in this paper, we consider an intelligent task at the receiver to recover the text informat... | C |
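The encoder configuration quoted in this row's context column (2 CNN modules with 32 filters each, 6 BRNN modules with 800 GRU units each, and a 40/40-unit dense channel encoder) can be written down directly. The sketch below assumes kernel sizes, a 1-channel 1-D input layout, and a stacked bidirectional-GRU reading of the "6 BRNN modules"; none of those details are given in the excerpt.

```python
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Sketch of the DeepSC-S encoder stack described above; hyperparameters
    not stated in the excerpt (kernel size, input channels) are assumptions."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.cnn = nn.Sequential(                    # 2 CNN modules, 32 filters each
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.brnn = nn.GRU(input_size=32, hidden_size=800,  # 6 BRNN modules, 800 GRU units
                           num_layers=6, bidirectional=True, batch_first=True)
        self.channel_encoder = nn.Sequential(        # two dense layers, 40 and 40 units
            nn.Linear(2 * 800, 40), nn.ReLU(), nn.Linear(40, 40),
        )

    def forward(self, x):                # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)  # -> (batch, time, 32)
        h, _ = self.brnn(h)              # -> (batch, time, 1600)
        return self.channel_encoder(h)   # -> (batch, time, 40)
```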
Compared to MulPro [8] and MP [9], which focus on exploring the information within the current input sample, our method explores inter-sample information to find more supervision cues for the weakly supervised 3D semantic segmentation task. |
We propose a cross-sample feature reallocating module to reconstruct features and re-route gradients across the input pair based on point correlation. Hence, the supervision signals from labeled points can be propagated to unlabeled points across samples. |
In this paper, we propose a weakly supervised point cloud segmentation method with only 10% or 1% of the points being labeled. We develop cross- and intra-sample feature reallocating modules to densely propagate supervision from labeled points to unlabeled points. | As depicted in Figure 1, the first stage of our training process draws inspiration from [17, 18, 19]. Here, we select two samples with at least one overlapping class to serve as an input pair. The CSFR module is designed to facilitate the transfer of analogous features between these two samples. Unlike methods in [17, 1... |
In this section, we propose an intra-sample feature reallocating (ISFR) module to further propagate supervision from labeled points to unlabeled points within each sample and to fine-tune the network against the possible noise introduced by the CSFR module. | A |
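The cross-sample reallocation described in this row (reconstruct features and re-route gradients across the input pair based on point correlation) can be sketched as correlation-weighted feature mixing. The softmax-attention form and the scaling factor below are assumptions, not the paper's exact CSFR module.

```python
import torch

def reallocate(src: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Rebuild every point feature in `src` (Ns, C) as a correlation-weighted
    mixture of features from `ref` (Nr, C); losses on the reconstruction then
    back-propagate into `ref`, carrying supervision across samples."""
    corr = torch.softmax(src @ ref.T / src.shape[1] ** 0.5, dim=-1)  # (Ns, Nr) point correlation
    return corr @ ref

# Cross-sample use on an input pair with overlapping classes:
# f_a_rec, f_b_rec = reallocate(f_a, f_b), reallocate(f_b, f_a)
```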
Table 3:
Monocular 3D object detection results on the KITTI val set for the car category with the evaluation metric $\mathrm{AP}_{40}$. The results of the previous works are from [9]. Our approach significantly outperforms the previous state of the art on... | Table 2: Monocular 2D object detection results on the KITTI test set for all categories with the evaluation metric $\mathrm{AP}_{40}$. The metric $\mathrm{AP}_{40}$ is used for detection evaluat... | This guarantees the consistency between 2D and 3D boxes through the projection relationships in the proposed geometric formula, and ensures robust learning with the formula.
The enhanced baseline achieves 16.54%, 13.37%, 11.15% on easy, moderate and hard difficulty levels, respectively. | We report the enhanced baseline results of 3D monocular object detection in Table 4.
Overall, the baseline significantly increases $\mathrm{AP}_{40}$ performance over the original one by 3.76%, 3.54%, and 2.88% on the easy, moderate, and hard difficulty levels, re... | Extensive experiments conducted on the challenging KITTI [11] dataset clearly demonstrate the effectiveness of the proposed approach and show that our method achieves 13.81% in terms of the $\mathrm{AP}_{40}$ metric, which is 2.80% absolute $\mathrm{AP}_{40}$... | C
We argue that the wider the text segments are, the less well they reflect the characteristics of text, e.g., various spaces between text and characters, and that this affects the flexibility of the grouping process, resulting in poorer performance.
Although a width of 1-3 pixels achieved slightly higher precision of 90... | In this work, to realize the proposed visual-relational feature reasoning and capture the long-range dependency between text segments, relational features obtained from GCNs are fused with the visual features obtained from the FPN layers.
To address the dimensional difference between the relational and visual features,... | a Location-Aware Transfer (LAT) module to convert relational features produced by GCNs into visually compatible features. Then, a Fusion Decoding (FD) module is introduced
to fuse the relational features with the visual features to generate a Graph Guided Text Region (GGTR) map. Since relational features are a ready-made... |
Since FPN is unable to reason about the relationships between different text segments, we introduce a multi-modal Fusion Decoding (FD) module to capture the long-range dependency between individual text regions according to their relational features as well as FPN features. The final goal is to generate a Graph Guided Text R... | We fuse them with the visual features to complement the long-range dependency between text segments for generating graph-guided text regions.
However, the dimensionality of the relational features is different from that of the visual features, which means they cannot be fused directly. | A |
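The options in this row describe aligning the dimensionality of GCN relational features with FPN visual features before fusing them into a Graph Guided Text Region (GGTR) map. Below is a minimal sketch of that pattern; all dimensions, the 1×1-conv choices, and the assumption that relational features are already rasterized to the image grid are mine, not the paper's.

```python
import torch
import torch.nn as nn

class LATFusion(nn.Module):
    """Project relational features into the visual feature space (LAT-style),
    then fuse with FPN features and decode a GGTR map (FD-style)."""
    def __init__(self, rel_dim: int = 128, vis_dim: int = 256):
        super().__init__()
        self.lat = nn.Conv2d(rel_dim, vis_dim, kernel_size=1)  # resolve the dim mismatch
        self.fd = nn.Conv2d(2 * vis_dim, 1, kernel_size=1)     # fuse + decode GGTR map

    def forward(self, rel_map, vis_map):
        # rel_map: (B, rel_dim, H, W); vis_map: (B, vis_dim, H, W)
        rel = self.lat(rel_map)
        return torch.sigmoid(self.fd(torch.cat([rel, vis_map], dim=1)))
```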
$\mathcal{C}^{W}$
$= \big(\Delta_{W}\mathcal{K}^{W}\Delta_{W}\big)^{+} \cup \Delta_{W}\,,$ | $= \Delta \cup \mathcal{B}^{W} \cup \mathcal{B}^{-W} \cup \mathcal{K}^{W} \cup \big(\mathcal{B}^{W} \cup \mathcal{K}$... | $= (\mathcal{E}^{W})^{-1} = \mathcal{E}^{-1}\Delta_{W^{\mathsf{c}}}\,,$
| $= \mathcal{E}\big(\Delta_{W^{\mathsf{c}}}\mathcal{E}\big)^{*} = \mathcal{E}\,\mathcal{E}^{W*}\,,$
| $= \big(\Delta_{W}\mathcal{K}^{W}\Delta_{W}\big)^{+} \cup \Delta_{W}\,,$
| D |
The computational cost of TLMB is linearly proportional to the number of IP addresses with a limited memory resource, and the memory use of SSMB remains almost unchanged with a reasonable computational cost, regardless of the number of IP addresses.
| The remainder of this paper is organized as follows. We briefly describe some of the original sorting techniques as well as various parallel sorting algorithms in Section 2. In Section 3, we introduce the proposed method. The experimental results are presented in Section 4. Finally, we draw the conclusions of the study... |
We formally present a storage strategy for IP addresses that consists of two layers, each comprising a limited number of memory blocks. The first layer contains $256\times 256$ memory blocks. The first three parts of the IP address can be mapped into the corresponding position of the element in a pa... | We traverse all elements of the memory blocks of the second layer to obtain the maximum number of occurrences of elements if $k=1$. Otherwise, we construct a minimum heap of size $k$. The statistical results of the first $k$ IP addresses are saved in the heap, which is a special binary... | In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu... | A |
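The second-layer scan and the size-$k$ minimum heap described in this row map directly onto standard heap code. The sketch below replaces the two-layer 256×256 memory-block layout with a plain counter for brevity; only the top-k selection logic follows the excerpt.

```python
import heapq
from collections import Counter

def top_k_ips(ip_stream, k):
    """Count IP occurrences, then keep a size-k min-heap so the entry with the
    smallest count is evicted first; the heap ends up holding the k most
    frequently occurring IP addresses."""
    counts = Counter(ip_stream)
    heap = []                                  # min-heap of (count, ip), size <= k
    for ip, c in counts.items():
        if len(heap) < k:
            heapq.heappush(heap, (c, ip))
        elif c > heap[0][0]:
            heapq.heapreplace(heap, (c, ip))   # evict the current minimum
    return sorted(heap, reverse=True)
```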
For block tridiagonal systems, by comparing the theoretical analysis for the nested Schur complement based preconditioners and that for the additive type preconditioners, our argument is that permutation is important and necessary when designing preconditioners.
These results are instructive for devising the correspond... | For block tridiagonal systems, by comparing the theoretical analysis for the nested Schur complement based preconditioners and that for the additive type preconditioners, our argument is that permutation is important and necessary when designing preconditioners.
These results are instructive for devising the correspond... | In this study, we explore two methodologies for designing preconditioners tailored for 3-by-3 block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ... | The outline of the remainder of this paper is as follows. In Section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of its Schur complement; we then construct and analyze the block-triangular and block-diagonal preconditioners base... | The authors would like to thank Mingjian Ding and Baoxuan Zhu for providing an alternative proof of the Hurwitz stability of polynomials (25). They also thank Jarle Sogn for communicating on Schur complement based preconditioners.
The work of M. Cai is partially supported by the NIH-RCMI grant through 347 U54MD013376,... | D |
As a motivating example, we consider two smartphone application providers who wish to train a global model over the datasets stored on the smartphones of their respective customer bases.
Here, the two application companies do not want to share the customer data directly with each other. At the same time, the users do n... | If we map this system model on the bank and insurance example in Section 1, there are two silos, a banking silo corresponding to a banking holding company and an insurance silo corresponding to an insurance holding company. The clients in the banking silo are subsidiary banks of the banking holding company,
and similar... | We can think of another use case when there is a significant overlap in the customer ID space
between a banking silo $\mathcal{B}$, e.g., a banking holding company with multiple independent subsidiary banks, and an insurance silo $\mathcal{I}$, e.g., an insurance holding company with multiple ... | acts as a client and may have a separate customer base, operating in a different geographical location in a country. Thus the entire customer data of the bank silo is horizontally partitioned across its subsidiaries in the bottom tier.
Silo $\mathcal{I}$ for the insurance holding company also has a similar... | We note that for each sample $p$, $\mathbf{X}_{j}^{(p)}$ is only held in a single client in silo $j$.
A sample ID in the banking and insurance example corresponds t... | B |
Let $\mathcal{A}=\mathcal{B}*\mathcal{C}$. It can be easily verified that $\mathcal{A}\in\mathbb{C}^{n_{1}\times n_{2}\times n_{3}}$... | The generalization of eigenvalues from matrices to tensors has been studied through the implementation of tensor-tensor multiplication. Significant attention and extensive research have been devoted to this field, resulting in a substantial body of work focused on their variants, applications, and theoretical analysis.... | The usefulness of tensor-tensor multiplication (3) has been demonstrated in various domains in recent years, including but not limited to image processing (such as image deblurring and compression, object and facial recognition), tensor principal component analysis, tensor completion, multilinear control systems, and p... | Then, the notion of T-eigenvalues was introduced by Miao, Qi and Wei [Miao2020T] and also Liu and Jin [Jin2020], establishing a fundamental and significant concept. Alternative versions and formulations of eigenvalues of third-order tensors in the context of tensor-tensor multiplication have also been explored by Qi and ... | Pseudospectra localizations for matrices or (generalized) tensor eigenvalues [qi2005eigenvalues] have several applications, including the search for more positive definite tensors [LiLiuWei2019] and testing the stability of dynamical systems [KostiPseudospectra2016]. The study of pseudospectra for T-eigenvalues of tensors... | B
Figure 2: Overview of the proposed method (best viewed in color). Generator: Image inpainting is cast into two subtasks, i.e., structure-constrained texture synthesis (left, blue) and texture-guided structure reconstruction (right, red), and the two parallel-coupled streams borrow encoded deep features from each other... | The traditional methods can be mainly summarized into two categories, i.e., diffusion-based and patch-based. Diffusion-based methods [3, 1] render missing regions by referring to the appearance information of the neighboring ones; their results are often unsatisfactory due to this rudimentary searching mechanism. In patch-based me... | Figure 5 compares our results with those of representative methods, including the current state of the art, on the three benchmarks. It can be seen that, as a classical patch-based method, PatchMatch [2] fails to handle large holes. PConv [13] is suitable for irregular corruptions, but obvious artifacts can be obser... |
As with most computer vision problems, image inpainting has been largely advanced by the widespread use of deep learning during the past decade. Different from the traditional methods [2, 5] that gradually fill in missing areas by searching for the most similar patches from known regions, the deep generative ones [19,... |
To deal with this problem, a number of multi-stage methods are proposed to explicitly incorporate structure modeling, which hallucinate structures of missing regions in the first stage and use them to guide pixel generation in the second stage. For instance, EdgeConnect [18] encodes such structures by edges, while [20... | A |
In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\varepsilon$. The concept of the BEC was first introduced by Elias in 1955 [InfThe]. Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ... | In this paper we carried out an in-depth study of the average decoding error probabilities of the random parity-check matrix ensemble $\mathcal{R}_{m,n}$ over the erasure channel under three decoding principles, namely unambiguous de... |
First recall that the error exponents of the average decoding error probability of the ensemble $\mathcal{R}_{(1-R)n,n}$ over the erasure channel under the three decoding principles are defined by |
The problem of decoding linear codes over the erasure channel has received renewed attention in recent years due to their wide application in the Internet and distributed storage systems for analyzing random packet losses [Byers; Luby; Lun]. Three important decoding principles, namely unambiguous decoding, maximum ... | In particular, in [FFW], upon improving previous results, the authors provided a detailed study of the decoding error probabilities of a general $q$-ary linear code over the erasure channel under the three decoding principles. Via the notion of $q^{\ell}$... | C |
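The BEC defined in this row's context cell is straightforward to simulate; a minimal sketch, with `None` standing in for the erasure symbol:

```python
import random

def bec(bits, eps, rng=random.Random(0)):
    """Binary erasure channel: each input symbol is received correctly with
    probability 1 - eps and totally erased (None) with probability eps."""
    return [b if rng.random() >= eps else None for b in bits]

# e.g. bec([0, 1, 1, 0, 1], eps=0.4) might return [0, 1, 1, None, 1]
```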
Assuming that this problem was solved, a generated subgoal still remains to be assessed. The exact evaluation may, in general, require exhaustive search or access to an oracle (in which case the original problem is essentially solved). Consequently, it is unlikely that a simple planner (e.g., one unrolling independent... | Reasoning is often regarded as a defining property of advanced intelligence [39, 18]. When confronted with a complicated task, humans’ thinking process often moves from one idea to a related idea, and progress is made through milestones, or subgoals, rather than through atomic actions that are necessary to transiti... |
Assuming that this problem was solved, a generated subgoal still remains to be assessed. The exact evaluation may, in general, require exhaustive search or access to an oracle (in which case the original problem is essentially solved). Consequently, it is unlikely that a simple planner (e.g., one unrolling independent... | More generally, concepts related to goals and subgoals percolated to reinforcement learning early on, leading, among others, to prominent ideas like hindsight [22], hierarchical learning [47, 7] or the Horde architecture [46]. Recently, with the advent of deep reinforcement learning, these ideas have been resurfacing a... |
The deep learning revolution has brought spectacular advancements in pattern recognition techniques and models. Given the hard nature of reasoning problems, these are natural candidates to provide search heuristics [4]. Indeed, such a blend can produce impressive results [43, 44, 36, 1]. These approaches seek solution... | C |
In practice, it is extremely hard for those pre-trained language models to tackle this problem. Currently, the tasks for pre-training Chinese language models mainly focus on the semantic domain, neglecting glyph and phonetic features. However, most character substitution cases exist in the glyph and phonetic domains... | In this paper, we propose a lightweight method, Multi-feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), which fuses extra glyph and phonetic features to detect possible substitution forms of named entities in Chinese. On top of using pre-trained models to represent the semantic feature, we choose... | Based on the experimental results above, MFE-NER is able to reduce the negative influence of the character substitution phenomenon in Chinese Named Entity Recognition, while slightly improving the overall performance of NER models. It makes sense that MFE-NER is suitable for solving character substitution problems because g... | Nowadays, the informal language environment created by social media has deeply changed the way people express their thoughts. Using character substitution to generate new named entities has become a common linguistic phenomenon and a major challenge for NER. In this paper, we propose a lightweight method fusing th... |
our MFE-NER is a lightweight Named Entity Recognition method fusing the glyph and phonetic feature embeddings for Chinese character substitution, which is complementary to pre-trained language models in the representation of Chinese characters. As shown in Figure 2, MFE-NER introduces an extra module, fusing glyph emb... | A |
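The fusion idea in this row (glyph and phonetic embeddings complementing a pre-trained semantic representation) can be sketched as embedding concatenation. All dimensions, the vocabulary size, and the linear projection are assumptions; the paper's exact fusion may differ.

```python
import torch
import torch.nn as nn

class MultiFeatureEmbedding(nn.Module):
    """Concatenate a pre-trained semantic embedding with learned glyph and
    phonetic embeddings per character, then project back down."""
    def __init__(self, sem_dim=768, glyph_dim=64, phon_dim=64, vocab=21128):
        super().__init__()
        self.glyph = nn.Embedding(vocab, glyph_dim)
        self.phonetic = nn.Embedding(vocab, phon_dim)
        self.proj = nn.Linear(sem_dim + glyph_dim + phon_dim, sem_dim)

    def forward(self, sem_emb, char_ids):
        # sem_emb: (B, T, sem_dim) from a pre-trained LM; char_ids: (B, T)
        fused = torch.cat([sem_emb, self.glyph(char_ids), self.phonetic(char_ids)], dim=-1)
        return self.proj(fused)
```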
To generate étendue-expanded high-fidelity holograms, we propose a computational inverse-design method that learns the wavefront modulation of the neural étendue expander by treating it as a layer of trainable neurons that are taught to minimize a loss placed on the formed holographic image, see Fig. 1c. | Next, we analyze the expansion of étendue achieved with the proposed technique. To this end, suppose we want to generate the étendue-expanded hologram of only a single scene.
Then, the optimal complex wavefront modulation for the neural étendue expander would be the inverse Fourier transform of the target scene, and, a... | To assess whether the optimized neural étendue expander $\mathcal{E}$, shown in Fig. 1b, has learned the image statistics of the training set, we evaluate the virtual frequency modulation $\widetilde{\mathcal{E}}$, defined as the spectrum of the generated image with th... | The differentiability of Eq. (2) with respect to the modulation variables $\mathcal{E}$ and $\mathcal{S}$ allows us to learn the optimal wavefront modulation of the neural étendue expander $\mathcal{E}$ by jointly optimizing the static neural étendue expander in conjunction with... | Specifically, we model the holographic image formation in a fully differentiable manner following Fourier optics. We relate the displayed holographic image $I$ to the wavefront modulation of the neural étendue expander $\mathcal{E}$ as
| D |
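The differentiable Fourier-optics image formation referenced in this row's options can be sketched as below. The phase-only SLM model, the upsampling factor, and the single-FFT far-field propagation are assumptions; the excerpt's Eq. (2) is elided in the row and is not reproduced here.

```python
import torch

def holographic_image(slm_phase: torch.Tensor, expander: torch.Tensor, upsample: int = 4):
    """Form the displayed image I from an SLM phase pattern and the expander
    modulation E: upsample the SLM field, apply E, propagate with an FFT, and
    take the intensity. Everything is differentiable w.r.t. both inputs."""
    field = torch.exp(1j * slm_phase)                                    # phase-only SLM field
    field = field.repeat_interleave(upsample, -1).repeat_interleave(upsample, -2)
    field = field * expander                                             # neural étendue expander E
    return torch.fft.fftshift(torch.fft.fft2(field)).abs() ** 2         # image intensity I
```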
The architectures of MTL models depend on the characteristics of the intended tasks as well as the design of the base models. When training generative models on instruction following, people usually train the entire model and focus more on data curation. We refer interested readers to another survey paper on instructio... | The simplest form of parallel architecture is a parallel feature sharing architecture (Fig. 1(a)), where the models for different tasks share a base feature extractor (i.e., the trunk) followed by task-specific encoders and output layers (i.e., the branches). A shallow trunk can be simply the word representation layer ... |
The hierarchical architecture models the hierarchical relationships between tasks. Such architecture can hierarchically combine features from different tasks, take the output of one task a... | The hierarchical architecture considers hierarchical relationships among multiple tasks. The features and output of one task can be used by another task as an extra input or additional control signals. The design of hierarchical architectures depends on the tasks at hand and is usually more complicated than parallel ar... | Instead of aggregating features from different tasks as in feature fusion architectures, pipeline architectures treat the output of a task as an extra input of another task and form a hierarchical pipeline between tasks. In this section, we refer to output as the final result \replacedforfrom a task, including the fina... | B |
Welcome to the updated and simplified documentation for using the IEEEtran LaTeX class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy-to-follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer bac... | The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... | The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates; however, some publications have... | It is assumed that the reader has a basic working knowledge of LaTeX. Those who are new to LaTeX are encouraged to read Tobias Oetiker’s “The Not So Short Introduction to LaTeX”, available at: http://tug.ctan.org/info/lshort/english/lshort.pdf which provides an overview of working with LaTeX.
|
Welcome to the updated and simplified documentation for using the IEEEtran LaTeX class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy-to-follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer bac... | C |
$H,\ell,\mathcal{C}\models\phi_{adj}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$... | if (i) $\ell(u_{1})\in W_{i}$ and $\ell(u_{2})\in W_{i^{\prime}}$... | $H,\ell^{\prime},\mathcal{C}^{\prime}\models\phi_{adj}[x_{1}\setminus u_{1}][x_{2}\setminus u_{2}]$... | the coloring $c:V(G)\to[3]$ defined in this way is proper. Consider $i,i^{\prime}\in[n]$ such that $(v_{i},v_{i^{\prime}})\in E(G)$... | and only if $\ell(u_{1})\in W_{i}$ and $\ell(u_{2})\in W_{i^{\prime}}$... | D |
The final coefficient of primary interest ($\theta_{5}$) is on the interaction between the treatment effect and our measure of direct reciprocity. The large magnitude and significance of this estimate indicate that direct reciprocity is indeed a strong driver... |
After the main parts of the experiment, subjects were asked a series of survey questions to elicit measures of their individual characteristics. All of these questions were aimed at eliciting a subject’s heterogeneous preferences toward trust and reciprocity. The full set of survey questions is reproduced in Appendix ... | The two principal components of reciprocity—overall reciprocity and positive reciprocity—also explain much of the variation in behavioral patterns. Overall reciprocity, which captures both a taste for positive and negative reciprocity, has a nuanced effect in the two conditions. In the baseline, subjects with a higher ... |
The estimates of our structural parameters are robust to the inclusion of several heterogeneous individual characteristics, which we construct based on subjects’ responses to a post-experiment survey. We distill the responses down to three key attributes using a principal components analysis, representing trust, overa... |
The characteristic that we describe as overall reciprocity consists of positive weights on the answers to all of the questions in the reciprocity questionnaire. This includes both questions about positive reciprocity (e.g. “If someone does me a favor, I am prepared to return it”), as well as negative reciprocity (“If ... | A |
Most methods simply seek to reconstruct SR images with high PSNR and SSIM. However, the improvement in reconstruction accuracy is not always accompanied by an improvement in visual quality. Blau et al. (Blau and Michaeli, 2018) pointed out that there was a perception-distortion trade-off. It is only possible to improv... | Pixel Loss: Pixel loss is the simplest and most popular loss function in SISR, which aims to measure the difference between two images on a pixel basis so that these two images can converge as close as possible. It mainly includes the L1 loss, Mean Square Error (MSE) Loss, and Charbonnier loss (a differentiable variant... |
Content Loss: Content loss is also termed perceptual loss, which uses a pre-trained classification network to measure the semantic difference between images, and can be further expressed as the Euclidean distance between the high-level representations of these two images: |
L1 + Perceptual Loss: combining L1 loss with perceptual loss, such as the feature loss based on VGG networks, can generate images that are clearer and have better details. This combination can effectively reduce noise and distortion in the image; L1 + TV Loss: combining L1 loss with total variation (TV) loss can gener... | Perceptual Loss: Although pixel-wise losses, i.e., L1 and MSE loss, have been widely used to achieve high image quality, they do not capture the perceptual differences between the SR and HR images. In order to address this problem and allow the loss functions to better measure the perceptual and semantic differences be... | D |
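The content/perceptual loss defined in option B and the "L1 + perceptual" combination in option C can be sketched together; `feat_extractor` (e.g., truncated VGG features) and the weight `lam` are assumptions.

```python
import torch.nn.functional as F

def perceptual_loss(feat_extractor, sr, hr):
    """Euclidean distance between high-level representations of the SR and HR
    images produced by a pre-trained network."""
    return F.mse_loss(feat_extractor(sr), feat_extractor(hr))

def combined_loss(feat_extractor, sr, hr, lam=0.1):
    """L1 pixel loss plus weighted perceptual loss, as in the 'L1 + Perceptual
    Loss' combination described above; lam is an assumed hyperparameter."""
    return F.l1_loss(sr, hr) + lam * perceptual_loss(feat_extractor, sr, hr)
```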
Table 2: We compare the blind super-resolution performance achieved by a conventional coordinate , a -based internal learning framework of SinGAN and our method. We compute PSNR (↑) and SSIM (↑) for a number of upscaling factors and downsampling kernels. | Figure 3: Ablation study of Neural Knitwork components. Conventional does not produce coherent inpainted region and this is improved with the introduction of patches. Further, imposing cross-patch consistency constraint increases the quality of the synthesized region while employing a approach ensures patches of high... | The reconstruction quality of the whole image is comparable for the three tested methods. However, where the inpainted region is concerned, we observe a significant improvement of over 4 dB for the Neural Knitwork compared to the conventional coordinate and 2 dB less than the -based technique. For some of the results, the ... | Reconstructed Pixel Loss The transition from predicting isolated pixel colors to patches introduces a new trade-off between imposing spatial relationships of the pixel colors and obtaining a high fidelity image with accurate detail. In practice, there will be some disagreement between the predictions for the same pixel... | As we demonstrate in Figure 8, a standard network has limited denoising capability because it attempts to fit all pixel colors with no additional constraints. In contrast, a Neural Knitwork ensures that both patches and pixel colors are reliably reconstructed while imposing additional consistency constraint on the der... | D |
Due to the previously mentioned connection between apple tasting and logistic bandits, state-of-the-art algorithms for those problems are also worth mentioning. Beyond TS approaches, much of the literature (Faury et al.,, 2020; Abeille et al.,, 2021; Faury et al.,, 2022; Lee et al.,, 2023; Zhang and Sugiyama,, 2024) ha... |
The present paper is the first work we are aware of that specifically applies TS to apple tasting, but previous work has considered its use for logistic bandits. For logistic contextual bandits, the implementation of exact TS (i.e. the policy that draws its sample from the exact posterior) is infeasible due to the intract... | In the related setting of the logistic contextual bandit, Dumitrascu et al. (2018) introduce an approximate variant of TS which uses Pólya-Gamma (PG) augmentation to admit efficient sampling. We adapt this to give an approximate TS policy for LCAT, PG-TS, in Algorithm 1. It utilises a Gibbs sampler for the unknown par... | Our motivations for a renewed treatment of apple tasting are threefold. First, despite the existence of alternative theoretically justified approaches, new developments in the theoretical understanding of TS allow us to derive guarantees for the empirically superior TS policy. Second, the apple tasting setting strike... | The regret results of Section 2 are based on exact sampling from the posterior. The PG-TS algorithm necessarily samples from an approximation of the posterior, to maintain a reasonable computational overhead. Recent work of Phan et al. (2019) has identified conditions under which sampling from an approximate poste... | A |
As an initial step, we consider several memory-related metrics already introduced in [12]. These metrics evaluate memory usage from several perspectives, covering typical information retrieval analysis concerning top-K memory ranking and purely memory-oriented statistics like target coverage. Such metrics will also be... |
One problem is how to define an appropriate activation threshold $\delta$, i.e., the minimum value at which a memory slot is considered to have been used by the model. While in some cases, like ToS, a $\delta=0.5$ threshold could be meaningful enough, in other cases, like for the IBM2015 dataset, we pr... |
Figure 3: MemDistilBERT analysis on IBM2015, 1-Topic. (a) P@K for increasing $K$ values and $\delta=0.25$; (b) P@3 for increasing $\delta$ values. Metrics for sampling-based models are averaged across three distinct inferences on the test set. | Activation threshold $\delta$ is set to 0.5.
Best results are in bold, second-best results are underlined for MRR. Columns C to P@3 are not directly comparable among models due to their different memory usage. Standard deviation is reported in subscript format. | Differently from unfairness detection, the large memory size hinders selective memory lookup operations. Additionally, claim detection on the IBM2015 dataset is a challenging task where existing solutions reach comparably low performance [66, 67]. For these reasons, following previous work on the same dataset, we focus... | A |
While we believe a Bayesian approach to uncertainty can enrich SMC, we also argue that the use of Formal Methods in the (Bayesian) Machine Learning community opens the door for interesting application-driven methods for prediction and model evaluation and comparison. |
In this paper, we propose a framework for predictive model checking and comparison where, in addition to the usual approaches, we advocate the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poste... |
The proposed framework advocates for exploiting the rich information that Bayesian predictive inference offers in the form of draws from the posterior predictive distribution of future values, by evaluating models also based on properties that can be directly translated into decision-making. Therefore, we showcase how... | (i.e., post-estimation). However, predictions obtained from a (Bayesian) Machine Learning model are not directly translated into a decision but rather transformed, compressed, and combined with further rules or requirements relevant to the decision problem at hand. As a simple example, consider an algorithmic trading s... | Especially in high-dimensional, complex models, these requirements or properties relevant for decision-making are typically highly nonlinear functions of the random variables, and one is interested in their predictive distribution. Their verification ex-ante as well as their evaluation ex-post (as part of the posterior... | C |
to infinite $X$, e.g., $X=\mathbb{R}^{d}$, by Mercer’s Theorem.
The distance between $x^{\prime},y^{\prime}\in\mathcal{H}$... | Apply Lemma 3.2 to obtain $f:\mathcal{H}^{\prime}\to\mathbb{R}^{n+1}$,
then for all $C\subseteq\mathcal{H}^{\prime}$... | $\operatorname{dist}(x^{\prime},y^{\prime}):=\|x^{\prime}-y^{\prime}\|=\sqrt{\langle x^{\prime}-y^{\prime},\,x^{\prime}-y^{\prime}\rangle}$ | to infinite $X$, e.g., $X=\mathbb{R}^{d}$, by Mercer’s Theorem.
The distance between $x^{\prime},y^{\prime}\in\mathcal{H}$... | $C(x):=\{x^{\prime}\in X:\operatorname{NN}_{C}(x^{\prime})=\operatorname{NN}_{C}(x)\}$... | B
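The RKHS distance in this row expands through the inner product, so it is computable without an explicit feature map via the kernel trick: $\|x'-y'\|^{2} = k(x,x) - 2k(x,y) + k(y,y)$ for $x'=\phi(x)$, $y'=\phi(y)$. A minimal sketch, with the RBF kernel as an assumed example:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), a standard Mercer kernel."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def feature_space_distance(k, x, y):
    """||phi(x) - phi(y)|| in the RKHS, expanded via the inner product:
    <phi(x)-phi(y), phi(x)-phi(y)> = k(x,x) - 2 k(x,y) + k(y,y)."""
    return np.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))

# feature_space_distance(rbf_kernel, [0.0, 0.0], [1.0, 0.0]) ~= 1.124
```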
As we have seen in the previous section, $R$-Lipschitz doctrines are the objects of the 2-category $\mathbf{LLD}_{R}$.
In this section we study its properties, relating it with other 2-categories of doctrines. | In more detail, we first show that $R$-Lipschitz doctrines arise as coalgebras for a 2-comonad on $R$-graded ones.
This result smoothly generalises what happens in the non-linear setting and provides us with a universal construction of $R$-Lipschitz doctrines from $R$-graded ones. | This provides us with a universal construction yielding an $R$-Lipschitz doctrine from an $R$-graded one, and we use it to generate semantics for the calculus.
In Section 5.2 we relate quantitative equality with the usual one defined by left adjoints, formally proving that the former indeed refines the... | $R$-Lipschitz doctrines are the coalgebras of the 2-comonad induced by such a 2-adjunction.
On the one hand, this result provides us with a universal construction of $R$-Lipschitz doctrines, which gives us a tool to produce semantics for the calculus $\mathrm{LPLL}_{R}$. | which are doctrines modelling the $(\otimes,\mathbf{1})$-fragment of Linear Logic enriched by $R$-graded modalities, where $R$ is an ordered semiring of resources,
and introduce $R$-Lipschitz doctrines, namely, $R$-graded doctrines with quantitative e... | A
In this paper, on the basis of spanning rooted forests, we propose ForestSim, a new node similarity metric. ForestSim uses the average size of the trees rooted at node $u$ in the spanning rooted forests of the graph, denoted by $s(u)$, to capture its structural properties. Two node... |
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-k similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node $u$ in spanni... |
In this paper, on the basis of spanning rooted forests, we propose ForestSim, a new node similarity metric. ForestSim uses the average size of the trees rooted at node $u$ in the spanning rooted forests of the graph, denoted by $s(u)$, to capture its structural properties. Two node... |
Efficient top-k similarity search algorithm: We devise ForestSimSearch for top-k similarity search. ForestSimSearch can handle a top-k query in $O(k)$ time once the precomputation is finished. Furthermore, we use the fast approximate algorithm to compute the diagonal entries of the for... | In this section, we first define a new role similarity measure, namely ForestSim, based on spanning rooted forests. We then show that the ForestSim score can be expressed in terms of the diagonal elements of the forest matrix and prove that ForestSim is an admissible role similarity metric. After that, we propose Fores... | C
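A simplified sketch of the query pattern in this row: once the per-node statistic $s(u)$ is precomputed, similarity search reduces to finding nodes whose statistic is closest to the query's. (The paper's $O(k)$ query relies on additional precomputed ordering; the sketch below sorts on the fly for clarity.)

```python
import heapq

def top_k_similar(s: dict, u, k: int):
    """Return the k nodes v whose precomputed statistic s[v] is closest to
    s[u]; smaller |s(v) - s(u)| is read as higher ForestSim-style similarity."""
    return heapq.nsmallest(k, (v for v in s if v != u),
                           key=lambda v: abs(s[v] - s[u]))

# top_k_similar({"a": 3.1, "b": 2.9, "c": 7.0, "d": 3.0}, "a", 2) -> ["d", "b"]
```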
In this section, we introduce the settings of our experiments and report the experimental results.
We report all implementation details in the appendix, e.g., hyperparameter settings (Appendix 4.2), baseline introductions (Appendix 4.3), and additional experiments. |
(Footnote: We evaluate LSA on the Twitter Dong et al. (2014) dataset and report the experimental results in Section C.5. The processed datasets are available with the code in the supplementary materials.): Laptop14, Restaurant14, Restaurant15 and Restaurant16 datasets, |
Table 4: The traditional aspect sentiment classification performance on five public datasets; the best results are highlighted in bold font. | † indicates the results are the best performance over multiple runs, while other methods report the average performance. |
The experimental results are as expected and show the proficiency of LSA. | We utilize LSA to classify aspect sentiments and aggregate the sentiment clusters.
The cluster prediction performance in Table 3 shows that our models consistently outperform the baseline models on all datasets. The performance of LSA is dependent on the base model. | A |
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L}:\mathbb{R}^{N}\rightarrow\mathcal{P}^{p}(\mathbb{S})$... | In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic... | gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial
guess. The main ... | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of the predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... |
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me... | C |
We visualize the QNN accuracy contours on Fashion-4 on IBMQ-Athens with different noise factors and quantization levels in Figure 8 (left). The best accuracy occurs for factor 0.2 and 5 levels. Along the horizontal axis, the accuracy first goes up and then goes down. This is because too few quantization levels hurt the QNN model c... | Figure 3. QuantumNAT Overview. (1) Post-measurement normalization matches the distribution of measurement results between noise-free simulation and real QC. (2) Based on realistic noise models, noise injection inserts quantum error gates into the training process to increase the classification margin between classes. (3)... | Visualization of QNN extracted features. The MNIST-2 classification result is determined by which feature is larger: feature 1 is the sum of measurement outcomes of qubits 0 and 1; feature 2 is that of qubits 2 and 3. We visualize the two features obtained from experiments on Belem in a 2-D plane as in Figu... | We use QNN as the benchmark PQC in this work. Figure 2 shows the QNN architecture. The inputs are classical data such as image pixels, and the outputs are classification results. The QNN consists of multiple blocks. Each has three components: an encoder that encodes the classical values to quantum states with rotation gates su... |
Figure 4 compares the noise-free measurement result distribution of 4 qubits (blue) with their noisy counterparts (yellow) for MNIST-4. Qualitatively, we can clearly observe that the post-measurement normalization reduces the mismatch between the two distributions. | B
Robust model fitting methods can accurately estimate the model instances (e.g., lines, homography matrices, or fundamental matrices) in data contaminated with severe noise and a large number of outliers, and they have been applied in various areas. After the model fitting process, the motion information contained in the event... |
However, [gallego2018unifying] and other event-based data association methods [zhu2017event; gallego2019focus; peng2022globally] show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a... | Finally, the model hypotheses having the $N_{s}$ smallest weights are selected as the estimated model instances, as shown in Fig. 2(g), where the green and red lines are the estimated model instances. For each estimated model instance, we calculate its... |
In this paper, we propose a novel unifying event data association (EDA) approach to effectively and explicitly handle the essential event data association and event information fusion problem. The proposed EDA performs a model fitting on event data, which can asynchronously associate and fuse the event data over time ... | In this section, we describe all the components of the proposed EDA approach. The pipeline of the proposed EDA is illustrated in Fig. 2. First, we describe the sequential retinal events, and introduce an asynchronous fusion phase to gather the sequential retinal events, as illustrated in Fig. 2(a)-(d). Then, we present... | D |
As a connected vertex-ordering of $H$ can be obtained in linear time using a standard graph traversal algorithm, and a colouring of $G^{\prime}$ may be computed in $O(mn)$ time [12, Chapter 5.7], we concl... | Recently, connected greedy edge-colourings (equivalently, connected greedy colourings of line graphs) have been studied in [3], and it was proved that there is no line graph of a bipartite graph that is ugly. Moreover, a careful analysis of the proof of [3] gives an algorithm running in time $O(n^{4})$... | class of perfect graphs. We also give a simple and constructive proof for comparability graphs (which are perfect). Note that there exist bad graphs in these graph classes; consider for example the fish graph, which is $K_{4}$-minor-free and comparability; see...
We now prove our main result, that there are no ugly perfect graphs. This generalizes the same fact which was previously proved for Meyniel graphs [22] (a class which contains chordal graphs, HHD-free graphs, Gallai graphs, parity graphs, distance-hereditary graphs…) and line graphs of bipartite graphs [3]. Our proof ... | Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time $O(m+n)$ for chordal graphs using the LexBFS algorithm [21]; for Meyniel graphs this can be done in time $O(n^{2})$... | C
We further discuss the relation between SSL and DR tasks (the dynamic and static $\tilde{p}_{X}$) with GenURL to explain the desired structures of data in URL tasks. | Meanwhile, we find that the instance discrimination prior knowledge in SSL tasks is more useful for downstream tasks like clustering and classification in complex scenarios, as shown in Figure 13. Therefore, we can choose a proper URL task for various scenarios: the DR task is suitable for MNIST and FMNIST datasets whe... |
As for the limitations of GenURL, we can identify three aspects: (i) the proposed framework relies on offline hyper-parameter tuning to adapt to new URL tasks, which makes it tough to handle more than two input similarities; (ii) GenURL cannot deal well with the case of discrete empirical spaces, e.g., the SSL tasks, ... | Firstly, we compare the effects of hyper-parameters in GenURL for DR and SSL tasks. As shown in Figure 12, GenURL prefers smaller $\nu_{Z}$, i.e., using $\nu=0.01$ to balance the local and global structures. Figure 5 shows that... |
Then, we compare how GenURL deals with the negative samples in SSL and KD tasks. In Figure 7, we find that GenURL prefers similar $\nu_{Z}$ and $\sigma$ for both SSL and KD tasks, which indicates using $\nu_{Z}=100$... | A
We study several widely used deep network backbones designed for edge inference in Figure 5: MobileNetV2 [44] (MbV2), redistributed MobileNetV2 (MbV2-RD), Once-For-All CPU (OFA-CPU) [5], MnasNet [47], and FBNet-A [50]. All the networks use an input resolution of $224\times 224$; for patch-based infe... | in order to compute the non-overlapping output patches, the input image patches need to be overlapped (Figure 3(b)), leading to repeated computation.
The overhead is positively related to the receptive field of the initial stage: the larger the receptive field, the larger the input patches, which leads to more overlapp... | per-patch based inference reduces the measured peak SRAM by 4-6×.
Some models may have a large latency overhead, since the initial stage has worse hardware utilization. But with a proper architecture design (MbV2-RD), we can reduce the latency overhead to 4%, | Figure 5: Analytical profiling: patch-based inference significantly reduces the inference peak memory by 3.7-8.0× at a small computation overhead of 8-17%. The memory reduction and computation overhead are related to the network design. For MobileNetV2, we can reduce the computation overhead from 10% to 3% by re... | The memory saving and computation reduction are related to the network architecture. Some models, like MnasNet, have a larger overhead since they use large kernel sizes in the initial stage, which increases receptive fields.
This shows the necessity of co-designing the network architecture with the inference engine. | D |
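The overlap-overhead relationship this row describes (a larger initial-stage receptive field forces larger overlapping input patches, hence more repeated computation) can be estimated with a back-of-the-envelope model. The uniform-stride accounting below is an assumption, not the profiler used in the excerpt.

```python
def patch_overhead(out_size: int, n_patches: int, receptive_field: int, stride: int = 1) -> float:
    """Relative extra input pixels touched when an out_size x out_size map is
    produced as n_patches x n_patches non-overlapping tiles: each tile's input
    patch must be enlarged by the receptive field, so input patches overlap."""
    tile = out_size // n_patches
    per_patch_in = (tile - 1) * stride + receptive_field      # input side length per tile
    patched = (n_patches * per_patch_in) ** 2                 # patch-based input pixels
    whole = ((out_size - 1) * stride + receptive_field) ** 2  # whole-image input pixels
    return patched / whole - 1.0

# patch_overhead(56, 2, receptive_field=27) -> ~0.73: a large initial receptive
# field makes per-patch inference markedly more expensive, matching the text.
```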
Besides, we modeled label sequences jointly using a CRF to improve the performance of our method. The model's performance did not improve when the learning rate of the CRF was relatively small. To explore the effect of the CRF on BERT, we tried continuously increasing the learning rate of the CRF. Finally, we concluded that the learnin... |
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging framework, instead of extracting event types and event causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a ran... | In the model ensemble [9] stage, we adopt a simple and efficient approach and obtain a 1.30% boost (shown in Figure 2). We employed a two-step approach to get the final results. Firstly, we determined the serialization of the text boundary cross-validation results, with the character position of the pre... |
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and event causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a... |
Main Results: Table I shows the results of different models for CECE on two leaderboards. The single model we proposed overwhelmingly outperforms the baseline on all leaderboards and achieves encouraging 65.9% and 77.0% improvements in F1-score over the baseline meth... | D
Another bad example is the assembly in the top left-hand corner of IMDB-BINARY (point C). Though it has a high CC, the AC of this assembly is fairly low. In addition, there also exist assembly, whose AC and CC are both at low levels, such as point D on NCI1 and point E on IMDB-MULTI. With the above analysis, we further... | To cope with the problem of model collapse, we devise the asymmetric structure for CGCL. The asymmetry lies in the differences of GNN-based encoders’ message-passing schemes. Besides, graph encoders in CGCL are supposed to be complementary for a stronger fitting ability. Specifically, high complementarity indicate that... |
In CGCL, multiple graph encoders compute their own contrastive losses based on representations learned by others, and optimize their losses collaboratively. To check the reliability of collaborative mechanism, we empirically analyze the convergence in the optimization process of each individual encoder on PROTEINS and... |
For a further exploration of CGCL’s working mechanism, we test various assembly with respect to the two quantitative metrics AC and CC. The candidate encoders include GIN, GCN, GAT and Set2Set. Specifically, we combine two or three of graph encoders on six datasets and use the best result of multiple encoders as this ... | We first evaluate the representations learned by CGCL in the setting of unsupervised learning. We follow the same process of InfoGraph [26], where representations are learned by models without any labels and then fed into a SVM to evaluate the graph classification performance.
$\text{CGCL}_{GIN}$... | B
Theorem 1 can be viewed as a discrete version of Locatello et al., (2019, Theorem 1).
There are multiple ways of grounding the learning process, including imposing inductive biases on the agents and designing loss functions to disentangle features. We study these, together with a new mechanism: injecting noise into the... | Theorem 1 can be viewed as a discrete version of Locatello et al., (2019, Theorem 1).
There are multiple ways of grounding the learning process, including imposing inductive biases on the agents and designing loss functions to disentangle features. We study these, together with a new mechanism: injecting noise into the... | The noisy channel model of communication was famously introduced by Shannon, (1948). The idea of noise as a driving force in the emergence of communication was first proposed by Nowak and Krakauer, (1999), who showed that word-level compositionality is the optimal solution to the problem of communication in a noisy env... | We then formulate inductive biases in the loss function and prove that they are sufficient to achieve compositionality when coupled with communication over a noisy channel.
Consequently, we highlight the catalytic role of noise in the emergence of compositionality. | Another major conceptual finding is that compositional communication spontaneously emerges when introducing a relatively simple mechanism – a noisy channel. This is proved in Theorem 2, provided that the loss function penalizes agents’ mistakes, but also rewards for (partially) correct guesses.
| D |
We construct a safe ROCBF within the autonomous driving simulator CARLA [56] for a car driving on a road by using camera images, see Fig. 4. In particular, our goal is to learn a ROCBF for the lateral control of the car, i.e., a lane keeping controller, while we use a built-in controller for longitudinal control. Lane... |
In this paper, we have shown how safe control laws can be learned from expert demonstrations under system model and measurement map uncertainties. We first presented robust output control barrier functions (ROCBFs) as a means to enforce safety, which is here defined as the ability of a system to remain within a safe s... | Safety-critical systems rely on robust control laws that can account for uncertainties in system dynamics and state estimation. For example, consider an autonomous car equipped with noisy sensors that navigates through urban traffic [1]. The state of the car is not exactly known and estimated from output measurements, ... |
As we have no direct access to the system dynamics of the car, we identify a system model. The model for longitudinal control is estimated from data and consists of the velocity $v$ of the car and the integrator state $d$ of the PID. The identified longitudinal model of the car is |
We construct a safe ROCBF within the autonomous driving simulator CARLA [56] for a car driving on a road by using camera images, see Fig. 4. In particular, our goal is to learn a ROCBF for the lateral control of the car, i.e., a lane keeping controller, while we use a built-in controller for longitudinal control. Lane... | C |
With Theorem 14 in hand, proving that $\mathsf{PP}\not\subset\mathsf{QMA}^{\mathsf{QMA}^{\mathsf{QMA}^{\cdots}}}$... |
To elaborate further, we first take a random restriction that, by Theorem 14, turns all of the bottom-layer $\mathsf{QMA}$ gates into DNF formulas. Next, we apply another random restriction and appeal to the switching lemma to argue that these DNFs reduce to functions of low decision tree comple... | With Theorem 14 in hand, proving that $\mathsf{PP}\not\subset\mathsf{QMA}^{\mathsf{QMA}^{\mathsf{QMA}^{\cdots}}}$... | Notably, our proof of Theorem 7 does not appeal to Raz-Tal at all, but instead relies on a new random restriction lemma for the acceptance probabilities of quantum query algorithms. Our random restriction lemma shows that if one randomly fixes most of the inputs to a quantum query algorithm, then the algorithm’s behavi... | With all of these tools in hand, we prove in our next theorem that under an appropriately chosen random restriction, a circuit composed of $\mathsf{QMA}$ query gates simplifies to a function that is close (in expectation) to a function with low deterministic query complexity. The proof amounts to... | A
$\sum_{h=0}^{\infty}\big(\dim k[\mathbf{x}^{(\leqslant h)}]/I^{(\infty)}\big)\cdot t^{h}=\frac{m}{1-mt}\,?$ | Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]).
In particular, they used Hilbert-Poincaré series of similar nature to (1) (motivat... | The authors are grateful to Joris van der Hoeven, Hussein Mourtada, Bernd Sturmfels, Dmitry Trushin, and the referees for helpful discussions. We thank Yassine El Maazouz and Claudia Fevola for their support with making the Mathrepo webpage.
GP was partially supported by NSF grants DMS-1853482, DMS-1760448, and DMS-185... | The starting point of our proof of the lower bound uses the insightful conjecture by Afsharijoo [1, Section 5] suggesting how the standard monomials of $\mathcal{I}_{m}^{(\infty)}$ with ... | Note that the series does not depend on the multiplicity $m$ of the point.
One way to capture the scheme structure of $\mathcal{L}(X)$ could be to take the components of the projections in (3) with their multiplicities. | B
Since our model DCDFM has no limitation on the choice of distribution $\mathcal{F}$ as long as Eq (5) holds, setting $\mathcal{F}$ as any other distribution (see Double exponential, Exponential, Gamma and Uniform distributions in http://www.stat.rice.edu/~dobelman/courses/texts/distributions... | Both simulated and empirical data are presented to compare nDFA with the existing algorithm DFA developed in [16] for weighted networks, where DFA applies k-means on all rows of $\hat{U}$ with $K$ clusters to estimate nodes labels. Meanwhile, codes for all experimental results in... | (c) To measure performances of different methods on real-world weighted network with unknown information on nodes labels, we propose a general modularity as an extension of classical Newman’s modularity [23]. For weighted network in which all edge weights are nonnegative, the general modularity is exactly Newman’s ... |
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model is an extension of previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which node degrees vary, and it ... |
Meanwhile, $Q$, $OE$, $error$, $\tau$ and $T$ obtained by applying DFA and nDFA to adjacency matrices $A$ for the above four real-world networks with known nodes labels are reported in Table ... | A
We follow a common approach of sharing the policy network between agents. In some works, e.g., Rashid et al. (2020), to preserve individuality, the observations are enriched with agent ID. This might be beneficial if agents should be assigned different roles within the team. However, we find these benefits rather mino... | One can also use separate networks for each agent. We check that MA-Trace works considerably worse in such a case. In rare cases, using separate networks is advantageous, but only in the easiest tasks, e.g., 3s5z. See Figure 6 and details in Appendix C.1.
| et al. (2021), centralized training in some cases may suffer from higher variance. Therefore we compared MA-Trace with its decentralized version (i.e., having independent critics for each agent). The latter typically obtains weaker results and is less stable. See Figure 5 and details in Appendix B.
| In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the... |
We use standard feed-forward networks for the actor and critic networks with two hidden layers of 64 neurons and ReLU activations. The critic network of MA-Trace (obs) takes stacked observations of agents as input, while MA-Trace (full) utilizes the full state provided by SMAC. DecMA-Trace have a critic using si... | A
Suppose there are $N$ instances in our training data set. If we consider a binary classification problem, then we will have: $x_{i}\in\mathbb{R}^{n}$, $y_{i}\in\{-1,$... |
From the analyses and the overall score of the RF and AB models, we observe that the most performant models for RF consider only 2 features when splitting the nodes (i.e., max_features hyperparameter). The PCPs in Figure 7(d) enable us to scan the internal regions of the hyperparameters’ solution space for RF. As for ... | Boosting attaches weak classifiers (e.g., decision stumps or shallow decision trees) sequentially, each improving the predictions made by the previous models [Freund1996Experiments; Schapire1990Strength].
Stacking involves fitting many base models from different algorithms on the same data set and using a metamodel to ... | Random Forest. This algorithm works in two stages. The first stage involves integrating numerous decision trees to construct the RF, and the second stage involves making predictions for each tree created in the first stage, followed by a majority voting strategy.
| The panel in Figure 1(e) contains interactive views that help users find outliers, borderline cases, and misclassified cases in the test set. The first main view supports extracting the manual decisions (MD) from the previous phase (see Section Manual Decisions). This output is stored in a JSON format where the boundar... | C |
The conventional HS-MIMO scheme does not show full-matching of selected antenna indices with any of the two PR-HS-MIMO schemes in the scenarios that $L_{t}=4$ and $6$. On the other hand, the two PR-HS-MIMO schemes, i.e., EW and global polarization... |
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects $L_{t}$ out of $N_{t}$... | Even for a set of selected Tx antenna elements based on the conventional HS-MIMO, the channel capacity is improved via joint polarization pre-post coding after the hybrid antenna selection. However, the Tx antenna elements chosen by hybrid selection are different from the selection based on the proposed PR-HS-MIMO sche... |
It is worth emphasizing that the combination of polarization reconfiguration and hybrid antenna selection, i.e., PR-HS, can provide significant improvement in effective channel gain, i.e., the squared envelope of $h^{\rm eff}_{ij}$... | It is worth emphasizing that estimation of optimal polarization vectors before the hybrid antenna selection stage is inevitable to have full benefit of joint polarization pre-post coding and the corresponding polarization reconfigurable antenna selection in PR-HS-MIMO spatial multiplexing.
| D |
We now describe an extension of OnlinePacker in order to handle horizontal parallelograms of arbitrary height (at most $1$) and bounded extended width, to be defined shortly.
Here, we partition the pieces into height classes, so that a piece $P$ belongs to class $h$ if $2^{-h-1}<\mathrm{height}(P)\leq 2^{-h}$... | We handle each width class independently, so that pieces from each class are packed in stacks as described in Section 5.2.2.
In particular, each width class $i$ is subdivided into height classes, and we allocate in the strip rectangles of size $2^{i}\times 1$... | For height class $h$, the base type is a rectangle of size $2\times 2^{-h}$, and in the strip we allocate boxes of this size in which to pack pieces from the height class.
We define an infinite ternary box type tree for each height class... | For each height class $h$ for which there are some pieces, we feed the algorithm with a rectangle of size $1\times 2^{-h}$.
Let $F$ be the total area of these pieces, and we have $F<\sum_{h=0}^{\infty}2^{-h}=2$... | Since the area of pieces in each non-empty height class $h$ is at least $2^{-h}$, we can now apply Lemma 17 (by scaling the $y$-coordinates by a power of $2$, we obtain a packing of parallelograms of height $1$).
Let $n_{h}$... | B
Metrics: Following the official challenge [14, 37], mean radial error (MRE) and successful detection rate (SDR) in four radii (2mm, 2.5mm, 3mm, and 4mm) are applied, based on the Euclidean distance between prediction and ground truth. In addition, similarity via Eq. (9) is demonstrated for comparison. For the WFLW data... | All of our models are implemented in PyTorch, accelerated by an NVIDIA RTX GPU.
Following [41, 42], the feature extractor is optimized by the Adam optimizer for 3500 epochs for self-supervised training. The learning rate is initialized to 0.001, and decayed by half every 500 epochs. It takes 6 hours to converge with bat... | Another group of researchers, aiming to achieve a high performance at a low labeling cost, propose a strategy to select instances for annotation incrementally [15, 31, 35, 45].
The basic idea is to first train a model with few labeled data, and then use the model to select instances from unlabeled data iteratively, whi... | We follow [42, 41] to construct a deep model trained via multi-layer pixel-wise contrastive loss function as our feature extractor. According to [42], we use VGG [30] as the backbone, followed with 5 blocks to reduce the dimension. This model is trained with a pixel-wise matching proxy task for over 500 epochs.
| Q: Can self-supervised methods affect our performance? In the proposed method, we leverage the proxy task in [42] to pre-train our feature extractor. In addition, we implement other self-supervised methods, including BYOL [9] in pixel level, and BYOL with multi-layer training (BYOL-m).
From Table 4 we discover the stro... | A |
When all nodes in $\hat{\Pi}$ are pure, $A$ has both positive and negative elements (i.e., $m^{+}>0$ and $m^{-}>0$), our modularity re... | To determine the number of communities, we follow the strategy provided in [50]. In detail, we iteratively increase $k$ and choose the one maximizing our fuzzy weighted modularity computed via Equation (6) using method $\mathcal{M}$.
| MMDF is a generative model and fuzzy weighted modularity is a general modularity for overlapping weighted networks. We expect that our model MMDF and fuzzy weighted modularity proposed in this paper will have wide applications in learning and understanding the latent structure of overlapping weighted networks, just as ... |
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does ... |
Our fuzzy weighted modularity computed using Equation (6) measures the quality of overlapping community partition. Similar to the Newman-Girvan modularity [47, 48], a larger fuzzy weighted modularity $Q_{\mathcal{M}}(k)$ indicat... | A
Furthermore, the plateau interval of the second protocol is larger than that of the first protocol ($[0.25,1.0]$ vs. $[0.25,0.75]$), indicating that our CwD is even more robust to $\eta$ when the incremental learning sequence is longer.
| Interestingly, we find that when training with fewer classes, the top eigenvalues of the covariance matrix of representations of each class dominate, indicating that the representations of each class lie in a long and narrow region (see Fig. 1 (a) for example).
On the other hand, for models trained with more classes (p... | Specifically, since the oracle model is trained with more classes, we investigate how representations are affected by the number of training classes.
To this end, we compute and analyze the eigenvalues of the covariance matrix of representations of each class. | The reason might be that an excessively large penalization on the Frobenius norm of the class-wise correlation matrix would make the representations of each class spread out drastically, resulting in large overlaps among representations of different classes. This is further studied in the Appendix.
| We are thus motivated to enforce data representations of each class to be more uniformly scattered at the initial phase, which mimics the representations produced by the oracle model.
To this end, we first theoretically show that a group of embeddings will scatter more uniformly in the space if its correlation matrix ... | C
Upon reviewing the methodologies of the top-performing teams, we observed that they included pre-alignment, deep neural networks, and inverse consistency analysis.
Instance optimization on top of a deep learning approach, i.e., refining the registration result at test time in a post-processing step, also seems to be ad... | Instance optimization on top of a deep learning approach, i.e., refining the registration result at test time in a post-processing step, also seems to be advantageous.
Overall, the top-ranked methods were very close to each other in terms of all evaluation metrics. | Overall, the top-ranked methods were very close to each other in terms of all evaluation metrics.
This is particularly evident in the case of the first three methods, and to a lesser extent, it remains true for the methods in the entire first half of the ranking. | Instance optimization on top of a deep learning approach, i.e., refining the registration result at test time in a post-processing step, also seems to be advantageous.
Overall, the top-ranked methods were very close to each other in terms of all evaluation metrics. | Overall, the top-ranked methods were very close to each other in terms of all evaluation metrics.
This is particularly evident in the case of the first three methods, and to a lesser extent, it remains true for the methods in the entire first half of the ranking. | B |
As said above, we have that $\beta\in\mathsf{comp}_{\Sigma}(Q)$ if and only if $\beta_{y\mapsto x}\in\mathsf{comp}_{\Sigma}(Q^{\prime})$... | If the condition of line 1 is satisfied by $Q^{\prime}$, then it is also satisfied by $Q$ as we have only removed an orphan variable, which does not affect the complex part of the query. This is a contradiction to Lemma 29.
| If the condition of line 7 is satisfied by $Q^{\prime}$, then it is also satisfied by $Q$ as we have not changed constants or removed variables from primary-lhs positions, which contradicts Lemma 29.
|
If the condition of line 7 is satisfied by $Q^{\prime}$, then it is also satisfied by $Q$ as we have not changed constants or removed variables from primary-lhs positions, which contradicts Lemma 29.
If the condition of line 7 is satisfied by $Q^{\prime}$, then it is also satisfied by $Q$ as we have not changed constants or removed variables from primary-lhs positions, which contradicts Lemma 29. | C
In the present paper, we examine absorbing random walks on graphs in which different nodes can have different absorption rates, inducing an “effective” network structure that is reflected only partially by the edge weights of a network. Many notions of network community structure arise from the analysis of random walk... |
In our adaptations of InfoMap to absorbing random walks, we introduce a family of associated absorption-scaled graphs and then apply Markov time sweeping to these absorption-scaled graphs. To illustrate how the node-absorption rates impact the communities that we detect, consider the matrix $P_{l}$... |
The community-detection algorithm InfoMap is based on random walks, so it is natural to adapt it to absorbing random walks. However, there are numerous approaches to community detection [12, 33], and it is worthwhile to adapt other approaches, such as modularity maximization [29] and statistical influence using stocha... | Our adaptation of InfoMap to absorbing random walks involves a family of absorption-scaled graphs $\tilde{G}(D_{\delta},H)$, where $H$ is a scaling matrix that controls the relative i... | We develop community-detection algorithms that account for node-absorption rates. We adapt the widely-used community-detection algorithm InfoMap [35, 36, 41] to absorbing random walks and thereby account for heterogeneous node-absorption rates in the detected communities. In our adaptation, we apply InfoMap to absorpti... | D
Fig. 10(a)-(b) shows EP generation rates and fidelity for path lengths
of 500km and 1000km for varying link lengths, for the single-tree schemes DP-Approx and Balanced-Tree. Q-Cast and Delft-LP are not shown as their EP rate is near-zero ($\leq 10^{-20}$)... | Now, in Fig. 10(c), we demonstrate the effect of decoherence time of quantum memories used in nodes.
Here, we use 30-35 km links. We see that even with decoherence time of as low as 100 ms, DP-Approx is able to create EPs for up to 200 kms while Balanced-Tree can only create EP for paths up to 120 kms; they perform sim... | in that both
consider only balanced trees; however, we use a heuristic metric that facilitates a polynomial-time Dijkstra-like heuristic to select the optimal path, while their recursive metric (we note that their formula (Eqn. 10 in [18]) is incorrect as it either ignores the 3/2 factor or assumes the EP generations... | on a path may have much different lengths. In particular, we pick link lengths randomly in the range of 10 to 50 kms. With this setting, we see that DP-Approx performs much better than Balanced-Tree, and in some cases, up to 100% better.
Note that Balanced-Tree and Caleffi have similar performance over linear graphs, ... | less (we note that, in our context, the storage time as well as the memory coherence
time are statistical quantities due to the underlying statistical mechanisms. However, for the purposes of selecting a swapping tree, we use a fixed decoherence threshold $\tau_{d}$... | A
Wang et al. [77] propose an approach that enables a human driver to provide scene forecasting to an intelligent driving system using a purposeful gaze. They develop a graphical user interface to understand the effect of human drivers on the prediction and control of an intelligent vehicle. A simulator is used to test a... | Apart from informational content, another critical aspect deserving attention is the timing perspective of such explanations: the lead time for emergent scenarios, perhaps using extensive scenario-based evaluations or case-based reasoning, must be engineered appropriately. Furthermore, a well-known problem with large p... | Presenting live natural language explanations during the trip: The promising work in this context is Wayve’s LINGO-1 [168] and LINGO-2 [169] architectures. LINGO-1 is a vision-language-action (VLAM) model that provides live natural language explanations for describing a vehicle’s chosen actions in end-to-end autonomous... |
Finally, while the preliminary studies and further works on explainable autonomous driving primarily focused on a combination of various AI techniques revisited above, large language models (LLMs) and vision-language models (VLMs) have recently emerged as a novel paradigm to interpreting AVs decisions and describing t... | Overall, as an emerging AI technology, LLMs and VLMs have tremendously benefitted AVs from the interpretability aspect, as described in the above studies. However, it is also worthwhile to mention that there are still spaces for improvement of these models as fictitiously generated explanations (e.g., hallucinations) m... | C |
Tokyo 24/7 dataset. The experimental results demonstrate that Patch-NetVLAD achieves the best performance on the Pitts30k test dataset and Tokyo 24/7 dataset, while Ghost-dil-NetVLAD performs the best on TJU-Location test dataset because most Recall@N of Ghost-dil-NetVLAD are greater than those of the remaining models.... |
In this section, six models including Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD (Considering our limited computational resources, we only use its built-in storage mode. In this paper, we uniformly call this method Patch-NetVLAD.), MobileNetV3-NetVLAD (lightweight CNN + NetVLAD), Ghost-NetVLAD (the Ghost module does n... | Tokyo 24/7 dataset. The experimental results demonstrate that Patch-NetVLAD achieves the best performance on the Pitts30k test dataset and Tokyo 24/7 dataset, while Ghost-dil-NetVLAD performs the best on TJU-Location test dataset because most Recall@N of Ghost-dil-NetVLAD are greater than those of the remaining models.... |
As shown in Fig. 5, it is evident that Ghost-NetVLAD outperforms other models in FLOPs and model parameters. There are few differences between Ghost-dil-NetVLAD and Ghost-NetVLAD in the model efficiency. However, VGG16-NetVLAD and Patch-NetVLAD need the most computational resources, which is not conducive to their deplo... | GhostCNN ensures a lightweight architecture and low computational cost by replacing part of convolution operations with a series of linear transformations to generate ghost feature maps. Though the FLOPs of Ghost-dil-NetVLAD is only 1% of that of VGG16-NetVLAD and Patch-NetVLAD, and the parameters is 17%...
According to the algebraic attack of Courtois and Meier, to decrease the degree of the system equations, we can multiply any equation in (10) by an annihilator of $\operatorname{WGT}$ and the one of $\operatorname{WGT}+1$, depending on whether the keystream bit is 1 or 0. Since AI(WGT... | The core idea of our new algebraic attack is to use many annihilators simultaneously, instead of only one, and provide a good estimate of the number of keystream bits needed to perform the attack, which is strictly related to the number of linearly independent equations after the multiply phase in the XL-Algorithm. Ind... |
As shown in Table 1, the attack is not feasible if $D=4$, as it needs more than $2^{18}$ keystream bits. However, for both $D=5$ and $D=6$, it is possible to carry out the attack knowing $2^{17.84}$... | The main aim of our work is not to perform effectively the attack described in Section 3 on WG-PRNG, but to estimate how many keystream bits one needs to perform successfully the attack on WG-PRNG. We will show that knowing less than $2^{18}$ keystream bits, ... |
In Chapter 4, to validate our algebraic attack, first we apply it to two toy stream ciphers and then we show that it is feasible to perform it on WG-PRNG. We conclude by showing that the security of WG-PRNG is less than claimed until now. For the sake of presentation, we will first describe the part regarding WG-PRNG, an... | C
However, it is impossible to compute RNR in huge games, so we fused the RNR approach with depth-limited solving, creating a novel algorithm we call CDRNR. CDRNR is the best-performing theoretically sound robust response calculation that can be done in huge games, enabling new opponent-exploiting approaches. | Local best response (Lisý and Bowling, 2017) is an evaluation tool for poker. It uses a given abstraction in its action space. It picks the best action in each decision set, looking at the fold probability for the opponent in the next decision node and then assuming the game is called until the end. Our algorithm CDBR i... |
Fully exploiting opponent models in small games boils down to computing a best response. This is infeasible in games with an intractable number of information sets for which we use the continual depth-limited solving algorithms. The depth-limited setting does not allow computing BR in one pass anymore. The game we alr... |
The results in Figure 6 show that both concepts are good at approximating the best response, with CDBR being better against both strategies. LBR looks at one following action, so it is best compared to the CDBR1 in terms of comparability. Next, we observe a lack of monotonicity in step increase, which is explained wit... | We can use other gadgets in CDRNR to obtain fast algorithms without any theoretical guarantees and with the same bound on computation as we have for the combination of CDBR and Nash equilibrium. The soundness of the algorithm relies on using the full gadget, which requires solving increasingly larger parts of the game ... | A |
But now, since $q_{\mathcal{A}}(G)=q^{*}_{\leq k}(G)$ and since $|\mathcal{A}^{\prime}$... | As in the last proof, we may follow the proof of the non-weighted version, Theorem 1.2, with the following adaptations. In place of the fattening lemma, use the weighted version, Lemma 10.3; replace instances of $G$ and $G_{p}$ by $w$... | which completes the proof of (6.9). It is now possible to choose $\eta$ small enough that the inequality holds for all $t\geq 0$ and we may take $2$ as the coefficient of the exponential - the details of this calculation appear for example in the last part of the proof of Theorem 7.1 of [32]. ... | The proof follows that of the non-weighted case, Theorem 1.1, line by line with the following adaptations. In place of the fattening lemma used on the underlying graph, use the weighted version, Lemma 10.3; and replace instances of $G$ and $G_{p}$...
The rough idea of the proof of Theorem 1.2 is that we can use the fattening lemma, Lemma 3.1, to bound the probability that a vertex partition behaves badly by the probability that a fat vertex partition behaves similarly badly, and we can use probabilistic methods to handle fat partitions. However, even after the str... | D |
where $\mathbf{A}$ is the cited reference bipartite network and $\mathbf{A}^{T}$ is its transpose. $\mathbf{B}$ is a symmetrical square matrix $151\times 151$, where rows and columns are papers included in the sample. Element $b_{ij}$... |
The resulting matrix displays an undirected weighted network in which the 151 vertices are the set of papers included in our sample and the edges represent the citation ties between them. An existing tie implies that common reference literature exists between vertex $i$ and $j$. When two nodes are no... | We intend to identify the existence of communities in our network. The assumption is that papers citing the same references aggregate into a group that shares certain features, which could be methodological approach, level of analysis, specific sub-topics of the literature, and outcomes. The extreme heterogeneity of ou... | The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network anal... | Co-citation is based on the relationship established by citing authors of a paper: two papers are linked whenever they jointly appear in the cited references of at least a third paper. Direct citation is the most intuitive approach, linking two papers if one has cited a precedent one. As co-citation, direct citation pe... | A
is the path length of comparators $\mathbf{u}_{1},\ldots,\mathbf{u}_{T}$ and thus reflects the non-stationarity of environments. If the path length $P_{T}$... |
The two problem-dependent quantities are both at most $\mathcal{O}(T)$ under standard assumptions of online learning, while they could be much smaller in easier problem instances. We propose two novel online algorithms called Sword and Sword++ (“Sword” is short for Smoothness-aware online ... | Although the rate is minimax optimal for convex functions, we would like to design algorithms with problem-dependent regret guarantees beyond the worst-case analysis (Roughgarden, 2021). Specifically, we aim to enhance the guarantee for some easy problem instances, particularly when the online functions are smooth, by ... |
In this paper, we exploit the easiness of problem instances to enhance the universal dynamic regret. We propose two novel online ensemble algorithms, Sword and Sword++, for convex and smooth online learning. Both algorithms achieve a best-of-both-worlds dynamic regret of order $\mathcal{O}(\sqrt{(1+P_{T}+\min\{V_{T},F_{T}\})(1+P_{T})})$... |
In addition to exploiting the convexity of functions, there are studies improving static regret by incorporating smoothness, whose main proposal is to replace the dependence on $T$ by problem-dependent quantities. Such problem-dependent bounds enjoy many benign properties, in particular, they can safeguard th... |
If $\sigma\colon A^{*}\to B^{*}$ is a morphism, we denote $|\sigma|=\sum_{a\in A}|\sigma(a)|$... | of $y\in B^{\mathbb{Z}}$ is a pair $(x,k)$ of a sequence $x\in A^{\mathbb{Z}}$
and an integer $k$... | from Proposition 5.4. Indeed, $x\in A^{\mathbb{Z}}$
is a fixed point of $\sigma\colon A^{*}\to A^{*}$... | in $X$ for a point $y\in B^{\mathbb{Z}}$ if $y$
has at most one centered $\sigma$-representation $(x,k)$ with $x\in X$. | $B^{*}\cup B^{-\mathbb{N}}\cup B^{\mathbb{N}}\cup B^{\mathbb{Z}}$... | A
$R(\mathcal{T}_{n,d}^{k}(C_{n}))=\Omega\big(\frac{1}{n}+\cdots+\big(\frac{C_{n}}{n}\big)^{\frac{2}{2s+1}}\big)$... | This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers
in Section 1.3). The models of smoothness for $f_{0}$ that we | $\mathcal{H}_{d}^{k}(1)$. This matches the optimal rate for estimation over Hölder
classes (see Sadhanala et al. (2017) for a formal statement and proof for | smoothness index $k+1$ in each coordinate direction, and any third index $q\geq 1$, is indeed $n^{-2s/(2s+1)}$ for $s>1/2$ (or $2k+2>d$... | The result in Theorem 4 for $s\geq 1/2$ (that is, $2k+2\geq d$) was already derived in Sadhanala et al. (2017). More precisely,
these authors established the third term on the right-hand side in | D |
The method employs the Wasserstein distance to measure the topological differences between networks and demonstrates greater efficiency and performance than the commonly used $k$-means clustering in defining the state spaces of dynamic brain networks. | In this study, we proposed the topological clustering method for the estimation and quantification of dynamic state changes in time-varying brain networks. A coherent statistical theory, grounded in persistent homology, was developed, and we demonstrated the application of this method to resting-state fMRI data. Restin...
In contrast to previous studies that reported relatively low heritability in functional brain networks (Glahn et al., 2010; Xu et al., 2017; Korgaonkar et al., 2014; Wan et al., 2022), our findings indicate significantly higher heritability across various regions of the brain network. This discovery not only challenges ...
The method employs the Wasserstein distance to measure the topological differences between networks and demonstrates greater efficiency and performance than the commonly used $k$-means clustering in defining the state spaces of dynamic brain networks.
In addition to the methodological advancement, the paper applies the proposed technique to analyze the heritability of overall brain network topology using a twin study design. The study investigates whether the dynamic pattern of brain networks is a genetically influenced trait, an area previously underexplored. By e... | D |
$t_{k+1}=\min\{t>t_{k}:V(x(t))=V(x(t_{k}))e^{-r(t-t_{k})}\}.$ | $u(t)=Kx(t_{k}),\quad\forall t\in[t_{k},t_{k+1}).$...
$x(t_{k+1})=G(\tau^{\prime})x(t_{k})=\alpha x(t_{k}),$... | $t_{k+1}-t_{k}=\min\{\tau>0:f(x(t_{k}),\tau):=x^{T}(t_{k})M(\tau)x(t_{k})=0\},$...
$x(t)=G(\tau)x(t_{k}),\quad\forall t\in[t_{k},t_{k+1}),$... | D
$\cdots\,\mathrm{d}x+\big[\frac{1-\sigma_{1}}{2\sigma_{1}^{2}L^{2}}+\frac{1-\sigma_{2}}{\cdots}$... | for $t\in[0,t_{max}]$, where $|.|_{\mathscr{U}}$ denotes a distance metric from the unsafe set; $\|.\|_{\infty}$... | Stability-Only Control (St-C): In this case, we have used the ISSt criterion (49) to design the closed-loop control gains. Note that, if the design is done solely based on the ISSt criterion, then there is no guarantee that it will also satisfy pISSf. To illustrate this point, and to highlight the potential advantage of combin... |
Consider the system (4) with boundary conditions (8). Let us also consider the unsafe set for this system to be (12) and the metric measuring the distance from this unsafe set to be given by (13). If the controller gains are chosen such that the following inequalities are satisfied, |
Then, we use (12) to conclude that the distance from the unsafe set boundary is always greater than the distance from the inside of the set. Mathematically, this implies $\bar{h}^{2}-\int_{0}^{L}h^{2}\,\mathrm{d}x\leqslant\inf_{a\in\mathscr{U}}\{a^{2}-\int_{0}^{L}h^{2}\,\mathrm{d}x\}=|h|_{\mathscr{U}}$... | C
Task performance refers to duties or actions that are formally recognized and rewarded by management [51]. Several studies attempt to predict ITP and IRB as measures of task performance. In a study of 84 full-time nursing professionals over a ten-week period, Feng et al. [32] use Fitbit and Bluetooth proximity data to ... | In a follow-up study of 298 information workers, Mirjafari et al. [53] used auto-encoder-generated features based on passive sensing data from mobile phones and a Garmin fitness tracker to predict day-to-day job performance dynamics, i.e., detecting whether there was an improvement, decline, or no change in the job per... |
Other works target efforts to support the design of future interventions in the workplace. Kimani et al. [30] created a conversational agent designed to assist information workers in achieving various work-related objectives, such as task scheduling and prioritization, task switching, providing reminders to take break... | Feng and Narayanan [10] propose a method for capturing behavioral consistency in wearable data using the activity curve model. They find that consistency features improve accuracy by up to 6% when compared to using only summary features from the Fitbit fitness tracker in a study of 97 hospital workers throughout 10 wee... |
In studies of information workers, several works stand out. The study by Das Swain et al. [33] recruits 249 individuals over 62 days and used Bluetooth beacons to analyze desk and away-from-desk sessions per hour, time at work and time at home. The authors produce a measure of organization fit by analyzing the converg... | D |
Since the numbers of local epochs and iterations are set to 5 and 50, respectively, each client has little training opportunity with few training examples and client heterogeneity increases significantly.
As shown in Table 2, FedACG outperforms the other methods in most cases, with the performance gap between FedACG an... | Each participating client optimizes its local model from the momentum-integrated initialization.
This proactive initialization allows each client to find its local optimal solution along the trajectory of the global gradient, which improves the consistency of local updates in FedACG. | Table 2: Results from reduced participation rates (2% for 100 clients, 1% for 500 clients) on CIFAR-10 and CIFAR-100 with the Dirichlet parameter 0.3.
FedCM† and FedDC‡ require 50% and 100% additional communication costs for each communication round, respectively. | Since FedACG does not require storing local model history for local updates, it is conceptually better suited for scenarios with newly participating clients.
To validate this property, we conduct an experiment, where we maintain 250 clients in each round but replace half of the clients on average every 100 rounds by se... | Our experiments have been performed in two different settings; one is a moderate scale, which involves 100 clients with a 5% participation rate per round, and the other is with a large number of clients, 500 with a participation rate of 2%.
Because the number of clients in the large-scale setting is five times higher t... | C |
Collecting Training Samples. Recall that a sample in PU-Setting is comprised of a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample is comprised of spectrum sensors’ received power readings. The location of entities is available by using a GPS don... | the input (features) being the primary-user parameters, spectrum sensor (SS) readings, and secondary user (SU) request parameters, and the output (label) being the maximum power that can be allocated to the SU without
resulting in any harmful interference to the PUs’ receivers. |
Determining Labels (Optimal Power Allocated to SU). We essentially do a binary search to estimate the optimal power that can be allocated to SU. To determine whether PU to PUR transmission is incurring any harmful interference from SU, we have PU continuously streaming ASCII messages over the 1 MHz bandwidth channel c... | Collecting Training Samples. Recall that a sample in PU-Setting is comprised of a sample of PUs’ parameters (location and power) and the optimal power allocated to the SU. In SS-Setting, a training sample is comprised of spectrum sensors’ received power readings. The location of entities is available by using a GPS don... | The general spectrum allocation problem is to allocate optimal power to an SU’s request across spatial, frequency, and temporal domains. We focus on the core function approximation problem, which is to determine the optimal power allocation to an SU for a given location, channel, and time instant—since frequency and te... | B |
$\leq\int_{0}^{s}\!\int_{0}^{t}\|\kappa_{1}-\kappa_{2}\|\cdots\leq\frac{\delta s^{2}}{2}.$... | The paper is structured as follows. Section 2 contains preliminaries and is split in the following subsections. In Section 2.1, after reviewing the definitions of groups and group actions, we define the notions of congruence and symmetry of curves relative to a given group. In Sections 2.2
and 2.3, we follow [12] to d... |
The inequality in line (57) follows from properties of definite integrals; the first inequality in line (58) follows from (55). The equality in line (58) follows from (49) and the properties of definite integrals. The first inequality in line (59) follows from (25).
is the Euclidean Cartan matrix. From the equivariance property (7) and the $SE(2)$-invariance (footnote: the Euclidean curvature $\kappa$ changes its sign under reflections and, therefore, is not invariant under the full Euclidean group $E(2)$. Nonetheless, it is cu... | is the affine Cartan matrix. From the equivariance property (18) and the $SA(2)$-invariance (footnote: the affine curvature $\mu$ is scaled under non-unimodular linear transformations and, therefore, is not invariant under the full affine group $A(2)$. Nonetheless, ... | B
Online learning [29], resource allocation [9], demand response in power systems [14], and localization of moving targets [2] are just a few examples where online convex optimization (OCO) has been applied. In the problem setup of OCO, the objective functions are time-varying and are not available to the decision maker ... | Online learning [29], resource allocation [9], demand response in power systems [14], and localization of moving targets [2] are just a few examples where online convex optimization (OCO) has been applied. In the problem setup of OCO, the objective functions are time-varying and are not available to the decision maker ... |
In the seminal work of [41], the method of online gradient descent is proposed for OCO problems, where at each time step the decision maker performs one gradient descent step using the latest available information. A static regret upper bound that is sublinear in $T$ is proved, where $T$ is the lengt... | To the best of our knowledge, coordinate descent [31], as an important class of optimization algorithms, is not sufficiently analyzed by researchers in the online optimization community. In coordinate descent algorithms, most components of the decision variable are fixed during one iteration while the cost function is ...
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to look at possibilities of using coordinate descent methods to solve OCO... | B |
One limitation of our results is that we could not include device mismatch in our simulation, as we have yet to realize a LiWES array, which would allow us to characterize this mismatch. However, we have modelled cycle-to-cycle variation, which also gives rise to mismatch in time-surfaces, albeit over time rather than ... |
The last section (VI.C) explored the computational properties of STP and the double exponential dynamics. Both STP and double exponential decay dynamics increased the accuracy of the network compared to the original HOTS network with single exponentials and no-STP. STP contributes to reducing “Noise” in the network. M... |
In this work, we used a Li$_{x}$WO$_{3}$ electrochemical memristor to test the effect of programmable time constants, double exponential decay and STP on the widely used N-MNIST dataset and POKERDVS dataset. We used... | In recent years, a different approach has surfaced. Several works have demonstrated the presence of transient conductance responses in memristive devices akin to short-term plasticity (STP) and Excitatory/Inhibitory Post Synaptic Potentials (EPSP/IPSP)[12, 13, 14, 15]. These memristors with “volatile” properties are ex... | Recent developments in semiconductor technology have led to the design and creation of a new class of devices called memristors. It has been predicted that memristors will be used in the near future as the atomic component of more advanced and complex systems, which can provide performance superior to conventional tran... | A
Path: The given social network is a path graph. Initially, the agents’ opinions are uniformly distributed in one dimension with an equal distance of $\varepsilon$ so that the influence network forms a path graph with a uniform edge length of $\varepsilon$. | For each initial HKS state on social networks with varying numbers of agents $n$, we simulated 100 independent runs of random activations needed to reach a $\delta$-stable state. The code for our simulator software and all necessary tools to reproduce our figures are available from our public GitHub r... | Our opinions are not static. On the contrary, opinions are susceptible to dynamic changes, and this is heavily exploited by (social) media, influencers, politicians, and professionals for public relations campaigns and advertising. The way we form our opinions is not a solitary act that simply combines our personal exp... |
We study the convergence time to a $\delta$-stable state in Hegselmann-Krause systems with an arbitrary initial state and an arbitrary given social network, where we update one agent chosen uniformly at random in each step. To the best of our knowledge, this is the first analysis of the variant of HKSs that f... | To the best of our knowledge, convergence guarantees and convergence times on non-complete networks were first studied by Etesami and Başar (2015), where the authors consider $\delta$-equilibria in contrast to $\delta$-stable states. They define a $\delta$-equilibrium as a sta... | D
In our project, we employed the DenseNet architecture, which comprises two main blocks: the DenseBlock and the TransitionBlock. The DenseBlock keeps the feature size dimension constant while varying the number of filters. Within a DenseBlock, each layer performs a 1x1 convolution for feature extraction and a 3x3 convo... |
The DenseNet architecture (Huang et al., 2017) addresses the vanishing gradient problem by incorporating dense connections between layers. In DenseNet, each layer is directly connected to all the preceding layers in the network. This connectivity pattern enables each layer to access the feature maps produced by all the... | To initialize the network, we employed transfer learning by utilizing pre-trained weights from ImageNet. The early layers of the DenseNet, which capture general image features like edges, were left unchanged. We skipped the top layers, which contain more specific image features, and added two additional layers: a Globa... |
In our project, we employed the DenseNet architecture, which comprises two main blocks: the DenseBlock and the TransitionBlock. The DenseBlock keeps the feature size dimension constant while varying the number of filters. Within a DenseBlock, each layer performs a 1x1 convolution for feature extraction and a 3x3 convo... | Although these architectures differ in their topology for transmitting features across layers, they share the fundamental CNN principle of convolution, sub-sampling, dense, and softmax layers. In the convolution layer, filters are applied to the input image to extract features. The sub-sampling layer reduces the spatia... | B |
This paper considers the Bayes optimal algorithm in the context of fixed-budget identification.
In the field of statistical decision theory, the Bayes optimal algorithm is deeply connected to a minimax estimator that maximizes the worst-case performance. | This paper considers the Bayes optimal algorithm in the context of fixed-budget identification.
In the field of statistical decision theory, the Bayes optimal algorithm is deeply connected to a minimax estimator that maximizes the worst-case performance. |
On the contrary, we show that the Bayes optimal algorithm performs sub-optimally with some of the worst model parameters, which implies that maximizing the Bayesian objective differs substantially from maximizing the frequentist objective. A Bayesian measure requires it to optimize the performance averaged over the pr... |
This paper has considered the BAI problem. We have demonstrated that the Bayes optimal algorithm, which is optimized for the expected performance over the prior, does not have a frequentist rate of simple regret. In some distributions, the Bayes optimal algorithm does not perform well, even when the distributions are ... |
Several known results demonstrate the suboptimality of KG and EI in $K$-armed BAI, which we describe in Section 7. In this paper, the convergence rate is characterized according to the Bayes optimal algorithm rather than its one-step approximations KG and EI. We show that the Bayes optimal algorithm performs ... | B |
The first experiment is the symmetry scenario where 20 quadrotors transit to their antipodal position in a 2D circle with radius 1.7 m.
As shown in Fig. 14(a), the navigation task is accomplished within 11.3 s with smooth and safe trajectories. | Left: The condition of deadlocks is a force equilibrium for robot $1$ in which the resulting force $F^{1}=0$.
Right: After introducing the right-hand rule, the repulsive forces to the left side $F^{13}_{R}$... | Due to the symmetric setup, each agent is potentially blocked by its left and right neighbors at about $t=2.0$ s.
Via the proposed resolution scheme, the repulsive force from the left hand is increased and thus larger than the right-hand force, yielding a right-hand rotation. | The above theorem provides a theoretical guarantee for the proposed deadlock resolution scheme.
In comparison, the artificial right-hand perturbation introduced in [27] and the right-hand detour points proposed in [33, 34] are heuristic and thus lacking theoretical analyses. | Consequently, $\rho^{ij}$ is adapted to modify the repulsive forces by following the proposed right-hand rule.
Initially, $\rho^{ij}=\rho_{0}$... | B |
We achieved a reconstruction RMSE of $0.15\pm 0.07$ Å for atom coordinates and perfect atom type accuracy on 5000 unseen test conformations (see Figure 5c for two examples and Appendix E.2.2 for more reconstruction predictions). Given a point cloud of $N$ ... | For all other rotations, we see slight variations in the latent code, which, however, is to be expected due to interpolation artifacts for rotations on a discretized grid. Still, inspecting the 2d-projection of the latent code of our proposed model in Figure 2, we see distinct clusters for each digit class for the diff... | We also run experiments on the ShapeNet dataset Chang et al. (2015). We utilized 3D Steerable CNNs proposed by Weiler et al. (2018b) as equivariant encoder for the 3d voxel input space. We utilized the scalar outputs as rotation-invariant embedding ($z$) and predict (analogously to our experiments on 3d point ... | Next, we train a permutation-invariant autoencoder on sets of digits. A set with $N$ digits is represented by concatenating one-hot vectors of each digit in an $N\times D$-dimensional matrix, where we take $D=10$. Notice that this matrix-representation of a set is not ... |
In the first experiment, we train an SO(2)-invariant autoencoder on the original (non-rotated) MNIST dataset and validate the trained model on the rotated MNIST dataset (ref. mni ) which consists of randomly rotated versions of the original MNIST dataset. For the functions $\eta$ and $\psi$ we utiliz... | B |
The strength of BO lies in being able to exploit structure in data, so it works better for slowly varying processes.
While the results show that the approach has promise, more evaluation is needed. In particular, due to data constraints, London was used both for developing the prior and for evaluation. | In Bayesian optimisation the GP tuning step needs to be repeated many more times than in standard regression, as new data points are added iteratively. Therefore, using standard MCMC on the full hierarchical model for each new data point would slow down the process. By using importance weighting in the BO tuning step t... | We have shown that Bayesian optimisation with hierarchical models can be successfully applied to ground-level urban pollution data.
We also presented a pragmatic method for approximate inference when some of the work can be precomputed, which we believe can be useful in other applications. | By inspection of Fig. 2 one can see that the selection examples have many local optima, and so would suffer from this.
On the London data one of the GP models does much better than either baseline, but the other two perform worse. Why this happens is examined in the Exploration subsection. But it shows that BO can be m... | To test whether the prior can be constructed from other cities, additional data needs to be used. It would also be beneficial to test the methods as they would be used, i.e. iteratively placing sensors, but this would be much more resource-intensive.
There might be easy performance gains available by experimenting with | D |
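The Bayesian optimisation row describes iteratively adding observations and re-tuning a GP. A minimal sketch of such a loop follows, using scikit-learn's GaussianProcessRegressor and a UCB acquisition over a candidate grid; the kernel, UCB coefficient, and toy objective are illustrative assumptions (the quoted work uses hierarchical GP priors and importance weighting, which this sketch omits).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x    # toy objective, stand-in for pollution

X_cand = np.linspace(-2, 2, 400).reshape(-1, 1)  # candidate sensor sites
X = rng.uniform(-2, 2, (3, 1)); y = f(X).ravel() # a few initial observations

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    mu, sd = gp.predict(X_cand, return_std=True)
    x_next = X_cand[np.argmax(mu + 2.0 * sd)]    # UCB acquisition
    X = np.vstack([X, [x_next]]); y = np.append(y, f(x_next))

print("best site:", X[np.argmax(y)].item(), "value:", y.max())
```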
The two main classes of gradient estimators used in machine learning are the pathwise or reparameterization gradient estimators [29, 47, 58] and the REINFORCE or score function estimators [65, 18].
The pathwise estimators have shown great success in training variational autoencoders [29] but are only applicable to cont... | The benefits of using Stein operators to construct discrete CVs are twofold. First, the operator structure permits us to learn CVs with a flexible functional form such as those parameterized by neural networks.
Second, since our operators are derived from Markov chains on the discrete support, they naturally incorporat... |
Although the Double CV framework points to a promising new direction for developing better REINFORCE estimators, one only obtains significant reduction in variance when $b_{k}$ is strongly correlated with $f$. | We then develop a gradient estimation framework—RODEO—that augments REINFORCE estimators with mean-zero CVs generated from Stein operators.
Finally, inspired by Double CV [60], we extend our method to develop CVs for REINFORCE leave-one-out estimators [49, 30] to further reduce the variance. | As we have seen in Section 2, there is a long history of designing CVs for REINFORCE estimators using “baselines” [64, 7, 45, 37].
Recent progress is mostly driven by leave-one-out [49, 38, 30, 48] and sample-dependent baselines [43, 22, 60, 62, 20]. | C |
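The REINFORCE cells above turn on one idea: the score-function estimator and its variance reduction via mean-zero baselines. A minimal numpy sketch for a Bernoulli parameter follows; the objective and the leave-one-out-style mean baseline are illustrative, not the Stein-operator CVs of the quoted framework.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (x - 0.45) ** 2            # illustrative objective

def reinforce_grad(theta, n=100_000, baseline=True):
    """Score-function estimate of d/dtheta E_{x~Bern(sigmoid(theta))}[f(x)]."""
    p = 1.0 / (1.0 + np.exp(-theta))
    x = rng.binomial(1, p, size=n).astype(float)
    score = x - p                         # d/dtheta log p(x; theta)
    fx = f(x)
    if baseline:                          # leave-one-out mean baseline (mean-zero CV)
        fx = fx - (fx.sum() - fx) / (n - 1)
    return np.mean(fx * score)

theta = 0.3
print("with baseline:   ", reinforce_grad(theta))
print("without baseline:", reinforce_grad(theta, baseline=False))
# Exact gradient for comparison: p(1-p) * (f(1) - f(0)).
p = 1.0 / (1.0 + np.exp(-theta))
print("exact:           ", p * (1 - p) * (f(1.0) - f(0.0)))
```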
Note that the SINR of the combined packet is equal to the sum of the SINRs of its constituents only when the interference in each is uncorrelated. This will not be the case if a given user collides with another in more than one slot. Whether the resulting SINR will be higher or lower than the sum will depend on whether... | Until now, the implicit assumption was that perfect channel estimates have been available. In reality, channel estimates are never perfect; however, the quality of estimation can be improved by making the pilot sequences longer and/or investing in them more transmit power, as long as the pilot sequences of transmitting ... | Now let us consider negative MRC. When both users fail it has no effect. If there is one successful user, it can happen that negative MRC turns it into an undecodable one, thus making SIC impossible. Similarly, if both users are successful, negative MRC could make them both undecodable (in this case it is not enough to... | In a slot in which an arbitrary user is active, there are only $D-1$ other patterns that could cause a collision and the selection is done without replacement due to the unique preassignments.
Hence, the probability that $L$ out of $U-1$ devices select one of them while the rest... | Note that the SINR of the combined packet is equal to the sum of the SINRs of its constituents only when the interference in each is uncorrelated. This will not be the case if a given user collides with another in more than one slot. Whether the resulting SINR will be higher or lower than the sum will depend on whether... | B |
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper... |
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper... | In lieu of the above discussion, “nearly-optimal” algorithms have been explored. That is, algorithms that produce a path which may not be the optimal one but is comparable (or even arbitrarily close) in length to the optimal one. The nearest insertion algorithm [RSL74] computes in $O(n^{2})$... |
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves... | We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides’ $3/2$ ratio. In fact, in our algorithm, the yielded path has length at
most $(300)^{9/2}\log{300}$... | D |
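The row above cites the nearest insertion algorithm [RSL74], which runs in $O(n^{2})$ and, for metric instances, yields a tour at most twice the optimum. A minimal sketch of that heuristic follows; the random point set is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.random((30, 2))                        # illustrative point set
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)

def nearest_insertion(dist):
    """Nearest insertion: repeatedly take the non-tour city closest to the
    tour and insert it where it increases the tour length the least."""
    n = len(dist)
    tour = [0, 1]                                # start with an arbitrary edge
    remaining = set(range(2, n))
    while remaining:
        k = min(remaining, key=lambda c: dist[c, tour].min())
        best = min(range(len(tour)),
                   key=lambda i: dist[tour[i], k]
                                 + dist[k, tour[(i + 1) % len(tour)]]
                                 - dist[tour[i], tour[(i + 1) % len(tour)]])
        tour.insert(best + 1, k)
        remaining.remove(k)
    return tour

tour = nearest_insertion(dist)
length = sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))
print(len(tour), round(length, 3))
```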
If $\alpha,\beta\in\mathcal{P}$ with $|\beta|<|\alpha|$, then there exists an index $i\in[l]$ such that $\beta+\bm{e}_{i}\in\mathcal{P}$... | In [46], Osserman and Trager gave a generalization of Chow forms to multiprojective varieties, i.e., varieties in the multiprojective space $\mathbb{P}^{\bm{n}}\coloneqq\prod_{i=1}^{l}\mathbb{P}^{n_{i}}$... | In addition, we discuss a multihomogeneous generalization of the Hurwitz form. To the best of our knowledge, our paper provides the first result in this area. Contrary to the homogeneous case, multigraded Chow forms and Hurwitz form require a choice of a non-degenerate multidimension vector for the linear subspace, in ...
As demonstrated in Figure 1, the set of formats $\alpha$ for which $\mathcal{CZ}_{V,\alpha}$ is a hypersurface equals the set of lattice points that lie “below” $\operatorname{supp}(V)$... | For the remainder of the section, the chief example of a polymatroid to us is the support of a multiprojective variety (and its downward closure). The polymatroids of this form are now called Chow polymatroids (footnote 3: This naming is unfortunate for us because we will later associate a polymatroid to the formats of non-degen...) | D |
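The first cell of this row states the exchange property of a set of lattice points: for $\alpha,\beta\in\mathcal{P}$ with $|\beta|<|\alpha|$ there is some $i$ with $\beta+\bm{e}_{i}\in\mathcal{P}$. A small brute-force checker of this axiom follows; the example point set is an illustrative assumption, not one taken from the quoted paper.

```python
from itertools import product

def satisfies_exchange(points):
    """Check: for all alpha, beta with sum(beta) < sum(alpha), some unit
    step from beta (beta + e_i) stays in the set."""
    P = {tuple(p) for p in points}
    l = len(next(iter(P)))
    for alpha, beta in product(P, P):
        if sum(beta) < sum(alpha):
            if not any(tuple(b + (1 if j == i else 0) for j, b in enumerate(beta)) in P
                       for i in range(l)):
                return False
    return True

# Illustrative: lattice points of {x >= 0 : x1 + x2 <= 2, x1 <= 1, x2 <= 2}.
P = [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2)]
print(satisfies_exchange(P))  # True
```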
The study was conducted between October 2020 and April 2021. It took place in the
Electronics Technology Department at the School of Engineering of the Universidad Carlos III de Madrid, Spain. The experimental methodology designed to be applied for each volunteer is schematized in Figure 1. Duri... |
The submission to the Ethical Committee covered essential topics for the development of the experiments. Among others, the adequacy of the volunteers’ informed consent, the research goals and plans, the data management and de-identification procedures, and the compliance with the European General Data Protection Regul... |
Upon arrival, participants were informed about the experimental procedure. Then, they signed the informed consent, filled out the personal data form, and answered the general questionnaire. Next, participants listened to instructions regarding the experiment. |
Volunteers’ annotations are collected at two moments: prior to the experiment and during the experimentation. Before the experiment, each volunteer is provided with informed consent, a personal data form, and a general questionnaire to supply additional information related to cognition, appraisal, attention, personal... | The participants were requested to avoid unnecessary actions or movements during the experiment (e.g., turning the wrist). They were also informed that they could skip any clip or quit the experiment at any
time. Once the procedure was clear to the participants, the sensors were set up, as well as the Virtual Reality H... | B |
Yang et al. (Yang et al., 2017) design a variant detector to identify if an application is a mutated version of an existing application. This variant detector is a classifier network trained with mutation features generated from each pair of app features that explain the feature difference between original and mutated ... |
Variational Auto-encoder (VAE): VAE, a generative deep learning model for unsupervised learning (Diederik et al., 2014), encodes input data into a lower-dimensional latent space and decodes this representation to generate new, similar data. VAEs use a probabilistic approach to training and can learn complex data distr... | In (Li et al., 2021c), Li et al. employ similarity constraint in order to squeeze space for the presence of adversarial examples. They use a VAE to differentiate between benign and adversarial samples based on the reconstruction errors. They modify the loss function for VAE to disentangle features of different classes.... |
Li et al.(Li et al., 2018a) propose a detector framework combining hash function transformation and denoising auto-encoder. A hashing layer transforms the input features into a vector representation using a locality-sensitive hash function. A denoising auto-encoder receives this hashed vector as input that understands... |
MagNet (Meng and Chen, 2017): Magnet uses a detector network to discard such samples located far away from the data manifold learned by the classifier using training samples. If the adversarial sample is close to the boundary, then the reformer network transforms the adversarial sample into an original sample. They us... | C |
The algorithm above can be adapted to this case.
Indeed, supposing that the realization $s$ is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1. | The algorithm above can be adapted to this case.
Indeed, supposing that the realization $s$ is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1. | Almost surely either a player was chosen on Step 1 or 2 or the sum of just the odd-numbered terms (given by B’s moves) of expression (2) diverges to $\infty$, by Lemma 3.6. In the latter case, the sum of the even-numbered terms must diverge to $-\infty$ (as the sum of all terms is convergent), and therefore cannot d... | Since $s_{n}^{2}$ increases by $1$ in expectation on Honest’s moves, it should decrease on Deviator’s moves to keep the walk close to $0$, and this discrepancy is what’s detected in Step 3.
|
The idea of the proof is that since Deviator must move to the right during periods where Honest moves substantially to the left (to avoid going below zero), Deviator must thus move to the left when Honest moves to the right to avoid being clearly right-biased (Steps 1 and 2 detect right-biased behavior). Thus Deviator... | D |
Future directions. There is no doubt that we should encourage more open-sourcing efforts from the security community such that future works can benefit from it. However, the challenge is in which form the hardware implementations should be shared such that such sharing can be more directly useful for the community. He... | No hardware modification. This property indicates whether the defense requires changes to the hardware on the AD vehicle or adding new hardware (e.g., additional sensors). Among the defenses, Liu et al. [104] require stereo cameras, which may not be available on some AD vehicles.
| Open-source hardware implementation references. Since the reproduction cost and effort for hardware implementations are generally higher than for software-only ones, it is actually more desirable for researchers to release as many details about their hardware design as possible. This often means the circuit diagram, printed ... | For each work in Table I and Table II, we searched for their code or open implementation in the paper and also on the Internet. Overall, fewer than 20.6% (7/34) of the papers from security conferences release source code. The situation is much worse if we narrow it down to the sensor attack works, where only 1 (...
Open-source attack modeling code. Another interesting trend in recent sensor attack works is that they often model the attack capability in the digital space for large-scale evaluation. For example, Man et al. [58] modeled the camera lens flare effect caused by attacker’s light beams in digital images, Ji et al. [65] ... | B |
For a vertex gadget, these two partitions were made to correspond to its membership in one of the partition sets for a maximum cut of the original cubic graph.
If two adjacent vertices of the cubic graph belonged to different sets, then the corresponding edge gadget would make more cut edges with link intervals than if... | The problem remains NP-complete for many graph classes, such as cubic graphs [5], split graphs [6], co-bipartite graphs [6], unit disk graphs [7], total graphs [8], and interval graphs [1].
On the positive side, polynomial time algorithms are known for planar graphs [9], line graphs [8], graphs not contractible to K5... | Thus, a cut of maximum size of the given cubic graph always corresponded to a cut of maximum size of the constructed interval graph and vice versa.
As Maximum Cut on cubic graphs is NP-complete, this reduction implies that it is also NP-complete on interval graphs. | For every cubic graph $G$ (of size $n$), we construct a graph $H$ of interval count 2 such that any Maximum Cut partition of $H$ corresponds to some Maximum Cut partition of $G$ and vice versa.
Its top level composition is displayed in Figure 3. | Second, a more recent result of de Figueiredo et al. in [2], where they extend the result of the first paper by proving that Maximum Cut is NP-complete on graphs of interval count four.
Using the technique of the above work, de Figueiredo et al. prove the NP-completeness of Maximum Cut on permutation graphs as well, wh... | B |
Most approaches to surgical workflow tasks combine a 2D CNN for frame-wise feature extraction with a temporal model (e.g. LSTM, TCN or Attention) for aggregation over time.
In the following, we discuss different learning strategies which have emerged for surgical workflow tasks and how they were possibly influenced by ... | Methods in this style have been proposed for phase recognition [13, 14, 24, 85, 91], duration prediction [2, 8, 71], tracking [55] or anticipation [86]. Most notably, TeCNO [13], an MS-TCN [19] trained on ResNet features, is the popular approach for 2-stage learning and Trans-SVNet [24], a 3-stage method which trains a... | End-to-end CNN-LSTM models with AlexNet [43] or VGG [63] backbones
have been proposed for phase recognition [7, 83], anticipation [58], duration prediction [57, 83], surgery type prediction [41] and occlusion detection [5]. All methods train long sequences of several minutes. However, results are not competitive due to... | End-to-end learning [36, 37], single-sequence batches [41] and CHT [55] have been used individually in BN-based approaches. However, only methods with BN-free backbones such as AlexNet have been able to employ these [7] or similar [83] strategies in combination. This is because end-to-end training is not compatible wit... | SWA covers a wide range of common computer vision problems including temporal action segmentation (e.g. surgical phase recognition [70]), dense frame-wise regression (e.g. anticipating instrument usage [58] or procedure duration [71]) or video classification (e.g. early surgery type recognition [41]). SWA tasks are inh... | B |
Results on IJB-B and IJB-C. IJB-B consists of 21.8 K images of 1,845 subjects and 55 K frames of 7,011 videos. IJB-C, an extended version of IJB-B, contains 31.3 K images of 3,531 subjects and 117.5 K frames of 11,799 videos. 10 K / 8 M and 19 K / 15 M of positive / negative pairs in IJB-B and IJB-C were used for 1:1 v... | Does it sufficiently satisfy WDFS? We conclude that UNPG helps FR models to form WDFS by reducing the gap between $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$...
Results on MegaFace. MegaFace consists of a gallery set of 1 M images with 690 K classes and probe photos of 100 K images with 530 classes. We followed the test protocol of ArcFace[4]. We removed noisy images and measured rank-1 accuracy for the 1 M distractor after following the identification scenarios using the dev... |
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3 M images with 6 accessories, 30 illuminations, 3 expressions, and 20 poses for 400 persons. We adopted the same training and test splits used in MixFace [41]. The training split was composed of 3.8 M images with 370 persons. In p... | Results on IJB-B and IJB-C. IJB-B consists of 21.8 K images of 1,845 subjects and 55 K frames of 7,011 videos. IJB-C, an extended version of IJB-B, contains 31.3 K images of 3,531 subjects and 117.5 K frames of 11,799 videos. 10 k / 8 M and 19 K / 15 M of positive / negative pairs in IJB-B and IJB-C were used for 1:1 v... | B |
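The 1:1 verification protocols in this row report accept rates at a fixed false accept rate over positive/negative pair scores. A minimal sketch of that metric follows; the Gaussian similarity scores are illustrative stand-ins for real pair similarities.

```python
import numpy as np

def tar_at_far(pos_scores, neg_scores, far=1e-4):
    """1:1 verification: pick the threshold giving the target false accept
    rate on negative pairs, then report the true accept rate on positives."""
    thr = np.quantile(neg_scores, 1.0 - far)
    return float(np.mean(pos_scores > thr))

rng = np.random.default_rng(0)
pos = rng.normal(0.6, 0.15, 10_000)    # illustrative positive-pair similarities
neg = rng.normal(0.0, 0.15, 800_000)   # illustrative negative-pair similarities
print(f"TAR@FAR=1e-4: {tar_at_far(pos, neg):.3f}")
```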
A promising solution to generate synthetic images lies in Generative Adversarial Networks (GANs) [19]. Such an approach performs data augmentation by competitively creating new samples, i.e., a generator attempts to create synthetic images to fool the discriminator, which then tries to identify whether they are fa... | The ODIR-2019 and RIADD datasets were organized into two subsets, AMD and non-AMD images.
Preprocessing methodology and quality classification were the same as proposed by Fu et al. [37], which comprises a step that detects the retinal mask using the Hough Circle Transform and then crops it to remove the impact of the b... |
In retinal imaging, GANs have been used to create synthetic data. Li et al. [27] highlighted the importance of enhancing the quality of synthetic retinal images in their review, emphasizing that using synthetic images in training can improve performance and help mitigate overfitting. | In the field of Optical Coherence Tomography (OCT) imaging, super-resolution GANs (like ESRGAN [24]) have demonstrated their value as a tool to enhance the quality of the image and improve AMD detection [25]. Das et al. [26] proposed a quick and reliable super-resolution approach concerning OCT imagery using GANs, achi... | Bellemo et al. [28] described the possible advantages and limitations of synthetic retina image generation using GANs.
Burlina et al. [8] trained a Progressive GAN [29] on 133,821... | C |
In the following parts, we regard these crucial facial action units as facial crucial regions, abbreviated as FCRs. Fig. 1 illustrates facial crucial regions of two facial images (ID1 and ID2) from six expressions, respectively.
From Fig. 1, it is found that the FCRs are more discriminative for determining the expression cate... |
Figure 3: A simple view of the proposed model (LNLAttenNet). The part in the green dotted box shows the global weights corresponding to 16 local regions (from Patch 1 to Patch 16) obtained by LNLAttenNet, and the part under the green dotted box is a simple framework of LNLAttenNet. | Similarly, for the crucial regions including eyes, ID1 and ID2 from the category (Fear) are different, whereas ID1 from the category (Surprise) and ID2 from the category (Anger) are similar.
It illustrates that FCRs of expression images belonging to the same category may be very different but FCRs from different catego... |
In the following parts, we regard these crucial facial action units as facial crucial regions, abbreviated as FCRs. Fig. 1 illustrates facial crucial regions of two facial images (ID1 and ID2) from six expressions, respectively.
From Fig. 1, it is found that the FCRs are more discriminative for determining the expression cate... |
Figure 1: An illustration of facial crucial regions from six expressions, where two facial images (ID1 and ID2) are shown for each expression. The regions around eyes and mouths are cropped as examples of FCRs in the purple box and the green box, respectively. | D |
Both strategies produce higher ratings for larger $n$, up to $n=\Theta(k^{1/3})$, at which point the asymptotic highest rating in $k$ is $\Theta(k^{1/3})$... | Pick any pair of players whose ratings are within $\delta$ of each other and have the higher rated player beat the lower rated player. Repeat until no two players are within $\delta$ rating points or until $k$ games have been played. Note for $\delta=0$, one recovers the origi... | Each player is given some ‘rating’ value (measured in ‘points’ or simply ‘Elo’), which updates as they play games. These rating points are somewhat analogous to poker chips: when player $A$ and player $B$ play a game, they each place some of their rating points into a pot. In the case of a draw, the p... | This strategy is guaranteed to produce a player of either very high or very low rating. If it produces a player of very low rating, simply re-do the strategy picking the same sequence of pairs of players but have the opposite player win. Since game outcomes are symmetric, this will produce a player of high rating inste... | Both strategies produce higher ratings for larger $n$, up to $n=\Theta(k^{1/3})$, at which point the asymptotic highest rating in $k$ is $\Theta(k^{1/3})$... | C |
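The third cell of the Elo row describes ratings moving between players like poker chips. The standard Elo update realizes this with a logistic expected score; a minimal sketch follows, where the K-factor of 32 and the 400-point logistic scale are the conventional choices, used here as illustrative defaults.

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Standard Elo update: expected score from the logistic curve,
    then move each rating by k * (actual - expected); zero-sum, like chips."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)   # score_a: 1 win, 0.5 draw, 0 loss
    return r_a + delta, r_b - delta

# Illustrative: equal-rated players, A beats B five times in a row.
r_a = r_b = 1500.0
for _ in range(5):
    r_a, r_b = elo_update(r_a, r_b, score_a=1.0)
print(round(r_a, 1), round(r_b, 1))
```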
Completion time for each activity.
The frontend of HardVis has been developed in JavaScript and uses Vue.js [vue14], D3.js [D311], and Plotly.js [plo10], while the backend has been written in Python and uses Flask [Fla10] and Scikit-learn [PVG∗11]. More technical details are made available on GitHub [Har22]. All experi... | The remainder of this paper is organized as follows. In Section 2, we review automated methods for the detection of different data types, visually-assisted identification of outliers and rare examples, and visualization approaches for data-centric ML error analysis. Afterwards, in Section 3, we outline the analytical t... | In this paper, we developed HardVis, a VA system that uses hardly-configurable undersampling and oversampling techniques to handle instance hardness. As part of an intensively iterative process, multiple coordinated views assist users in defining an ideal distribution of data types, undersampling particular safe for re... | a working prototype of the suggested workflow in the form of our VA system, HardVis, which comprises a novel combination of multiple coordinated views to support the entire process of selectively undersampling and oversampling parts of the data set;
| In this paper, we present a VA system, called HardVis, that incorporates undersampling and oversampling techniques for the management of both instance hardness and class imbalance independent of the ML algorithm in use. It adopts validation metrics suitable for imbalanced multi-class classification problems and include... | B |
In order to get the most accurate waiting times of transactions in the pending pool, we deployed a Geth (https://github.com/ethereum/go-ethereum) full node (v.1.11.0) running on an Amazon AWS Virtual Machine located in North Virginia. The AWS node had an AMD EPYC 7R32 CPU clocked at 3.30 GHz with 8 dedicated cores, 32 ... | Many Ethereum Virtual Machine (EVM) based blockchains, such as Polygon and Fantom, have implemented the EIP-1559 patch. Despite the overall trend of EVM-based blockchains adopting EIP-1559, for completeness we also studied a non-EIP-1559 chain protocol. We replicated our analysis on the Binance Smart Chain (BSC) which ... | The difference in waiting times of transactions in the US and Singapore was very small. Across all the transactions that we captured, the difference between the receipt times in the US and Singapore was no more than 2 ns for any transactions. Interestingly, our Singapore node also never received any of the 944807 trans...
We analyzed the Ethereum and BSC data for different values of $k$. Figure 7 shows the percentage ($fr\times 100$) of transactions that are frontrunnable out of the total transactions (24.34M in Ethereum and 5.14M in BSC) for different values of $k$. | There were a total of 30.6M transactions for the given block range (15665200–15886660), out of which, 24.34M were Type-2 (EIP-1559) transactions and 6.26M were Type-1 (legacy, non-EIP-1559) transactions.
We analyzed the more common Type-2 transactions; FIRST can also be used for Type-1 transactions. | D |
To show that results from Section 4 transfer to the setting with unit-level perturbations, we can define a new distribution over agent unobservables $\tilde{F}$ that is related to the original distribution over agent unobservables $F$. When agents with unobservables sampled ... | In the context of college admissions, Assumption 4 requires that $\mathcal{X}$ is large enough that agents’ raw covariates do not “bunch” at the boundaries of $\mathcal{X}$, and that $\mathcal{C}$ is large enough that it contains cost functions that are linear offsets of the... | To show that results from Section 4 transfer to the setting with unit-level perturbations, we can define a new distribution over agent unobservables $\tilde{F}$ that is related to the original distribution over agent unobservables $F$. When agents with unobservables sampled ...
We require the following assumption to guarantee that the transformed unobservables in $\text{supp}(\tilde{F})$ can be defined on the same spaces as the original unobservables in $\text{supp}(F)$. | For any $Z_{i}\in\text{supp}(F_{Z})$, $Z_{i}\in\text{Int}(\mathcal{X})$... | C |
Relation to Multi-Hypothesis Models. Some recent works generate multiple plausible hypotheses [39, 68] and use extra information at test time to choose the best hypothesis. The techniques include training a set of models with dissimilar input gradients [68] and training multiple prediction heads that disagree on a targ... |
We tested OccamNets implemented with CNNs; however, they may be beneficial to other architectures as well. The ability to exit dynamically could be used with transformers, graph neural networks, and feed-forward networks more generally. There is some evidence already for this on natural language inference tasks, where... | Early Exit Networks. OccamNet is a multi-exit architecture designed to encourage later layers to focus on samples that earlier layers find difficult. Multi-exit networks have been studied in past work to speed up average inference time by minimizing the amount of compute needed for individual examples [10, 31, 66, 74],... | We tested OccamNets implemented with CNNs; however, they may be beneficial to other architectures as well. The ability to exit dynamically could be used with transformers, graph neural networks, and feed-forward networks more generally. There is some evidence already for this on natural language inference tasks, where ... |
Here, we propose convolutional OccamNets which have architectural inductive biases that favor using the minimal amount of network depth and the minimal number of image locations during inference for a given example. The first inductive bias is implemented using early exiting, which has been previously studied for spee... | C |
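The OccamNet cells above describe early exiting: intermediate classifier heads let easy samples leave the network at a shallow depth. A minimal PyTorch-style sketch of that inference pattern follows; the backbone, heads, and confidence threshold are illustrative and not the OccamNet architecture itself.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Illustrative multi-exit CNN: a classifier head after every stage;
    at inference, stop at the first head whose confidence clears a threshold."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        chans = [(3, 16), (16, 32), (32, 64)]
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(8))
            for ci, co in chans)
        self.heads = nn.ModuleList(
            nn.Linear(co * 8 * 8, num_classes) for _, co in chans)
        self.threshold = threshold

    def forward(self, x):
        for i, (stage, head) in enumerate(zip(self.stages, self.heads)):
            x = stage(x)
            logits = head(x.flatten(1))
            conf = logits.softmax(-1).max(-1).values
            if i == len(self.stages) - 1 or conf.item() >= self.threshold:
                return logits, i   # i = exit taken (batch size 1 assumed)

net = EarlyExitNet().eval()
with torch.no_grad():
    logits, exit_idx = net(torch.randn(1, 3, 32, 32))
print(exit_idx, logits.shape)
```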
Next, we tokenize the extracted features and treat the feature vector at each pixel as a token, resulting in tokens $\bm{o}\in\mathbb{R}^{N_{o}\times c}$... | Figure 3: Overview of the proposed CFFM++ for additionally mining global temporal contexts. Due to the large number of frames in the video, we uniformly sample frames by a fixed step. The sampled video frames go through the encoder trained by CFFM and corresponding features are generated. After tokenizing the feature m... |
Influence of the number of global temporal contextual prototypes. In our experiments, we set the number ($N_{p}$) of contextual prototypes as 100 when extracting global temporal information. Here, we study the influence of this parameter. The results... | For global temporal contexts, few VSS methods [17, 53] have exploited the contexts from the whole video. The modeling of global temporal contexts is usually achieved by a memory module in the form of a memory bank [17] or a tiny network [53] which is updated during inference. Although promising results have been achiev... | Due to the selection of video frames across the whole video and the use of GPU-based $k$-means clustering, the process of generating global temporal contextual prototypes is fast and does not significantly decrease the speed, which will be shown in our experiments (§5.2).
| D |
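This row describes building global temporal contextual prototypes by uniformly sampling frames, treating per-pixel features as tokens, and clustering them into $N_p$ prototypes. A minimal sketch follows, with scikit-learn's KMeans standing in for the GPU-based $k$-means mentioned in the cell; all shapes and the sampling step are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative feature maps: T frames, each an h x w grid of c-dim features.
T, h, w, c, step, N_p = 100, 16, 16, 64, 10, 100
video_feats = rng.normal(size=(T, h, w, c)).astype(np.float32)

sampled = video_feats[::step]           # uniform frame sampling by a fixed step
tokens = sampled.reshape(-1, c)         # one token per pixel: (N_o, c)

# Cluster tokens into N_p global temporal contextual prototypes.
prototypes = KMeans(n_clusters=N_p, n_init=4, random_state=0) \
    .fit(tokens).cluster_centers_       # (N_p, c)
print(tokens.shape, prototypes.shape)
```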
We compared Hotline against state-of-the-art deep learning frameworks such as XDL [15], FAE [10], and Intel-optimized DLRM [16]. On average, Hotline provides 3.4$\times$ speedup over the 4-GPU XDL, 1.4$\times$ over FAE, and 2.2$\times$ speedup over Intel optimized DLRM. It is noteworthy that Hotline only rearranges in... | Figure 28 shows the performance of Hotline across two synthetic models and datasets. Our experiments show that the benefits of Hotline are sustained even for larger models. As the model size increases, the sparse features increase from 102 to 204, and the performance gains decrease from 2.5x to 2.2x. This decrease can ... | Figure 2 illustrates the general structure of deep-learning-based recommendation models, which rely on two types of inputs: dense and sparse. Dense inputs are continuous features, such as the user’s age, while sparse inputs represent categorical features, such as the user’s location or videos they have liked. The neura... | Embedding Representation: Previous research has explored various methods to represent categorical features within limited memory, aiming to accommodate multiple feature values with a restricted number of embeddings. The hashing trick [56] applies a simple hash function to constrain feature embeddings. Compositional Emb... |
Table II presents the specifications of four open-sourced recommender models that were evaluated using Hotline. The models have varying numbers of sparse parameters, ranging from 5.1M for RM1 to 266M for RM3. These models consist of a top and bottom multi-layer perceptron (MLP) with a deep learning attention layer for... | B |
Following ideas dating back to Banchoff [2], we focus our attention on the flat cells of a ReLU neural network map $F:\mathbb{R}^{n}\rightarrow\mathbb{R}$. The canonical polyhedral complex $\mathcal{C}(F)$... | We say that a cell $C$ in the canonical polyhedral complex $\mathcal{C}(F)$ is flat if $F$ is constant on that cell. Note that $C\in\mathcal{C}(F)$ is flat if and only if all $1$-dimensional faces of $C$ are u...
If there is an $n$-cell of $\mathcal{C}(F)$ with ternary labeling $(-1,\ldots,-1)$, Lemma 3.12 tells us that $F$ is not PL Morse. If there is no $n$-cell with ternary labeling $(-1,\ldots,-1)$, then the o...
Unsurprisingly, the key to understanding how the topology of sublevel sets changes as one varies the threshold lies in the PL analogues of points where the gradient of the function vanishes, cf. Theorem 1. These are the so-called flat or constant cells (Definition 3.7), which map to nontransversal thresholds (cf. Lemma 2.... | The flat cells of $F$ are, by definition, the cells of $\mathcal{C}(F)$ on which $F$ is constant. Flat cells in the PL category should be viewed as the appropriate analogues of critical points in the smooth category, with the caveat that not every flat cell is critica... | D |
Fortunately, neural network methods are more universal. The same neural network can be used to represent the states or to study the dynamical processes for various systems, such as those with different dimensions or with different interactions.
We have realized the time evolutions of the energy expectation value, the universal statistics of the topological defect numbers and the kink-kink correlations in a quantum phase transition of a TFQIM by virtue of the neural networks. The results were found to satisfy theoretical predictions. Thus, it numerically ver... | The topological defects, i.e., kinks, form in the course of the quantum phase transitions due to the KZM. It predicts that the power-law scaling of the mean kink number with the quench rate is proportional to $\tau_{Q}^{-d\nu/(1+\nu z)}$...
In [15] the machine learning methods were merely applied to the unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and has richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic... | A |
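For reference, the Kibble-Zurek scaling quoted in this row, written out (here $d$ is the spatial dimension, $\nu$ and $z$ the critical exponents, and $\tau_{Q}$ the quench time):

```latex
% Kibble-Zurek mechanism: mean kink number vs. quench time \tau_Q
\langle n \rangle \propto \tau_Q^{-d\nu/(1+\nu z)}
% e.g. for the 1D transverse-field Ising model (d = 1, \nu = 1, z = 1):
% \langle n \rangle \propto \tau_Q^{-1/2}
```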
This theorem tells us that for the ideal case when $R_{e}=\infty$ and $\bm{f}=\bm{0}$, we obtain the helicity conservation. On the other hand, the natural pollution term for the helicity conservation is the effect of diff... | In this section, we shall consider helicity-conservative finite element methods defined on contractible domains. One can extend the definition of helicity to nontrivial topology and different space dimensions [3, Chapter 3]. We use the standard notation for the inner product and the norm of the
$L^{2}$... | Numerical modeling and simulation for the incompressible Navier-Stokes system is critical in a number of applications. Therefore, there have been a lot of efforts in designing numerical methods for solving the incompressible Navier-Stokes equations. It is well-known that the Navier-Stokes system has various conserved q... | The rest of the paper is organized as follows. In Section 2, we provide preliminaries, notation and helicity-conservative finite element scheme. In Section 3, we present a PINN-based algorithm that preserves the helicity. In Section 4, we present numerical results on the convergence and helicity-preserving properties o... | where $Q_{NN}$ is the $L^{2}$ projection onto the space of neural network functions chosen in the model. This in general cannot guarantee the divergence-free property of $\bm{\omega}$... | A |
SHAP (Shapley Additive Explanations) (Lundberg and Lee, 2017) is another model-agnostic framework for explaining individual predictions made by machine learning models. SHAP values are based on cooperative game theory concepts, specifically the Shapley value, which allocates a fair contribution to each feature in the pred... | Similar to SHAP, QII (Quantitative Input Influence) (Datta et al., 2016) also uses Shapley values to explain individual predictions, yet, instead of adopting the conditional approach used in SHAP, QII draws ideas from causal inference and follows an interventional approach. The QII method addresses feature correla... |
SHAP (Shapley Additive Explanations) (Lundberg and Lee, 2017) is another model-agnostic framework for explaining individual predictions made by machine learning models. SHAP values are based on cooperative game theory concepts, specifically the Shapley value, which allocates a fair contribution to each feature in the pred... | Therefore, the data science company would like to provide additional means alongside the model itself to help with the reliability question regarding individual predictions, i.e., although the model is shown to be accurate on average, is it reliable for this individual prediction as well?
Furthermore, | Validation on Regression:
In this experiment, we study the effectiveness of our RU measures in the regression tasks. Accordingly, we used RN and HS data sets and computed strongRU and weakRU values for all the query points in the uniform sample. Thereafter, we repeated the bucketization process as we did in the last ex... | A |
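The SHAP cells above rest on the Shapley value: each feature's attribution is its weighted average marginal contribution over subsets. A minimal brute-force computation for a toy 3-feature value function follows; real SHAP implementations approximate this exponential enumeration, and the toy value function is an illustrative assumption.

```python
from itertools import combinations
from math import factorial

def shapley(value, n):
    """Exact Shapley values: phi_i = sum over subsets S (i not in S) of
    |S|! (n - |S| - 1)! / n! * (value(S + {i}) - value(S))."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(frozenset(S) | {i}) - value(frozenset(S)))
    return phi

# Toy prediction as a function of which features are 'present' in the coalition.
v = lambda S: 2.0 * (0 in S) + 1.0 * (1 in S) + 0.5 * (0 in S and 2 in S)
print([round(p, 3) for p in shapley(v, 3)])  # [2.25, 1.0, 0.25], sums to v(all)
```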