| context (string, 250 to 5.39k chars) | A (string, 250 to 7.25k) | B (string, 250 to 4.32k) | C (string, 250 to 8.2k) | D (string, 250 to 11.4k) | label (4 classes) |
|---|---|---|---|---|---|
The combination of a generative adversarial network and multiple choice learning turns out to be effective in alleviating the mode collapse problem.
Also, the integration of the sparsity loss encourages our model to identify the proper number of discriminators and estimate a desirable distribution with low complexity. | On top of the adversarial losses, we introduce another loss for balanced updates of discriminators.
As there is no supervision about the specialization target of a discriminator, e.g., class labels or feature embeddings, it may be difficult to reasonably distribute real samples to expert models from the beginning. |
This work was partly supported by Samsung Electronics Co., Ltd., and the Institute of Information & communications Technology Planning & Evaluation (IITP) grants [No.2022-0-00959, (Part 2) Few-Shot Learning of Causal Inference in Vision and Language for Decision Making; No.2021-0-01343, Artificial Intelligence Graduat... | DiverseNet [18] introduces a control parameter as an input that diversifies the outputs of the network with an MCL loss.
While these works generate multiple outputs explicitly and select one of them or take their ensemble at inference time, our approach adopts a unique strategy for diversifying the mode with no additio... | The common representations of all real samples are prone to be learned in the earlier layers despite being clustered in different subsets, whereas the critical information for high-level understanding is often encoded in deeper layers.
Moreover, the number of model parameters and training time are saved significantly w... | B |
Inspired by the end-to-end (E2E) communication systems developed to address the challenges in traditional block-wise communication systems [9, 10], different types of sources have been considered in E2E semantic communication systems. Particularly, initial research on semantic communication systems for text information... | In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and mi... |
Recently, there have also been investigations into semantic communications for other transmission contents, such as image and speech. A DL-enabled semantic communication system for image transmission, named JSCC, has been developed in [14]. Based on JSCC, an image transmission system, integrating channel output feedback, can i... | Inspired by the end-to-end (E2E) communication systems developed to address the challenges in traditional block-wise communication systems [9, 10], different types of sources have been considered in E2E semantic communication systems. Particularly, initial research on semantic communication systems for text information... |
In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing the text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and m... | B |
Dataset: We conduct our experiments on the popular public dataset S3DIS [45]. S3DIS covers 6 areas of the entire floor from 3 different buildings with a total of 215 million points and covers over $6000\,m^{2}$. The dataset is annotated with 13 classes. We ... | Existing 3D WSSS methods utilize different kinds of weak supervision. [10] utilize dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. [11] proposes to generate pseudo point-level labels using a 3D class activation map [12] from subcloud-level anno... | The existing 3D WSSS methods formulate the problem in different directions. [10] utilize dense 2D segmentation labels to supervise the training in 3D by projecting the 3D predictions onto the corresponding 2D labels. However, each 3D sample is projected to 2D in several views and each projected 2D image needs pixel-lev... |
In this paper, we propose a weakly supervised point cloud segmentation method with only 10% or 1% of the points being labeled. We develop cross- and intra-sample feature reallocating modules to densely propagate supervision from labeled points to unlabeled points. |
Weak labels: We follow [13] to annotate only 10% of the points. We first sample 4% of the points from the original data as the network inputs. Then, we randomly label 10% of points in each class for the sampled input point clouds. The final predictions will be back-projected to the original point clouds. Therefore, o... | D |
Then the base detection branch is used to generate 2D/3D predictions, with depth excluded, from the image features.
The 2D/3D predictions are utilized by the holistic geometric representation learning branch to generate geometric features via the proposed holistic geometric formula implemented in a network module. | Third, the holistic geometric representation learning branch models the geometry relationships from these 2D/3D predictions to obtain a holistic geometric formula, which is implemented as a network module for geometry-aware feature learning (Sec. 3.3). Finally, we utilize the geometric features for depth estimation (Se... | Our base network structure for 2D detection, 3D dimension, and orientation prediction is derived from the anchor-free 2D object detection [50, 42] with six output branches. Each branch takes the backbone features as input and uses 3x3 convolution, ReLU, and 1x1 convolution for prediction.
In the base detection branch, ... | The geometric features are concatenated with the image features from the backbone for depth estimation.
Based on the depth and other 3D predictions from the base detection branch, the detector outputs the 3D object detection results. The symbol ⓒ indicates a concatenation operation. | Then the base detection branch is used to generate 2D/3D predictions, with depth excluded, from the image features.
The 2D/3D predictions are utilized by the holistic geometric representation learning branch to generate geometric features via the proposed holistic geometric formula implemented in a network module. | C |
We argue that GCN makes limited contributions when dealing with text instances that are spatially close enough to each other.
Such text instances are more common in multi-oriented texts, because most text segments in these texts do not have many spatial changes and have relatively small character and word spaces. Moreo... | Note that, in FPNS (GGTR), the GGTR map is used to rectify false positive/negative text segments, as it is the final visual-relational representation of applying LAT and FD.
The GGTR map is also indirectly involved in the process of FPNS (Node), as it rectifies the text segments before FPNS (Node), leading to the chang... | In this work, to realize the proposed visual-relational feature reasoning and capture the long-range dependency between text segments, relational features obtained from GCNs are fused with the visual features obtained from the FPN layers.
To address the dimensional difference between the relational and visual features,... | FPNS (Node) rectifies false detections by measuring attributes of the text segments in local graph structures and upgrades GCNs to a multiple-task network rather than only linkage reasoning, modifications which support each other. Both node classification and link prediction utilize the same relational features and boo... | In this case, the dense overlapping design of the text segments and our shape-approximation strategy are sufficient to ensure connectivity of text segments. This further demonstrates that the proposed FPNS and SAp strategies are able to boost the performance of bottom-up methods with visual reasoning cues when GCN is r... | D |
$U_{b}^{W}(\mathcal{V},\mathcal{E}) = \big(U_{a}^{W}(\mathcal{V},\mathcal{E})\big)^{\mathsf{c}}$ ... | $(v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,A_{\textsc{tr}}^{W}\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2}) \iff \cdots$ ... | denoted by ChrelO in Coq and defined by $\big((v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,C_{\mathrm{H}}^{\mathcal{O}}\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2}) \iff v_{1}^{\sharp}=v_{2}^{\flat}\big)$ ... | $(v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,A_{\textsc{tr}}^{W}\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2}) \iff \cdots$ ... | $(v_{1}^{\flat},v_{1}^{\sharp},o_{1})\,A_{\textsc{tr}}^{W}\,(v_{2}^{\flat},v_{2}^{\sharp},o_{2}) \iff \cdots$ ... | B
A hash table is an effective method for collecting the statistics of IP addresses Sanders2015HS. It uses a hash function to compute a hash code that indexes an array of buckets holding the statistical results. Ideally, the hash function assigns each IP address key to a unique bucket. Unfortunately, the hash function can generate... | In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu... |
A number of statistical algorithms for solving this problem have been studied in the last few decades Jing05DLBS ; Kapur2013SA ; Klein2013Sorting ; Fredman2014Sorting ; Agapitos2016RSA . A classic divide-and-conquer strategy has been proposed in which IP addresses are first divided into multiple subsets. Then, each su... |
The hardware architecture of modern processors usually consists of more than two independent central processing units (CPUs) or graphics processing units (GPUs). Parallel software platforms can be implemented using high-level programming frameworks for specific hardware architectures Chen2009SA . The Compute Unified D... |
The statistics collection algorithm should be stable, effective, and efficient for large-scale records. To overcome the disadvantages of general statistics collection methods, a number of parallel techniques have been developed for large-scale records by optimizing the efficiency and complexity. For example, these alg... | D |
$\mathcal{A}=\left[\begin{array}{ccc} I & 0 & 0 \\ C_{1}A_{1}^{-1} & \cdots & \cdots \\ 0 & 0 & I \end{array}\right].$ ... | $\mathcal{P}_{T_{1}}^{-1}\mathcal{A}=(\mathcal{L}\mathcal{D})^{-1}\mathcal{A}=\mathcal{U}$, which is the block upper triangular m... | or 4 (block-diagonal preconditioners). See [31, 25] for the proof. In contrast, the preconditioners based on the nested Schur complement satisfy polynomials whose degrees may be as high as $n$ (block triangular) or $n!$ (block-diagonal). Therefore, an additive Schur complement based preconditioner i... | In this study, we explore two methodologies for designing preconditioners tailored for $3$-by-$3$ block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ... | We study both block-triangular and block-diagonal preconditioners for the system matrix (1). For block-triangular preconditioners, we focus on a lower triangular type with left preconditioning because an upper triangular one with right preconditioning can be discussed in a similar way [4, 25]. We consider the following... | D
Further, the silos may not want to share data with one another directly due to different policy restrictions, regulations, privacy/security concerns, or even bandwidth limitations in the communication network.
The paradigm of training a global model over such feature-partitioned data is called vertical federated learni... | Several works have focused on distributed training with vertical partitions in a federated setting. The authors in works (Chen et al., 2020; Hardy et al., 2017; Yang et al., 2019b; Wu et al., 2020; Feng and Yu, 2020; Kang et al., 2020) propose vertical federated learning algorithms for single-tier communication network... | A coordinate descent algorithm with gradient steps was first proposed in (Tseng and Yun, 2009), where a fully centralized system is considered.
In the decentralized context, parallel coordinate descent methods have been proposed in (Richtárik and Takáč, 2016; Mahajan et al., 2017; Kang et al., 2014). These algorithms t... | In recent years, in distributed learning, where data is horizontally partitioned across multiple clients, periodic averaging-based methods like federated averaging and its variations have been extensively studied. Federated averaging methods have been studied in the context of convex objectives (Stich, 2019; Wang et al.... | A common approach for model training over vertically partitioned data builds on parallel coordinate gradient descent-type algorithms (Richtárik and Takáč, 2016; Mahajan et al., 2017; Kang et al., 2014).
Here, each silo executes independent local training iterations to optimize a global model along its subset of the coo... | D |
The approach of employing pseudospectra localizations for H- and Z-eigenvalues brazell_solving_2013 ; mo2019z ; qi2005eigenvalues , aimed at identifying more positive definite tensors, has been investigated by many researchers KostiPseudospectra2016 ; LiLiuWei2019 . However, to the best of the authors’ knowledge, there... | In this section, we delve into the study of pseudospectra for third-order tensors within the tensor-tensor multiplication framework. Specifically, we explore different formulations of pseudospectra for third-order tensors in Subsection 4.1. Subsection 4.2 is dedicated to the examination of various properties of pseudos... | The generalization of eigenvalues from matrices to tensors has been studied through the implementation of tensor-tensor multiplication. Significant attention and extensive research have been devoted to this field, resulting in a substantial body of work focused on their variants, applications, and theoretical analysis.... |
The study of T-eigenvalues has emerged as a prominent research area within the field of tensor analysis. Motivated by the aforementioned research, in this paper we turn our attention to the perturbation analysis of third-order tensors under the novel tensor-tensor multiplication (3), encompassing both the extension of ... | The multiplication of tensors, a fundamental and crucial operation analogous to matrix multiplication, has garnered considerable attention across various scientific disciplines.
In 2008, Kilmer et al. Kilmer2008third introduced a novel form of tensor multiplication that enables the representation of a third-order tens... | D |
In this encoder-decoder based backbone, we replace all the vanilla convolutions with the partial convolution layers to better capture information from irregular boundaries, since partial convolutions are conditioned only on uncorrupted pixels. Besides, skip connections are utilized to produce more sophisticated predic... |
In this paper, we propose a novel two-stream network which casts image inpainting into two collaborative subtasks, i.e., structure-constrained texture synthesis and texture-guided structure reconstruction. In this way, the two parallel-coupled streams are individually modeled and combined to complement each other. Cor... | Bi-directional Gated Feature Fusion (Bi-GFF). This module is proposed to further combine the decoded texture and structure features. It exchanges messages between the two kinds of information, where soft gating is exploited to control the rate. Due to this integration operation, the feature is refined and simultaneousl... |
Figure 2: Overview of the proposed method (best viewed in color). Generator: Image inpainting is cast into two subtasks, i.e., structure-constrained texture synthesis (left, blue) and texture-guided structure reconstruction (right, red), and the two parallel-coupled streams borrow encoded deep features from each other... |
We design a Bi-directional Gated Feature Fusion (Bi-GFF) module to share and combine information between the structure and texture features for consistency enhancement and a Contextual Feature Aggregation (CFA) module to yield more vivid details by modeling long-term spatial dependency. | B |
We obtained explicit formulas for the average decoding error probabilities of the ensemble under these three decoding principles and computed the error exponents. We also compared the results with the random $[n,k]_{q}$ code ensemb... |
For unambiguous decoding, we computed the variance of the decoding error probability of the ensemble and the error exponent of the variance, from which we derived a strong concentration result, that is, under general conditions, the ratio of the decoding error probability of a random code in the ensemble and the avera... |
First recall that the error exponents of the average decoding error probability of the ensemble $\mathcal{R}_{(1-R)n,n}$ over the erasure channel under the three decoding principles are defined by |
We obtained explicit formulas for the average decoding error probabilities of the ensemble under these three decoding principles and computed the error exponents. We also compared the results with the random $[n,k]_{q}$ code ensemb... | We establish a strong concentration result for the unsuccessful decoding probability of a random code in the ensemble $\mathcal{R}_{(1-R)n,n}$ towards the mean under unambiguous decoding.
| A |
Assuming that this problem was solved, a generated subgoal still remains to be assessed. The exact evaluation may, in general, require exhaustive search or access to an oracle (in which case the original problem is essentially solved). Consequently, it is unlikely that a simple planner (e.g., one unrolling independent... |
The deep learning revolution has brought spectacular advancements in pattern recognition techniques and models. Given the hard nature of reasoning problems, these are natural candidates to provide search heuristics [4]. Indeed, such a blend can produce impressive results [43, 44, 36, 1]. These approaches seek solution... |
Assuming that this problem was solved, a generated subgoal still remains to be assessed. The exact evaluation may, in general, require exhaustive search or access to an oracle (in which case the original problem is essentially solved). Consequently, it is unlikely that a simple planner (e.g., one unrolling independent... | More generally, concepts related to goals and subgoals percolated to reinforcement learning early on, leading, among others, to prominent ideas like hindsight [22], hierarchical learning [47, 7] or the Horde architecture [46]. Recently, with the advent of deep reinforcement learning, these ideas have been resurfacing a... | Reasoning is often regarded as a defining property of advanced intelligence [39, 18]. When confronted with a complicated task, humans’ thinking process often moves from one idea to a related idea, and the progress is made through milestones, or subgoals, rather than through atomic actions that are necessary to transiti... | C |
Among the three general datasets, Chinese Resume is mainly collected from resume materials. The named entities in it are mostly people’s names, positions, and company names, which all have strong logical patterns. OntoNotes mainly selects its corpus from official news reports, whose grammar is formal and vocabulary is quite commo... |
our MFE-NER is a lightweight Named Entity Recognition method fusing the glyph and phonetic feature embeddings for Chinese character substitution, which is complementary to pre-trained language models in the representation of Chinese characters. As shown in Figure 2, MFE-NER introduces an extra module, fusing glyph emb... | These years, large-scale pre-trained language models based on Transformer [Vaswani et al., 2017] have shown their superiority in Natural Language Processing tasks. The self-attention mechanism can better capture the long-distance dependency in sentences and the parallel design is suitable for mass computing. Bidirectio... | So far, pre-trained models have been widely used in the semantic domain. One typical method is Word2Vec [Mikolov et al., 2013], which starts to use static embedding vectors to represent Chinese characters in the semantic domain. Now, we have more options. BERT [Kenton and Toutanova, 2019] has its Chinese version and ca... | The backbone of the NER model used in our work is mainly BiLSTM + CRF. The BiLSTM+CRF model is stable and has been verified in many research projects. Meanwhile, as our method is focused on providing a complementary lightweight module for current named entity recognition models, we select two pre-trained language model... | D |
In this work, we introduce neural étendue expanders as an optical element that expands the étendue of existing holographic displays without sacrificing displayed hologram fidelity. Neural étendue expanders are learned from a natural image dataset and are jointly optimized with the SLM’s wavefront modulation. Akin to a... | As the first learned optics for étendue expansion, we achieve an étendue expansion factor of 64× with over 29 dB PSNR reconstructions, an order of magnitude improvement over existing approaches.
This means that an expansion factor of 64× combined with an 8K-pixel SLM can enable high-fidelity, ultra-wide-angle ho... |
While our experimental prototype was built for a HOLOEYE-PLUTO, which possesses a 1K-pixel resolution, corresponding to a 1 mm eyebox with 75.6° horizontal and vertical FOV, the improvement in hologram fidelity persists across resolutions. Irres... | The experimental findings on the display prototype verify that conventional non-étendue-expanded holography can produce high-fidelity content but at the cost of a small FOV. Increasing the étendue via a binary random expander will increase the FOV but at the cost of low image fidelity, even at the design wavelength of ... | The uniform random expander is constructed by assigning each pixel a phase that is uniformly randomly chosen within $[0,2\pi]$. To ensure at least $2\pi$ phase is available for all wavelengths, the $[0,2\pi]$ phase range is defined for 660 nm... | A
In addition to combining loss functions from different tasks, researchers also use additional adaptive loss functions $\mathcal{L}_{adapt}$ to enhance MTL models.
In (Li and Caragea, 2019), the al... |
Different from parallel feature fusion that combines features of different tasks at the same depth, hierarchical feature fusion can explicitly combine features at different depths and allow different processing for different features. To solve the Twitter demographic classification problem, Vijayaraghavan et al.... | The parallel architecture shares the bulk of the model among multiple tasks while each task has its own task-specific output layer.
The hierarchical architecture models the hierarchical relationships between tasks. Such an architecture can hierarchically combine features from different tasks, take the output of one task a... | Chen et al. (2019) penalize the similarity between attention vectors from two tasks and the Euclidean distance between the resulting feature representations to enforce the models to focus on different task-specific features.
To learn domain-invariant features, Xing |
Different from learning shared features implicitly by sharing model parameters in the trunk, MTL models can actively combine features from different tasks, including shared and task-specific features, to form representations for each task. As shown in Fig. 1(b), such models can use a globally shared encoder to produce... | C |
IEEE recommends using the distribution from the TeX User Group at http://www.tug.org. You can join TUG and obtain a DVD distribution or download for free from the links provided on their website: http://www.tug.org/texlive/. The DVD includes distributions for Windows, Mac OS X and Linux operating systems. | The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates, however, some publications have... | Be sure to use the \IEEEmembership command to identify IEEE membership status.
Please see the “IEEEtran_HOWTO.pdf” for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This w... | The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... |
Welcome to the updated and simplified documentation to using the IEEEtran LaTeX class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy to follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer bac... | A |
Suppose we are given a triplet $G,\ell,\mathcal{C}$; a vertex $v\in V(G)$ is said to be unlabeled if there does not exist $u_{i}\in U$ such that $\ell(u_{i})\ldots$ | $\ell(u_{i})=\ell^{\prime}(u_{i})$. Similarly, two coloring functions $\mathcal{C},\mathcal{C}^{\prime}\ldots$ | $G,\ell^{\prime},\mathcal{C}\models\phi[x_{i}\setminus u_{i}]\ldots$ | labeling function $\ell^{\prime\prime}:U\to V(G)$ where $\ell^{\prime\prime}(u_{i})=f(\ell^{\prime}(u_{i}))=f(v)\ldots$ | We say that two labeling functions $\ell,\ell^{\prime}$ agree on a constant $u_{i}$ if either they are both undefined on $u_{i}\ldots$ | D
Prior extensions of public goods provision to environments with endogenous linking include Galeotti and Goyal (2010), which furthers the specialization result of Bramoullé et al. (2007). These papers emphasize the prevalence of core-periphery architectures as equilibrium networks, but in a setting where players choose ... |
In another relevant study by Rand et al. (2011), the authors conducted an experiment to gauge the effects of endogenous networks on cooperation in a repeated prisoner’s dilemma. By varying the opportunity for network updates, they showed that subjects are able to take advantage of their ability to change social ties i... |
While we have demonstrated the effectiveness of our modeling paradigm and established proof-of-concept, the real power of our methodology is in its capacity to explain far more sophisticated patterns of learning and behavior. In particular, using laboratory experiments to collect small panels of network data opens the... |
There has been prior work examining both the extension of public goods to static (exogenous) networks, and the provision of public goods on endogenous networks. In particular, Bramoullé et al. (2007) launched research into this environment, by showing that given a network shape, specialized Nash equilibria (in which a... |
The estimation in column (3) of Table 1 shows the effects of the treatment on subjects’ abilities to coordinate on efficient structure. By efficient structure, we refer to a network topology that satisfies the conditions required for efficiency by Proposition 2, without necessarily satisfying the requirement of full c... | A |
The purpose of SISR is to enlarge a smaller size image into a larger one and to keep it as accurate as possible. Therefore, enlargement operation, also called upsampling, is an important step in SISR. The current upsampling mechanisms can be divided into four types: pre-upsampling SR, post-upsampling SR, progressive up... |
Due to the particularity of the SISR task, it is difficult to construct a large-scale paired real SR dataset. Therefore, researchers often apply degradation patterns on the aforementioned datasets to obtain corresponding degraded images to construct paired datasets. However, images in the real world are easily disturb... |
Many SISR methods have been studied long before, such as bicubic interpolation and Lanczos resampling (Duchon, 1979), which are based on interpolation. However, SISR is an inherently ill-posed problem, and multiple HR images corresponding to the same LR image always exist. To solve this issue, some numerical methods (... |
Interpolation is the most widely used upsampling method. The current mainstream of interpolation methods includes Nearest-neighbor Interpolation, Bilinear Interpolation, and Bicubic Interpolation. Being highly interpretable and easy to implement, these methods are still widely used today. Among them, Nearest-neighbor ... | et al., 2019), Zhao et al. proposed a deep Channel Splitting Network (CSN) to ease the representational burden of deep models and further improve the SR performance of MR images. In (Peng
et al., 2020), Peng et al. introduced a Spatially-Aware Interpolation Network (SAINT) for medical slice synthesis to alleviate the m... | C |
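The interpolation-based upsampling mechanisms discussed above can be made concrete with a short sketch. The NumPy functions below are an illustrative implementation (not tied to any particular SR toolbox) of nearest-neighbor and bilinear upsampling for a 2-D grayscale image; the align-corners coordinate mapping in the bilinear variant is an assumption chosen for simplicity.

```python
import numpy as np

def nearest_upsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbor interpolation: each output pixel copies the
    closest input pixel (fast, but produces blocky edges)."""
    h, w = img.shape[:2]
    rows = np.arange(h * scale) // scale
    cols = np.arange(w * scale) // scale
    return img[rows][:, cols]

def bilinear_upsample(img: np.ndarray, scale: int) -> np.ndarray:
    """Bilinear interpolation: each output pixel is a distance-weighted
    average of its four nearest input pixels (smoother, slightly blurry)."""
    h, w = img.shape[:2]
    out_h, out_w = h * scale, w * scale
    # Map output coordinates back into input coordinate space (align-corners style).
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Bicubic interpolation follows the same coordinate-mapping pattern but blends a 4×4 neighbourhood with cubic weights, which is why it is smoother and more expensive than the two variants shown.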
The downsampling operation can be implemented in several ways. If the downsampling kernel is known, then the best approach is to simply backpropagate through that kernel (assuming it is differentiable). Otherwise, we can create a trainable downsampling module representing the kernel and optimize its weights in an end-t... | In Figure 6, we demonstrate the downsampling effect of two non-standard kernels: i) delta function (leading to aliasing) and ii) diagonal Gaussian kernel. Different types of artifacts can be observed depending on the kernel. During training, Neural Knitwork blindly approximates the downsampling kernel based on the imag... |
Figure 7 contains results for a diagonal kernel and an upscaling factor of 4, for the proposed Neural Knitwork, the conventional coordinate network, and SinGAN, another image super-resolution method based on internal learning. The results show that SinGAN has the lowest performance in terms of PSNR, but it also creates distinguishable artif... | Figure 7: Comparison of blind image super-resolution for a diagonal Gaussian kernel and upscaling factor of 4x. Neural Knitwork can outperform the conventional coordinate network and achieve higher PSNR. SinGAN, while generating a considerable amount of high-frequency details, results in significant artifacts.
| The downsampling operation can be implemented in several ways. If the downsampling kernel is known, then the best approach is to simply backpropagate through that kernel (assuming it is differentiable). Otherwise, we can create a trainable downsampling module representing the kernel and optimize its weights in an end-t... | A |
Augment $\mathcal{D}\leftarrow\mathcal{D}\cup\{x_{t},\Phi_{t}(1)\}$
|
The present paper is the first work we are aware of that specifically applies TS to apple tasting, but previous work has considered its use for logistic bandits. For logistic contextual bandits, the implementation of exact TS (i.e. the policy that draws its sample from the exact posterior) is infeasible due to the intract... | The regret results of Section 2 are based on exact sampling from the posterior. The PG-TS algorithm necessarily samples from an approximation of the posterior, to maintain a reasonable computational overhead. Recent work of Phan et al., (2019) has identified conditions under which sampling from an approximate poste...
In this section we address the possible concern around use of an approximate sampler, by demonstrating that PG-TS does not meet the sufficient conditions identified by Phan et al., (2019), and further adapt the results of May et al., (2012) to LCAT to show that PG-TS obtains an asymptotically sublinear regret. |
The logistic classification model is such that the posterior on $\theta^{*}$, $\pi_{t}$, for any $t\geq 1$ is intractable. This renders the implementation of TS via sampl... | B
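Since the logistic posterior is intractable, practical Thompson sampling draws from an approximation. As an illustration only (this is a generic Laplace approximation, not the Pólya-Gamma sampler used by PG-TS), one can find the MAP weight vector and sample from a Gaussian centred there; the prior variance, step size, and iteration count below are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_posterior_sample(X, y, n_iter=500, lr=0.01, prior_var=10.0):
    """Approximate Thompson sampling draw for logistic regression:
    find the MAP estimate by gradient ascent on the log posterior, then
    sample from a Gaussian centred there with covariance equal to the
    inverse Hessian of the negative log posterior (Laplace approximation)."""
    d = X.shape[1]
    theta = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ theta)
        grad = X.T @ (y - p) - theta / prior_var  # log-likelihood + Gaussian prior
        theta += lr * grad
    p = sigmoid(X @ theta)
    # Hessian of the negative log posterior at the MAP point.
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / prior_var
    cov = np.linalg.inv(H)
    return rng.multivariate_normal(theta, cov)
```

Each bandit round would redraw a sample from this approximate posterior and act greedily with respect to it.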
Table 5: Macro F1-score achieved on the IBM2015 dataset. We report the uniform strategy (U), the priority attention-based strategy (Att), and the priority loss gain strategy (LG) with sampled memory $\bar{M}$ of size 10. Best results are in bold, second-best results are underlined. Standard d...
One problem is how to define an appropriate activation threshold $\delta$, i.e., the minimum value above which a memory slot is considered to have been used by the model. While in some cases, like ToS, a $\delta=0.5$ threshold could be meaningful enough, in other cases, like for the IBM2015 dataset, we pr...
Figure 3: MemDistilBERT analysis on IBM2015, 1-Topic. (a) P@K for increasing $K$ values and $\delta=0.25$; (b) P@3 for increasing $\delta$ values. Metrics for sampling-based models are averaged across three distinct inferences on the test set. | Activation threshold $\delta$ is set to 0.5.
Best results are in bold, second-best results are underlined for MRR. Columns C to P@3 are not directly comparable among models due to their different memory usage. Standard deviation is reported in subscript format. | Differently from unfairness detection, the large memory size hinders selective memory lookup operations. Additionally, claim detection on the IBM2015 dataset is a challenging task where existing solutions reach comparably low performance [66, 67]. For these reasons, following previous work on the same dataset, we focus... | B |
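The P@K figures reported above can be made concrete with a short sketch. The helper below is illustrative only; `ranked_slots` and `relevant` are hypothetical names for a model's ranked list of memory slots and the set of slots judged relevant under the chosen activation threshold.

```python
def precision_at_k(ranked_slots, relevant, k):
    """P@K: fraction of the top-K retrieved memory slots that are relevant.
    `ranked_slots` is ordered best-first; `relevant` is a set of slot ids."""
    top = ranked_slots[:k]
    return sum(1 for s in top if s in relevant) / k
```

Sweeping `k` (panel a) or recomputing `relevant` for increasing activation thresholds (panel b) reproduces the two analyses described for Figure 3.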
In this section, we propose some informal properties that the crowdedness level in a big city should satisfy to robustly withstand critical events.
Let $c$ represent a crowdedness threshold for all the areas of the city. Note, however, that the framework can accommodate, e.g., area-specific threshold value...
One possible stakeholder of our proposed framework is a telecommunications company, which would like to have a predictive alert system to ensure that their mobile network does not get overcrowded. The following three properties could be of interest to the telecommunications company: | Especially in high-dimensional, complex models, these requirements or properties relevant for decision-making are typically highly nonlinear functions of the random variables, and one is interested in their predictive distribution. Their verification ex-ante as well as their evaluation ex-post (as part of the posterior... | In this section, we propose some informal properties that the crowdedness level in a big city should satisfy to robustly withstand critical events.
Let $c$ represent a crowdedness threshold for all the areas of the city. Note, however, that the framework can accommodate, e.g., area-specific threshold value... | In addition to the previous requirements that are related to general aspects of the mobile network,
for the evaluation of the city in terms of safety and quality of life, it is interesting to look at how the city is performing with respect to the reachability of some key points of interest. For example, in an emergency... | A |
If the explicit representation of $X\subset\mathbb{R}^{m}$
is replaced by oracle access to the distance function $\operatorname{dist}:X\times X\to\mathbb{R}_{+}$... | $\forall x\in X,\,c\in\mathcal{H},\quad\|x-c\|=\|f(x)-f(c)\|$.
| Suppose that a reweighted subset $S\subseteq X$ satisfies that
for every $\varphi:X\to\mathbb{R}^{n+1}$ | such that for all $x,y\in X$, $\langle\varphi(x),\varphi(y)\rangle=K(x,y)$, for all $C\subseteq\mathbb{R}^{n+1}$... | Suppose $f$ is the asserted map from Lemma 3.2.
Hence, $f(\varphi(X))=\{f(\varphi(x)):x\in X\}$ is a subset of $\mathbb{R}^{n+1}$... | D
Combining the observations made in Remark 1 and in Remark 2, it is not surprising that, given a theory $\mathcal{T}$ in full First Order Logic over a signature $L$, a categorical semantics of $\mathcal{T}$ in a first order doctrine $P$ is an object of 𝐅𝐎𝐃(Prp𝒯... | The syntactic doctrine associated with the $(\otimes,\mathbf{1},\multimap)$-fragment of ILL is a multiplicative linear doctrine;
the one associated with the $(\otimes,\mathbf{1},\multimap,\oplus,\mathbf{0},\mathbin{\&},\top)$...
We first introduce PLLR, a calculus extending the $(\otimes,\mathbf{1})$-fragment of Linear Logic by a family of modalities $\mathsf{!}_{r}$, for $r\in|R|$. | Hence, $\flat_{\infty}$ models the usual bang modality of Linear Logic [Gir87].
Syntactic doctrines on the $(\otimes,\mathbf{1},\mathsf{!})$-fragment of Linear Logic are $\{\infty\}$-graded. | We shall call their linear counterparts primary linear doctrines. They model the $(\otimes,\mathbf{1})$-fragment of Linear Logic and provide the minimal structure needed to describe equality in a substructural setting.
The following definition is carved out from Seely’s definition of li... | D |
Theorem 4.6 shows that the ForestSim score between two nodes can be represented in terms of the diagonal elements of the forest matrix $\bm{W}$. Thus, the ForestSim score for all pairs of nodes in a graph with $n$ vertices can be exactly calculated in $O(n^{3})$...
Trivially, ForestSim obeys Range (P1) and Symmetry (P2). It has been proved that Transitive similarity (P4) is implied by Triangle inequality (P5) [5]. Thus, we only need to prove that ForestSim satisfies Automorphism conformation (P3) and Triangle inequality (P5). | Note that the two nodes $3$ and $4$ are symmetric in $G_{0}$, and we have $s(3)=s(4)$. Thus, for two similar nodes $u$ and $v$, $s(u)$ and $s(v)$... | In this section, we first define a new role similarity measure, namely ForestSim, based on spanning rooted forests. We then show that the ForestSim score can be expressed in terms of the diagonal elements in the forest matrix and prove that ForestSim is an admissible role similarity metric. After that, we propose Fores... | Structural node similarity is a fundamental issue in graph analysis [1, 2, 3, 4, 5, 6, 7, 8, 9] and is adopted in a wide scope of applications. For example, node similarity has been used for role discovery [10, 11, 12], protein function prediction [13, 14], anomaly detection [15, 16], recommender system [17, 18, 19, 20... | A
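The forest-matrix computation underlying the $O(n^{3})$ bound can be sketched directly. The exact ForestSim formula is the one in the source's Theorem 4.6; the `diag_similarity` score below (ratio of smaller to larger diagonal entry, so symmetric nodes score 1) is a hypothetical stand-in used purely for illustration.

```python
import numpy as np

def forest_matrix(adj: np.ndarray) -> np.ndarray:
    """Forest matrix W = (I + L)^{-1}, where L is the graph Laplacian.
    Its entries count weighted spanning rooted forests; the dense matrix
    inverse costs O(n^3), matching the exact-algorithm bound."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.inv(np.eye(adj.shape[0]) + L)

def diag_similarity(adj: np.ndarray, u: int, v: int) -> float:
    """Hypothetical diagonal-based score for illustration only: compares the
    diagonal forest-matrix entries of two nodes, as the source's score is
    also a function of these diagonal elements."""
    w = np.diag(forest_matrix(adj))
    return min(w[u], w[v]) / max(w[u], w[v])
```

A useful sanity check: since $L\mathbf{1}=0$, the forest matrix is row-stochastic, and automorphic nodes (such as the two endpoints of a path) receive identical diagonal entries.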
When it comes to sentiment classification performance, the results in Table 4 clearly demonstrate the superiority of our models over significant baselines, particularly in the case of the LSAE model.
The experimental results are as expected and show the proficiency of LSA. | Another limitation is that LSA is a quite simple mechanism and relies on relatively basic aspect features to construct sentiment aggregation windows, which may not be as competitive as state-of-the-art methods that employ more complex features.
Besides, the current sentiment aggregation window is intuitive but may not ... | We utilize LSA to classify aspect sentiments and aggregate the sentiment clusters.
The cluster prediction performance in Table 3 shows that our models consistently outperform the baseline models on all datasets. The performance of LSA is dependent on the base model. | One of the primary concerns associated with LSA is its occasional inability to outperform certain baselines based on the BERT model. We attribute this observation to two main reasons.
Firstly, LSA is a quite simple mechanism and relies on relatively basic aspect features to construct sentiment aggregation windows, whic... | In this section, we propose a local sentiment aggregation method for sentiment cluster prediction, which is based on the local sentiment coherency pattern.
We first introduce the implementation of local sentiment aggregation, which is based on sentiment window aggregation. | C |
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L}:\mathbb{R}^{N}\rightarrow\mathcal{P}^{p}(\mathbb{S})$... | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the locations of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... | In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10% of the data). The horizontal axis depic...
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me... | gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial
guess. The main ... | A |
Left: Current quantum hardware has much larger error rates (around $10^{-3}$) than classical CPUs/GPUs. Right: Due to the errors, PQC (QNN) models suffer from severe accuracy drops. Different devices have various error magnitudes, leading to distinct accur... | QuantumNAT comprises a three-stage pipeline. The first step, post-measurement normalization, normalizes the measurement outcomes on each quantum bit (qubit) across data samples, thus removing the quantum error-induced distribution shift. Furthermore, we inject noise into the PQC training process by performing error gate i... | Figure 3. QuantumNAT Overview. (1) Post-measurement normalization matches the distribution of measurement results between noise-free simulation and real QC. (2) Based on realistic noise models, noise-injection inserts quantum error gates into the training process to increase the classification margin between classes. (3)... | As in Figure 5, during training, for each QNN gate, we sample error gates based on $\mathcal{E}$ and insert them after the original gate. A new set of error gates is sampled for each training step. In reality, the QNN is compiled to the basis gate set of the quantum hardware (e.g., X, CNOT, RZ, CNOT, and ID)... | For measurement, we measure the expectation values on the Pauli-Z basis and obtain a value in [-1, 1] from each qubit. The measurement outcome goes through post-measurement normalization and quantization and is used as rotation angles for RY gates in the next block’s encoder.
After the last block, for two-class classification, we ... | A
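The post-measurement normalization step described above can be sketched as a per-qubit standardization of measured Pauli-Z expectation values over a batch. This is a minimal sketch of the idea of removing a batch-level distribution shift, not QuantumNAT's exact recipe; the epsilon guard is an assumption added for numerical safety.

```python
import numpy as np

def post_measurement_normalize(meas: np.ndarray) -> np.ndarray:
    """Normalize measurement outcomes per qubit across data samples.

    `meas` has shape (n_samples, n_qubits), each entry a Pauli-Z expectation
    in [-1, 1]. Subtracting the per-qubit batch mean and dividing by the
    per-qubit batch std aligns the noisy-hardware distribution with the
    noise-free simulation distribution (illustrative sketch)."""
    mean = meas.mean(axis=0, keepdims=True)        # per-qubit mean over samples
    std = meas.std(axis=0, keepdims=True) + 1e-8   # epsilon guards zero-variance qubits
    return (meas - mean) / std
```

After this step each qubit's outcomes have zero mean and unit variance across the batch, so a constant offset or scale introduced by device noise no longer shifts the classifier's inputs.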
However, gallego2018unifying and other event-based data association methods zhu2017event ; gallego2019focus ; peng2022globally show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a... | For the object tracking task, since the event data is associated between two adjacent frames, we employ an evaluation protocol of frame-wise tracking to evaluate the performance of the proposed EDA approach. For the frame-wise tracking, the evaluation of these competing methods is based on object pairs, each of which i... |
To evaluate the proposed EDA on visual tracking, we need to calculate the corresponding object bounding box based on the event trajectories associated by EDA. According to the frame-wise tracking protocol, for each tracking instance, we have the ground truth bounding box of the tracked object at the current frame. EDA... | We extensively evaluate the proposed EDA on object tracking. The experimental results demonstrate the superiority of EDA over other state-of-the-art event-based tracking methods and several popular conventional tracking methods. In addition, the estimated true event trajectories corresponding to object motions are also... |
However, gallego2018unifying and other event-based data association methods zhu2017event ; gallego2019focus ; peng2022globally show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a... | C |
A good connected ordering of any connected perfect graph on $n$ vertices can be computed in time $O(n^{c+4})$ provided that an optimal colouring of a perfect graph can be obtained in $O(n^{c})$... | This is formalized by Algorithm 1.
The size of the maximum clique of Line 1 is computed in $O(n^{c})$ time using the algorithm in [13], bringing the total time complexity of Algorithm 1 to $O(n^{c+2})$...
This is in contrast to the algorithm for comparability graphs given by Theorem 7 which runs in O... | Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time O(m+n)𝑂𝑚𝑛O(m+n)italic_O ( italic_m + italic_n ) for chordal graphs using the LexBFS algorithm [21], for Meyniel graphs this can be done in time O(n2)𝑂superscript𝑛2O(n^{2})italic_O ( italic_n start_POSTSUP... | As a connected vertex-ordering of H𝐻Hitalic_H can be obtained in linear time using a standard graph traversal algorithm, and a colouring of G′superscript𝐺′G^{\prime}italic_G start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT may be computed in O(mn)𝑂𝑚𝑛O(mn)italic_O ( italic_m italic_n ) time [12, Chapter 5.7], we concl... | B |
Figure 6: Visualization of learned representations of CL methods with ResNet-50 on STL-10. We visualize the 2048-dim embeddings by UMAP. Compared to ODC and BYOL, the local structures of clusters are well-preserved, while each cluster is discriminative. |
In KD tasks, GenURL follows the settings of the recently proposed contrastive-based KD method SEED [67], which adopts the non-linear projector network and data augmentations used in MoCo.v2. Note that MoCo.v2 pre-trained ResNet-50 is adopted as the teacher model. Similar to the SSL task, GenURL uses the BCE loss with ν...
The main goal of unsupervised learning is to learn transferable features. In Table V, we compare the representation quality of unsupervised pre-training on STL-10 by transferring to the classification task. We adopt linear evaluation on CIFAR-10 in 64×64 resolution with 1600-epoch pre-trained ResNet-50 on... | We evaluate the KD tasks based on self-supervised learning on the STL-10 dataset. In this experiment, we adopt MoCo.v2 with ResNet-50 under 1600-epoch pre-training. We choose multiple smaller networks with fewer parameters as the student network: ResNet-18 [70], MobileNet.v2 [86], ShuffleNet.v1 [87]. Similar to the pre-tra...
TABLE IV: Unsupervised knowledge distillation. Top-1 accuracy (%) under linear evaluation on STL-10. The teacher model is ResNet-50 pre-trained by MoCo.v2. † indicates using a momentum encoder as MoCo.v2. SSL denotes the InfoNCE loss. KD denotes the knowledge distillation loss. H+AW denotes the Huber loss a... | D
We analyze the advantage of our method on image classification datasets: ImageNet [11] as the standard benchmark, and Visual Wake Words [10] to reflect TinyML applications. We further validate our method on object detection datasets: Pascal VOC [13] and WIDER FACE [51] to show our advantage: be able to fit larger reso... | We compared MCUNetV2 with existing state-of-the-art solutions on ImageNet classification under two hardware settings: 256kB SRAM/1MB Flash and 512kB SRAM/2MB Flash. The former represents a widely used Cortex-M4 microcontroller; the latter corresponds to a higher-end Cortex-M7.
The goal is to achieve the highest ImageNe... |
We show the object detection results on Pascal VOC trained with YOLOv3 [41] on Table 3. We provide MCUNetV2 results for M4 MCU with 256kB SRAM and H7 MCU with 512kB SRAM. On H7 MCU, MCUNetV2-H7 improves the mAP by 16.7% compared to the state-of-the-art method MCUNet [30]. It can also scale down to fit a cheaper commod... |
We further profile existing networks running on STM32F746 MCU. We measure the SRAM usage of the network with per-layer and per-patch ($2\times 2$ or $3\times 3$ patches) inference. Due to the memory limit of the MCU (320kB SRAM, 1MB Flash), we have to scale down the width multiplier $w$ and input ...
We follow [30] for super network training and evolutionary search, detailed in the supplementary. Models are quantized to int8 for deployment. We extend TinyEngine [30] to support patch-based inference, and benchmark the models on 3 MCU models with different hardware resources: STM32F412 (Cortex-M4, 256kB SRAM/1MB Fla... | D |
Besides, we modeled label sequences jointly using a CRF to improve the performance of our method. The model's performance did not improve when the learning rate of the CRF was relatively small. To explore the effect of the CRF on BERT, we tried continuously increasing the learning rate of the CRF. Finally, we concluded that the learnin...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging [8] framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a... |
Main Results Table I shows the results of different models for CECE on two leaderboards. The single model we proposed overwhelmingly outperforms the baseline in terms of all leaderboards and achieves encouraging 65.9%, 77.0% improvements in F1-score over the baseline meth... | In the model ensemble [9] stage, we adopt a simple and efficient way and get a 1.30% boost (shown in Figure 2). We employed a two-step approach to get the final results. Firstly, we determined the serialization of the text boundary cross-validation results, with the character position of the pre...
In this competition, we introduce a fresh perspective to revisit the relational event-cause extraction task and propose a novel sequence tagging framework, instead of extracting event types and events-causes separately. Experiments show our framework outperforms baseline methods even when its encoder module uses a ran... | B |
We propose the concepts of asymmetric structure and complementary encoders as foundational principles for the collaborative learning paradigm. To provide a more comprehensive theoretical analysis, we put forth two quantitative metrics to assess both the asymmetry and complementarity inherent in the collaborative frame... | We first evaluate the representations learned by CGCL in the setting of unsupervised learning. We follow the same process of InfoGraph [26], where representations are learned by models without any labels and then fed into a SVM to evaluate the graph classification performance.
${\text{CGCL}}_{GIN}$... | To cope with the problem of model collapse, we devise the asymmetric structure for CGCL. The asymmetry lies in the differences of the GNN-based encoders’ message-passing schemes. Besides, graph encoders in CGCL are supposed to be complementary for a stronger fitting ability. Specifically, high complementarity indicates that...
Extensive experiments on nine datasets show that CGCL has advantages in graph classification task compared with the state-of-the-art methods. Besides, empirical evidence validates that the architecture of CGCL, characterized by its inherent asymmetry and complementarity, indeed yields enhanced performance outcomes. | As illustrated in Figure 4, the best assembly generally appears in the top right-hand corner, for example point A. This indicates that the assembly with both high AC and CC generally performs better, which further justifies the rationality of the design of asymmetric architecture and complementary encoders for CGCL. Be... | C |
The corrupted message corresponding to $f\in\mathcal{F}$ is denoted by $\ell(f)'$ and the inferred features corresponding to this corrupted message are given by $f'=\ell^{-1}(\ell(f)')\in\mathcal{F}$... | Finally, the neural networks are trained using a linear combination of cross-entropy loss for the receiver, cross-entropy loss for the sender, and $L_{2}$-regularization. The loss term for the sender incentivizes the language to be a one-to-one mapping.
| Recall that the definition of compositionality (with respect to given features) connects the change in feature values with the change in message symbols. It is thus reasonable to look for loss functions that are somehow factorized in terms of individual symbols.
Consider the following loss function: | We then formulate inductive biases in the loss function and prove that they are sufficient to achieve compositionality when coupled with communication over a noisy channel.
Consequently, we highlight the catalytic role of noise in the emergence of compositionality. | We say that $\ell$ is compositional with respect to a given feature if a change in the $i$-th feature only impacts a corresponding $j$-th index of the message. Continuing the previous example, let $\mathcal{A}=\{\mathtt{a},\mathtt{b}\}$ and consid...
Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensio... |
CBFs that account for uncertainties in the system dynamics have been considered in two ways. The authors in [10] and [11] consider input-to-state safety to quantify possible safety violation. Conversely, the work in [12] proposes robust CBFs to guarantee robust safety by accounting for all permissible errors within an... | Learning with CBFs: Approaches that use CBFs during learning typically assume that a valid CBF is already given, while we focus on constructing CBFs so that our approach can be viewed as complementary. In [19], it is shown how safe and optimal reward functions can be obtained, and how these are related to CBFs. The aut... |
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] proposed learning limited duration CBFs and the work in [39] learns signed ... | Control barrier functions (CBFs) were introduced in [3, 4] to render a safe set controlled forward invariant. A CBF defines a set of safe control inputs that can be used to find a minimally invasive safety-preserving correction to a nominal control law by solving a convex quadratic program. Many variations and extensio... | A |
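The minimally invasive safety-preserving correction described above is a convex quadratic program; with a single affine constraint it even admits a closed-form projection. The sketch below is illustrative only: the constraint vector `a` and scalar `b` stand in for the Lie-derivative terms $L_g h(x)$ and $-L_f h(x)-\alpha(h(x))$ of a concrete system, which are assumptions of this toy setup.

```python
import numpy as np

def cbf_safety_filter(u_nom, a, b):
    """Minimally invasive CBF correction (sketch): solve
        min_u ||u - u_nom||^2   s.t.   a @ u >= b.
    With one affine constraint, the QP solution is the Euclidean
    projection of u_nom onto the half-space {u : a @ u >= b}."""
    u_nom = np.asarray(u_nom, dtype=float)
    a = np.asarray(a, dtype=float)
    slack = a @ u_nom - b
    if slack >= 0:                         # nominal input already safe
        return u_nom
    return u_nom - slack * a / (a @ a)     # project onto the constraint boundary
```

With multiple constraints (or input bounds) one would hand the same objective to a generic QP solver; the closed form above is the single-constraint special case.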
$\mathsf{NP}^{\mathsf{NP}}\subseteq\mathsf{NP}^{\mathsf{BQP}}\subseteq\mathsf{BQP}^{\mathsf{NP}}=\mathsf{BQP}^{\mathsf{BQP}}=\mathsf{BQP}$...
We follow a similar proof strategy to Theorem 29, with some additional steps. First, we take a random oracle, which separates $\mathsf{PH}$ from $\mathsf{P}^{\#\mathsf{P}}$ (morally, because Parity is not approxima...
The proof of Theorem 10, giving an oracle where $\mathsf{P}=\mathsf{NP}\neq\mathsf{BQP}=\mathsf{P}^{\#\mathsf{P}}$, follows a similar recipe to the proof of Theorem 9. W... | Theorem 10 says, in effect, that there is no relativizing obstruction to $\mathsf{BQP}$ being inordinately powerful even while $\mathsf{NP}$ is inordinately weak. It substantially extends the Raz-Tal Theorem, that there is an oracle relative to which $\mathsf{BQP}\not\subset\mathsf{PH}$...
The starting point of this paper was the following question: in a “post-Raz-Tal world,” can we at last completely “unshackle” $\mathsf{BQP}$ from $\mathsf{P}$, $\mathsf{NP}$, and $\mathsf{PH}$, by showing that there are no relativizing obstruction... | D
The arc spaces may also have a rich scheme (i.e., nilpotent) structure (see [27, 16, 14]) reflecting the geometry of the original scheme [35, 9].
In the case of a fat point $\mathcal{I}_{m}=\langle x^{m}\rangle\subset k[x]$... | Our result (2) suggests that one possibility is to define the multiplicity of a solution as the growth rate of multiplicities of its truncations, and this definition will be consistent with the usual algebraic multiplicity for the case of a fat point on a line.
| The arc spaces may also have a rich scheme (i.e., nilpotent) structure (see [27, 16, 14]) reflecting the geometry of the original scheme [35, 9].
In the case of a fat point $\mathcal{I}_{m}=\langle x^{m}\rangle\subset k[x]$... | From the point of view of algebraic geometry, $I^{(\infty)}$ defines the arc space $\mathcal{L}(X)$ [13] of the scheme $X$.
Geometrically, the points of the arc space correspond to the Taylor coefficients... | Note that the series does not depend on the multiplicity $m$ of the point.
One way to capture the scheme structure of $\mathcal{L}(X)$ could be to take the components of the projections in (3) with their multiplicities. | D
Karate-weighted: This weighted network is collected from a university karate club. In this weighted network, node denotes member, and edge between two nodes indicates the relative strength of the associations. Actually, this network is the weighted version of Karate club network. So, the number of communities is 2 and... |
| where this definition of embeddedness extends that of [38, 24] from unweighted networks to weighted networks whose adjacency matrix is connected and has nonnegative entries. Extending the definition of embeddedness to adjacency matrices in which there may exist negative elements is an interesting problem, and we leave i... | Gahuku-Gama subtribes: This data is the signed social network of tribes of the Gahuku–Gama alliance structure of the Eastern Central Highlands of New Guinea. This network has 16 tribes, and a positive or negative link between two tribes means they are allies or enemies, respectively. Meanwhile, there are 3 communities i... |
Karate-weighted: This weighted network is collected from a university karate club. In this weighted network, a node denotes a member, and an edge between two nodes indicates the relative strength of the association. This network is the weighted version of the Karate club network, so the number of communities is 2 and... | In CoauthorshipsNet, a node means a scientist and weights mean coauthorship, where weights are assigned by the original papers. For this network, there is no ground truth about node labels, and the number of communities is unknown. The CoauthorshipsNet has 1589 nodes; however, its adjacency matrix is disconnected. Among ... | B
Roy (1997); Sutton and Barto (2018). However, it is often impractical to train an on-policy algorithm in the distributed setting.
When data collection is performed using many workers, the communication latency, asynchronicity, and other factors make the behavioral policies lag behind the target one. This results in a s... | In our work, we introduced MA-Trace, a new multi-agent reinforcement learning algorithm. We evaluated it on 14 scenarios constituting the StarCraft Multi-Agent Challenge and confirmed its strong performance. We also included ablations regarding importance sampling, centralization of learning, scaling, and sharing...
In this work, we take a step towards amending this situation. We propose MA-Trace, a new on-policy actor-critic algorithm, which adheres to the centralized training and decentralized execution paradigm Lowe et al. (2017); Foerster et al. (2018); Rashid et al. (2018). The key component of MA-Trace is the usage of impor... | In Table 1 and Figure 1, we present the results of the main version of our algorithm – MA-Trace (obs), in which the critic uses stacked observations of agents as described in Appendix F. MA-Trace (obs) reaches competitive results and in some cases exceeds the current state-of-the-art. We compare with a selection of the... | We evaluate MA-Trace on StarCraft Multi-Agent Challenge Samvelyan
et al. (2019a) – a standard benchmark for multi-agent algorithms. Our approach achieves competitive performance on all tasks and exceeds state-of-the-art results on some of them. Additionally, we provide a comprehensive set of ablations to quantify the i... | D |
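The off-policy correction underlying this family of distributed actor-critic methods can be sketched as follows: a minimal V-trace-style target computation in the spirit of IMPALA (Espeholt et al., 2018). The function name and simplifications are illustrative, not the MA-Trace authors' implementation:

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap, rho, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Truncated importance-sampling value targets (V-trace style).
    rho: per-step importance ratios pi(a|s)/mu(a|s) of the target policy
    versus the lagging behavior policy; clipping bounds the variance."""
    T = len(rewards)
    rho_c = np.minimum(rho, rho_bar)  # clipped ratio for the TD term
    c = np.minimum(rho, c_bar)        # clipped trace coefficients
    vs = np.zeros(T)
    next_vs, next_value = bootstrap, bootstrap
    for t in reversed(range(T)):
        delta = rho_c[t] * (rewards[t] + gamma * next_value - values[t])
        vs[t] = values[t] + delta + gamma * c[t] * (next_vs - next_value)
        next_vs, next_value = vs[t], values[t]
    return vs
```

With on-policy data (all ratios equal to 1) the targets reduce to the usual discounted returns-to-go.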
G1: Bring diverging models’ performance to the spotlight.
A VA system must first focus on what each model has learned in general; as such, the assessment of every model’s performance (to decide which should remain in use) is a prerequisite. Models that fail to perform according to user-defined standards should not b... | G2: Disclose connections between features and predictions.
VA systems should expose the features’ impacts on predictions and allow humans to delete needless features accordingly. During training, ML models learn different mappings between input features and resulting predictions based on the inherent mathematical fun...
At this point, we want to investigate which features of the training set impacted the predictions more (see Figure 4). Interestingly, GDP per cap, H life exp, and Social sup are the top three features in the general ranking, as in Neto and Paulovich [Neto2021Multivariate]. A surprising outcome is that, although two of ... | Several VA systems exist that enable the comparison of ML models in classification problems, especially with the evaluation of predictive performance and the importance of features [Xu2019EnsembleLens; Schneider2018Integrating; Talbot2009EnsembleMatrix; Zhang2019Manifold; Squares2017Ren; Gleicher2020Boxer; Das202...] | G4: Reason about the relationship of certain features and knowledge acquired.
Unlike G2, this one concentrates on deviations in ranking features for justifying the generated rule-based knowledge. VA systems can support humans with this alignment of features and knowledge extracted through the decision rules. Since doma... | A
The conventional HS-MIMO scheme does not show full matching of the selected antenna indices with either of the two PR-HS-MIMO schemes in the scenarios with $L_{t}=4$ and $6$. On the other hand, the two PR-HS-MIMO schemes, i.e., EW and global polarization...
It is worth emphasizing that the combination of polarization reconfiguration and hybrid antenna selection, i.e., PR-HS, can provide a significant improvement in effective channel gain, i.e., the squared envelope of $h^{\mathrm{eff}}_{ij}$ … | It is worth emphasizing that estimation of the optimal polarization vectors before the hybrid antenna selection stage is essential to obtain the full benefit of joint polarization pre-post coding and the corresponding polarization reconfigurable antenna selection in PR-HS-MIMO spatial multiplexing.
| Even for a set of selected Tx antenna elements based on the conventional HS-MIMO, the channel capacity is improved via joint polarization pre-post coding after the hybrid antenna selection. However, the Tx antenna elements chosen by hybrid selection are different from the selection based on the proposed PR-HS-MIMO sche... |
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects $L_{t}$ out of $N_{t}$ … | B
Our results in Theorem 2 are in contrast to translational offline packing of convex polygons for which constant factor approximations exist.
In a recent paper, Alt, de Berg, and Knauer [6, 7] gave a constant-factor approximation algorithm for offline translational packing of convex polygons so as to minimize the area o... | The objects may in some applications appear in an online fashion, i.e., the pieces are given one after the other, and each of them must be placed before the next one is known. For example, in a scene which was not included in the final cinema edition of The Sound of Music, the butler of the von Trapp family was singing... | For height class $h$, the base type is a rectangle of size $2\times 2^{-h}$, and in the strip we allocate boxes of this size in which to pack pieces from the height class.
We define an infinite ternary box type tree for each height class... | In particular, we use the lower bound from Theorem 1 to create an adaptive stream of pieces that will force any packing algorithm to use excessive space.
In the reduction, the numbers to be sorted in the online sorting problem correspond to the slopes of the spine segments in the packing problems, and the impossibility... | The algorithm works by first grouping the pieces into exponentially increasing height classes and then sorting the pieces in each height class by the slopes (computed with respect to the $y$-axis, namely as the inverses of the regular slopes) of their spine segments; see Figure 2. The spine se... | D
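The grouping-and-sorting step described above can be sketched as follows; this is an illustrative reading (pieces assumed to have heights in (0, 1], helper names hypothetical), not the authors' algorithm:

```python
import math

def group_and_sort(pieces):
    """pieces: list of (height, spine_slope) with height in (0, 1].
    Group pieces into exponential height classes h with 2^{-h} <= height,
    then sort each class by the slope of its spine segment."""
    classes = {}
    for height, slope in pieces:
        h = max(0, math.ceil(-math.log2(height)))  # class index: height in (2^{-h}, 2^{-(h-1)}]
        classes.setdefault(h, []).append((height, slope))
    for h in classes:
        classes[h].sort(key=lambda piece: piece[1])
    return classes
```

Each class can then be packed into boxes of the corresponding base-type size, as the text describes.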
We attempt to apply our template selection method to [2] to improve the performance on facial landmark detection. The pre-trained model from the first stage of [2] is integrated into the proposed framework, which suggests the $M$ instances with the highest mean similarities ($M=50$ in our experiment)... | Q: How good is the use of SIFT key points as substitutes for landmarks? Figure 5 demonstrates the relationship between landmarks and potential key points from handcrafted methods at the feature level (Eq. (9)).
The similarities calculated by potential key points are positively correlated with those by landmarks to a large ext... | To compare with other key point detection methods, we adopt SIFT, SURF, ORB, and random selection as our key point selectors. For SIFT, SURF, and ORB, we detect the key points and then filter out some very close points. For random selection, we randomly select 100 points as the key points.
The results are listed in Tab... | To implement template selection per Eq. (6), the knowledge of landmarks is assumed. However, even such knowledge is nonexistent before template selection. Therefore, we propose to utilize potential key points as substitutes for landmarks. In particular, we utilize the classical multi-scale detector, SIFT, to find key point... | In this paper, we propose a framework named Sample Choosing Policy (SCP) to find the most annotation-worthy images as templates. First, to handle the situation of no landmark label, we choose handcrafted key points as substitutes for landmarks of interest. Second, to replace the MRE, we propose to use a similarity sco... | A
(1) We provide a general Mixed Membership Distribution-Free (MMDF for short) model for overlapping weighted networks in which a node can belong to multiple communities and an edge weight can be any real number. MMDF allows edge weights to follow any distribution as long as the expected adjacency matrix has a block str... |
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild c... | (Comparison to LFR benchmark networks) In [44], the authors proposed LFR benchmark graphs for testing community detection algorithms on non-overlapping unweighted networks. In [45], the authors proposed generalizations of LFR benchmark networks for testing community detection methods on overlapping weighted graphs. \ad... |
(1) We provide a general Mixed Membership Distribution-Free (MMDF for short) model for overlapping weighted networks in which a node can belong to multiple communities and an edge weight can be any real number. MMDF allows edge weights to follow any distribution as long as the expected adjacency matrix has a block str... | (2) We use a spectral algorithm to fit MMDF. We show that the proposed algorithm stably yields consistent community detection under MMDF. In particular, theoretical results when edge weights follow a specific distribution can be obtained immediately from our results.
| D |
Table 1: Comparison of average incremental accuracy (%) with or without Class-wise Decorrelation (CwD) at the initial phase.
$B$ denotes the number of classes learned at the initial phase and $S$ denotes the number of classes learned per phase after the initial one. | Next, in Sec. 4.2, we add our proposed Class-wise Decorrelation (CwD) to some State-Of-the-Art (SOTA) methods [12, 8, 19] to validate its effectiveness.
Finally, in Sec. 4.3, we provide an ablation study on how factors such as the number of classes in the initial phase, the number of exemplars for each class, and the CwD coefficient ... | The number of exemplars for each class is 20.
For AANet, we use its version based on LUCIR [12]. AANet [19] on ImageNet (denoted by *) is run without class-balance finetuning after each phase due to the missing implementation in its code. | For all experiments, we use ResNet18 [11] and the SGD optimizer, with a batch size of 128. The strategy of Herding [26, 12, 8, 19] is used to select exemplars after each phase.
For experiments based on CIFAR100, in each CIL phase, all models are trained for 160 epochs and the learning rate is divided ... | In this section, we add our proposed CwD to three previous SOTA methods, namely LUCIR [12], PODNet [8] and AANet [19], to validate the effectiveness of our method.
We denote the number of classes learned at the initial phase by $B$ and the number of new classes learned per phase after the initial one by $S$. | B
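One plausible form of such a class-wise decorrelation regularizer, penalizing off-diagonal feature correlations within each class, can be sketched as follows (a hedged sketch; the exact CwD loss may differ in normalization details):

```python
import numpy as np

def cwd_loss(features, labels):
    """For each class, standardize its features and penalize the squared
    off-diagonal entries of the resulting correlation matrix, so that feature
    dimensions are decorrelated class-wise (illustrative sketch of CwD)."""
    loss, classes = 0.0, np.unique(labels)
    for c in classes:
        z = features[labels == c]
        z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-5)
        corr = z.T @ z / max(len(z) - 1, 1)
        off_diag = corr - np.diag(np.diag(corr))
        loss += (off_diag ** 2).sum() / corr.size
    return loss / len(classes)
```

The penalty is zero for within-class decorrelated features and grows as feature dimensions become redundant.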
We also evaluated the smoothness of the displacement field by calculating the determinant of its Jacobian and examining the number and percentage of voxels with a non-positive Jacobian determinant for each method.
These voxels correspond to locations where the deformation is not diffeomorphic. | The clinical experts of our team (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.) were provided with the segmentation of the tumor core (i.e., the potentially resectable region) and with specific annotation instructions.
Specifically, for each pre-operative scan, an expert placed $\chi$ landmarks nea... | Furthermore, we computed both the minimum and the $99^{th}$ percentile of the Jacobian determinant (as opposed to the maximum, which is susceptible to noise).
These were computed within different regions of interest (e.g., within the tumor... | To investigate whether landmarks in the vicinity of the tumor are more difficult to annotate, we correlated landmark-wise AV with respective distance to the tumor core (cf. Fig. 4(b)).
We observed increased levels of variability, especially in the close vicinity of the tumor. | Furthermore, we computed both the minimum and the $99^{th}$ percentile of the Jacobian determinant (as opposed to the maximum, which is susceptible to noise).
These were computed within different regions of interest (e.g., within the tumor... | B |
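The Jacobian-determinant check described above can be sketched as follows, assuming a dense 3D displacement field with unit voxel spacing (an illustrative sketch, not the authors' evaluation code):

```python
import numpy as np

def jacobian_determinant(disp):
    """disp: displacement field of shape (D, H, W, 3), unit voxel spacing.
    Returns the voxel-wise determinant of the Jacobian of x -> x + disp(x);
    non-positive values flag folding (non-diffeomorphic) locations."""
    # grads[i][j] = d disp_i / d x_j, each of shape (D, H, W)
    grads = [np.gradient(disp[..., i]) for i in range(3)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # (D, H, W, 3, 3)
    J = J + np.eye(3)  # add identity: Jacobian of the full mapping
    return np.linalg.det(J)
```

Counting `(jacobian_determinant(disp) <= 0)` voxels gives the number and percentage reported in the text.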
$\mathsf{Pr}\left((1-\epsilon)\cdot\sharp\mathsf{rep}_{\hat{\Sigma}}(D)\ \leq\ \mathsf{A}(D,\epsilon,\delta)\ \leq\ (1+\epsilon)\cdot\sharp\mathsf{rep}_{\hat{\Sigma}}(D)\right)\ \geq\ 1-\delta,$ | This work focused on the exact and approximate counting of database repairs.
Concerning the exact version of the problem, we have lifted the FP/♯P-complete dichotomy in the case of primary keys and self-join-free CQs from [25] to the more general case of FDs and self-join-free CQs (Theorem 2). | Concerning the exact version of the problem, we have lifted the FP/♯P-complete dichotomy in the case of primary keys and self-join-free CQs from [25] to the more general case of FDs and self-join-free CQs (Theorem 2).
Concerning the approximate version of the problem, although we have not provided a complete ap... | Concerning the exact version of the problem, we have lifted the FP/♯P-complete dichotomy in the case of primary keys and self-join-free CQs from [25] to the more general case of FDs and self-join-free CQs (Theorem 2).
Concerning the approximate version of the problem, although we have not provided a complete ap... | Concerning the approximate version of the problem, although we have not provided a complete approximability/inapproximability classification, we established that (i) the problem admits an FPRAS in the case of FDs with an LHS chain (up to equivalence) and CQs (even with self-joins), but (ii) it does not admit an FPRAS (... | B |
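The $(\epsilon,\delta)$-guarantee of an FPRAS is typically obtained by median amplification of a constant-confidence estimator; a generic sketch (the estimator interface is hypothetical):

```python
import math
import statistics

def amplify(estimator, eps, delta):
    """Median amplification: given a randomized estimator that returns a
    (1 +/- eps)-approximation with probability >= 3/4, taking the median of
    O(log(1/delta)) independent runs boosts the confidence to 1 - delta."""
    runs = max(1, math.ceil(8 * math.log(1.0 / delta)))
    return statistics.median(estimator(eps) for _ in range(runs))
```

A Chernoff bound shows the median fails only if at least half of the runs fail, which happens with probability at most $\delta$.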
We use Algorithm 3 to run SIR simulations on the graph $G^{\mathrm{RL}}$ for different node-absorption rates (which correspond to the recovery intensities) and transmission intensities. We refer to a set of node-absorption rates $\delta_{1},\dots,\delta_{N}$… |
0: The means of the outbreak durations, final sizes, and outbreak peaks of $N_{\mathrm{sim}}$ SIR simulations for each of the $N_{\mathrm{S}}$ parameter configurations that are defined ... |
Algorithm 3 SIR simulations on a network $G^{\mathrm{RL}}$ (which we generate with Algorithm 2) for different node-absorption rates (i.e., recovery intensities) and transmission intensities. | 3: Run SIR simulations: Run $N_{\mathrm{sim}}$ simulations of the SIR model on the graph $G^{\mathrm{RL}}$ with transmission intensities $\beta\in\{\beta_{*},\beta_{**}\}$… | We use Algorithm 3 to run SIR simulations on the graph $G^{\mathrm{RL}}$ for different node-absorption rates (which correspond to the recovery intensities) and transmission intensities. We refer to a set of node-absorption rates $\delta_{1},\dots,\delta_{N}$… | B
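A minimal discrete-time version of such an SIR simulation on a network can be sketched as follows; here `beta` and `delta` are per-step transmission and recovery probabilities standing in for the intensities, a simplification of the continuous-time model in Algorithm 3:

```python
import random

def sir_simulation(adj, beta, delta, seed_node, rng):
    """Discrete-time SIR on a graph given as an adjacency dict.
    beta: per-contact transmission probability per step (transmission intensity),
    delta: recovery probability per step (node-absorption rate).
    Returns (outbreak duration, final size, outbreak peak)."""
    susceptible = set(adj) - {seed_node}
    infected = {seed_node}
    duration, final_size, peak = 0, 1, 1
    while infected:
        newly_infected = set()
        for u in infected:
            for v in adj[u]:
                if v in susceptible and rng.random() < beta:
                    newly_infected.add(v)
        recovered_now = {u for u in infected if rng.random() < delta}
        susceptible -= newly_infected
        infected = (infected - recovered_now) | newly_infected
        final_size += len(newly_infected)
        peak = max(peak, len(infected))
        duration += 1
    return duration, final_size, peak

# e.g. sir_simulation({0: [1], 1: [0, 2], 2: [1]}, 0.4, 0.2, 0, random.Random(0))
```

Averaging these three outputs over $N_{\mathrm{sim}}$ runs per parameter configuration yields the summary statistics the algorithm reports.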
Also, the memory constraint is that for any node $i$, the memory available at $i$ should be more than $2x+y$, where $x$ is the number of swapping trees that use $i$ as an intermediate node and $y$ is the number of trees that use $i$ as an... | as $e^{-d/(2L)}$ [18], where $L$ is the channel attenuation length (chosen as 20 km for an optical fiber) and $d$ is the distance between the nodes. | is that, in the WaitLess model, the EP generation over a path is a very low probability event—essentially $p^{l}$, where $p$ is the link-EP success probability and $l$ is the path length, for the case of $W=1$ | in that both
consider only balanced trees; however, we use a heuristic metric that facilitates a polynomial-time Dijkstra-like heuristic to select the optimal path, while their recursive metric (we note that their formula (Eqn. 10 in [18]) is incorrect as it either ignores the 3/2 factor or assumes the EP generations...) | EP generation rate decay only polynomially in $L$.
More recently, Caleffi [18] formulated the entanglement generation rate on a given path between two nodes, under the more realistic condition where the intermediate nodes in the path may not all be equidistant, but still considered only balanced trees. Their ... | D |
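The link and path success probabilities quoted above can be sketched as follows, assuming the $e^{-d/(2L)}$ link model and the WaitLess case $W=1$ (function names are illustrative):

```python
import math

def link_success_prob(d_km, attenuation_length_km=20.0):
    """Probability that a link-level entangled pair (EP) survives fiber
    transmission over distance d, using the e^{-d/(2L)} model with L the
    channel attenuation length (20 km for optical fiber)."""
    return math.exp(-d_km / (2.0 * attenuation_length_km))

def waitless_path_success_prob(link_distances_km):
    """WaitLess model with W = 1: every link on the path must succeed in the
    same attempt, so the path probability is the product of link probabilities
    (p^l for l equidistant links)."""
    prob = 1.0
    for d in link_distances_km:
        prob *= link_success_prob(d)
    return prob
```

The exponential decay in path length is exactly why the text calls path-level EP generation a very low probability event.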
In another probabilistic decision-making model, Wang et al. [118] approach the lane merging task as a dynamic process and integrate internal states into a joint Hidden Markov Model (HMM) and Gaussian Mixture Regression (GMM). The experiments conducted on the
|
The two most prevalent learning paradigms for end-to-end autonomous driving are RL and IL [16]. Typically, RL approaches map sensor information to states and produce control signals for an autonomous car. On the other hand, in IL-based end-to-end learning, an agent is trained by imitating an expert’s behavior to learn... | Kim and Canny [67] use a causal attention model on top of the saliency filtering that indicates which input regions actually affect the steering control. Their experiments are conducted on the driving datasets - Comma.ai [105], Udacity [106], and Hyundai Center of Excellence in Integrated Vehicle Safety Systems and Con... | Rjoub et al. [122] have shown that federated deep RL combined with XAI can lead to trusted autonomous driving. They use a federated learning approach for decision-making and leverage edge computing that enables different devices to train an ML model in a collaborative manner. The model is first developed on the paramet... | making has further been explored by subsequent studies as well [18, 8, 91].
While the mentioned studies focus on vision-based explanations of already obtained predictions of the model, there have been some recent studies paying attention to counterfactual explanations. In the context of automated driving, counterfactu... | C |
Visual place recognition (VPR) has played an important role in robotics and localization in recent years. VPR significantly contributes to the mission of single-image localization or simultaneous localization and mapping (SLAM) systems [1]. Typically, the VPR problem can be formulated as image retrieval in an image...
Further, the critical point for improving VPR performance after feature extraction is to form a compact global image representation that is robust to various image transformations (second step) [8, 9]. The VPR mission aggregates the extracted local feature descriptors into global feature representations. The global des... | The aggregated representation layers can be seen as mapping an $H\times W$ grid of $C$-dimensional feature descriptors to a vector representing the original image, aggregating local features into a representative global feature. Then, a similarity function (e.g., Euclidean distance or cosin... | Feature extraction is the first step in the VPR mission. Traditional feature representations include the scale-invariant feature transform (SIFT), FAB-MAP, and Cross-Region-BoW, which improve robustness against appearance or illumination changes. However, these methods always lead to a large amount of calculation compar...
Currently, the most effective feature extraction method (first step) in VPR is to use deep learning techniques. The original design goal of a convolutional neural network (CNN) is to create a network in which neurons in the first layer extract local visual features, and neurons in the last layer fuse these features t... | D
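The aggregate-then-compare pipeline can be sketched as follows, using simple sum pooling as a stand-in for learned aggregation such as NetVLAD (an illustrative sketch, not a specific method from the text):

```python
import numpy as np

def global_descriptor(feature_map):
    """Collapse an (H, W, C) grid of local C-dim descriptors into one global
    C-dim vector by sum pooling, then L2-normalize."""
    v = feature_map.reshape(-1, feature_map.shape[-1]).sum(axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def retrieve(query_map, database_maps):
    """Return the index of the database image whose global descriptor is most
    similar to the query; with unit-norm descriptors the dot product equals
    cosine similarity."""
    q = global_descriptor(query_map)
    sims = [float(q @ global_descriptor(m)) for m in database_maps]
    return int(np.argmax(sims)), sims
```

Swapping in Euclidean distance, or a learned aggregation layer, changes only the two helper functions.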
In Chapter 2, we collect all the notation, definitions, and known facts needed in the remainder of the paper. We briefly illustrate the XL-Algorithm to solve Boolean equation systems and the algebraic attack on nonlinear filter generators presented in [10]. | The main aim of our work is not to effectively perform the attack described in Section 3 on WG-PRNG, but to estimate how many keystream bits one needs to successfully perform the attack on WG-PRNG. We will show that knowing less than $2^{18}$ keystream bits, ... | Stream ciphers [16] are one of the main cryptographic primitives used in symmetric cryptography. Historically, the first stream ciphers were built with “linear” registers, where linearity is meant both in the register update function (which sends one state to the next) and in the output function, which computes the keys...
In Chapter 4, to validate our algebraic attack, we first apply it to two toy stream ciphers and then show that it is feasible to perform it on WG-PRNG. We conclude by showing that the security of WG-PRNG is lower than claimed until now. For the sake of presentation, we will first describe the part regarding WG-PRNG, an... | In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ... | C
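The cipher class targeted by such attacks can be illustrated with a toy nonlinear filter generator: an LFSR whose state is passed through a nonlinear Boolean filter at every clock (a hypothetical toy, not WG-PRNG itself):

```python
def lfsr_filter_keystream(state, taps, filter_fn, n_bits):
    """Toy nonlinear filter generator: a Fibonacci LFSR with the given feedback
    taps, whose whole state is filtered by the nonlinear Boolean function
    filter_fn to produce each keystream bit."""
    state = list(state)
    out = []
    for _ in range(n_bits):
        out.append(filter_fn(state))     # nonlinear output from the state
        feedback = 0
        for t in taps:
            feedback ^= state[t]          # linear feedback (XOR of tap bits)
        state = [feedback] + state[:-1]   # shift the register
    return out

# e.g. a degree-3 register with filter s[0] XOR (s[1] AND s[2])
keystream = lfsr_filter_keystream([1, 0, 0], [0, 2], lambda s: s[0] ^ (s[1] & s[2]), 8)
```

Each keystream bit is a Boolean equation in the initial state bits, which is exactly what an algebraic attack collects and solves.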
The vast majority of theoretically sound, continual depth-limited solving algorithms assume perfect rationality of the opponent and do not allow explicit modeling of an opponent and exploitation of the opponent’s mistakes. As a result, even very weak opponents exploitable by the heuristic local best response (LBR) Lis... | Opponent modeling and exploitation is an essential topic in computational game theory, with many approaches attempting to model and exploit opponents in various games. However, exploiting opponents in very large games is not trivial, and only recently was an algorithm created to exploit models in depth-limited solving.... |
Approximate best response (ABR) Timbers et al. (2020) is also a generalization of the LBR and showed promising results in evaluating strategies. However, our approach focuses on model exploitation, which imposes crucially different requirements, such as quick re-computation against unseen models. ABR needs to independently learn ...
While CDBR maximizes the exploitation of the fixed opponent model, it allows a player to be exploited. When we face an opponent and are unsure whether our model is perfect, we must limit our exploitability. For example, when we gradually build a model during play, we must limit our exploitability in the initial game rounds when the... | The opponent modeling and exploitation process consists of two steps: opponent modeling and model exploitation. Opponent modeling requires building a model from previous data or actions observed during online play. Model exploitation is finding a good strategy against the given model and is the main focus of this pa... | D
given any partition $\mathcal{A}$ of the vertex set, in a linear number of operations (seeing only $G_{p}$) the greedy amalgamating algorithm finds a partition $\mathcal{A}'$… | We shall prove Theorem 2.1 in Section 10, using a deterministic lemma, Lemma 10.6, together with Theorem 10.5, which is a version of Theorem 1.2 with weighted underlying graph. We note that [2, 3] showed that whp the modularity optimal partition will agree with the planted partition except for $o(n)$ … | The proof follows that of the non-weighted case, Theorem 1.1, line by line with the following adaptations. In place of the fattening lemma used on the underlying graph, use the weighted version, Lemma 10.3; and replace instances of $G$ and $G_{p}$… | Observe that Theorem 1.1 is the special case of Theorem 10.1 when $w$ is $\{0,1\}$-valued, and similarly Theorem 1.2 is a special case of Theorem 10.2.
In order to prove Theorems 10.1 and 10.2, we may use almost the same proofs as before. We need a natural minor variant of Lemma 3.1. | In this subsection we show that Theorem 2.1 on the modularity value $q^{*}(G_{n,k,p,q})$ of the stochastic block model follows quick... | C
The specific objective of the study is the impact of environmental variables on migration; thus, on the right-hand side of the regression, a proxy of environmental change is included. Slow-onset events are typically defined as gradual modifications of temperature, precipitation, and soil quality. Respectively, three... | Characteristics of the sample are one of the main sources of heterogeneity. The level of the analysis varies considerably from paper to paper, as we include both micro- and macro-level studies. We code variables capturing both the specific unit of analysis and the source of the data. Typically, micro-level studies use da... | The overall sample includes both unpublished and published papers, so we add some moderator variables describing different features of the published studies. In particular, we introduce a dummy for Published articles and a control for the quality of the journal in which the study is published by adding the va... | From the other sets of controls, it emerges that specific features of the studies included in the MRA differently explain the diversity in the results within clusters. The positive coefficients of controls for corridors such as Internal and Urbanization indicate that people respond to adverse climatic change with increased intern... | Since it is natural to expect that the adjustment of migratory flows in response to climate change is not instantaneous, especially in the case of gradual phenomena, most of the studies use a panel structure with a macroeconomic focus and attempt to assess the impact of changes in climatic conditions on human migratory flow... | A
$\bar{V}_{t}=\sum_{s=2}^{t}\lVert\nabla f_{s}(\mathbf{x}_{s})-\nabla f_{s-1}(\mathbf{x}_{s-1})\rVert_{2}^{2}$ … |
Up to now, we have shown that it is possible to design online methods that achieve stronger guarantees than static methods under the challenging one-gradient feedback online learning setting, while suffering no computational overhead in terms of gradient query complexity. | In Section 4.2, we develop the Sword algorithm, which achieves the gradient-variation dynamic regret under the multi-gradient feedback model. In Section 4.3, we present an improved algorithm called Sword++ that can achieve the same dynamic regret guarantee (up to constants) under the more challenging one-gradient feedb... | In addition to the regret measure, we further consider the gradient query complexity. Note that algorithms designed for the multi-gradient feedback model may query the gradients multiple times at each round. However, most algorithms designed for static regret minimization only require one gradient per iteration...
So far, we have designed an online algorithm (Sword) with the gradient-variation dynamic regret. While it achieves a favorable regret guarantee, one caveat is that Sword runs $N=\mathcal{O}(\log T)$ base-learners simultaneously and each base-learner requir... | A
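The meta-learner that combines the $N=\mathcal{O}(\log T)$ base-learners is typically an exponentially weighted average; a minimal sketch of such a weighting step (illustrative, not the exact Sword meta-algorithm):

```python
import math

def hedge_weights(cumulative_losses, eta):
    """Exponentially weighted (Hedge-style) meta-combination over N base-learners:
    a base-learner with smaller cumulative loss receives a larger weight."""
    w = [math.exp(-eta * loss) for loss in cumulative_losses]
    total = sum(w)
    return [x / total for x in w]
```

The final prediction is then the weighted average of the base-learners' decisions under these weights.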
on $w=\sigma^{\stackrel{\leftarrow}{n}}(u)a$ and thus $\sigma^{n\omega}(w)$… | is right-prolongable on $u\in A^{+}$. If $x$ is in $F(\sigma^{n})^{\omega}$… | (recall that $\sigma^{n\omega}=(\sigma^{n})^{\omega}$). If $v$ is growing... | One has $\mathcal{L}(\sigma)=\mathcal{L}(\mathsf{X}(\sigma))$ if and only if every letter $a\in A$ is in $\mathcal{L}(\mathsf{X}(\sigma))$. | Let $x$ be such a fixed point. By Proposition 5.4, either $x$ is in $F(\sigma^{n})^{\omega}$ or of the form $\sigma^{n\omega}(u)$… | B
$R(\mathcal{T}_{n,d}^{k}(C_{n}))=\Omega\Big(\frac{1}{n}\,…+\Big(\frac{C_{n}}{n}\Big)^{\frac{2}{2s+1}}\Big).$ | $\mathcal{H}_{d}^{k}(1)$. This matches the optimal rate for estimation over Hölder
classes (see Sadhanala et al. (2017) for a formal statement and proof for | smoothness index $k+1$ in each coordinate direction, and any third index $q\geq 1$, is indeed $n^{-2s/(2s+1)}$ for $s>1/2$ (or $2k+2>d$ … | This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers
in Section 1.3). The models of smoothness for f0subscript𝑓0f_{0}italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT that we | The result in Theorem 4 for s≥1/2𝑠12s\geq 1/2italic_s ≥ 1 / 2 (that is, 2k+2≥d2𝑘2𝑑2k+2\geq d2 italic_k + 2 ≥ italic_d) was already derived in Sadhanala et al. (2017). More precisely,
these authors established the third term on the right-hand side in | D |
Figure 14: The normalized histogram of the Wasserstein distance between average MZ- and DZ-twin correlations within each state over 1000 derangements. Since the generated null data has no genetic signal, we are basically computing the Wasserstein distance between two connectivity matrices with random noises. In compar... | In this study, we proposed the topological clustering method for the estimation and quantification of dynamic state changes in time-varying brain networks. A coherent statistical theory, grounded in persistent homology, was developed, and we demonstrated the application of this method to resting-state fMRI data. Restin... |
The predominant method for computing time-varying correlation in time series data, particularly in neuroimaging studies, involves Sliding Windows (SW). This technique entails computing correlations between brain regions across various time windows (Allen et al., 2014; Hutchison et al., 2013; Shakil et al., 2016; Mokht... | However, the method has been mainly used on static networks or a static summary of time-varying networks. The dynamic pattern of persistent homology for time-varying brain networks was rarely investigated, with a few exceptions (Yoo et al., 2016; Santos et al., 2019; Songdechakraiwut and Chung, 2020; Giusti et al., 201... |
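The sliding-window computation described here can be sketched minimally (the window length, step size, and synthetic signals below are illustrative assumptions, not values from the cited studies):

```python
import numpy as np

def sliding_window_corr(x, y, win=30, step=5):
    """Correlation between two regional time series over sliding windows."""
    corrs = []
    for start in range(0, len(x) - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        corrs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(corrs)

# Two synthetic "regional" signals that share a component only in the first half.
rng = np.random.default_rng(0)
t = np.arange(200)
shared = np.sin(t / 5.0)
x = shared + 0.1 * rng.standard_normal(200)
y = np.where(t < 100, shared, rng.standard_normal(200)) + 0.1 * rng.standard_normal(200)
c = sliding_window_corr(x, y)  # time-varying correlation profile
```

Early windows show high correlation while late windows hover near zero, which is exactly the kind of dynamic structure a single whole-scan correlation would average away.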
In contrast to previous studies that reported relatively low heritability in functional brain networks (Glahn et al., 2010; Xu et al., 2017; Korgaonkar et al., 2014; Wan et al., 2022), our findings indicate significantly higher heritability across various regions of the brain network. This discovery not only challenges ...
$\sigma_{min}(\mathrm{e}^{J\tau})=\mathrm{e}^{\lambda\tau}\sqrt{\frac{(\tau^{2}+2)-\tau\sqrt{\tau^{2}+4}}{2}}=:\sigma_{m}(\tau)$ | $\tau>0$. If $A$ is non-diagonalizable with eigenvalue
$\lambda\in(0,0.5)$, $\sigma_{min}(\mathrm{e}^{J\tau})$... | $\lVert\mathrm{e}^{J\tau}v\rVert\geq\sigma_{min}(\mathrm{e}^{J\tau})\lVert v\rVert$,
where $\sigma_{min}(\mathrm{e}^{J\tau})$... | $\sigma_{min}(\mathrm{e}^{J\tau})$ is a monotonically increasing function
of $\tau$. Thus, $\sigma_{min}(\mathrm{e}^{J\tau})>1$... | value of $\mathrm{e}^{J\tau}$. Thus,
$\lVert R\rVert\geq\sigma_{min}(\mathrm{e}^{J\tau})$
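The closed form above can be sanity-checked numerically. We assume $J$ is the $2\times 2$ Jordan block with eigenvalue $\lambda$, for which $\mathrm{e}^{J\tau}=\mathrm{e}^{\lambda\tau}\left[\begin{smallmatrix}1&\tau\\0&1\end{smallmatrix}\right]$ in closed form (an assumption consistent with the non-diagonalizable case discussed here):

```python
import numpy as np

def sigma_min_exp_jordan(lam, tau):
    """Smallest singular value of e^{J*tau} for the 2x2 Jordan block
    J = [[lam, 1], [0, lam]], using e^{J*tau} = e^{lam*tau} [[1, tau], [0, 1]]."""
    expJ = np.exp(lam * tau) * np.array([[1.0, tau], [0.0, 1.0]])
    return np.linalg.svd(expJ, compute_uv=False)[-1]  # singular values are sorted descending

def sigma_m(lam, tau):
    """Closed form: e^{lam*tau} * sqrt(((tau^2 + 2) - tau*sqrt(tau^2 + 4)) / 2)."""
    return np.exp(lam * tau) * np.sqrt(((tau**2 + 2) - tau * np.sqrt(tau**2 + 4)) / 2)

lam, tau = 0.3, 1.7
# The SVD of the matrix exponential matches the closed-form expression.
assert abs(sigma_min_exp_jordan(lam, tau) - sigma_m(lam, tau)) < 1e-12
```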
Stability and safety verification in Ordinary Differential Equations (ODEs) has been explored widely and a detailed list of works can be found in survey papers [1] and [2]. Two different notions have emerged in existing literature that characterize stability and safety, respectively:
| Next, we present the temperature response and control variables under the two control strategies, as shown in Figs. 5-7. The spatiotemporal temperature distribution of the battery module in Fig. 5 shows an increased in temperature at 700s700𝑠700s700 italic_s for all two strategies after the disturbance injection by t... |
Input-to-state safety (ISSf) [4, 5, 6]: Here the objective is to ensure that the system state trajectories stay away from a predefined unsafe region, or in other words, stay close to the safe region. Specifically, trajectories moving from the safe zone towards the unsafe region will violate the safety boundary only in a sense proport...
Consider the system (4) with boundary conditions (8). Let us also consider the unsafe set for this system to be (12) and the metric measuring the distance from this unsafe set to be given by (13). If the controller gains are chosen such that the following inequalities are satisfied, |
The goal of this work is to design a control strategy that will guarantee safety of the battery system under anomalies. The criterion for safety is that the spatial norm of the temperature deviation of the battery from a set-point remains below a prescribed threshold $\overline{h}$. M...
Driven by the early success in the assessment of wellbeing in organizations, researchers have turned their focus to the use of passive sensing in the assessment of workplace performance. By tracking behaviors, recent work has attempted to characterize workplace performance with the long-term goals of identifying the be... | Feng and Narayanan [10] propose a method for capturing behavioral consistency in wearable data using the activity curve model. They find that consistency features improve accuracy by up to 6% when compared to using only summary features from the Fitbit fitness tracker in a study of 97 hospital workers throughout 10 wee... |
In the behavioral side of workplace performance inventories, IOD-ID and IOD-OD are measures of “bad” conduct in the workplace. Behaviors that indicate IOD-ID can involve cursing a co-worker, playing pranks, or making fun of someone. Behaviors that indicate IOD-OD can be tardiness or absenteeism, leaving work early wit... | Driven by the early success in the assessment of wellbeing in organizations, researchers have turned their focus to the use of passive sensing in the assessment of workplace performance. By tracking behaviors, recent work has attempted to characterize workplace performance with the long-term goals of identifying the be... |
Therefore, researchers in the pervasive computing community have often turned to job performance inventories developed by psychologists as ground truth to measure perceived workplace performance across organizations and industries in a generalizable manner. They have used passive sensing to predict participant scores ... | D |
Since both the training speed as well as the final accuracy are important factors in federated learning, we measure: (i) the performance achieved at a specified number of rounds and (ii) the number of rounds required for an algorithm to attain the desired level of target accuracy, following Al-Shedivat et al. [2].
For ... | Table 2: Results from reduced participation rates (2% for 100 clients, 1% for 500 clients) on CIFAR-10 and CIFAR-100 with the Dirichlet parameter 0.3.
FedCM† and FedDC‡ require 50% and 100% additional communication costs for each communication round, respectively. | Since the numbers of local epochs and iterations are set to 5 and 50, respectively, each client has little training opportunity with few training examples and client heterogeneity increases significantly.
As shown in Table 2, FedACG outperforms the other methods in most cases, with the performance gap between FedACG an... | Results on three benchmarks with two different federated learning settings.
For (a) a moderate-scale experiment, the number of clients and the participation rate are set to 100 and 5%, respectively, while (b) a large-scale setting has 500 clients with a 2% participation rate. The Dirichlet parameter is commonly set t...
| A |
Other works have addressed spectrum allocation with other optimization objectives; e.g., researchers have considered throughput maximization as an objective [40, 49] under various constraints such as
maximum allocated power [31], given QoS requirements [29], etc. | context.² If we use the RL technique in our setting by considering
actions as power allocations, we’ll need to provide training examples for every possible system state, due to the lack of an underlying MDP, making the approach infeasible. Note that in a setting with no underlying MDP, the RL approach learns the polic... |
Paper Organization. The rest of the paper is organized as follows. In the following section, we develop our spectrum allocation model and setting, discuss related work, and give a high-level overview of our approach. In §III, we develop our CNN-based deep learning model and associated techniques for spectrum allocatio... | See Fig. 10(b), which plots the $\mathcal{A}_{\mathrm{err}}$ metric in PU-Setting for the above models compared with our DeepAlloc.
We see that our approach of pre-training using log-normal model-based images in DeepAlloc yields a notable per... | Second, in our context, an unsupervised approach is meaningless as unlabelled samples have minimal information (actually, zero information in the PU-Setting), and as explained
in §III, a reinforcement-learning approach is also not suitable for our setting. | D |
$d(g\mathcal{C}_{1},\mathcal{C}_{2})\leq\sqrt{2}\,\lVert g\gamma_{1}-\gamma_{2}\rVert\ldots\leq\sqrt{2}\,\frac{\delta L}{\hat{c}}(e^{\hat{c}L}-1)$. | In this paper, we considered practical aspects of reconstructing planar curves with prescribed Euclidean or affine curvatures. An immediate extension of the current work would be the reconstruction of planar curves with prescribed projective curvatures, and obtaining distance estimates between curves, modulo a projecti...
To a human eye, two figures look the same if they are related by a rigid motion. However, since a reflection changes the orientation of an object, a group of orientation-preserving rigid motions, consisting of rotations and translations only, is often considered. This group is called the special Euclidean group and is... | In this paper, we used the Hausdorff distance between curves when considering both the $SE(2)$- and the $SA(2)$-actions on the plane. However, while the Hausdorff distance is $SE(2)$-invariant, it is not $SA(2)$... | In this work, we consider congruence of planar curves relative to the special Euclidean group $SE(2)$ and the special affine group $SA(2)$. The latter group consists of compositions of area and orientation preserving (i.e. unimodular) linear transformati...
The regret bounds summarized in Table 1 are consistent with regret bounds of full-gradient based online optimization algorithms proved in the existing literature [29, 11, 7, 23] under similar settings. Our dynamic regret bounds for strongly convex functions proved in Theorems 6 and 7 might need multiple updates at each... | To the best of our knowledge, Coordinate descent [31], as an important class of optimization algorithms, is not sufficiently analyzed by researchers in the online optimization community. In coordinate descent algorithms, most components of the decision variable are fixed during one iteration while the cost function is ... |
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to look at possibilities of using coordinate descent methods to solve OCO... |
The rest of the paper is organized as follows. The problem formulation is presented in Section 2. The online coordinate descent algorithm considered in this paper is given in Section 3. Regret bounds for random online coordinate descent algorithms are given in Section 4 followed by regret bounds for deterministic online c... | In this work, we have proposed online coordinate descent algorithms to deal with optimization problems that may change over time. Three widely used update rules of coordinate descent are considered. Under different assumptions, we have provided different upper bounds on the regrets of these online algorithms. In par...
Many researchers have taken an interest in memristive devices, given their ability to implement tunable nonvolatile weights similar to synaptic efficacy in biological synapses. Following this paradigm, memristors have been applied to many network-based computational approaches. Doygu et al. simulated memristor network... |
The last section (VI.C) explored the computational properties of STP and the double exponential dynamics. Both STP and double exponential decay dynamics increased the accuracy of the network compared to the original HOTS network with single exponentials and no-STP. STP contributes to reducing ”Noise” in the network. M... | In recent years, a different approach has surfaced. Several works have demonstrated the presence of transient conductance responses in memristive devices akin to short-term plasticity (STP) and Excitatory/Inhibitory Post Synaptic Potentials (EPSP/IPSP)[12, 13, 14, 15]. These memristors with “volatile” properties are ex... | Recent developments in semiconductor technology have led to the design and creation of a new class of devices called memristors. It has been predicted that memristors will be used in the near future as the atomic component of more advanced and complex systems, which can provide performance superior to conventional tran... | Changing the material properties enables “programming” the exponential decays in the range of tens to hundreds of milliseconds, which is optimal for temporal integration of events for many real-world datasets [28, 44]. Due to the double exponential decay, closely spaced pulses generate accumulation. This property allow... | B |
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot (1974). In this model the o... |
In this paper we study an agent-based model for opinion formation on a social network where the opinion of an agent depends both on its own intrinsic opinion and on the opinions of its network neighbors. One of the earliest influential models in this direction was defined by DeGroot (1974). In this model the o... | In this paper we study the following Hegselmann-Krause system (HKS). We have $n$ agents and their opinions are modeled by points in $d$-dimensional Euclidean space $\mathbb{R}^{d}$, for some $d\geq 1$. The age... | Johnsen (1990) extended this by incorporating private opinions.
Every agent has a private opinion which does not change and an expressed opinion that changes over time. The expressed opinion of an agent is determined as a function of the expressed opinions of its neighbors and its private opinion. | Johnsen (1990) (extending earlier work by DeGroot (1974)) each agent has an innate opinion and strategically selects an expressed opinion that is a compromise of its innate opinion and the opinions of its neighbors. Recently, co-evolutionary and game-theoretic variants were studied Bindel
et al. (2015); Bhawalk... | C |
Neural networks consist of multiple layers that process input data to produce output predictions. Each layer takes a previous vector of activations, represented as $h(n)$, and applies a function $F$ to generate a new state activation, $h(n+1)=F(h(n))$...
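The layer recurrence $h(n+1)=F(h(n))$ can be illustrated with a toy $F$; the ReLU layer, the random weights, and the sizes below are illustrative assumptions, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4)) * 0.1  # weights of the toy layer
b = np.zeros(4)                        # biases

def F(h):
    """One layer: an affine map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, W @ h + b)

h = rng.standard_normal(4)  # h(0): the input activations
for _ in range(3):          # h(3) = F(F(F(h(0))))
    h = F(h)
```

Each application of `F` maps one activation vector to the next, which is exactly the iteration the recurrence describes.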
In our project, we employed the DenseNet architecture, which comprises two main blocks: the DenseBlock and the TransitionBlock. The DenseBlock keeps the feature size dimension constant while varying the number of filters. Within a DenseBlock, each layer performs a 1x1 convolution for feature extraction and a 3x3 convo... |
Convolutional Neural Networks (CNNs) Fukushima and Wake (1991) are widely used neural network architectures in image classification tasks. They efficiently extract and learn image features through convolution and pooling layers. The pioneering CNN architecture, AlexNet Krizhevsky et al. (2012), employed multiple convo... |
The DenseNet architecture Huang et al. (2017) addresses the vanishing gradient problem by incorporating dense connections between layers. In DenseNet, each layer is directly connected to all the preceding layers in the network. This connectivity pattern enables each layer to access the feature maps produced by all the... | To initialize the network, we employed transfer learning by utilizing pre-trained weights from ImageNet. The early layers of the DenseNet, which capture general image features like edges, were left unchanged. We skipped the top layers, which contain more specific image features, and added two additional layers: a Globa... | C |
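The dense connectivity pattern just described can be sketched in toy form. The "layers" below are random channel-mixing maps standing in for the 1x1/3x3 convolutions, and all sizes are illustrative; the point is only that every layer consumes the concatenation of all preceding feature maps:

```python
import numpy as np

def dense_block(x, num_layers=3, growth=2):
    """Toy DenseBlock: each 'layer' sees the channel-wise concatenation of the
    input and all previous layers' outputs (DenseNet-style connectivity)."""
    rng = np.random.default_rng(0)
    features = [x]                                      # list of (channels, H, W) arrays
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)          # all preceding feature maps
        w = rng.standard_normal((growth, inp.shape[0])) # channel-mixing (1x1-conv analogue)
        out = np.tensordot(w, inp, axes=(1, 0))         # -> (growth, H, W)
        features.append(out)
    return np.concatenate(features, axis=0)

x = np.zeros((4, 8, 8))  # 4 input channels on an 8x8 spatial grid
y = dense_block(x)
# Channels grow linearly: 4 input + 3 layers * growth 2 = 10.
assert y.shape == (10, 8, 8)
```

The linear channel growth per layer is why DenseNet calls `growth` the growth rate; the spatial size stays fixed inside the block, matching the DenseBlock description above.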
Among the best-known algorithms for sequential selection are the multi-armed bandit algorithms, such as Thompson sampling (Thompson, 1933), and upper confidence bounds (Lai and Robbins, 1985; Auer et al., 2002). Although multi-armed bandit algorithms solve the exploration–exploitation tradeoff to maximize total effecti... | Unlike multi-armed bandit algorithms, BAI algorithms are designed solely to deliver the most effective exploration.¹ Although the term “best arm identification” has appeared only recently, several strands of research share the same goal, among which ranking and selection (RS; Bechhofer 1954) is among the best known. S...
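A minimal sketch of the upper-confidence-bound idea mentioned here (UCB1-style index; the arm means, horizon, and Gaussian rewards are illustrative assumptions):

```python
import numpy as np

def ucb1(means, horizon, seed=0):
    """UCB1: pull each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / pulls)."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialization: pull every arm once
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        reward = rng.normal(means[arm], 1.0)  # noisy reward from the chosen arm
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.0, 1.0, 0.2], horizon=2000)
```

Over the horizon the best arm accumulates most of the pulls, illustrating how the confidence bonus trades exploration against exploitation.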
Although RS and BAI are both used to find the best arm, they differ considerably in that RS considers a static allocation of samples based on knowledge of the true model parameter. | Among the best-known algorithms for sequential selection are the multi-armed bandit algorithms, such as Thompson sampling (Thompson, 1933), and upper confidence bounds (Lai and Robbins, 1985; Auer et al., 2002). Although multi-armed bandit algorithms solve the exploration–exploitation tradeoff to maximize total effecti... |
The aforementioned works focus on fixed-budget identification, in which the horizon $T$ is fixed. There is also a significant body of literature devoted to fixed-confidence identification. In this scenario, the forecaster is given a confidence level $\delta$ and aims to stop the sampling as soon as t...
$\overline{\mathcal{V}}^{ij}=\left\{p\,\big|\,\big(p-\frac{p^{i}+p^{j}}{2}\big)^{\mathrm{T}}\frac{p^{ij}}{\|p^{ij}\|}\geq\frac{1}{2}r_{\text{min}}\right\}.$ | This scenario is designed to emphasize that the modified space partition constraint in (III-A) leads to a more accurate separation among the robots and thus a higher utilization rate of the workspace.
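The half-space constraint can be checked directly in code. Since the definition of $p^{ij}$ is not shown in this excerpt, we assume the convention $p^{ij}=p^{i}-p^{j}$ purely for illustration:

```python
import numpy as np

def in_buffered_cell(p, p_i, p_j, r_min=0.5):
    """Membership in the buffered cell: p lies at least r_min/2 beyond the
    midpoint of p_i and p_j, measured along the unit vector p_ij / ||p_ij||.
    We assume p_ij = p_i - p_j (sign convention not fixed by the excerpt)."""
    p_ij = p_i - p_j
    mid = (p_i + p_j) / 2
    return (p - mid) @ (p_ij / np.linalg.norm(p_ij)) >= r_min / 2

p_i, p_j = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
assert in_buffered_cell(np.array([0.8, 0.3]), p_i, p_j)      # well on robot i's side
assert not in_buffered_cell(np.array([0.1, 0.0]), p_i, p_j)  # inside the r_min/2 buffer
```

The margin $\frac{1}{2}r_{\text{min}}$ shifts the usual Voronoi bisector away from the midpoint, which is what keeps two cells at least $r_{\text{min}}$ apart.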
A comparison between the proposed method and the traditional BVC [32] is shown in Fig. 9 for a particular setup. | Top: The traditional BVC method may generate zigzag motions as BVC partitions the workspace only based on the current positions of all robots, as illustrated at $t=1.2\,s$.
Bottom: The proposed MBVC-WB however divides the workspace based on all future planned trajectories as shown at t=... | Specifically, since the BVC only considers the current positions of all robots for space partition, all future positions in the planned trajectory are limited to this partition.
Thus, it often leads to an overly conservative navigation structure with excessive braking and low efficiency. | (i) BVC only considers the current positions of robots $i$ and $j$, while MBVC-WB takes into account all future positions of both robots according to their predetermined trajectories.
This leads to a more accurate space separation and thus a higher utilization rate of the workspace; | D |
Note that the model never saw rotated images during training but still manages to encode and reconstruct them due to its inherent equivariant design.
We find that the encoded latent representation is indeed rotation invariant (up to machine precision), but only for rotations of an angle $\theta=n\cdot\frac{\pi}{2},\,n\in\mathbb{N}$... | For all other rotations, we see slight variations in the latent code, which, however, is to be expected due to interpolation artifacts for rotations on a discretized grid. Still, inspecting the 2d-projection of the latent code of our proposed model in Figure 2, we see distinct clusters for each digit class for the diff... | We also run experiments on the ShapeNet dataset Chang et al. (2015). We utilized 3D Steerable CNNs proposed by Weiler et al. (2018b) as equivariant encoder for the 3d voxel input space. We utilized the scalar outputs as rotation-invariant embedding ($z$) and predict (analogously to our experiments on 3d point ...
In the first experiment, we train an SO(2)-invariant autoencoder on the original (non-rotated) MNIST dataset and validate the trained model on the rotated MNIST dataset (ref. mni ) which consists of randomly rotated versions of the original MNIST dataset. For the functions $\eta$ and $\psi$ we utiliz... | Next, we train a permutation-invariant autoencoder on sets of digits. A set with $N$ digits is represented by concatenating one-hot vectors of each digit in an $N\times D$-dimensional matrix, where we take $D=10$. Notice that this matrix-representation of a set is not ...
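The set-as-matrix representation, and the fact that the raw matrix is order-dependent while a pooled summary is not, can be sketched as follows (sum-pooling is used as an illustrative permutation-invariant map, not necessarily the paper's choice):

```python
import numpy as np

def set_matrix(digits, D=10):
    """Represent a set of N digits as an N x D matrix of one-hot rows."""
    m = np.zeros((len(digits), D))
    m[np.arange(len(digits)), digits] = 1.0
    return m

def invariant_embedding(m):
    """Sum over rows: a permutation-invariant pooling of the set matrix."""
    return m.sum(axis=0)

s = [3, 1, 4, 1]
m = set_matrix(s)
assert m.shape == (4, 10)
# The matrix itself depends on row order, but the pooled embedding does not:
perm = np.array([2, 0, 3, 1])
assert np.allclose(invariant_embedding(m), invariant_embedding(m[perm]))
```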
Although models are generally improved by using data on road traffic and other pollution sources, these data can be difficult or expensive to obtain for developing nations or citizen scientists. We demonstrate that even a minimal feature set containing only sensor locations and readings can be useful. Additionally, all... | Secondly, the measurements are treated as ground-truth readings. This simplifies the error metric calculations, but also makes irrelevant one of the advantages of probabilistically modelling the data, as this allows that uncertainty to be modelled directly.
Another limitation is that the results on the satellite data a... | The main contribution of this paper is showing Bayesian optimisation to be useful for automating planning of pollution sensor networks. Specifically, we consider the problem of iteratively placing stationary sensors and locating the maximum average pollution in the given area.
Knowledge of the maximum lets us know whet... |
Table 1: Confidence intervals for the means of the maximum ratio at the final iteration for the satellite data. Given are means $\pm$ one standard deviation of the mean. The best values are given in bold. This confirms the story from Figs. 4 and 5 that improved results are obtained on the strong and selec... | Part of our rationale for using comparatively simple GP kernels is to facilitate interpretation of hyperparameters, and analysis of the distribution patterns of pollution in an area, e.g. its smoothness and variability in space and time. This lets us perform “at-a-glance” comparisons of different areas, discern whether...
In Section C.2, we also experiment with increasing the complexity of $H$ and $H^{*}$ by using two hidden layers instead of a single hidden layer in the MLP.
We did not observe significant improvements in variance reduction. | The benefits of using Stein operators to construct discrete CVs are twofold. First, the operator structure permits us to learn CVs with a flexible functional form such as those parameterized by neural networks.
Second, since our operators are derived from Markov chains on the discrete support, they naturally incorporat... | We then apply it to generalize the linear CVs in Double CV to very flexible ones such as neural networks.
Moreover, we provide an effective CV design based on surrogate functions that requires no additional evaluation of $f$ compared to RLOO. | Our RODEO estimator does not rely on continuous reparameterization of the distribution, requires no additional function evaluations per sample point, and can be adapted online to learn very flexible control variates parameterized by neural networks.
| while still maintaining a tractable correction term.
Our method enables online adaptation of CVs to minimize gradient variance (similar to RELAX [20]) but does not assume $q_{\eta}$ has a continuous reparameterization.
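A generic leave-one-out control-variate sketch in the spirit of RLOO (not the RODEO estimator itself; the Bernoulli model and the toy $f$ are assumptions made for illustration) shows the variance effect of a baseline on the score-function gradient:

```python
import numpy as np

def grad_estimates(theta=0.4, K=64, batches=500, seed=0):
    """Score-function gradient of E[f(b)], b ~ Bernoulli(theta), with and
    without a leave-one-out baseline. f has a large constant offset, which the
    baseline removes (the true gradient of E[b + 10] is 1)."""
    rng = np.random.default_rng(seed)
    f = lambda b: b + 10.0  # offset makes the naive estimator very noisy
    naive, rloo = [], []
    for _ in range(batches):
        b = (rng.random(K) < theta).astype(float)
        score = b / theta - (1 - b) / (1 - theta)  # d/dtheta log p(b; theta)
        naive.append(np.mean(f(b) * score))
        baseline = (f(b).sum() - f(b)) / (K - 1)   # leave-one-out mean
        rloo.append(np.mean((f(b) - baseline) * score))
    return np.var(naive), np.var(rloo)

v_naive, v_rloo = grad_estimates()
```

Both estimators are unbiased, since the leave-one-out baseline is independent of the sample it corrects, but the baselined version has far smaller empirical variance.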
Now let us consider negative MRC. When both users fail it has no effect. If there is one successful user, it can happen that negative MRC turns it into an undecodable one, thus making SIC impossible. Similarly, if both users are successful, negative MRC could make them both undecodable (in this case it is not enough to... | The markers represent the performance of the probability of the outage for the idealized version of the Random selection scheme and serve as a reference.
The dotted lines show the actual performance of MRC in the presence of correlated interference. The dashed lines depict the scenario with finite pool of Q=24𝑄24Q=24i... | In Fig. 5 we present a comparison of the outage probability obtained by the simulation results and the developed approximations.
Once again, we consider the performance of the Steiner system and Random selection across the range of received SNRs and traffic intensities. |
In Fig. 4 we plot the outage probability as given by the proposed approximation (17) (dashed lines) and compare it to the results of the corresponding simulations in which the full procedure is implemented (markers). The derived approximations prove to be very close to the exact results across the whole SNR range and ... | The results from simulations, which implement the exact procedure, are given with markers.
Solid and dashed lines (Steiner and Random respectively) correspond to the approximation based on eq. (26) (Approx. 1). Similarly, dotted and dash-dotted lines (Approx. 2) correspond to the simpler approximation by gamma distribu... | A |
We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides’ $3/2$ ratio. In fact, in our algorithm, the yielded path has length at
most $(300)^{9/2}\log{300}$... |
The Analyst’s Traveling Salesman Problem (ATSP) [Jon90] is a generalization of the TSP where it is asked to find a curve of finite length that contains a given (finite or infinite) set $V\subset\mathbb{R}^{N}$. While the TSP (w... |
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves... | An interesting connection to the Jones-Schul algorithm was given by Gu, Lutz, and Mayordomo [GLM06], who classified the sets $V$ that admit a solution to a computable extension of the ATSP. This variant of the problem characterizes the sets which are contained in a rectifiable computable curve. As we are concern...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
This is a discriminant that characterizes the linear subspaces of dimension $n-r$ that intersect $V$ non-transversally.
When $\deg(V)\geq 2$, these linear spaces form a hypersurface in the corresponding Grassmannian. | To avoid the case that the denominator is zero in the resultant computations, we can apply the technique of the generalized characteristic polynomial [6], similarly to the projective case. Now, we can apply a symbolic perturbation to all the terms of all the polynomials [49]
or only to the terms that appear in the diag... | a system of $n+1$ polynomials in $n+1$ variables; we concentrate on the $\bm{x}$ variables.
The polynomial $M$ plays the role of the $u$-resultant (also appearing under the term separating linear form). | The coefficients of the linear forms in this factorization correspond to the solutions of the zero dimensional system. To force (some) of these solutions to have multiplicities
we compute the discriminant $R_{2}$ of $R_{1}$... | As $V$ is a complete intersection, if the linear forms are generic, then the resulting system
does not have any solution and hence its resultant is not zero. The resultant of the system when we eliminate the variables $\bm{x}$, using the
The submission to the Ethical Committee covered essential topics for the development of the experiments. Among others, the adequacy of the volunteers’ informed consent, the research goals and plans, the data management and de-identification procedures, and the compliance with the European General Data Protection Regul... |
Moreover, they granted permission for processing their personal data to the extent necessary for the implementation of the research project, including sharing with other researchers their physiological recording and speech features, as well as the initial questionnaires and self-reported annotations. |
The libraries recommended for further processing of the WEMAC dataset are the ones we found most useful for data cleaning and filtering for physiological and speech signals. On the one hand, Matlab® was employed for the physiological data processing using the TEAP toolbox (https://github.com/Gijom/TEAP). On the other...
Self-reported annotations [19]: They contain the emotional labeling reported by the participants after watching each of the 14 videos in the experiment. The data are stored in one CSV file that contains 14 columns and 1,400 rows (100 volunteers × 14 clips). Regarding ...
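The stated dimensions are mutually consistent, as a quick index sketch confirms (the (volunteer, clip) indexing scheme below is hypothetical, used only to reproduce the row count):

```python
import itertools

# One row per (volunteer, clip) pair, as described for the annotations CSV.
rows = list(itertools.product(range(1, 101), range(1, 15)))
assert len(rows) == 1400  # 100 volunteers x 14 clips
assert rows[0] == (1, 1) and rows[-1] == (100, 14)
```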
Since the release of the raw speech signals is not possible due to ethics and privacy issues 777Regulation (EU) 2016/679 of the European Parliame... | A |
Neural Network (NN): NN comprises interconnected neurons arranged in layers to process and transmit information. The input layer receives data, hidden layers perform computations, and the output layer produces the final prediction. NNs learn from large datasets, capturing complex nonlinear relationships between feature... |
In Deep4MalDroid (Hou et al., 2016a), the authors introduce a unique dynamic analysis method called Component Traversal. This method generates system call graphs based on the code routines within Android applications. They then apply a deep learning framework to these graph features and conduct malware classification.... | Neural Network (NN): NN comprises interconnected neurons arranged in layers to process and transmit information. The input layer receives data, hidden layers perform computations, and the output layer produces the final prediction. NNs learn from large datasets, capturing complex nonlinear relationships between feature... |
Deep Belief Network (DBN): DBN is an unsupervised machine learning algorithm consisting of multiple layers of restricted Boltzmann machines (RBMs). Each layer of RBMs is trained unsupervised to learn a compressed representation of input data (Hinton, 2009). The output of one RBM layer serves as input to the next, and ... |
Variational Auto-encoder (VAE): VAE, a generative deep learning model for unsupervised learning (Diederik et al., 2014), encodes input data into a lower-dimensional latent space and decodes this representation to generate new, similar data. VAEs use a probabilistic approach to training and can learn complex data distr... | C |
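The encode-sample-decode cycle described for VAEs can be sketched without any framework. The encoder and decoder below are hypothetical stand-ins (toy linear maps, no learned networks); the sampling step is the standard reparameterization trick:

```python
import math, random

# Sketch of the VAE pipeline described above: encode input to a latent
# distribution (mean, log-variance), sample a latent code via the
# reparameterization trick, decode back to data space. The encode/decode
# maps are made-up stand-ins, not trained networks.

random.seed(0)

def encode(x):
    # Hypothetical encoder: returns (mu, log_var) of the latent Gaussian.
    return [0.5 * v for v in x], [-1.0 for _ in x]

def sample_latent(mu, log_var):
    # z = mu + sigma * eps, with eps ~ N(0, 1)
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def decode(z):
    # Hypothetical decoder back to data space.
    return [2.0 * v for v in z]

x = [1.0, -2.0]
mu, log_var = encode(x)
z = sample_latent(mu, log_var)
x_hat = decode(z)
print(len(x_hat) == len(x))  # reconstruction lives in the original space
```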
converges, as its partial sums differ from those of expression (1) by $\frac{s_{2}^{2}}{2\log 2}-\frac{s_{n+1}^{2}}{(n+1)\log(n+1)}$ ... | The last step of the proof of Lemma 3.5 is the only location where the positivity of $s_{n}$ is used. Indeed, $\sum s_{n}/(n^{3/2}(\log n)^{3/4})$ ... | Now, $\sum 1/(2n\log(2n))$ diverges to $\infty$ (at rate on the order of $\log\log n$). It thus suffices to show that the probability that no player is chosen on Step 1 and
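The divergence rate of $\sum 1/(2n\log(2n))$ quoted above follows from the integral test; a one-line justification:

```latex
% Integral-test justification for the loglog divergence rate quoted above.
\[
\sum_{n=2}^{N} \frac{1}{2n\log(2n)}
\;\asymp\; \frac{1}{2}\int_{2}^{N}\frac{dt}{t\log t}
\;=\; \frac{1}{2}\Bigl[\log\log t\Bigr]_{2}^{N}
\;=\; \tfrac{1}{2}\log\log N + O(1),
\]
% so the partial sums diverge to infinity at rate on the order of loglog N.
```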
|
The idea of the proof is that since Deviator must move to the right during periods where Honest moves substantially to the left (to avoid going below zero), Deviator must thus move to the left when Honest moves to the right to avoid being clearly right-biased (Steps 1 and 2 detect right-biased behavior). Thus Deviator... | Almost surely either a player was chosen on Step 1 or 2 or the sum of just the odd-numbered terms (given by B’s moves) of expression (2) diverges to $\infty$, by Lemma 3.6. In the latter case, the sum of the even-numbered terms must diverge to $-\infty$ (as the sum of all terms is convergent), and therefore cannot d... | D |
This platform will be fully open-sourced so that researchers can collectively develop new APIs to fit future attack/defense evaluation needs, and also contribute attack and defense implementations to form a semantic AD AI security benchmark, which can improve comparability, reproducibility, and also encourage open-sour... |
In this paper, we thus take the initiative to address this critical scientific methodology-level gap by developing a uniform and extensible system-driven evaluation platform, named PASS (Platform for Autonomous driving Safety and Security), for the semantic AD AI security research community (§V). We choose a simulatio... |
Simulation-centric hybrid design. To build such a community-level evaluation infrastructure, a fundamental design challenge is the trade-off between real vehicle-based and simulation-based evaluation methodology, which is shown in Table III. Real vehicle-based evaluation offers higher fidelity as it has the vehicle, sensors, and physical... | We choose the design to be simulation-centric because we find that the fidelity drawback of the simulation-based approach is tolerable since (1) our results show that for today’s representative AD AI attacks, the attack results and characteristics are highly correlated in today’s industry-grade simulator and physical world...
As described in §IV-A, to effectively fill this gap at the research community level, a common system-level evaluation infrastructure is highly desired. To foster this, we take the initiative to build a system-driven evaluation platform that unifies the common system-level instrumentations needed for AD AI security eva... | B |
In their paper, each vertex and edge of the original cubic graph was represented by a set of intervals, called vertex and edge gadgets respectively.
The interval model consisted of first all the vertex gadgets, and then all the edge gadgets arranged from left to right. | The number of intervals in any gadget was much greater than the total number of link intervals in the graph.
It was shown that, in any Maximum Cut partition of this interval graph, each vertex gadget or edge gadget could have only two possible partitions. | In their paper, each vertex and edge of the original cubic graph was represented by a set of intervals, called vertex and edge gadgets respectively.
The interval model consisted of first all the vertex gadgets, and then all the edge gadgets arranged from left to right. | On each of the figures, this gadget is colored in Red and Blue.
After we construct the whole graph $H$ of interval count two, we will argue that, for every Maximum Cut partition of $H$, the coloring of each of its gadgets is similar to one displayed on the corresponding figure. | For a vertex gadget, these two partitions were made to correspond to its membership in one of the partition sets for a maximum cut of the original cubic graph.
If two adjacent vertices of the cubic graph belonged to different sets, then the corresponding edge gadget would make more cut edges with link intervals than if... | A |
Hypothesis 8: BatchNorm enables “cheating” in certain tasks.
BatchNorm can leak information within batches to “cheat” certain objectives. While previous work finds empirical evidence for this effect through degrading performance [77], we conclusively demonstrate it by carefully designing a toy task: We propose an impos... | Fig. 7(a) shows how anticipation predictions immediately improve in batches which contain an instrument occurrence at some point later in the batch. These examples visualize the “cheating” effect explicitly, not only indirectly through impact on performance. Likely, the model has learned to recognize the instrument and... | We provide a comprehensive and detailed analysis of how BatchNorm affects end-to-end surgical workflow analysis. We show the advantage of end-to-end over 2-stage learning (Hypothesis 1), longer training sequences (H.2) and carrying hidden states across batches in online tasks (H.3) and how this can fail using BN-based ... | Hypothesis 6: Freezing backbone layers can improve BN models, but also their BN-free counterparts.
As argued before, the trade-off between sequence length and batch diversity is a disadvantage of BN-based models. By freezing parts of the CNN backbones, sequence lengths can be increased while maintaining the same number... | Hypothesis 9: “Cheating” can cause BN models to fail in instrument anticipation.
To visualize how BN can “cheat” during training in anticipation tasks, we evaluate BN in training mode, i.e. processing test videos offline in batches with sequence lengths equal to the training lengths and use batch (instead of global) st... | D |
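The leakage channel behind this "cheating" is mechanical: in training mode, BatchNorm normalizes with batch statistics, so one sample's value shifts every other sample's normalized output. A minimal, framework-free illustration with toy numbers (no claim about the paper's exact model):

```python
import math

# Batch normalization computed with *batch* statistics (training mode).
# Changing a single element of the batch shifts the normalized value of
# every other element -- the within-batch information leak discussed above.

def batchnorm_train(batch, eps=1e-5):
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

a = batchnorm_train([1.0, 2.0, 3.0, 4.0])
b = batchnorm_train([1.0, 2.0, 3.0, 40.0])  # only the last element differs

# The first element's output changed although its own input did not:
print(a[0], b[0])
```

Evaluating with global (running) statistics instead, as in eval mode, removes this dependence between batch elements.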
Results on IJB-B and IJB-C. IJB-B consists of 21.8 K images of 1,845 subjects and 55 K frames of 7,011 videos. IJB-C, an extended version of IJB-B, contains 31.3 K images of 3,531 subjects and 117.5 K frames of 11,799 videos. 10 K / 8 M and 19 K / 15 M of positive / negative pairs in IJB-B and IJB-C were used for 1:1 v...
Results on MegaFace. MegaFace consists of a gallery set of 1 M images with 690 K classes and probe photos of 100 K images with 530 classes. We followed the test protocol of ArcFace [4]. We removed noisy images and measured rank-1 accuracy for the 1 M distractor after following the identification scenarios using the dev... | Does it sufficiently satisfy WDFS? We conclude that UNPG helps FR models to form WDFS by reducing the gap between $\hat{\mathcal{S}}^{p}$ and $\hat{\mathcal{S}}^{n}$ ...
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3 M images with 6 accessories, 30 illuminations, 3 expressions, and 20 poses for 400 persons. We adopted the same training and test splits used in MixFace [41]. The training split was composed of 3.8 M images with 370 persons. In p... | Results on IJB-B and IJB-C. IJB-B consists of 21.8 K images of 1,845 subjects and 55 K frames of 7,011 videos. IJB-C, an extended version of IJB-B, contains 31.3 K images of 3,531 subjects and 117.5 K frames of 11,799 videos. 10 k / 8 M and 19 K / 15 M of positive / negative pairs in IJB-B and IJB-C were used for 1:1 v... | A |
In retinal imaging, GANs have been used to create synthetic data. Li et al. [27] highlighted the importance of enhancing the quality of synthetic retinal images in their review, emphasizing that using synthetic images in training can improve performance and help mitigate overfitting. | Anh et al. [30] tested the FundusGAN to generate eye-fundus images for two eye diseases, age-related macular degeneration and diabetic retinopathy, and demonstrated that the synthetic images generalise across the two diseases. However, the work of Anh et al. [30] was confined to a single dataset in which most part... | The second experiment determines whether clinical experts, who are very experienced with the analysis of eye fundus images, can distinguish between synthetic and real images. Such a step is essential to evaluate the effectiveness of StyleGAN2-ADA for generating synthetic eye fundus images. The experts were provided wit... | Burlina et al. [8, 10] have successfully developed a method for generating synthetic images. Their method is based on GAN, and they tested their method by showing that experts were unable to identify the synthetic images. However, their method has been patented and hence is not available for use by others. We h... | Bellemo et al. [28] described the possible advantages and limitations towards synthetic retina image generation using GANs. The authors highlighted the potential clinical applications of GANs concerning early- and late-stage AMD classification.
Burlina et al. [8] trained a Progressive GAN [29] on 133,821... | D |
In order to better observe the change of weights, we also show the change of weights corresponding to 16 local regions in all iterations in Fig.12.
From Fig.12, it is seen that the weight value fluctuates at the beginning of network training and it is gradually stabilized until the end of the training. | Actually, one purpose of our work is to explore how to automatically enhance the significance of local crucial regions in deep FER, while any landmarks are not given as the prior information of facial crucial regions.
Thus, in order to validate it, we make an analysis for the weights of 16 local regions obtained by our... | From Fig.10, it is obvious that some crucial regions obtain higher weights and non-crucial regions get smaller weights for each facial expression.
For examples, the areas including or around eyes are given higher weights for the first person in the first row, where the maximum is given the local region located at the c... | Some patches that are visually more discriminative are lightened with higher weights and some patches located at the non crucial regions cut down with smaller weights.
In summary, the analyses for non-local weights demonstrate that the proposed method can effectively automatically enhance the significance of facial cru... | Experimental results also demonstrate that some local crucial regions can be effectively enhanced in feature learning by LNLAttenNet while there are not any given information of landmarks in the training model.
Moreover, the proposed method focuses on enhancing facial crucial regions in FER without any landmark informa... | C |
Both strategies produce higher ratings for larger $n$, up to $n=\Theta(k^{1/3})$, at which point the asymptotic highest rating in $k$ is $\Theta(k^{1/3})$ ... | Each player is given some ‘rating’ value (measured in ‘points’ or simply ‘Elo’), which updates as they play games. These rating points are somewhat analogous to poker chips: when player $A$ and player $B$ play a game, they each place some of their rating points into a pot. In the case of a draw, the p... | Both strategies produce higher ratings for larger $n$, up to $n=\Theta(k^{1/3})$, at which point the asymptotic highest rating in $k$ is $\Theta(k^{1/3})$ ... | Pick any pair of players whose ratings are within $\delta$ of each other and have the higher rated player beat the lower rated player. Repeat until no two players are within $\delta$ rating points or until $k$ games have been played. Note for $\delta=0$, one recovers the origi... | This strategy is guaranteed to produce a player of either very high or very low rating. If it produces a player of very low rating, simply re-do the strategy picking the same sequence of pairs of players but have the opposite player win. Since game outcomes are symmetric, this will produce a player of high rating inste... | D |
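The "rating points as poker chips" analogy above can be sketched directly. The fixed stake below is an illustrative simplification of my own, not the actual expected-score-dependent Elo stake:

```python
# Sketch of the pot analogy described above: each player antes a stake of
# rating points into a pot; the winner takes the pot, and a draw splits it
# evenly. The fixed stake is a hypothetical simplification, not real Elo.

def play(rating_a, rating_b, result, stake=16):
    """result: 1.0 if A wins, 0.0 if B wins, 0.5 for a draw."""
    pot = 2 * stake
    rating_a -= stake                # both players ante into the pot
    rating_b -= stake
    rating_a += pot * result         # winner takes the pot ...
    rating_b += pot * (1 - result)   # ... and a draw splits it evenly
    return rating_a, rating_b

print(play(1500, 1500, 1.0))  # A ends up at 1516, B at 1484
print(play(1500, 1500, 0.5))  # a draw returns both stakes: 1500, 1500
```

Note the total rating is conserved by every game, which is what makes the chips analogy apt.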
Each interview lasted about 1 hour and 15 minutes, and the interviews were structured as follows:
(1) introduction of the primary objectives of HardVis, including the analytical tasks and design goals of Section 3; (2) presentation of the functionality of every visualization and interaction with the system using the ir... | To verify our sampling execution actions, we continuously monitor the process through the Sankey diagram, as shown in Figure 1(g). From this representation, we acknowledge that the population of safe instances decreased drastically when the undersampling was executed. The manual undersampling and oversampling processes... | Limitations identified by the experts.
E1 and E2 were concerned about the scalability of the system. The former concentrated on the problem of visualizing hundreds of features, while the latter on the exploration of more than three classes. E1 acknowledged that the box plots and the table heatmap view are interactive w... | The promising findings we were able to obtain with the help of our VA system in the usage scenario of Section 5 amazed E3 and E4. While using the same value for the number of neighbors parameter and the k-value for the distribution of data types, E3 appreciated that the k-value could still be adapted freely, as illustr... | All experts agreed that HardVis’ workflow is well designed and reasonable from their perspective. They characterized the workflow as straightforward and aligned with respective fully-automated sampling processes. E1 and E2 repeatedly commented positively upon our systematic and fine-grained approach that they have neve... | D |
Examples of frontrunning attacks can be seen on various decentralized applications (dApps).
The first and most prominent attack vector is on decentralized exchanges (DEX). DEX is an exchange platform built on smart contracts and enables users to exchange assets without the need for an intermediary [42]. Unlike centrali... |
In this paper, we proposed a decentralized framework, FIRST for mitigating frontrunning attacks on EVM-based smart contracts without modifying the consensus layer of blockchain. FIRST is not an application-specific solution and hence is more accessible for implementation in various dApps. We experimentally show that w... |
Profitability analysis: The profits made by frontrunners have been quantified by Torres et al. [47] and Qin et al. [42]; the latter also brought to attention the presence of private transactions submitted to miners. Zhou et al. [55] formalized and quantified the profit made by sandwich attacks enabled by frontrunning ... |
a) We propose FrontrunnIng Resistant Smart ConTracts (FIRST), a framework that significantly curtails frontrunning attacks in EVM-based blockchains without requiring any changes in the underlying blockchain infrastructure. Our framework is not application-specific, and hence can be easily adopted by any dApps. | A frontrunner can perform attacks with highly predictable results due to deterministic pricing mechanism as well as the transparency of liquidity amounts of decentralized exchanges. In this context, Qin et al. estimated a profit of 1.51 Million USD made by frontrunners [42]. Other domains that are affected by frontrunn... | D |
Figure 4: We plot the score distributions that are induced by $\beta_{\text{comp}}$, $\beta_{\text{strat}}$, $\beta_{\text{cap}}$ ... | We construct a simulated distribution over agent unobservables using the NELS data. (Due to computational constraints of the data generation, we construct a distribution over $K=8$ representative agent types instead of using the empirical distribution over all $m$ unobservables. Additional det...)
In Section 5, we aim to learn the selection criterion that maximizes the equilibrium policy value. We develop a consistent estimator of the policy gradient, the gradient of the equilibrium policy value with respect to the selection criterion, and run gradient descent. Adapting the approach of Wager and Xu (2021), we e... |
The decision maker runs a unit-level experiment each epoch to obtain a consistent estimate of the model gradient (Section 5). The decision maker then updates the selection criterion $\beta$ via projected gradient descent. Recall that the model gradient accounts for students’ strategic behavior but does not accou... | We generate a simulated dataset with $n=14915$ students. For the capacity-aware baseline, we consider an RCT where treatments are assigned to $n$ students in order to estimate the CATE. For the gradient-based methods, we randomly initialize the policy and optimize $\beta$ v... | D |
To gauge the robustness of models, it is important to examine their behaviors across varying levels of biases. For this, we present the unbiased accuracies obtained by training separate models on training sets with $p_{bias}\in\{0.75,0.9,0.95,0.99\}$ ...
To examine if the proposed inductive biases improve bias-resilience in other architectures too, we created OccamEfficientNet-B2 and OccamMobileNet-v3 by modifying EfficientNet-B2 [65] and MobileNet-v3 [28, 29]. OccamNet variants outperform standard architectures on both Biased MNISTv2 (OccamEfficientNet-B2: 59.2 vs. E... | Modifications for COCO-on-Places.
For COCO-on-Places, the images are small (64×64), so for ResNet-18 and OccamResNet-18, we replace the first convolutional layer (kernel size=7, padding=3, stride=2) with a smaller layer (kernel size=3, padding=1 and stride=1) and also remove the...
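The motivation for swapping the first convolution can be checked with the standard output-size formula. A small arithmetic sketch (the layer hyperparameters match the stock ResNet-18 first conv and the replacement described above; nothing else is taken from the paper):

```python
# Spatial size after a convolution: out = floor((in + 2p - k) / s) + 1.
# For 64x64 inputs, the stock first layer (k=7, p=3, s=2) halves the
# feature map, while the replacement (k=3, p=1, s=1) preserves it.

def conv_out(size, kernel, padding, stride):
    return (size + 2 * padding - kernel) // stride + 1

print(conv_out(64, 7, 3, 2))  # 32: stock ResNet-18 first conv
print(conv_out(64, 3, 1, 1))  # 64: small-image replacement
```

Keeping full resolution this early matters precisely because the inputs are only 64×64 to begin with.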
|
Ablations. To study the importance of the proposed inductive biases, we perform ablations on Biased MNISTv2 and COCO-on-Places. First, to examine if the multi-exit setup is helpful, we train networks with single exit attached to the end of the network. This caused accuracy drops of 29.1% on Biased MNISTv2 and 8.4% on ... | C |
Furthermore, static contexts and motional contexts are highly correlated, not isolated, because both contexts are complementary to each other to represent the information existing in several nearby frames. Therefore, the ideal solution for learning local temporal contexts is to jointly learn static and motional context... |
In this paper, we propose a Coarse-to-Fine Feature Mining (CFFM) technique to learn local temporal contexts, which consists of two parts: Coarse-to-Fine Feature Assembling (CFFA) and Cross-frame Feature Mining (CFM). Specifically, we first apply an efficient deep network [52] to extract features from each frame. Then,... | The video contexts contain local temporal contexts which represent the contextual information from neighbouring/nearby frames and global temporal contexts which indicate the contexts from the whole video. This paper first studies local temporal contexts which can be further divided into static contexts and motional con... | Instead of directly modeling global relationships, we propose to model relationships only among necessary tokens for the joint learning of static and motional contexts. Our CFFM technique consists of two steps. The first step, Coarse-to-Fine Feature Assembling (CFFA), assembles the features extracted from neighbouring ... |
Figure 2: Overview of the proposed Coarse-to-Fine Feature Mining for mining local temporal contexts. All frames are first input to an encoder to extract features, which then go through the coarse-to-fine feature assembling module (CFFA). Features for different frames are processed by different pooling strategies to ge... | A |
Recommendation models constitute a crucial and widely deployed class of machine learning (ML) workloads [1]. These models employ compute-intensive neural networks and memory-intensive embedding tables to store user and item features [2]. With the increasing number of interactions between users and items, the size of t... | Large recommender models can be trained using two distributed modes: the hybrid mode and the data-parallel mode. In the hybrid mode, embeddings are stored and gathered on the CPU, while neural networks are executed on GPUs in a data-parallel manner. However, the training throughput of the hybrid mode is often limited d... |
In the GPU-only mode, multiple GPUs are used to store all embeddings and perform data-parallel neural network execution. However, this mode experiences low compute utilization primarily because recommendation models grow with the size of the embedding tables, resulting in a larger memory footprint rather than an incre... | The deep Learning Recommendation Model (DLRM) and the Time-Based Sequence Model (TBSM) are popular commercial models. These models are typically trained using either a hybrid CPU-GPU mode or a GPU-only mode [6, 7]. In the hybrid mode (Figure 1a), the CPU provides memory capacity for the embedding entries, while GPUs of... | To overcome these limitations of existing training modes, this paper introduces a novel heterogeneous acceleration pipeline called Hotline. The primary goal of Hotline is to fully exploit GPUs’ compute throughput and CPU-based memory capacity without encountering any communication or computation bottlenecks. By combini... | C |
of the codomain, $\mathbb{R}$. If $F$ is generic and transversal and $a$ is a transversal threshold, [4, Lem. 5.6] tells us $\mathcal{C}(F)_{\in\{a\}}$ has dimension $n-1$ ... | It is well-known (e.g. [1]) that the class of functions $F:\mathbb{R}^{n}\rightarrow\mathbb{R}$ realizable by feedforward ReLU neural networks is precisely the class of finite piecewise linear (PL) functions ...
Unsurprisingly, the key to understanding how the topology of sublevel sets changes as one varies the threshold are the PL analogues of points where the gradient of the function vanishes, cf. Theorem 1. These are the so-called flat or constant cells (Definition 3.7), which map to nontransversal thresholds (cf. Lemma 2.... | B |
Fortunately, neural network methods are more universal. The same neural network can be used to represent the states or to study the dynamical processes for various systems, such as those with different dimensions or with different interactions.
We have realized the time evolutions of the energy expectation value, the universal statistics of the topological defect numbers and the kink-kink correlations in a quantum phase transition of a TFQIM by virtue of the neural networks. The results were found to satisfy theoretical predictions. Thus, it numerically ver... | The recent state-of-the-art neural networks have been shown to provide highly efficient representations of such complex states, making the overwhelming complexity computationally tractable [6, 7]. Besides the success in industrial applications, such as image and speech recognition [8], autonomous driving,... | The topological defects, i.e., kinks, form in the course of the quantum phase transitions due to the KZM. It predicts that the power-law scaling of the mean kink number with the quench rate is proportional to $\tau_{Q}^{-d\nu/(1+\nu z)}$ ...
In [15] the machine learning methods were applied only to the unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and has richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic... | B |
$\min_{\bm{u}_{NN},\,p_{NN}}\ \mathcal{L}_{\textsf{init}}+\mathcal{L}_{\textsf{bdry}}+\mathcal{L}_{\textsf{momentum}}+\mathcal{L}_{\textsf{divergence}}$ | where $Q_{NN}$ is the $L^{2}$ projection onto the space of neural network functions chosen in the model. This in general cannot guarantee the divergence-free property of $\bm{\omega}$...
The goal of this paper is to construct the neural network model that can preserve the fluids helicity. Unlike the standard finite element methods based on the weak formulation of the PDE models, Physics-informed neural networks (PINN) model [27] is based on the strong PDE and thus, conservation can be shown to be made... |
Figure 2: Seq2seq PINN. The blue line is the initial condition for each sequence. The red line is the boundary condition for each sequence. The domain will be uniformly sectioned. When training of the first sequence is finished, the solution at $t=0.01$ will be calculated and used as the initial conditio... | The original PINN approach trains the NN model to predict the entire space-time at once. In complex cases, this can be more difficult to learn. The seq2seq strategy was proposed in [16], where the PINN learns to predict the solution at each time step, instead of all times. Note that the only data available of the first seq... | D |
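The seq2seq hand-off in the caption (train a segment, feed its final state to the next segment as the initial condition) can be sketched as a plain loop. The per-segment solver below is a stand-in explicit Euler integrator for $u'=-u$, not an actual PINN training step:

```python
# Sketch of the seq2seq scheme described above: the time domain is split
# into uniform segments, each segment is solved in turn, and the solution
# at a segment's end becomes the initial condition of the next segment.
# The per-segment "solver" is a placeholder Euler integrator for u' = -u.

def solve_segment(u0, t0, t1, steps=100):
    dt = (t1 - t0) / steps
    u = u0
    for _ in range(steps):
        u += dt * (-u)                 # placeholder dynamics u' = -u
    return u

def seq2seq_solve(u0, t_end=0.04, segments=4):
    ts = [i * t_end / segments for i in range(segments + 1)]
    u = u0
    for t0, t1 in zip(ts, ts[1:]):
        u = solve_segment(u, t0, t1)   # end state seeds the next segment
    return u

print(seq2seq_solve(1.0))  # close to exp(-0.04)
```

In the actual method, `solve_segment` would be a PINN training run on that time window, with the previous window's prediction supplying the initial-condition loss term.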
We conduct comprehensive experiments on multiple synthetic and real-world data sets of diverse sizes and dimensions using a variety of models (Logistic Regression, K-Nearest-Neighbor, Artificial Neural Networks, Deep Neural Networks, ElasticNet, Random Forest, and SVM), distance measures (Chebyshev, Manhattan, and Eucl... | In the next experiment, we evaluate the prediction probabilities generated by probabilistic classification models and demonstrate their failure for query points that are not represented by data. To do so, we employ data set 𝒟𝒟\mathcal{D}caligraphic_D and train an arbitrary probabilistic classifier such as Gaussian Na... |
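The three distance measures named above differ only in which norm of the coordinate differences they take; written out for two example feature vectors (illustrative inputs):

```python
# The Chebyshev, Manhattan, and Euclidean distances used in the
# experiments above, spelled out for two feature vectors.

def chebyshev(p, q):
    return max(abs(a - b) for a, b in zip(p, q))   # L_inf norm

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))   # L_1 norm

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5  # L_2 norm

p, q = [0.0, 3.0], [4.0, 0.0]
print(chebyshev(p, q), manhattan(p, q), euclidean(p, q))  # 4.0 7.0 5.0
```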
In this section, we thoroughly evaluate the RU measures in the context of the existing approaches discussed in §7, demonstrate why the existing approaches fail, and how RU measures are superior in capturing the unreliability of individual predictions. | We also demonstrate the failure of existing work such as Conformal Prediction, Prediction Probabilities (for cases that are not represented by the data), and data coverage (for the cases that belong to the uncertain regions) and how our proposed measures are superior in capturing the prediction unreliability associ... | In particular, Angelopoulos and Bates [angelopoulos2021gentle] recognize the lack of guarantees in the performance of CP for such regions.
On the contrary, prediction outcomes are specifically unreliable for regions that are unlikely to be sampled. As a result, as we further discuss in § 3.1, such approaches fail for cases that are not repre... | C |
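For concreteness, the split-conformal construction being critiqued works as follows: calibrate nonconformity scores on held-out data, then use their adjusted $(1-\alpha)$ empirical quantile as the half-width of every prediction interval. A sketch with made-up calibration residuals (not data from the paper):

```python
import math

# Standard split conformal prediction (the approach critiqued above):
# the (n+1)(1-alpha)/n empirical quantile of held-out nonconformity
# scores becomes the half-width of every prediction interval -- the same
# width everywhere, regardless of how well a region is sampled.

def conformal_quantile(scores, alpha=0.1):
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))   # rank of the adjusted quantile
    return sorted(scores)[min(k, n) - 1]

def predict_interval(y_hat, qhat):
    return (y_hat - qhat, y_hat + qhat)

calib = [0.3, 1.1, 0.2, 0.8, 0.5, 0.9, 0.4, 0.7, 0.6]  # fake |y - y_hat|
qhat = conformal_quantile(calib, alpha=0.1)
print(predict_interval(2.0, qhat))
```

The marginal-coverage guarantee averages over the data distribution, which is exactly why it says little about sparsely sampled regions.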