Schema: aid (string, 9–15 chars); mid (string, 7–10 chars); abstract (string, 78–2.56k chars); related_work (string, 92–1.77k chars); ref_abstract (dict).
1904.04297
2939498466
This paper proposes an approach to learn generic multi-modal mesh surface representations using a novel scheme for fusing texture and geometric data. Our approach defines an inverse mapping between different geometric descriptors computed on the mesh surface or its down-sampled version, and the corresponding 2D texture image of the mesh, allowing the construction of fused geometrically augmented images (FGAI). This new fused modality enables us to learn feature representations from 3D data in a highly efficient manner by simply employing standard convolutional neural networks in a transfer-learning mode. In contrast to existing methods, the proposed approach is both computationally and memory efficient, preserves intrinsic geometric information, and learns a highly discriminative feature representation by effectively fusing shape and texture information at the data level. The efficacy of our approach is demonstrated for the tasks of facial action unit detection and expression classification. Extensive experiments conducted on the Bosphorus and BU-4DFE datasets show that our method produces a significant boost in performance when compared to state-of-the-art solutions.
In one of the earliest works, CNNs and recursive neural networks (RNNs) were jointly trained on RGB-D data @cite_6 . @cite_42 learned a geocentric embedding for depth images that encodes height above ground and angle with gravity. @cite_21 first processed the color (RGB) and depth (D) information through separate CNNs, followed by late fusion, for RGB-D object detection. Compared with RGB-D data, 3D data in the form of a mesh model provides complete and more structured shape information, and new methods have been developed to represent and learn features from such data. These methods are discussed below.
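The late-fusion idea above can be sketched in a few lines. This is a minimal illustration, not the cited architecture: the random projections stand in for the two per-modality CNN streams, and the 10-way classifier head is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in per-modality feature extractors: in the cited work these are
# two separate CNN streams; fixed random projections play that role here
# purely for illustration.
W_rgb = rng.standard_normal((64, 3 * 32 * 32))
W_depth = rng.standard_normal((64, 1 * 32 * 32))

def extract(x, W):
    """Per-modality feature stream: flatten, project, ReLU."""
    return np.maximum(W @ x.ravel(), 0.0)

rgb = rng.random((3, 32, 32))    # RGB image
depth = rng.random((1, 32, 32))  # depth map

# Late fusion: each modality is processed independently and the
# resulting feature vectors are concatenated before classification.
fused = np.concatenate([extract(rgb, W_rgb), extract(depth, W_depth)])

W_cls = rng.standard_normal((10, fused.size))  # hypothetical classifier head
scores = W_cls @ fused
print(fused.shape, scores.shape)  # (128,) (10,)
```

The key design point is that fusion happens only after each modality has its own feature space, so a noisy depth channel cannot corrupt the RGB features during early processing.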
{ "cite_N": [ "@cite_42", "@cite_21", "@cite_6" ], "mid": [ "1565402342", "2963956866", "2109992539" ], "abstract": [ "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3 , which is a 56 relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24 relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "Robust object recognition is a crucial ingredient of many, if not all, real-world robotics applications. This paper leverages recent progress on Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture for object recognition. Our architecture is composed of two separate CNN processing streams - one for each modality - which are consecutively combined with a late fusion network. We focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks. 
For accurate learning, we introduce a multi-stage training methodology and two crucial ingredients for handling depth data with CNNs. The first, an effective encoding of depth information for CNNs that enables learning without the need for large depth datasets. The second, a data augmentation scheme for robust learning with depth images by corrupting them with realistic noise patterns. We present state-of-the-art results on the RGB-D object dataset [15] and show recognition in challenging RGB-D real-world noisy settings.", "Recent advances in 3D sensing technologies make it possible to easily record color and depth images which together can improve object recognition. Most current methods rely on very well-designed features for this new 3D modality. We introduce a model based on a combination of convolutional and recursive neural networks (CNN and RNN) for learning features and classifying RGB-D images. The CNN layer learns low-level translationally invariant features which are then given as inputs to multiple, fixed-tree RNNs in order to compose higher order features. RNNs can be seen as combining convolution and pooling into one efficient, hierarchical operation. Our main result is that even RNNs with random weights compose powerful features. Our model obtains state of the art performance on a standard RGB-D object dataset while being more accurate and faster during training and testing than comparable architectures such as two-layer CNNs." ] }
Approaches for learning features from 3D data can be divided into two categories: Euclidean approaches and manifold approaches. The first category treats 3D data volumetrically and encompasses two main paradigms: volumetric CNNs and multi-view CNNs. Volumetric CNNs process 3D data in its raw format (i.e., a volumetric tensor of binary or real-valued voxels) @cite_53 . Unlike 2D images, where each pixel carries meaningful information, only the voxels corresponding to the object surface and boundaries are helpful. CNNs based on volumetric representations are therefore memory intensive and inefficient. Recent works addressed this problem by proposing architectures that operate directly on a cloud of points while respecting the permutation invariance of the points in the input @cite_16 . The multi-view CNN paradigm @cite_48 extends 2D CNNs to 3D data by synthetically rendering multiple 2D images of a given 3D point cloud from different viewpoints. These images are then fed as inputs to CNNs, followed by a fusion scheme that yields a single representation of the 3D shape. Multi-view representations have shown superior performance compared with volumetric approaches. However, a limitation of the multi-view scheme is that 3D geometric information is not fully preserved when rendering images from 3D data.
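The permutation-invariance property of point-cloud architectures can be made concrete with a tiny PointNet-flavored sketch (an assumed simplification, not the cited implementation): a shared per-point transform followed by a symmetric max-pool, so reordering the input points cannot change the output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared per-point MLP weights (random placeholders for illustration).
W1 = rng.standard_normal((32, 3))
W2 = rng.standard_normal((128, 32))

def point_cloud_feature(points):
    """points: (N, 3) array -> (128,) global shape descriptor."""
    h = np.maximum(points @ W1.T, 0.0)  # shared MLP, layer 1
    h = np.maximum(h @ W2.T, 0.0)       # shared MLP, layer 2
    return h.max(axis=0)                # symmetric function over points

cloud = rng.random((1024, 3))
shuffled = cloud[rng.permutation(len(cloud))]

f1 = point_cloud_feature(cloud)
f2 = point_cloud_feature(shuffled)
print(np.allclose(f1, f2))  # True: point order does not matter
```

Because max is symmetric, any permutation of the rows of `points` produces exactly the same descriptor, which is what lets such networks consume unordered point sets directly.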
{ "cite_N": [ "@cite_48", "@cite_53", "@cite_16" ], "mid": [ "2962731536", "2293349265", "2560609797" ], "abstract": [ "3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-theart methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multiresolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.", "This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. 
As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach.", "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption." ] }
Manifold approaches operate on mesh surfaces, which serve as a natural parametrization of 3D shapes, but learning with CNNs is a challenging task in this modality. Current paradigms to tackle this challenge either adapt the convolutional filters to mesh surfaces or learn spectral descriptors defined by the Laplace-Beltrami operator. @cite_13 proposed a generalization of CNNs to non-Euclidean domains for the analysis of deformable shapes based on localized frequency analysis. @cite_17 extended the CNN paradigm to non-Euclidean manifolds by using a local geodesic system of polar coordinates to extract "patches" on which geodesic convolution can be computed. @cite_19 introduced a geometric CNN (gCNN) that deals with data representation over a mesh surface and renders pattern recognition in a multi-shell mesh structure. @cite_49 built a hash table to quickly construct the local neighborhood volume of eight sibling octants, which allows the 3D convolutions of these octants to be computed efficiently in parallel.
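A crude sketch of what "convolution on a mesh" means at its simplest: aggregate each vertex's features with those of its 1-ring neighbors, using adjacency derived from the triangle faces. This is only an illustrative uniform-averaging stand-in; the cited methods use geodesic polar patches or spectral (Laplace-Beltrami) filters, which are substantially more involved.

```python
import numpy as np

# A tiny triangle fan: three faces sharing vertex 0, four vertices total.
faces = np.array([[0, 1, 2], [0, 2, 3], [0, 3, 1]])
n = 4

# Build vertex adjacency from the faces.
A = np.zeros((n, n))
for a, b, c in faces:
    for u, v in ((a, b), (b, c), (c, a)):
        A[u, v] = A[v, u] = 1.0

deg = A.sum(axis=1, keepdims=True)
feat = np.eye(n)  # one-hot per-vertex features

# One "convolution": average each vertex with its 1-ring neighbors.
out = (feat + A @ feat) / (1.0 + deg)
print(out.shape)  # (4, 4); each row is a normalized neighborhood average
```

The hard part the cited papers solve is precisely what this sketch ignores: defining consistently oriented, geometry-aware filter supports on an irregular surface instead of uniform neighbor averaging.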
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_49", "@cite_17" ], "mid": [ "2742837160", "1951806617", "", "2963021451" ], "abstract": [ "The conventional CNN, widely used for two-dimensional images, however, is not directly applicable to non-regular geometric surface, such as a cortical thickness. We propose Geometric CNN (gCNN) that deals with data representation over a spherical surface and renders pattern recognition in a multi-shell mesh structure. The classification accuracy for sex was significantly higher than that of SVM and image based CNN. It only uses MRI thickness data to classify gender but this method can expand to classify disease from other MRI or fMRI data", "In this paper, we propose a generalization of convolutional neural networks (CNN) to non-Euclidean domains for the analysis of deformable shapes. Our construction is based on localized frequency analysis (a generalization of the windowed Fourier transform to manifolds) that is used to extract the local behavior of some dense intrinsic descriptor, roughly acting as an analogy to patches in images. The resulting local frequency representations are then passed through a bank of filters whose coefficient are determined by a learning procedure minimizing a task-specific cost. Our approach generalizes several previous methods such as HKS, WKS, spectral CNN, and GPS embeddings. Experimental results show that the proposed approach allows learning class-specific shape descriptors significantly outperforming recent state-of-the-art methods on standard benchmarks.", "", "Feature descriptors play a crucial role in a wide range of geometry analysis and processing applications, including shape correspondence, retrieval, and segmentation. In this paper, we introduce Geodesic Convolutional Neural Networks (GCNN), a generalization of the convolutional neural networks (CNN) paradigm to non-Euclidean manifolds. 
Our construction is based on a local geodesic system of polar coordinates to extract \"patches\", which are then passed through a cascade of filters and linear and non-linear operators. The coefficients of the filters and linear combination weights are optimization variables that are learned to minimize a task-specific cost function. We use ShapeNet to learn invariant shape features, allowing to achieve state-of-the-art performance in problems such as shape description, retrieval, and correspondence." ] }
Most of the work on facial expression recognition and AU detection has been done using 2D data; a survey of these works appeared in @cite_47 . Here, we review the methods developed for 3D data only. We can broadly categorize 3D facial analysis methods into two groups: feature-based and model-based. Feature-based methods extract geometric descriptors either holistically or locally from the 3D facial scans. For example, @cite_30 detects facial landmarks on a given 3D face; local geometric and texture features are then extracted around the detected landmarks and used to represent the 3D facial scan. Similarly, @cite_33 represented a facial scan in terms of local surface patches extracted around @math facial landmarks, with geodesic distance in a Riemannian framework used as a metric to compare the extracted patches. A number of other works represented 3D scans using either local or holistic geometric descriptors; examples include distances between 3D facial landmarks @cite_18 and distances between locally extracted surface patches @cite_3 .
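A landmark-distance descriptor of the kind used in @cite_18 can be sketched directly: encode a face by the normalized pairwise Euclidean distances between its 3D landmarks. The landmark positions below are random placeholders; 83 landmarks matches the count reported in the cited BU-3DFE work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder 3D landmark coordinates; a real pipeline would detect
# these on the facial scan.
landmarks = rng.random((83, 3))

# All unique pairs (i < j), then Euclidean distance per pair.
i, j = np.triu_indices(len(landmarks), k=1)
dists = np.linalg.norm(landmarks[i] - landmarks[j], axis=1)
dists /= dists.max()  # crude scale normalization

print(dists.shape)  # (3403,) = 83 * 82 / 2
```

Such a fixed-length vector can feed any standard classifier; in practice a feature-selection step (as in the cited work) prunes the 3403 candidate distances down to the discriminative ones.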
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_33", "@cite_3", "@cite_47" ], "mid": [ "2018776244", "2166034132", "2068610869", "1040410175", "2009289647" ], "abstract": [ "Automatic facial expression recognition on 3D face data is still a challenging problem. In this paper we propose a novel approach to perform expression recognition automatically and flexibly by combining a Bayesian Belief Net (BBN) and Statistical facial feature models (SFAM). A novel BBN is designed for the specific problem with our proposed parameter computing method. By learning global variations in face landmark configuration (morphology) and local ones in terms of texture and shape around landmarks, morphable Statistic Facial feature Model (SFAM) allows not only to perform an automatic landmarking but also to compute the belief to feed the BBN. Tested on the public 3D face expression database BU-3DFE, our automatic approach allows to recognize expressions successfully, reaching an average recognition rate over 82 .", "In this paper, the problem of person-independent facial expression recognition from 3D facial shapes is investigated. We propose a novel automatic feature selection method based on maximizing the average relative entropy of marginalized class-conditional feature distributions and apply it to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in the 3D space. Using a regularized multi-class AdaBoost classification algorithm, we achieve a 95.1 average recognition rate for six universal facial expressions on the publicly available 3D facial expression database BU-3DFE [1], with a highest average recognition rate of 99.2 for the recognition of surprise. We compare these results with the results based on a set of manually devised features and demonstrate that the auto features yield better results than the manual features. 
Our results outperform the results presented in the previous work [2] and [3], namely average recognition rates of 83.6 and 91.3 on the same database, respectively.", "In this paper we address the problem of 3D facial expression recognition. We propose a local geometric shape analysis of facial surfaces coupled with machine learning techniques for expression classification. A computation of the length of the geodesic path between corresponding patches, using a Riemannian framework, in a shape space provides a quantitative information about their similarities. These measures are then used as inputs to several classification methods. The experimental results demonstrate the effectiveness of the proposed approach. Using multiboosting and support vector machines (SVM) classifiers, we achieved 98.81 and 97.75 recognition average rates, respectively, for recognition of the six prototypical facial expressions on BU-3DFE database. A comparative study using the same experimental setting shows that the suggested approach outperforms previous work.", "We propose a feature-based 2D+3D multimodal facial expression recognition method.It is fully automatic benefit from a large set of automatically detected landmarks.The complementarities between 2D and 3D features are comprehensively demonstrated.Our method achieves the best accuracy on the BU-3DFE database so far.A good generalization ability is shown on the Bosphorus database. We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). 
Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32 . Moreover, a good generalization ability is shown on the Bosphorus database.", "The huge research effort in the field of face expression recognition (FER) technology is justified by the potential applications in multiple domains: computer science, engineering, psychology, neuroscience, to name just a few. Obviously, this generates an impressive number of scientific publications. The aim of this paper is to identify key representative approaches for facial expression recognition research in the past ten years (2003-2012)." ] }
Model-based approaches first establish a dense point-to-point correspondence between a query mesh and a generic expression-deformable mesh using rigid and non-rigid transformation techniques. The transformation parameters are then used as representations of the query mesh for classification. Non-rigid facial deformations are characterized by a bilinear deformable model in @cite_14 . In @cite_22 , the shape of a scan with facial expression is decomposed into neutral and expression parts, and the expression part of the decomposed scan is then employed for encoding the facial scan. Some works combine the strengths of both feature-based and model-based techniques. For example, @cite_15 first segments the face into multiple regions based upon muscular movements; geometric descriptors are then extracted from these regions, followed by a score-level fusion scheme whose weights are optimized to combine decisions from the different regions.
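The score-level fusion step at the end of the paragraph reduces to a weighted combination of per-region classifier scores. The sketch below uses fixed placeholder weights and random scores; in the cited work the weights are learned with a genetic algorithm and the scores come from per-region SVMs.

```python
import numpy as np

rng = np.random.default_rng(3)

n_regions, n_classes = 5, 6            # e.g. 6 prototypical expressions
region_scores = rng.random((n_regions, n_classes))  # per-region scores
weights = np.array([0.3, 0.25, 0.2, 0.15, 0.1])     # placeholder weights

# Weighted score-level fusion: one score vector for the whole face.
fused = weights @ region_scores
prediction = int(fused.argmax())
print(fused.shape, prediction)
```

Fusing at the score level (rather than concatenating region features) lets each region keep its own classifier, so regions that are more informative for a given muscular movement can simply receive larger weights.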
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_22" ], "mid": [ "2339620988", "2104539097", "1978435065" ], "abstract": [ "Facial expression is an important channel for human nonverbal communication. This paper presents a novel and effective approach to automatic 3D 4D facial expression recognition based on the muscular movement model (MMM). In contrast to most of existing methods, the MMM deals with such an issue in the viewpoint of anatomy. It first automatically segments the input 3D face (frame) by localizing the corresponding points within each muscular region of the reference using iterative closest normal point. A set of features with multiple differential quantities, including @math , @math and @math values, are then extracted to describe the geometry deformation of each segmented region. Meanwhile, we analyze the importance of these muscular areas, and a score level fusion strategy is exploited to optimize their weights by the genetic algorithm in the learning step. The support vector machine and the hidden Markov model are finally used to predict the expression label in 3D and 4D, respectively. The experiments are conducted on the BU-3DFE and BU-4DFE databases, and the results achieved clearly demonstrate the effectiveness of the proposed method.", "In this paper, we explore bilinear models for jointly addressing 3D face and facial expression recognition. An elastically deformable model algorithm that establishes correspondence among a set of faces is proposed first and then bilinear models that decouple the identity and facial expression factors are constructed. Fitting these models to unknown faces enables us to perform face recognition invariant to facial expressions and facial expression recognition with unknown identity. 
A quantitative evaluation of the proposed technique is conducted on the publicly available BU-3DFE face database in comparison with our previous work on face recognition and other state-of-the-art algorithms for facial expression recognition. Experimental results demonstrate an overall 90.5 facial expression recognition rate and an 86 rank-1 face recognition rate.", "Facial expression recognition has many applications in multimedia processing and the development of 3D data acquisition techniques makes it possible to identify expressions using 3D shape information. In this paper, we propose an automatic facial expression recognition approach based on a single 3D face. The shape of an expressional 3D face is approximated as the sum of two parts, a basic facial shape component (BFSC) and an expressional shape component (ESC). The BFSC represents the basic face structure and neutral-style shape and the ESC contains shape changes caused by facial expressions. To separate the BFSC and ESC, our method firstly builds a reference face for each input 3D non-neutral face by a learning method, which well represents the basic facial shape. Then, based on the BFSC and the original expressional face, a facial expression descriptor is designed. The surface depth changes are considered in the descriptor. Finally, the descriptor is input into an SVM to recognize the expression. Unlike previous methods which recognize a facial expression with the help of manually labeled key points and or a neutral face, our method works on a single 3D face without any manual assistance. Extensive experiments are carried out on the BU-3DFE database and comparisons with existing methods are conducted. The experimental results show the effectiveness of our method." ] }
1904.04433
2939328166
Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometries of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. We also apply the proposed method to attack a real-world face recognition system successfully.
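The decision-based setting described above can be illustrated with a toy attack loop: only hard-label decisions are observed, and the attacker greedily shrinks a perturbation that stays adversarial. This is a heavily simplified random-search sketch, not the paper's evolutionary algorithm (which additionally models the local geometry of search directions and reduces the search dimension); the "model" is a hypothetical threshold classifier.

```python
import numpy as np

rng = np.random.default_rng(5)

def target_decision(x):
    """Hypothetical hard-label black-box oracle (stand-in for a model)."""
    return int(x.sum() > 0)

x0 = -np.ones(16) * 0.1   # original input, classified as 0
x_adv = x0 + 4.0          # a starting adversarial point, classified as 1
assert target_decision(x_adv) == 1

for _ in range(2000):
    # Propose a candidate contracted toward x0, plus small random noise.
    cand = x0 + 0.9 * (x_adv - x0) + 0.01 * rng.standard_normal(16)
    if target_decision(cand) == 1:  # still adversarial -> accept
        x_adv = cand

print(np.linalg.norm(x_adv - x0))  # perturbation shrinks toward the boundary
```

Each query costs one oracle call, which is why query efficiency (the paper's focus) matters: naive random search like this wastes many queries near the decision boundary.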
DeepFace @cite_37 and DeepID @cite_9 treat face recognition as a multi-class classification problem and use deep CNNs to learn features supervised by the softmax loss. Triplet loss @cite_22 and center loss @cite_2 are proposed to increase the Euclidean margin in the feature space between classes. The angular softmax loss is proposed in SphereFace @cite_36 to learn angularly discriminative features. CosFace @cite_19 uses the large margin cosine loss to maximize the cosine margin. The additive angular margin loss is proposed in ArcFace @cite_23 to learn highly discriminative features.
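The margin-based losses above differ mainly in how they modify the target-class logit before softmax. The sketch below shows the simplified forward pass for the ArcFace-style additive angular margin and the CosFace-style cosine margin (an illustrative reimplementation, not the official code; scale `s` and margin `m` values are typical defaults, not prescribed here).

```python
import numpy as np

def margin_logits(feat, W, label, s=64.0, m=0.5, kind="arc"):
    """feat: (d,) embedding; W: (C, d) class weights; label: true class."""
    f = feat / np.linalg.norm(feat)                      # L2-normalize feature
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)    # L2-normalize weights
    cos = Wn @ f                     # cosine similarity to each class
    logits = cos.copy()
    if kind == "arc":                # ArcFace: additive angular margin
        theta = np.arccos(np.clip(cos[label], -1.0, 1.0))
        logits[label] = np.cos(theta + m)
    elif kind == "cos":              # CosFace: additive cosine margin
        logits[label] = cos[label] - m
    return s * logits                # scaled logits, ready for softmax

rng = np.random.default_rng(4)
feat, W = rng.standard_normal(128), rng.standard_normal((10, 128))
plain = 64.0 * ((W / np.linalg.norm(W, axis=1, keepdims=True))
                @ (feat / np.linalg.norm(feat)))
arc = margin_logits(feat, W, label=3, kind="arc")
# Only the target-class logit changes; the margin penalizes it, forcing
# the network to pull same-class features closer on the hypersphere.
print(arc.shape)
```

Normalizing both features and weights removes radial variation, so the decision depends purely on angles; the margin then enlarges the angular gap between classes, which is the shared idea behind SphereFace, CosFace, and ArcFace.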
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_36", "@cite_9", "@cite_19", "@cite_23", "@cite_2" ], "mid": [ "2145287260", "2096733369", "2963466847", "1998808035", "2962898354", "2784874046", "2520774990" ], "abstract": [ "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. 
Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.", "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). 
When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45 verification accuracy on LFW is achieved with only weakly aligned faces.", "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. 
Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.", "One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that enhance discriminative power. Centre loss penalises the distance between the deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in an angular space and penalises the angles between the deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to the exact correspondence to the geodesic distance on the hypersphere. We present arguably the most extensive experimental evaluation of all the recent state-of-the-art face recognition methods on over 10 face recognition benchmarks including a new large-scale image database with trillion level of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead. 
We release all refined training data, training codes, pre-trained models and training logs, which will help reproduce the results in this paper.", "Convolutional neural networks (CNNs) have been widely used in computer vision community, significantly improving the state-of-the-art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. In order to enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for face recognition task. Specifically, the center loss simultaneously learns a center for deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in the CNNs. With the joint supervision of softmax loss and center loss, we can train a robust CNNs to obtain the deep features with the two key learning objectives, inter-class dispension and intra-class compactness as much as possible, which are very essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve the state-of-the-art accuracy on several important face recognition benchmarks, Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and MegaFace Challenge. Especially, our new approach achieves the best results on MegaFace (the largest public domain face benchmark) under the protocol of small training set (contains under 500000 images and under 20000 persons), significantly improving the previous results and setting new state-of-the-art for both face recognition and face verification tasks." ] }
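The margin-based losses surveyed above share one mechanism: L2-normalise both features and class weights so that logits become cosines of angles, then penalise the target-class angle. Below is a minimal NumPy sketch of the additive angular margin idea (ArcFace-style); the function name and the margin/scale values are illustrative, not taken from any of the cited papers.

```python
import numpy as np

def margin_logits(features, weights, labels, m=0.5, s=64.0):
    """ArcFace-style additive angular margin (illustrative sketch).

    features: (N, d) deep features; weights: (C, d) class weight vectors;
    labels: (N,) ground-truth class indices. Both matrices are
    L2-normalised so the plain logits are cos(theta_ij).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                                  # cos(theta_ij)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))     # recover the angles
    # add the margin m only to the ground-truth class angle
    theta[np.arange(len(labels)), labels] += m
    return s * np.cos(theta)                       # scaled margin logits
```

Because cos is decreasing on [0, pi], adding m shrinks the target logit, forcing training to close the angular gap by more than the margin; non-target logits are unchanged.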
1904.04433
2939328166
Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometries of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. We also apply the proposed method to attack a real-world face recognition system successfully.
Deep CNNs are highly vulnerable to adversarial examples @cite_6 @cite_28 @cite_8. Face recognition has likewise been shown to be vulnerable to such attacks. In @cite_30, the perturbations are constrained to the eyeglass region and generated by gradient-based methods, fooling face recognition systems even in the physical world. Adversarial eyeglasses can also be produced by generative networks @cite_26. However, these methods rely on white-box manipulation of the face recognition models, which is unrealistic in real-world applications. Instead, we focus on evaluating the robustness of face recognition models in the decision-based black-box attack setting.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_28", "@cite_6" ], "mid": [ "2535873859", "2782017896", "2543927648", "2963207607", "2964153729" ], "abstract": [ "Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.", "In this paper we show that misclassification attacks against face-recognition systems based on deep neural networks (DNNs) are more dangerous than previously demonstrated, even in contexts where the adversary can manipulate only her physical appearance (versus directly manipulating the image input to the DNN). 
Specifically, we show how to create eyeglasses that, when worn, can succeed in targeted (impersonation) or untargeted (dodging) attacks while improving on previous work in one or more of three facets: (i) inconspicuousness to onlooking observers, which we test through a user study; (ii) robustness of the attack against proposed defenses; and (iii) scalability in the sense of decoupling eyeglass creation from the subject who will wear them, i.e., by creating \"universal\" sets of eyeglasses that facilitate misclassification. Central to these improvements are adversarial generative nets, a method we propose to generate physically realizable attack artifacts (here, eyeglasses) automatically.", "Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.", "Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. 
Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input." ] }
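The gradient-based perturbations mentioned above can be sketched with a single fast-gradient-sign step on a toy linear classifier. This is only an illustration of the white-box mechanism the paragraph contrasts against; the weights, inputs, and epsilon below are made up.

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    """One fast-gradient-sign step (illustrative): perturb the input in
    the sign direction of the loss gradient, staying inside [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# toy linear classifier: logit = w . x, predict positive if logit > 0
w = np.array([0.5, -1.0, 0.25])
x = np.array([0.6, 0.2, 0.8])   # w @ x = 0.3, classified positive
grad = -w                        # gradient of the loss (-logit) w.r.t. x
x_adv = fgsm(x, grad, eps=0.35)  # flips the prediction to negative
```

A bounded, nearly imperceptible shift of every coordinate is enough to cross the decision boundary, which is the linearity argument made in the abstract above.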
1904.04433
2939328166
Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometries of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. We also apply the proposed method to attack a real-world face recognition system successfully.
Black-box attacks can be divided into transfer-based, score-based, and decision-based attacks. Transfer-based attacks generate adversarial examples for a white-box model and attack the black-box model based on their transferability @cite_35 @cite_33. In score-based attacks, the predicted probability is given by the model, and several methods rely on approximated gradients to generate adversarial examples @cite_4 @cite_0. In decision-based attacks, only hard-label predictions can be obtained. The boundary attack method is based on random walks on the decision boundary @cite_10. The optimization-based method @cite_32 recasts the attack as a continuous optimization problem and estimates gradients to solve it; however, it needs a binary search to compute the distance to the decision boundary along each direction. In @cite_0, the predicted probability is estimated from hard-label predictions, and the natural evolution strategy (NES) is then used to maximize the target-class probability or minimize the true-class probability. These methods generally require a large number of queries to generate an adversarial example with a minimum perturbation, or converge to a large perturbation with few queries.
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_33", "@cite_32", "@cite_0", "@cite_10" ], "mid": [ "2570685808", "2746600820", "2774644650", "2874797877", "2963062382", "2963070423" ], "abstract": [ "An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understanding the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.", "Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. 
However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks to the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack (e.g., Carlini and Wagner's attack) and significantly outperforms existing black-box attacks via substitute models.", "Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. 
However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.", "We study the problem of attacking a machine learning model in the hard-label black-box setting, where no model information is revealed except that the attacker can make queries to probe the corresponding hard-label decisions. This is a very challenging problem since the direct extension of state-of-the-art white-box attacks (e.g., CW or PGD) to the hard-label black-box setting will require minimizing a non-continuous step function, which is combinatorial and cannot be solved by a gradient-based optimizer. The only current approach is based on random walk on the boundary, which requires lots of queries and lacks convergence guarantees. We propose a novel way to formulate the hard-label black-box attack as a real-valued optimization problem which is usually continuous and can be solved by any zeroth order optimization algorithm. For example, using the Randomized Gradient-Free method, we are able to bound the number of iterations needed for our algorithm to achieve stationary points. 
We demonstrate that our proposed method outperforms the previous random walk approach to attacking convolutional neural networks on MNIST, CIFAR, and ImageNet datasets. More interestingly, we show that the proposed algorithm can also be used to attack other discrete and non-continuous machine learning models, such as Gradient Boosting Decision Trees (GBDT).", "", "Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. 
The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available at XXXXXX." ] }
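The NES step mentioned above amounts to an antithetic finite-difference gradient estimate that needs only function queries, no model internals. A toy sketch on a quadratic loss follows; the function name and all hyperparameters are assumptions, and a real decision-based attack would first have to estimate class probabilities from hard-label queries before applying this estimator.

```python
import numpy as np

def nes_gradient(f, x, sigma=0.1, n=100, rng=None):
    """Estimate grad f(x) from black-box queries via antithetic NES
    sampling: average (f(x + s*u) - f(x - s*u)) * u over Gaussian u."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) * u
    return g / (2 * sigma * n)

# toy use: descend a quadratic "loss" with query access only
f = lambda z: float(np.sum(z ** 2))
x = np.array([3.0, -2.0])
for _ in range(100):
    x -= 0.1 * nes_gradient(f, x, sigma=0.05, n=100)
```

Each gradient estimate costs 2*n queries, which is why these methods need many queries to drive the perturbation down, exactly the inefficiency the evolutionary attack above targets.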
1904.04326
2938647293
A fairly comprehensive analysis is presented for the gradient descent dynamics for training two-layer neural network models in the situation when the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to that of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space. Our analysis suggests strongly that in terms of 'implicit regularization', two-layer neural network models do not outperform the kernel method.
The seminal work of @cite_18 presented both numerical and theoretical evidence that over-parametrized neural networks can fit random labels. Building upon earlier work on the non-degeneracy of certain Gram matrices @cite_19, @cite_28 went a step further by proving that the GD algorithm can find global minima of the empirical risk for sufficiently over-parametrized two-layer neural networks. This result was extended to multi-layer networks in @cite_27 @cite_30. The corresponding result for infinitely wide neural networks was obtained in @cite_29, and a similar result in a general setting appears in @cite_7.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_7", "@cite_28", "@cite_29", "@cite_19", "@cite_27" ], "mid": [ "2899748887", "2566079294", "2904838594", "2964161337", "2950743785", "2614119628", "2899790086" ], "abstract": [ "Deep neural networks (DNNs) have demonstrated dominating performance in many fields; since AlexNet, networks used in practice are going wider and deeper. On the theoretical side, a long line of works has been focusing on training neural networks with one hidden layer. The theory of multi-layer networks remains largely unsettled. In this work, we prove why stochastic gradient descent (SGD) can find @math on the training objective of DNNs in @math . We only make two assumptions: the inputs are non-degenerate and the network is over-parameterized. The latter means the network width is sufficiently large: @math in @math , the number of layers and in @math , the number of samples. Our key technique is to derive that, in a sufficiently large neighborhood of the random initialization, the optimization landscape is almost-convex and semi-smooth even with ReLU activations. This implies an equivalence between over-parameterized neural networks and neural tangent kernel (NTK) in the finite (and polynomial) width setting. As concrete examples, starting from randomly initialized weights, we prove that SGD can attain 100 training accuracy in classification tasks, or minimize regression loss in linear convergence speed, with running time polynomial in @math . Our theory applies to the widely-used but non-smooth ReLU activation, and to any smooth and possibly non-convex loss functions. In terms of network architectures, our theory at least applies to fully-connected neural networks, convolutional neural networks (CNN), and residual neural networks (ResNet).", "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. 
Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. @PARASPLIT Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. @PARASPLIT We interpret our experimental findings by comparison with traditional models.", "In a series of recent theoretical works, it has been shown that strongly over-parameterized neural networks trained with gradient-based methods could converge linearly to zero training loss, with their parameters hardly varying. In this note, our goal is to exhibit the simple structure that is behind these results. In a simplified setting, we prove that \"lazy training\" essentially solves a kernel regression. We also show that this behavior is not so much due to over-parameterization than to a choice of scaling, often implicit, that allows to linearize the model around its initialization. 
These theoretical results complemented with simple numerical experiments make it seem unlikely that \"lazy training\" is behind the many successes of neural networks in high dimensional tasks.", "", "At initialization, artificial neural networks (ANNs) are equivalent to Gaussian processes in the infinite-width limit, thus connecting them to kernel methods. We prove that the evolution of an ANN during training can also be described by a kernel: during gradient descent on the parameters of an ANN, the network function @math (which maps input vectors to output vectors) follows the kernel gradient of the functional cost (which is convex, in contrast to the parameter cost) w.r.t. a new kernel: the Neural Tangent Kernel (NTK). This kernel is central to describe the generalization features of ANNs. While the NTK is random at initialization and varies during training, in the infinite-width limit it converges to an explicit limiting kernel and it stays constant during training. This makes it possible to study the training of ANNs in function space instead of parameter space. Convergence of the training can then be related to the positive-definiteness of the limiting NTK. We prove the positive-definiteness of the limiting NTK when the data is supported on the sphere and the non-linearity is non-polynomial. We then focus on the setting of least-squares regression and show that in the infinite-width limit, the network function @math follows a linear differential equation during training. The convergence is fastest along the largest kernel principal components of the input data with respect to the NTK, hence suggesting a theoretical motivation for early stopping. Finally we study the NTK numerically, observe its behavior for wide networks, and compare it to the infinite-width limit.", "", "Gradient descent finds a global minimum in training deep neural networks despite the objective function being non-convex. 
The current paper proves gradient descent achieves zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). Our analysis relies on the particular structure of the Gram matrix induced by the neural network architecture. This structure allows us to show the Gram matrix is stable throughout the training process and this stability implies the global optimality of the gradient descent algorithm. We further extend our analysis to deep residual convolutional neural networks and obtain a similar convergence result." ] }
1904.04326
2938647293
A fairly comprehensive analysis is presented for the gradient descent dynamics for training two-layer neural network models in the situation when the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the over-parametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space. Our analysis suggests strongly that, in terms of implicit regularization, two-layer neural network models do not outperform the kernel method.
The issue of generalization is less clear. @cite_24 established generalization error bounds for solutions produced by the online stochastic gradient descent (SGD) algorithm with early stopping when the target function is in a certain RKHS. Similar results were proved in @cite_5 for the classification problem, and in @cite_9 for offline SGD algorithms. In @cite_4 , generalization results were proved for the GD algorithm for target functions that can be represented by the underlying neural network models. More recently, in @cite_16 , a generalization bound was derived for GD solutions using a data-dependent norm. This norm is bounded if the target function belongs to the appropriate RKHS. However, their error bounds are not strong enough to rule out the possibility of the curse of dimensionality. Indeed, the results of the present paper suggest that the curse of dimensionality does occur in their setting (see Theorem ).
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_24", "@cite_5", "@cite_16" ], "mid": [ "2900103278", "2927724204", "2593958421", "2886067286", "2911867426" ], "abstract": [ "Neural networks have great success in many machine learning applications, but the fundamental learning theory behind them remains largely unsolved. Learning neural networks is NP-hard, but in practice, simple algorithms like stochastic gradient descent (SGD) often produce good solutions. Moreover, it is observed that overparameterization (that is, designing networks whose number of parameters is larger than statistically needed to perfectly fit the data) improves both optimization and generalization, appearing to contradict traditional learning theory. In this work, we extend the theoretical understanding of two and three-layer neural networks in the overparameterized regime. We prove that, using overparameterized neural networks, one can (improperly) learn some notable hypothesis classes, including two and three-layer neural networks with fewer parameters. Moreover, the learning process can be simply done by SGD or its variants in polynomial time using polynomially many samples. We also show that for a fixed sample size, the generalization error of the solution found by some SGD variant can be made almost independent of the number of parameters in the overparameterized network.", "", "We show that the standard stochastic gradient decent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely, Frostig and Singer. The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more that two. 
As corollaries, it follows that for neural networks of any depth between 2 and log(n), SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results.", "Neural networks have many successful applications, while much less theoretical understanding has been gained. Towards bridging this gap, we study the problem of learning a two-layer overparameterized ReLU neural network for multi-class classification via stochastic gradient descent (SGD) from random initialization. In the overparameterized setting, when the data comes from mixtures of well-separated distributions, we prove that SGD learns a network with a small generalization error, albeit the network has enough capacity to fit arbitrary labels. Furthermore, the analysis provides interesting insights into several aspects of learning neural networks and can be verified based on empirical studies on synthetic data and on the MNIST dataset.", "Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works: (i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [ ICLR'17]. (ii) Generalization bound independent of network size, using a data-dependent complexity measure. Our measure distinguishes clearly between random labels and true labels on MNIST and CIFAR, as shown by experiments. Moreover, recent papers require sample complexity to increase (slowly) with the size, while our sample complexity is completely independent of the network size. 
(iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent. The key idea is to track dynamics of training and generalization via properties of a related kernel." ] }
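The "lazy training" picture described in the abstracts above can be reproduced in a few lines: a heavily over-parameterized two-layer ReLU network, trained by plain gradient descent on purely random labels, drives the training loss to numerically zero while its weights barely move from initialization. This is an illustrative sketch with arbitrary sizes, seed, and learning rate, not the exact setting of any cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task; the labels are pure noise, yet GD still fits them.
n, d, m = 8, 32, 2048                  # m = hidden width (over-parameterized)
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm inputs
y = rng.standard_normal(n)                      # random labels

# Two-layer ReLU network with the 1/sqrt(m) "NTK" output scaling;
# only the inner weights W are trained, the outer signs a are frozen.
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def predict(W):
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

W0 = W.copy()
lr = 1.0
for _ in range(3000):
    r = predict(W) - y                        # residuals, shape (n,)
    act = (X @ W.T > 0).astype(float)         # ReLU gates, shape (n, m)
    # gradient of (1/2n) * ||r||^2 with respect to W, shape (m, d)
    grad = (a[:, None] / np.sqrt(m)) * ((act * r[:, None]).T @ X) / n
    W -= lr * grad

loss = 0.5 * np.mean((predict(W) - y) ** 2)
rel_move = np.linalg.norm(W - W0) / np.linalg.norm(W0)
print(f"train loss = {loss:.2e}, relative weight movement = {rel_move:.3f}")
```

The relative weight movement stays small (a few percent), which is exactly the regime in which the network is well approximated by its linearization around initialization, i.e. by a kernel method.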
1904.04025
2976021111
Policy gradient algorithms in reinforcement learning optimize the policy directly and rely on efficiently sampling an environment. However, while most sampling procedures are based solely on sampling the agent's policy, other measures directly accessible through these algorithms could be used to improve sampling before each policy update. Following this line of thought, we propose the use of SAUNA, a method in which transitions are rejected from the gradient updates if they do not meet a particular criterion, and kept otherwise. This criterion, the fraction of variance explained @math , is a measure of the discrepancy between a model and actual samples. In this work, @math is used to evaluate the impact each transition will have on learning: this criterion refines sampling and improves the policy gradient algorithm. In this paper: (a) We introduce and explore @math , the criterion used for denoising policy gradient updates. (b) We conduct experiments across a variety of benchmark environments, including standard continuous control problems. Our results show better performance with SAUNA. (c) We investigate why @math provides a reliable assessment for the selection of samples that will positively impact learning. (d) We show how this criterion can work as a dynamic tool to adjust the ratio between exploration and exploitation.
Actor-critic algorithms essentially use the value function to alternate between policy evaluation and policy improvement @cite_34 @cite_28 . In order to update the actor, many methods adopt the on-policy formulation @cite_3 @cite_11 @cite_31 @cite_30 . However, despite their notable success, these methods suffer from high sample complexity. Research has also been conducted on sampling prioritization. While @cite_8 makes learning from experience replay more efficient by using the TD error as a measure of these priorities in an off-policy setting, our method selects the samples directly on-policy. @cite_26 is related to our method in that it calculates the expected improvement in prediction error, but its objective is to maximize an intrinsic reward through artificial curiosity, whereas our method estimates the expected variance explained.
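For contrast with the on-policy selection discussed above, the off-policy prioritization of @cite_8 can be sketched as proportional sampling on TD errors with importance-sampling weights; the constants (`alpha`, the small epsilon, `beta = 1`) follow common usage and are assumptions here, not values taken from the paper.

```python
import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.6, rng=None):
    """Proportional prioritized sampling: P(i) ~ (|delta_i| + eps)^alpha,
    with importance-sampling weights (beta = 1) to correct the induced bias."""
    rng = np.random.default_rng() if rng is None else rng
    priorities = (np.abs(td_errors) + 1e-6) ** alpha
    probs = priorities / priorities.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = 1.0 / (len(td_errors) * probs[idx])
    weights /= weights.max()          # normalize so the largest weight is 1
    return idx, weights
```

Transitions with large TD error are replayed far more often; the weights down-scale their gradient contribution so the update remains (approximately) unbiased.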
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_8", "@cite_28", "@cite_3", "@cite_31", "@cite_34", "@cite_11" ], "mid": [ "2736601468", "1863227302", "2963477884", "2091565802", "2125612430", "2964043796", "1515851193", "" ], "abstract": [ "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.", "A novel curious model-building control system is described which actively tries to provoke situations for which it learned to expect to learn something about the environment. Such a system has been implemented as a four-network system based on Watkins' Q-learning algorithm which can be used to maximize the expectation of the temporal derivative of the adaptive assumed reliability of future predictions. An experiment with an artificial nondeterministic environment demonstrates that the system can be superior to previous model-building control systems, which do not address the problem of modeling the reliability of the world model's predictions in uncertain environments and use ad-hoc methods (like random search) to train the world model. 
", "Abstract: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.", "It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide.
The differences between this approach and other attempts to solve problems using neurolike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences.", "Autonomous learning is one of the hallmarks of human and animal behavior, and understanding the principles of learning will be crucial in order to achieve true autonomy in advanced machines like humanoid robots. In this paper, we examine learning of complex motor skills with human-like limbs. While supervised learning can offer useful tools for bootstrapping behavior, e.g., by learning from demonstration, it is only reinforcement learning that offers a general approach to the final trial-and-error improvement that is needed by each individual acquiring a skill. Neither neurobiological nor machine learning studies have, so far, offered compelling results on how reinforcement learning can be scaled to the high-dimensional continuous state and action spaces of humans or humanoids. Here, we combine two recent research developments on learning motor control in order to achieve this scaling. First, we interpret the idea of modular motor control by means of motor primitives as a suitable way to generate parameterized control policies for reinforcement learning. Second, we combine motor primitives with the theory of stochastic policy gradient learning, which currently seems to be the only feasible framework for reinforcement learning for humanoids. We evaluate different policy gradient methods with a focus on their applicability to parameterized motor primitives. We compare these algorithms in the context of motor primitive learning, and show that our most modern algorithm, the Episodic Natural Actor-Critic outperforms previous algorithms by at least an order of magnitude. 
We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.", "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.", "From the Publisher: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.", "" ] }
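The fraction of variance explained that SAUNA builds on can be computed directly from a batch of returns and value predictions. The threshold-based gate below is a hypothetical reading of the rejection criterion, included for illustration; the paper's exact per-transition rule may differ.

```python
import numpy as np

def fraction_of_variance_explained(returns, values):
    """V^ex = 1 - Var(returns - values) / Var(returns).
    1.0: the critic explains the returns perfectly; 0.0: no better than
    predicting the batch mean; negative: worse than the mean predictor."""
    var_r = np.var(returns)
    if var_r < 1e-8:                 # degenerate batch: avoid division by zero
        return 0.0
    return 1.0 - np.var(returns - values) / var_r

def keep_batch(returns, values, threshold=0.0):
    """Hypothetical SAUNA-style gate: use these transitions in the gradient
    update only when the critic explains enough of the return variance."""
    return fraction_of_variance_explained(returns, values) >= threshold
```

The quantity is the same "explained variance" diagnostic commonly logged by policy gradient implementations; here it is repurposed as a selection criterion rather than a monitoring statistic.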
1904.04025
2976021111
Policy gradient algorithms in reinforcement learning optimize the policy directly and rely on efficiently sampling an environment. However, while most sampling procedures are based solely on sampling the agent's policy, other measures directly accessible through these algorithms could be used to improve sampling before each policy update. Following this line of thought, we propose the use of SAUNA, a method in which transitions are rejected from the gradient updates if they do not meet a particular criterion, and kept otherwise. This criterion, the fraction of variance explained @math , is a measure of the discrepancy between a model and actual samples. In this work, @math is used to evaluate the impact each transition will have on learning: this criterion refines sampling and improves the policy gradient algorithm. In this paper: (a) We introduce and explore @math , the criterion used for denoising policy gradient updates. (b) We conduct experiments across a variety of benchmark environments, including standard continuous control problems. Our results show better performance with SAUNA. (c) We investigate why @math provides a reliable assessment for the selection of samples that will positively impact learning. (d) We show how this criterion can work as a dynamic tool to adjust the ratio between exploration and exploitation.
Motion control in physics-based environments is a long-standing and active research field. In particular, there are many prior works on continuous action spaces @cite_24 @cite_27 @cite_17 @cite_5 that demonstrate how locomotion behavior and other skilled movements can emerge as the outcome of optimization problems.
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_27", "@cite_17" ], "mid": [ "", "2964006217", "2121103318", "2963864421" ], "abstract": [ "", "We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. 
We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.", "Abstract: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs." ] }
1904.04049
2934025569
Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering. Although it deals only with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved. Targeting the two main steps, subgraph selection and fact selection, the research community has developed sophisticated approaches. However, the importance of subgraph ranking and of leveraging the subject--relation dependency of a KB fact have not been sufficiently explored. Motivated by this, we present a unified framework to describe and analyze existing approaches. Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method and leveraging the subject--relation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the well-order of scores. Our methods achieve a new state of the art (85.44% accuracy) on the SimpleQuestions dataset.
The methods for subgraph selection fall into two schools: parsing methods @cite_0 @cite_8 @cite_26 and sequence tagging methods @cite_23 . The latter prove to be simpler yet effective, with the most effective model being BiLSTM-CRF @cite_23 @cite_16 @cite_19 .
{ "cite_N": [ "@cite_26", "@cite_8", "@cite_0", "@cite_19", "@cite_23", "@cite_16" ], "mid": [ "2889276770", "2251079237", "2252136820", "2798313137", "2963861211", "" ], "abstract": [ "The gap between unstructured natural language and structured data makes it challenging to build a system that supports using natural language to query large knowledge graphs. Many existing methods construct a structured query for the input question based on a syntactic parser. Once the input question is parsed incorrectly, a false structured query will be generated, which may result in false or incomplete answers. The problem gets worse especially for complex questions. In this paper, we propose a novel systematic method to understand natural language questions by using a large number of binary templates rather than semantic parsers. As sufficient templates are critical in the procedure, we present a low-cost approach that can build a huge number of templates automatically. To reduce the search space, we carefully devise an index to facilitate the online template decomposition. Moreover, we design effective strategies to perform the two-level disambiguations (i.e., entity-level ambiguity and structure-level ambiguity) by considering the query semantics. Extensive experiments over several benchmarks demonstrate that our proposed approach is effective as it significantly outperforms state-of-the-art methods in terms of both precision and recall.", "We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. 
By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5 on the WEBQUESTIONS dataset.", "In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large scale, we learn from question-answer pairs. The main challenge in this setting is narrowing down the huge number of possible logical predicates for a given question. We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates using a knowledge base and a large text corpus. Second, we use a bridging operation to generate additional predicates based on neighboring predicates. On the dataset of Cai and Yates (2013), despite not having annotated logical forms, our system outperforms their state-of-the-art parser. Additionally, we collected a more realistic and challenging dataset of question-answer pairs and improves over a natural baseline.", "The SimpleQuestions dataset is one of the most commonly used benchmarks for studying single-relation factoid questions. In this paper, we present new evidence that this benchmark can be nearly solved by standard methods. First we show that ambiguity in the data bounds performance on this benchmark at 83.4 ; there are often multiple answers that cannot be disambiguated from the linguistic signal alone. Second we introduce a baseline that sets a new state-of-the-art performance level at 78.1 accuracy, despite using standard methods. Finally, we report an empirical analysis showing that the upperbound is loose; roughly a third of the remaining errors are also not resolvable from the linguistic signal. Together, these results suggest that the SimpleQuestions dataset is nearly solved.", "", "" ] }
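A loss that "enforces the well-order of scores", as the abstract above describes, can be sketched with pairwise hinge terms; the particular choice of negatives (gold subject with a wrong relation, then a wrong subject) and the margin value are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def well_order_loss(s_pos, s_neg_rel, s_neg_sub, margin=0.5):
    """Pairwise hinge terms enforcing the order
        s_pos > s_neg_rel > s_neg_sub  (each by `margin`),
    where s_pos scores the gold (subject, relation) pair, s_neg_rel the gold
    subject with a wrong relation, and s_neg_sub a wrong subject."""
    l1 = np.maximum(0.0, margin - (s_pos - s_neg_rel))
    l2 = np.maximum(0.0, margin - (s_neg_rel - s_neg_sub))
    return float(np.mean(l1 + l2))
```

The ordering encodes the subject--relation dependency: a candidate with the right subject but wrong relation is "less wrong" than one with the wrong subject, and the loss is zero only when all margins are respected.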
1904.04073
2951592338
Abuse on the Internet represents a significant societal problem of our time. Previous research on automated abusive language detection in Twitter has shown that community-based profiling of users is a promising technique for this task. However, existing approaches only capture shallow properties of online communities by modeling follower-following relationships. In contrast, working with graph convolutional networks (GCNs), we present the first approach that captures not only the structure of online communities but also the linguistic behavior of the users within them. We show that such a heterogeneous graph-structured modeling of communities significantly advances the current state of the art in abusive language detection.
Supervised learning for abusive language detection was first explored by Spertus, who extracted rule-based features to train a classifier. Subsequently, manually engineered lexical--syntactic features formed the crux of most approaches to the task @cite_0 @cite_6 . Later work showed that dense comment representations outperform bag-of-words features. Several works have since utilized (deep) neural architectures to achieve impressive results on a variety of abuse-annotated datasets @cite_3 @cite_5 . Recently, the research focus has shifted towards extracting features that capture behavioral and social traits of users: one line of work showed that including randomly initialized user embeddings improves performance, while another generated inter- and intra-user representations based on tweets but did not leverage community information.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_3", "@cite_6" ], "mid": [ "2119769989", "2760103715", "2340954483", "78136081" ], "abstract": [ "Web 2.0 has led to the development and evolution of web-based communities and applications. These communities provide places for information sharing and collaboration. They also open t he door for inappropriate online activities, such as harassment, i n which some users post messages in a virtual community that are intention- ally offensive to other members of the community. It is a new and challenging task to detect online harassment; currently fe w systems attempt to solve this problem. In this paper, we use a supervised learning approach for dete ct- ing harassment. Our technique employs content features, sentiment features, and contextual features of documents. The experi mental results described herein show that our method achieves significant improvements over several baselines, including Term Frequency- Inverse Document Frequency (TFIDF) approaches. Identifica tion of online harassment is feasible when TFIDF is supplemented with sentiment and contextual feature attributes.", "", "Detection of abusive language in user generated online content has become an issue of increasing importance in recent years. Most current commercial methods make use of blacklists and regular expressions, however these measures fall short when contending with more subtle, less ham-fisted examples of hate speech. In this work, we develop a machine learning based method to detect hate speech on online user comments from two domains which outperforms a state-of-the-art deep learning approach. We also develop a corpus of user comments annotated for abusive language, the first of its kind. 
Finally, we use our detection tool to analyze abusive language over time and in different settings to further enhance our knowledge of this behavior.", "We present an approach to detecting hate speech in online text, where hate speech is defined as abusive speech targeting specific group characteristics, such as ethnic origin, religion, gender, or sexual orientation. While hate speech against any group may exhibit some common characteristics, we have observed that hatred against each different group is typically characterized by the use of a small set of high frequency stereotypical words; however, such words may be used in either a positive or a negative sense, making our task similar to that of word sense disambiguation. In this paper we describe our definition of hate speech, the collection and annotation of our hate speech corpus, and a mechanism for detecting some commonly used methods of evading common \"dirty word\" filters. We describe pilot classification experiments in which we classify anti-semitic speech reaching an accuracy of 94%, precision of 68% and recall of 60%, for an F1 measure of .6375." ] }
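The graph-convolutional building block underlying the GCN-based approach above is the standard propagation rule; the sketch below shows one layer in NumPy, independent of the paper's specific heterogeneous community graph of users and tweets.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolutional layer (Kipf & Welling propagation rule):
        H' = ReLU( D^{-1/2} (A + I) D^{-1/2} H W )
    A: (n, n) adjacency, H: (n, f_in) node features, W: (f_in, f_out)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # D^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)
```

Each layer mixes a node's features with those of its neighbors, so stacking layers lets a user's representation absorb both the structure of their community and the linguistic features attached to neighboring nodes.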
1904.04088
2189540548
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
Dozens of feature extraction (selection or transformation) methods have been proposed in the literature due to the significance of this technique in pattern recognition and machine learning @cite_7 @cite_22 @cite_14 @cite_5 @cite_16 . We refer to @cite_7 for a survey of traditional feature extraction approaches, and in this paper focus only on joint feature extraction across multiple tasks. An early work of this kind is @cite_1 , in which the @math -norm was introduced to encourage similar sparsity patterns for related tasks in feature selection. This was extended in @cite_26 by emphasizing the @math -norm on both the loss and the regularization term for the sake of efficiency and robustness. Considering that labeled data might not be available, an unsupervised feature selection method was developed in @cite_18 , in which feature correlation is exploited via the @math -norm and discriminative information is incorporated in learning through a local discriminative score. Discriminative information is also exploited in @cite_30 for unsupervised feature selection, by using spectral clustering to learn pseudo class labels. We differ from these methods in that multiple types of features are utilized.
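The row-structured norm that recurs in these methods (the l2,1-norm, if that is what the elided @math denotes), and the row-wise shrinkage that makes it select whole features jointly across tasks, can be sketched as follows; the proximal operator shown is the standard group soft-thresholding, not specific to any one cited method.

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1} = sum of the l2 norms of the rows of W. Penalizing it
    drives entire rows of a projection matrix to zero, i.e. whole features
    are kept or discarded jointly across all tasks."""
    return float(np.sqrt((W ** 2).sum(axis=1)).sum())

def row_soft_threshold(W, t):
    """Proximal operator of t * ||.||_{2,1}: shrink each row's norm by t,
    zeroing out rows whose norm falls below t (group soft-thresholding)."""
    norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True))
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return W * scale
```

Applying `row_soft_threshold` inside a proximal-gradient loop is the usual way such l2,1-regularized objectives are optimized: rows with small norm (weak features) are eliminated entirely, which is exactly the joint-sparsity effect the surveyed methods rely on.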
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_1", "@cite_5", "@cite_16" ], "mid": [ "141062567", "2009501510", "", "2171837816", "", "1922017469", "2121007818", "", "" ], "abstract": [ "In this paper, a new unsupervised learning algorithm, namely Nonnegative Discriminative Feature Selection (NDFS), is proposed. To exploit the discriminative information in unsupervised scenarios, we perform spectral clustering to learn the cluster labels of the input samples, during which the feature selection is performed simultaneously. The joint learning of the cluster labels and feature selection matrix enables NDFS to select the most discriminative features. To learn more accurate cluster labels, a nonnegative constraint is explicitly imposed to the class indicators. To reduce the redundant or even noisy features, l2,1-norm minimization constraint is added into the objective function, which guarantees the feature selection matrix sparse in rows. Our algorithm exploits the discriminative information and feature correlation simultaneously to select a better feature subset. A simple yet efficient iterative algorithm is designed to optimize the proposed objective function. Experimental results on different real world datasets demonstrate the encouraging performance of our algorithm over the state-of-the-arts.", "Compared with supervised learning for feature selection, it is much more difficult to select the discriminative features in unsupervised learning due to the lack of label information. Traditional unsupervised feature selection algorithms usually select the features which best preserve the data distribution, e.g., manifold structure, of the whole feature set. Under the assumption that the class label of input data can be predicted by a linear classifier, we incorporate discriminative analysis and l2,1-norm minimization into a joint framework for unsupervised feature selection. 
Different from existing unsupervised feature selection algorithms, our algorithm selects the most discriminative feature subset from the whole feature set in batch mode. Extensive experiment on different data types demonstrates the effectiveness of our algorithm.", "", "Feature selection is an important component of many machine learning applications. Especially in many bioinformatics tasks, efficient and robust feature selection methods are desired to extract meaningful features and eliminate noisy ones. In this paper, we propose a new robust feature selection method with emphasizing joint l2,1-norm minimization on both loss function and regularization. The l2,1-norm based loss function is robust to outliers in data points and the l2,1-norm regularization selects features across all data points with joint sparsity. An efficient algorithm is introduced with proved convergence. Our regression based objective makes the feature selection process more efficient. Our method has been applied into both genomic and proteomic biomarkers discovery. 
Extensive empirical studies are performed on six data sets to demonstrate the performance of our feature selection method.", "", "An Introduction to Feature Extraction.- An Introduction to Feature Extraction.- Feature Extraction Fundamentals.- Learning Machines.- Assessment Methods.- Filter Methods.- Search Strategies.- Embedded Methods.- Information-Theoretic Methods.- Ensemble Learning.- Fuzzy Neural Networks.- Feature Selection Challenge.- Design and Analysis of the NIPS2003 Challenge.- High Dimensional Classification with Bayesian Neural Networks and Dirichlet Diffusion Trees.- Ensembles of Regularized Least Squares Classifiers for High-Dimensional Problems.- Combining SVMs with Various Feature Selection Strategies.- Feature Selection with Transductive Support Vector Machines.- Variable Selection using Correlation and Single Variable Classifier Methods: Applications.- Tree-Based Ensembles with Dynamic Soft Feature Selection.- Sparse, Flexible and Efficient Modeling using L 1 Regularization.- Margin Based Feature Selection and Infogain with Standard Classifiers.- Bayesian Support Vector Machines for Feature Ranking and Selection.- Nonlinear Feature Selection with the Potential Support Vector Machine.- Combining a Filter Method with SVMs.- Feature Selection via Sensitivity Analysis with Direct Kernel PLS.- Information Gain, Correlation and Support Vector Machines.- Mining for Complex Models Comprising Feature Selection and Classification.- Combining Information-Based Supervised and Unsupervised Feature Selection.- An Enhanced Selective Naive Bayes Method with Optimal Discretization.- An Input Variable Importance Definition based on Empirical Data Probability Distribution.- New Perspectives in Feature Extraction.- Spectral Dimensionality Reduction.- Constructing Orthogonal Latent Features for Arbitrary Loss.- Large Margin Principles for Feature Selection.- Feature Extraction for Classification of Proteomic Mass Spectra: A Comparative Study.- Sequence Motifs: 
Highly Predictive Features of Protein Function.", "Fisher score and Laplacian score are two popular feature selection algorithms, both of which belong to the general graph-based feature selection framework. In this framework, a feature subset is selected based on the corresponding score (subset-level score), which is calculated in a trace ratio form. Since the number of all possible feature subsets is very huge, it is often prohibitively expensive in computational cost to search in a brute force manner for the feature subset with the maximum subset-level score. Instead of calculating the scores of all the feature subsets, traditional methods calculate the score for each feature, and then select the leading features based on the rank of these feature-level scores. However, selecting the feature subset based on the feature-level score cannot guarantee the optimum of the subset-level score. In this paper, we directly optimize the subset-level score, and propose a novel algorithm to efficiently find the global optimal feature subset such that the subset-level score is maximized. Extensive experiments demonstrate the effectiveness of our proposed algorithm in comparison with the traditional methods for feature selection.", "", "" ] }
1904.04088
2189540548
The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.
The proposed LM3FE belongs to multi-modal subspace learning but utilizes a weighted modality combination training strategy. Recently, a multi-modal feature selection method @cite_34 has been proposed that explores the correlations between different modalities by taking the tensor product of their feature spaces. However, this method cannot handle the multi-class problem naturally and must train an SVM classifier to eliminate one feature at a time; the relationships between different classes are therefore discarded, and the training cost is very high. The work closest to our method is the multi-modal feature learning approach presented in @cite_25 , since it also utilizes the @math -norm to discover the task relationships, but it differs from ours in that the group @math -norm is employed to capture the correlations between modalities. The main drawback of this approach is that the feature weight matrices of the different modalities are concatenated and directly utilized as the prediction matrix. Also, a least squares loss is adopted, so the predictive (e.g., classification) power of the learned features is limited. In the proposed LM3FE, a prediction matrix is learned in addition to the feature extraction matrices, and strongly predictive features are obtained by minimizing the hinge loss under the maximum margin principle.
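A schematic numeric sketch of the weighted modality combination and hinge-loss evaluation described above; the feature vectors, extraction matrices, and combination coefficients below are hypothetical placeholders, not the output of the LM3FE optimization:

```python
import numpy as np

# Hypothetical two-modality sample: a colour and a texture feature vector.
x = {"colour": np.array([0.2, 1.0, -0.5]), "texture": np.array([1.5, -0.3])}

# Per-modality feature extraction matrices (learned in LM3FE; random here)
rng = np.random.default_rng(0)
W = {m: rng.standard_normal((v.size, 2)) for m, v in x.items()}

# Modality combination coefficients (also learned in LM3FE; fixed here)
beta = {"colour": 0.6, "texture": 0.4}

# Extract, weight, and concatenate the modality-specific features.
z = np.concatenate([beta[m] * (x[m] @ W[m]) for m in x])

def hinge(w, b, z, y):
    """Hinge loss of a linear predictor on the fused feature; y in {-1, +1}."""
    return max(0.0, 1.0 - y * (z @ w + b))
```

Minimizing such a hinge loss over the prediction parameters, jointly with the extraction matrices and combination coefficients, is the large-margin ingredient of the framework.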
{ "cite_N": [ "@cite_34", "@cite_25" ], "mid": [ "2044849491", "2105709960" ], "abstract": [ "In the era of big data, we can easily access information from multiple views which may be obtained from different sources or feature subsets. Generally, different views provide complementary information for learning tasks. Thus, multi-view learning can facilitate the learning process and is prevalent in a wide range of application domains. For example, in medical science, measurements from a series of medical examinations are documented for each subject, including clinical, imaging, immunologic, serologic and cognitive measures which are obtained from multiple sources. Specifically, for brain diagnosis, we can have different quantitative analysis which can be seen as different feature subsets of a subject. It is desirable to combine all these features in an effective way for disease diagnosis. However, some measurements from less relevant medical examinations can introduce irrelevant information which can even be exaggerated after view combinations. Feature selection should therefore be incorporated in the process of multi-view learning. In this paper, we explore tensor product to bring different views together in a joint space, and present a dual method of tensor-based multi-view feature selection DUAL-TMFS based on the idea of support vector machine recursive feature elimination. Experiments conducted on datasets derived from neurological disorder demonstrate the features selected by our proposed method yield better classification performance and are relevant to disease diagnosis.", "Combining information from various data sources has become an important research topic in machine learning with many scientific applications. Most previous studies employ kernels or graphs to integrate different types of features, which routinely assume one weight for one type of features. 
However, for many problems, the importance of features in one source to an individual cluster of data can be varied, which makes the previous approaches ineffective. In this paper, we propose a novel multi-view learning model to integrate all features and learn the weight for every feature with respect to each cluster individually via new joint structured sparsity-inducing norms. The proposed multi-view learning framework allows us not only to perform clustering tasks, but also to deal with classification tasks by an extension when the labeling knowledge is available. A new efficient algorithm is derived to solve the formulated objective with rigorous theoretical proof on its convergence. We applied our new data fusion method to five broadly used multi-view data sets for both clustering and classification. In all experimental results, our method clearly outperforms other related state-of-the-art methods." ] }
1904.03953
2928580779
The AdaBoost algorithm is notably resistant to overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem, and many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from a feature learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting in a direct and easily understood way. Firstly, we adopt the AdaBoost algorithm to learn the base classifiers. Then, instead of directly forming a weighted combination of the base classifiers, we regard their outputs as features and feed them to an SVM classifier. With this, new coefficients and a new bias are obtained, which are used to construct the final classifier. We explain the rationale for this and prove that when the dimension of these features increases, the performance of the SVM does not degrade, which explains AdaBoost's resistance to overfitting.
The AdaBoost algorithm @cite_37 is a boosting classification algorithm, i.e., one that can boost a group of "weak" classifiers into a "strong" classifier. Boosting algorithms usually first apply a base learning algorithm, whose classification ability is only slightly better than random guessing, to train a base classifier from the initial training samples. The sample weights are then adjusted according to the result of this base classifier, so that incorrectly classified samples receive more attention, and the re-weighted samples are used to train the next base learner. After the iterations finish, the base learners are combined with weights to form the final classifier. The AdaBoost algorithm is described next.
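The iterative scheme described above can be sketched with decision stumps as base learners; this is a minimal toy illustration of binary AdaBoost, not the algorithm exactly as given in @cite_37 :

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal binary AdaBoost with threshold stumps; y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # start from uniform sample weights
    ensemble = []                        # (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        # exhaustive search for the stump with the lowest weighted error
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.sign(X[:, j] - t + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # base-learner weight
        pred = s * np.sign(X[:, j] - t + 1e-12)
        w *= np.exp(-alpha * y * pred)          # misclassified samples
        w /= w.sum()                            # gain relative weight
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote of the learned stumps."""
    score = sum(a * s * np.sign(X[:, j] - t + 1e-12)
                for a, j, t, s in ensemble)
    return np.sign(score)

# Toy 1-D data: negatives below 2, positives at 2 and above.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
ensemble = train_adaboost(X, y, n_rounds=3)
```

Each round re-weights the samples so the next stump focuses on previous mistakes, and the final vote weights the stumps by their alpha values.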
{ "cite_N": [ "@cite_37" ], "mid": [ "1988790447" ], "abstract": [ "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line." ] }
1904.03953
2928580779
The AdaBoost algorithm is notably resistant to overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem, and many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from a feature learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting in a direct and easily understood way. Firstly, we adopt the AdaBoost algorithm to learn the base classifiers. Then, instead of directly forming a weighted combination of the base classifiers, we regard their outputs as features and feed them to an SVM classifier. With this, new coefficients and a new bias are obtained, which are used to construct the final classifier. We explain the rationale for this and prove that when the dimension of these features increases, the performance of the SVM does not degrade, which explains AdaBoost's resistance to overfitting.
From Eq.) and Eq.), we know that when @math , @math , and that @math increases as @math decreases. In fact, AdaBoost minimizes the exponential loss function in this process @cite_10 .
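The exponential-loss interpretation can be illustrated numerically: adding a mostly-correct base learner with a positive weight lowers the loss on a toy sample (all values below are hypothetical):

```python
import numpy as np

def exp_loss(y, F):
    """Exponential loss (1/n) * sum_i exp(-y_i * F(x_i)), with y in {-1, +1}
    and F the real-valued score of the weighted vote."""
    return np.mean(np.exp(-y * F))

y = np.array([1, 1, -1, -1])
F = np.array([0.5, 1.0, -0.3, -0.8])   # current ensemble scores
h = np.array([1, 1, -1, 1])            # new base learner (one error)
alpha = 0.3                            # its positive vote weight
print(exp_loss(y, F), exp_loss(y, F + alpha * h))
```

AdaBoost's alpha choice is exactly the step size that minimizes this loss along the direction of the new base learner, which is why the loss keeps shrinking round after round.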
{ "cite_N": [ "@cite_10" ], "mid": [ "2024046085" ], "abstract": [ "Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications." ] }
1904.03953
2928580779
The AdaBoost algorithm is notably resistant to overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem, and many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from a feature learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting in a direct and easily understood way. Firstly, we adopt the AdaBoost algorithm to learn the base classifiers. Then, instead of directly forming a weighted combination of the base classifiers, we regard their outputs as features and feed them to an SVM classifier. With this, new coefficients and a new bias are obtained, which are used to construct the final classifier. We explain the rationale for this and prove that when the dimension of these features increases, the performance of the SVM does not degrade, which explains AdaBoost's resistance to overfitting.
@cite_23 were the first to use margin theory to explain this phenomenon. Define @math as the margin of @math with respect to @math . Let @math denote the probability with respect to the sample weight vector @math , and @math the probability with respect to the uniform distribution over the sample @math . They first proved the following theorem, which bounds the generalization error of any voting classifier:
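The voting margin defined above can be computed directly; a small sketch with hypothetical base classifiers and vote weights:

```python
import numpy as np

def margins(alphas, base_preds, y):
    """Normalized voting margins y * f(x), where
    f(x) = sum_t alpha_t h_t(x) / sum_t alpha_t.

    base_preds is a T x n matrix of base-classifier outputs in {-1, +1}.
    A positive margin means the weighted vote classifies the sample
    correctly; larger margins yield tighter generalization bounds in
    the margin theory of @cite_23.
    """
    alphas = np.asarray(alphas, dtype=float)
    f = alphas @ base_preds / alphas.sum()
    return y * f

# Hypothetical three base classifiers evaluated on four samples.
H = np.array([[ 1,  1, -1, -1],
              [ 1, -1, -1,  1],
              [ 1,  1,  1, -1]])
y = np.array([1, 1, -1, -1])
m = margins([0.5, 0.2, 0.3], H, y)
print(m)  # all positive: every sample is correctly classified by the vote
```

The empirical distribution of these margins (the fraction below a threshold) is the quantity that enters the generalization bound.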
{ "cite_N": [ "@cite_23" ], "mid": [ "1975846642" ], "abstract": [ "One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition." ] }
1904.03953
2928580779
The AdaBoost algorithm is notably resistant to overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem, and many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from a feature learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting in a direct and easily understood way. Firstly, we adopt the AdaBoost algorithm to learn the base classifiers. Then, instead of directly forming a weighted combination of the base classifiers, we regard their outputs as features and feed them to an SVM classifier. With this, new coefficients and a new bias are obtained, which are used to construct the final classifier. We explain the rationale for this and prove that when the dimension of these features increases, the performance of the SVM does not degrade, which explains AdaBoost's resistance to overfitting.
Later, Breiman @cite_7 proved a generalization bound tighter than Eq.) and designed the arc-gv algorithm, which directly maximizes the minimum margin. According to margin theory, arc-gv should perform better than AdaBoost. However, experimental results show that, although arc-gv does produce a uniformly larger minimum margin, its test error increases. Breiman thus concluded that margin theory was in serious doubt.
{ "cite_N": [ "@cite_7" ], "mid": [ "2172195373" ], "abstract": [ "The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund & Schapire, 1996a, 1997) and others in reducing generalization error has not been well understood. By formulating prediction as a game where one player makes a selection from instances in the training set and the other a convex linear combination of predictors from a finite set, existing arcing algorithms are shown to be algorithms for finding good game strategies. The minimax theorem is an essential ingredient of the convergence proofs. An arcing algorithm is described that converges to the optimal strategy. A bound on the generalization error for the combined predictors in terms of their maximum error is proven that is sharper than bounds to date. Schapire, Freund, Bartlett, and Lee (1997) offered an explanation of why Adaboost works in terms of its ability to produce generally high margins. The empirical comparison of Adaboost to the optimal arcing algorithm shows that their explanation is n..." ] }
1904.03892
2931452266
In general, deep learning based models require a tremendous amount of samples for appropriate training, which is difficult to satisfy in the medical field. This issue can usually be avoided with a proper initialization of the weights. On the task of medical image segmentation in general, two techniques are usually employed to tackle the training of a deep network @math . The first one consists in reusing some weights of a network @math pre-trained on a large scale database ( @math ImageNet). This procedure, also known as , happens to reduce the flexibility when it comes to new network design, since @math is constrained to match some parts of @math . The second commonly used technique consists in working on image patches to benefit from the large number of available patches. This paper brings together these two techniques and proposes to train arbitrarily designed networks, with a focus on relatively small databases, in two stages: patch pre-training and full-sized image fine-tuning. Experimental work has been carried out on the tasks of retinal blood vessel segmentation and optic disc segmentation, using four publicly available databases. Furthermore, three types of network are considered, ranging from a very lightweight network to a densely connected one. The final results show the efficiency of the proposed framework along with state-of-the-art results on all the databases.
On the task of RBVS, a freely designed patch-to-label DCNN was proposed and further extended to output a small patch centered on the input; the network is trained from scratch using @math RGB patches. In @cite_40 , an additional complexity constraint is imposed on the patch-to-patch network, the goal being to build networks that run efficiently on real-time embedded systems, for example a binocular indirect ophthalmoscope. Another work proposed a patch-to-patch model with a stationary wavelet transform pre-processing step to improve the network's performance. On the other hand, @cite_22 chose to use plain RGB patches but reused the AlexNet network @cite_38 , which is pre-trained on the ImageNet dataset. Other proposals include @cite_42 @cite_29 @cite_6 , where freely designed patch-to-patch networks are once again trained from scratch.
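The patch-based training these works rely on amounts to sampling many small windows from each annotated image; a minimal sketch (image size and patch settings are illustrative only):

```python
import numpy as np

def sample_patches(image, mask, patch_size=32, n_patches=100, seed=0):
    """Randomly crop aligned (patch, label-patch) pairs from one RGB image
    and its binary vessel mask. A handful of annotated images thus yields
    thousands of training samples, which is the main appeal of patch-based
    training on small databases."""
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    xs, ys = [], []
    for _ in range(n_patches):
        r = rng.integers(0, h - patch_size + 1)
        c = rng.integers(0, w - patch_size + 1)
        xs.append(image[r:r + patch_size, c:c + patch_size])
        ys.append(mask[r:r + patch_size, c:c + patch_size])
    return np.stack(xs), np.stack(ys)

# Hypothetical 584x565 fundus image (DRIVE-sized) and vessel mask.
img = np.zeros((584, 565, 3), dtype=np.uint8)
msk = np.zeros((584, 565), dtype=np.uint8)
X, Y = sample_patches(img, msk)
print(X.shape, Y.shape)
```

A patch-to-patch network trains on such (X, Y) pairs directly, while a patch-to-label variant would keep only the class of the central pixel of each Y.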
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_29", "@cite_42", "@cite_6", "@cite_40" ], "mid": [ "2949667497", "2801475611", "", "2556022279", "", "2788864845" ], "abstract": [ "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. 
A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "Abstract Since the retinal blood vessel has been acknowledged as an indispensable element in both ophthalmological and cardiovascular disease diagnosis, the accurate segmentation of the retinal vessel tree has become the prerequisite step for automated or computer-aided diagnosis systems. In this paper, a supervised method is presented based on a pre-trained fully convolutional network through transfer learning. This proposed method has simplified the typical retinal vessel segmentation problem from full-size image segmentation to regional vessel element recognition and result merging. Meanwhile, additional unsupervised image post-processing techniques are applied to this proposed method so as to refine the final result. Extensive experiments have been conducted on DRIVE, STARE, CHASE_DB1 and HRF databases, and the accuracy of the cross-database test on these four databases is state-of-the-art, which also presents the high robustness of the proposed approach. This successful result has not only contributed to the area of automated retinal blood vessel segmentation but also supports the effectiveness of transfer learning when applying deep learning technique to medical imaging.", "", "Automatic segmentation of retinal blood vessels from fundus images plays an important role in the computer aided diagnosis of retinal diseases. The task of blood vessel segmentation is challenging due to the extreme variations in morphology of the vessels against noisy background. In this paper, we formulate the segmentation task as a multi-label inference task and utilize the implicit advantages of the combination of convolutional neural networks and structured prediction. 
Our proposed convolutional neural network based model achieves strong performance and significantly outperforms the state-of-the-art for automatic retinal blood vessel segmentation on DRIVE dataset with 95.33 accuracy and 0.974 AUC score.", "", "Retinal vessel information is helpful in retinal disease screening and diagnosis. Retinal vessel segmentation provides useful information about vessels and can be used by physicians during intraocular surgery and retinal diagnostic operations. Convolutional neural networks (CNNs) are powerful tools for classification and segmentation of medical images. However, complexity of CNNs makes it difficult to implement them in portable devices such as binocular indirect ophthalmoscopes. In this paper a simplification approach is proposed for CNNs based on combination of quantization and pruning. Fully connected layers are quantized and convolutional layers are pruned to have a simple and efficient network structure. Experiments on images of the STARE dataset show that our simplified network is able to segment retinal vessels with acceptable accuracy and low complexity." ] }
1904.03892
2931452266
In general, deep learning based models require a tremendous amount of samples for appropriate training, which is difficult to satisfy in the medical field. This issue can usually be avoided with a proper initialization of the weights. On the task of medical image segmentation in general, two techniques are usually employed to tackle the training of a deep network @math . The first one consists in reusing some weights of a network @math pre-trained on a large scale database ( @math ImageNet). This procedure, also known as , happens to reduce the flexibility when it comes to new network design, since @math is constrained to match some parts of @math . The second commonly used technique consists in working on image patches to benefit from the large number of available patches. This paper brings together these two techniques and proposes to train arbitrarily designed networks, with a focus on relatively small databases, in two stages: patch pre-training and full-sized image fine-tuning. Experimental work has been carried out on the tasks of retinal blood vessel segmentation and optic disc segmentation, using four publicly available databases. Furthermore, three types of network are considered, ranging from a very lightweight network to a densely connected one. The final results show the efficiency of the proposed framework along with state-of-the-art results on all the databases.
On the task of optic disc segmentation, @cite_1 presented a multi-level FCNN with a patch-to-patch scheme that aggregates segmentations from different scales. The network is applied to patches centered on the optic disc, so optic disc detection is required beforehand. Another work proposed a patch-to-label network that classifies the central pixel of an input patch into four classes: blood vessel, optic disc, fovea, and background. Because of the lack of training data, most RIS models are patch-based. However, some proposals have been made using an image-to-image scheme based on weight transfer from the VGG network.
{ "cite_N": [ "@cite_1" ], "mid": [ "2782364420" ], "abstract": [ "Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately, and rely on hand-crafted visual feature from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves the OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of multi-scale input layer, U-shape convolutional network, side-output layer, and multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple level receptive field sizes. The U-shape convolutional network is employed as the main body network structure to learn the rich hierarchical representation, while the side-output layer acts as an early classifier that produces a companion local prediction map for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. For improving the segmentation performance further, we also introduce the polar transformation, which provides the representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation result on ORIGA data set. Simultaneously, the proposed method also obtains the satisfactory glaucoma screening performances with calculated CDR value on both ORIGA and SCES datasets." ] }
1904.03892
2931452266
In general, deep learning based models require a tremendous number of samples for proper training, a requirement that is difficult to satisfy in the medical field. This issue can usually be mitigated with a proper initialization of the weights. On the task of medical image segmentation in general, two techniques are usually employed to tackle the training of a deep network @math . The first consists in reusing some weights of a network @math pre-trained on a large-scale database ( @math ImageNet). This procedure, also known as transfer learning, tends to reduce flexibility in new network design, since @math is constrained to match some parts of @math . The second commonly used technique consists in working on image patches to benefit from the large number of available patches. This paper brings together these two techniques and proposes to train arbitrarily designed networks, with a focus on relatively small databases, in two stages: patch pre-training and full-sized image fine-tuning. Experiments have been carried out on the tasks of retinal blood vessel segmentation and optic disc segmentation, using four publicly available databases. Furthermore, three types of network are considered, ranging from a very lightweight network to a densely connected one. The final results show the efficiency of the proposed framework, along with state-of-the-art results on all the databases.
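The key property that makes patch pre-training followed by full-sized image fine-tuning possible is that convolutional weights are independent of the input size: a kernel fitted on small patches applies unchanged to full images. A minimal NumPy sketch (not the paper's actual networks) illustrating this property:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A kernel hypothetically "trained" on 32x32 patches...
kernel = np.random.default_rng(0).standard_normal((3, 3))
patch = np.random.default_rng(1).standard_normal((32, 32))
full_image = np.random.default_rng(2).standard_normal((128, 128))

# ...produces a valid feature map at both resolutions without any change.
patch_out = conv2d_valid(patch, kernel)       # (30, 30) feature map
image_out = conv2d_valid(full_image, kernel)  # (126, 126) feature map
```

Because the same kernel slides over any input, the weights of a patch network can directly initialize the convolutional layers of an image network of identical design.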
A proof of concept is presented in our previous work @cite_31 . Promising results were obtained when using the patch network's weights as an initialization point for the image network. However, the patch network's metrics tend to be better than those of the image network, which calls into question the importance of the transfer. Moreover, only one database and a single network were considered, so the generalization to other databases and networks remains an open question.
{ "cite_N": [ "@cite_31" ], "mid": [ "2890916832" ], "abstract": [ "Fully convolutional networks (FCNs) are well known to provide state-of-the-art results in various medical image segmentation tasks. However, these models usually need a tremendous number of training samples to achieve good performances. Unfortunately, this requirement is often difficult to satisfy in the medical imaging field, due to the scarcity of labeled images. As a consequence, the common tricks for FCNs’ training go from data augmentation and transfer learning to patch-based segmentation. In the latter, the segmentation of an image involves patch extraction, patch segmentation, then patch aggregation. This paper presents a framework that takes advantage of all these tricks by starting with a patch-level segmentation which is then extended to the image level by transfer learning. The proposed framework follows two main steps. Given a image database ( D ), a first network ( N _P ) is designed and trained using patches extracted from ( D ). Then, ( N _P ) is used to pre-train a FCN ( N _I ) to be trained on the full sized images of ( D ). Experimental results are presented on the task of retinal blood vessel segmentation using the well known publicly available DRIVE database." ] }
1904.03968
2960329024
With the rapid proliferation of on-body Internet of Things (IoT) devices, their security vulnerabilities have raised serious privacy and safety issues. Traditional efforts to secure these devices against impersonation attacks mainly rely on either dedicated sensors or specified user motions, impeding their wide-scale adoption. This paper transcends these limitations with a general security solution by leveraging ubiquitous wireless chips available in IoT devices. In particular, representative time and frequency features are first extracted from received signal strengths (RSSs) to characterize radio propagation profiles. Then, an adversarial multi-player network is developed to recognize underlying radio propagation patterns and facilitate on-body device authentication. We prove that at equilibrium, our adversarial model can extract all information about propagation patterns and eliminate any irrelevant information caused by motion variances. We build a prototype of our system using universal software radio peripheral (USRP) devices and conduct extensive experiments with both static and dynamic body motions in typical indoor and outdoor environments. The experimental results show that our system achieves an average authentication accuracy of 90.4%, with a high area under the receiver operating characteristic curve (AUROC) of 0.958 and better generalization performance in comparison with the conventional non-adversarial-based approach.
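The exact time- and frequency-domain features used by the system are not listed here; as a purely hypothetical illustration, simple statistics plus a dominant-frequency feature could be extracted from an RSS trace as follows (function name and feature choices are assumptions, not the paper's):

```python
import numpy as np

def rss_features(rss, fs=50.0):
    """Toy time- and frequency-domain features from an RSS trace sampled at fs Hz."""
    rss = np.asarray(rss, dtype=float)
    # Remove the DC component before the FFT so the dominant bin reflects motion dynamics.
    spectrum = np.abs(np.fft.rfft(rss - rss.mean()))
    freqs = np.fft.rfftfreq(rss.size, d=1.0 / fs)
    return {
        "mean": rss.mean(),                       # average signal strength
        "std": rss.std(),                         # fading variability
        "range": rss.max() - rss.min(),           # peak-to-peak swing
        "dominant_freq": float(freqs[np.argmax(spectrum)]),  # strongest periodic component
    }
```

A feature vector of this kind would then feed the authentication network, which learns to keep propagation-related information while discarding motion-specific variation.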
Dedicated sensors, including accelerometers @cite_10 , bioimpedance sensors @cite_2 , motion sensors @cite_1 and capacitive touch sensors @cite_7 , have been used to differentiate on- and off-body devices. Additionally, various sensors in smartphones @cite_19 @cite_14 have also been exploited to identify devices or users. However, sensor-based approaches limit themselves to specified user motions or fitness-related wearables. Existing measurements @cite_17 @cite_6 have shown that essential differences exist between on- and off-body radio propagation. Building on these studies, radio propagation characteristics were examined to identify legitimate wearable devices @cite_5 . In comparison to the prior work, ours develops a customized adversarial network to extract the underlying propagation patterns and achieves better generalized authentication performance across various motion scenarios.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_10", "@cite_17" ], "mid": [ "2147063679", "2154346544", "2344680582", "1993949615", "1980481605", "2022376209", "2963564396", "2617021199", "1998316921" ], "abstract": [ "The widespread use of smart devices gives rise to privacy concerns. Fingerprinting smart devices can jeopardize privacy by allowing remote identification without user awareness. We study the feasibility of using microphones and speakers embedded in smartphones to uniquely fingerprint individual devices. During fabrication, subtle imperfections arise in device microphones and speakers, which induce anomalies in produced and received sounds. We exploit this observation to fingerprint smartphones through playback and recording of audio samples. We explore different acoustic features and analyze their ability to successfully fingerprint smartphones. Our experiments show that not only is it possible to fingerprint devices manufactured by different vendors but also devices that have the same maker and model; on average we were able to accurately attribute 98% of all recorded audio clips from 50 different Android smartphones. Our study also identifies the prominent acoustic features capable of fingerprinting smart devices with a high success rate, and examines the effect of background noise and other variables on fingerprinting accuracy.", "As we are surrounded by an ever-larger variety of post-PC devices, the traditional methods for identifying and authenticating users have become cumbersome and time-consuming. In this paper, we present a capacitive communication method through which a device can recognize who is interacting with it. This method exploits the capacitive touchscreens, which are now used in laptops, phones, and tablets, as a signal receiver. The signal that identifies the user can be generated by a small transmitter embedded into a ring, watch, or other artifact carried on the human body. 
We explore two example system designs with a low-power continuous transmitter that communicates through the skin and a signet ring that needs to be touched to the screen. Experiments with our prototype transmitter and tablet receiver show that capacitive communication through a touchscreen is possible, even without hardware or firmware modifications on a receiver. This latter approach imposes severe limits on the data rate, but the rate is sufficient for differentiating users in multiplayer tablet games or parental control applications. Controlled experiments with a signal generator also indicate that future designs may be able to achieve datarates that are useful for providing less obtrusive authentication with similar assurance as PIN codes or swipe patterns commonly used on smartphones today.", "The ubiquity of wearable and implantable devices has sparked a new set of mobile computing applications that leverage the abundant information from sensors. For many of these applications, ensuring the security of communication between legitimate devices is a crucial problem. In this paper, we design Walkie-Talkie, a shared secret key generation scheme that allows two legitimate devices to establish a common cryptographic key by exploiting users' walking characteristics (gait). The intuition is that the sensors on different locations on the same body experience similar accelerometer signals when the user is walking. However, the accelerometer also captures motion signals produced by other body parts (e.g., swinging arms). It is shown that a Blind Source Separation (BSS) technique can extract the informative signal produced by the unique gait patterns. Our experimental results show that the keys generated by two independent devices on the same body are able to achieve up to a 100% bit agreement rate. To demonstrate the feasibility, the proposed key generation scheme is implemented on modern smartphones. 
The evaluation results show that the proposed scheme can run in real-time on modern mobile devices and incurs low system overhead.", "A channel model for a wireless body area network at 400 MHz, 900 MHz and 2.4 GHz is derived. The electromagnetic wave propagation around the body is simulated with a finite-difference time-domain simulator. Creeping waves were identified as the propagation path around the body. Its impact on the delay spread in an indoor environment is discussed.", "The rapid deployment of sensing technology in smartphones and the explosion of their usage in people's daily lives provide users with the ability to collectively sense the world. This leads to a growing trend of mobile healthcare systems utilizing sensing data collected from smartphones with without additional external sensors to analyze and understand people's physical and mental states. However, such healthcare systems are vulnerable to user spoofing attacks, in which an adversary distributes his registered device to other users such that data collected from these users can be claimed as his own to obtain more healthcare benefits and undermine the successful operation of mobile healthcare systems. Existing mitigation approaches either only rely on a secret PIN number (which can not deal with colluded attacks) or require an explicit user action for verification. In this paper, we propose a user verification scheme leveraging unique gait patterns derived from acceleration readings in mobile healthcare systems to detect possible user spoofing attacks. Our framework exploits the readily available accelerometers embedded within smartphones for user verification. Specifically, our user spoofing attack mitigation scheme (which consists of three components, namely Step Cycle Identification, Step Cycle Interpolation, and Similarity Score Computation) is used to extract gait patterns from run-time accelerometer measurements to perform robust user verification under various walking speeds. 
Our experiments using 322 smartphone-based traces over a period of 6 months confirm that our scheme is highly effective for detecting user spoofing attacks. This strongly indicates the feasibility of using smartphone based low grade accelerometer to conduct gait recognition and facilitate effective user verification without active user cooperation.", "Body-area networks of pervasive wearable devices are increasingly used for health monitoring, personal assistance, entertainment, and home automation. In an ideal world, a user would simply wear their desired set of devices with no configuration necessary: the devices would discover each other, recognize that they are on the same person, construct a secure communications channel, and recognize the user to which they are attached. In this paper we address a portion of this vision by offering a wearable system that unobtrusively recognizes the person wearing it. Because it can recognize the user, our system can properly label sensor data or personalize interactions. Our recognition method uses bioimpedance, a measurement of how tissue responds when exposed to an electrical current. By collecting bioimpedance samples using a small wearable device we designed, our system can determine that (a) the wearer is indeed the expected person and (b) the device is physically on the wearer's body. Our recognition method works with 98% balanced-accuracy under a cross-validation of a day's worth of bioimpedance samples from a cohort of 8 volunteer subjects. We also demonstrate that our system continues to recognize a subset of these subjects even several months later. Finally, we measure the energy requirements of our system as implemented on a Nexus S smart phone and custom-designed module for the Shimmer sensing platform.", "On-body devices are an intrinsic part of the Internet-of-Things (IoT) vision to provide human-centric services. 
These on-body IoT devices are largely embedded devices that lack a sophisticated user interface to facilitate traditional pre-shared key-based security protocols. Motivated by this real-world security vulnerability, this paper proposes SecureTag, a system designed to add defense in depth against active attacks by integrating physical layer (PHY) information with upper-layer protocols. The underpinning of SecureTag is a signal processing technique that extracts the peculiar propagation characteristics of creeping waves to discern on-body devices. Upon overhearing a suspicious transmission, SecureTag initiates a PHY-based challenge-response protocol to mitigate attacks. We implement our system on different commercial off-the-shelf wearables and a smartphone. Extensive experiments are conducted in a lab, apartments, malls, and outdoor areas, involving 12 volunteer subjects of different age groups, to demonstrate the robustness of our system. Results show that our system can mitigate 96.13% of active attack attempts while triggering false alarms on merely 5.64% of legitimate traffic.", "The increased usage of smart wearables in various applications, specifically in health-care, emphasizes the need for secure communication to transmit sensitive health-data. In a practical scenario, where multiple devices are carried by a person, a common secret key is essential for secure group communication. Group key generation and sharing among wearables have received very little attention in the literature due to the underlying challenges: 1) difficulty in obtaining a good source of randomness to generate strong cryptographic keys, and 2) finding a common feature among all the devices to share the key. In this paper, we present a novel solution to generate and distribute group secret keys by exploiting on-board accelerometer sensor and the unique walking style of the user, i.e., gait. 
We propose a method to identify the suitable samples of accelerometer data during all routine activities of a subject to generate the keys with high entropy. In our scheme, the smartphone placed on waist employs fuzzy vault, a cryptographic construct, and utilizes the acceleration due to gait, a common characteristic extracted on all wearable devices to share the secret key. We implement our solution on commercially available off-the-shelf smart wearables, measure the system performance, and conduct experiments with multiple subjects. Our results demonstrate that the proposed solution has a bit rate of 750 b/s, low system overhead, distributes the key securely and quickly to all legitimate devices, and is suitable for practical applications.", "Interest in on-body communication channels is growing as the use of wireless devices increases in medical, consumer and military sensor applications. This paper presents an experimental investigation and analysis of the narrowband on-body propagation channel. This analysis considers each of the factors affecting the channel during a range of stationary and motion activities in different environments with actual wireless mote devices on the body. Use of such motes allows greater freedom in the subject's movements and the inclusion of real-world indoor and outdoor environments in a test sequence. This paper identifies and analyses the effect of the different components of the signal propagation (mean propagation path gain, large-scale fading and small-scale fading) and the cause of the losses and variation due to activities, positions or environmental factors. Our results show the effect on the received signal and the impact of voluntary and involuntary movements, which cause shadowing effects. The analysis also allows us to identify sensor positions on the body that are more reliable and those positions that may require a relay or those that may be suitable for acting as a relay." ] }
1904.04032
2925849105
Since word embeddings have been the most popular input for many NLP tasks, evaluating their quality is of critical importance. Most research efforts focus on English word embeddings. This paper addresses the problem of constructing and evaluating such models for the Greek language. We created a new word analogy corpus based on the original English Word2vec word analogy corpus, taking specific linguistic aspects of the Greek language into account as well. Moreover, we created a Greek version of the WordSim353 corpus for a basic evaluation of word similarities. We tested seven word vector models, and our evaluation showed that we are able to create meaningful representations. Finally, we discovered that the morphological complexity of the Greek language and polysemy can influence the quality of the resulting word embeddings.
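The Word2vec training mentioned above fits a network on (center, context) word pairs; under the Skip-gram objective, those pairs are enumerated from a sliding window over the corpus. A minimal sketch of that pair generation (function name is my own):

```python
def skipgram_pairs(tokens, window=2):
    """Enumerate (center, context) training pairs as used by the Skip-gram objective."""
    pairs = []
    for i, center in enumerate(tokens):
        # Every word within `window` positions of the center is a context word.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs
```

CBOW uses the same windows in the opposite direction: the context words jointly predict the center word.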
One of the most popular techniques for building distributional models is to train a neural network @cite_5 to predict a word given its context (CBOW), or a context given a word (Skip-gram), on the basis of a corpus in which every word occurrence represents one learning example. In this approach, a word's meaning is represented by a vector taken from a layer of the neural network.
{ "cite_N": [ "@cite_5" ], "mid": [ "2153579005" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1904.04032
2925849105
Since word embeddings have been the most popular input for many NLP tasks, evaluating their quality is of critical importance. Most research efforts focus on English word embeddings. This paper addresses the problem of constructing and evaluating such models for the Greek language. We created a new word analogy corpus based on the original English Word2vec word analogy corpus, taking specific linguistic aspects of the Greek language into account as well. Moreover, we created a Greek version of the WordSim353 corpus for a basic evaluation of word similarities. We tested seven word vector models, and our evaluation showed that we are able to create meaningful representations. Finally, we discovered that the morphological complexity of the Greek language and polysemy can influence the quality of the resulting word embeddings.
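Word-analogy corpora of the kind described above are usually scored with the 3CosAdd rule from the Word2vec paper: the answer to "a is to b as a2 is to ?" is the vocabulary word whose vector is most cosine-similar to b - a + a2, excluding the three question words. A toy NumPy sketch with made-up 2-D embeddings (the vectors are illustrative, not trained):

```python
import numpy as np

def solve_analogy(a, b, a2, embeddings):
    """3CosAdd: return the word most cosine-similar to b - a + a2."""
    target = embeddings[b] - embeddings[a] + embeddings[a2]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, a2):
            continue  # the standard protocol excludes the question words
        sim = float(vec @ target) / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Made-up 2-D vectors: first axis roughly "maleness", second axis "royalty".
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.1, 0.0]),
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([0.1, 1.0]),
    "apple": np.array([0.5, -0.5]),
}
```

Accuracy on an analogy corpus is then simply the fraction of questions for which the returned word matches the gold answer.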
Most of the proposed evaluation schemes are based on the word analogies that were presented in @cite_5 for English. However, there are several publications about other languages as well. For the Arabic language @cite_24 , a benchmark has been created that can be utilized to perform intrinsic evaluation of different word embeddings. It consists of nine relations, each consisting of over 100 word pairs. Next, an evaluation analogy corpus has been proposed for Croatian @cite_15 , which consists of two groups of analogy questions, one for semantic analogies and one for syntactic ones, as in the original corpus presented in @cite_5 for the English language. The semantic questions are divided into 9 categories, each having around 20-100 question pairs, while the syntactic part of the corpus is divided into 14 categories. Moreover, research on the evaluation of word embeddings has been published for the Polish language @cite_20 as well as Czech @cite_23 . To the best of our knowledge, no work has been done so far on the evaluation of word embeddings produced from Greek text data.
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2740654950", "2501208544", "2153579005", "2767680406", "2775005957" ], "abstract": [ "", "The word embedding methods have been proven to be very useful in many tasks of NLP (Natural Language Processing). Much has been investigated about word embeddings of English words and phrases, but only little attention has been dedicated to other languages.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Croatian is poorly resourced and highly inflected language from Slavic language family. Nowadays, research is focusing mostly on English. We created a new word analogy corpus based on the original English Word2vec word analogy corpus and added some of the specific linguistic aspects from Croatian language. Next, we created Croatian WordSim353 and RG65 corpora for a basic evaluation of word similarities. We compared created corpora on two popular word representation models, based on Word2Vec tool and fastText tool. 
Models has been trained on 1.37B tokens training data corpus and tested on a new robust Croatian word analogy corpus. Results show that models are able to create meaningful word representation. This research has shown that free word order and the higher morphological complexity of Croatian language influences the quality of resulting word embeddings.", "Testing word embeddings for Polish Distributional Semantics postulates the representation of word meaning in the form of numeric vectors which represent words which occur in context in large text data. This paper addresses the problem of constructing such models for the Polish language. The paper compares the effectiveness of models based on lemmas and forms created with Continuous Bag of Words (CBOW) and skip-gram approaches based on different Polish corpora. For the purposes of this comparison, the results of two typical tasks solved with the help of distributional semantics, i.e. synonymy and analogy recognition, are compared. The results show that it is not possible to identify one universal approach to vector creation applicable to various tasks. The most important feature is the quality and size of the data, but different strategy choices can also lead to significantly different results. Testowanie wektorowych reprezentacji dystrybucyjnych slow jezyka polskiego Semantyka dystrybucyjna opiera sie na zalozeniu, ze znaczenie slow wyrazone jest za pomocą wektorow reprezentujących, w sposob bezpośredni bądź pośredni, konteksty, w jakich slowo to jest uzywane w duzym zbiorze tekstow. Niniejszy artykul dotyczy ewaluacji wielu takich modeli skonstruowanych dla jezyka polskiego. W pracy porownano skutecznośc modeli opartych na lematach i formach slow, utworzonych przy wykorzystaniu sieci neuronowych na danych z dwoch roznych korpusow jezyka polskiego. Ewaluacji dokonano na podstawie wynikow dwoch typowych zadan rozwiązywanych za pomocą metod semantyki dystrybucyjnej, tzn. 
rozpoznania wystepowania synonimii i analogii miedzy konkretnymi parami slow. Uzyskane wyniki dowodzą, ze nie mozna wskazac jednego uniwersalnego podejścia do tworzenia modeli dystrybucyjnych, gdyz ich skutecznośc jest rozna w zalezności od zastosowania. Najwazniejszą cechą wplywającą na jakośc modelu jest jakośc oraz rozmiar danych, ale wybory roznych strategii uczenia sieci mogą rowniez prowadzic do istotnie odmiennych wynikow." ] }
1904.04126
2952397892
We consider message-efficient continuous random sampling from a distributed stream, where the probability of inclusion of an item in the sample is proportional to a weight associated with the item. The unweighted version, where all weights are equal, is well studied and admits tight upper and lower bounds on message complexity. For weighted sampling with replacement, there is a simple reduction to unweighted sampling with replacement. However, in many applications the stream has only a few heavy items, which may dominate a random sample when chosen with replacement. Weighted sampling without replacement (weighted SWOR) eludes this issue, since such heavy items can be sampled at most once. In this work, we present the first message-optimal algorithm for weighted SWOR from a distributed stream. Our algorithm also has optimal space and time complexity. As an application of our algorithm for weighted SWOR, we derive the first distributed streaming algorithms for tracking residual heavy hitters. Here the goal is to identify stream items that contribute significantly to the residual stream, once the heaviest items are removed. Residual heavy hitters generalize the notion of @math heavy hitters and are important in streams that have a skewed distribution of weights. In addition to the upper bound, we also provide a lower bound on the message complexity that is nearly tight up to a @math factor. Finally, we use our weighted sampling algorithm to improve the message complexity of distributed @math tracking, also known as count tracking, which is a widely studied problem in distributed streaming. We also derive a tight message lower bound, which settles the message complexity of this fundamental problem.
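The standard centralized building block for weighted SWOR (not the paper's message-optimal distributed protocol) is the Efraimidis-Spirakis scheme: each item of weight w draws u uniform in (0, 1), receives the key u**(1/w), and the k largest keys form the sample, so a heavy item is very likely included but can appear at most once. A sketch:

```python
import heapq
import random

def weighted_swor(stream, k, rng=None):
    """Weighted sampling without replacement (Efraimidis-Spirakis A-Res):
    keep the k items with the largest keys u**(1/weight)."""
    rng = rng or random.Random()
    heap = []  # min-heap of (key, value); the root is the current threshold
    for value, weight in stream:
        key = rng.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, value))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, value))  # evict the smallest key
    return [v for _, v in heap]
```

One pass, O(k) space, and each item is sampled at most once regardless of its weight, which is exactly the property that with-replacement sampling lacks on skewed streams.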
Random sampling from a stream is a fundamental problem and there has been substantial prior work on it. The reservoir sampling algorithm (attributed to Waterman @cite_15 ) has been known since the 1960s. There has been much follow-up work on reservoir sampling including methods for speeding up reservoir sampling @cite_6 , sampling over a sliding window @cite_25 @cite_20 @cite_1 @cite_14 @cite_17 , and sampling from distinct elements in data @cite_0 @cite_31 .
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_1", "@cite_6", "@cite_0", "@cite_15", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2134786002", "2082553115", "2119163494", "2119885577", "2112400233", "", "", "1990465412", "2036035006" ], "abstract": [ "Estimating the number of distinct values is a well-studied problem, due to its frequent occurrence in queries and its importance in selecting good query plans. Previous work has shown powerful negative results on the quality of distinct-values estimates based on sampling (or other techniques that examine only part of the input data). We present an approach, called distinct sampling, that collects a specially tailored sample over the distinct values in the input, in a single scan of the data. In contrast to the previous negative results, our small Distinct Samples are guaranteed to accurately estimate the number of distinct values. The samples can be incrementally maintained up-to-date in the presence of data insertions and deletions, with minimal time and memory overheads, so that the full scan may be performed only once. Moreover, a stored Distinct Sample can be used to accurately estimate the number of distinct values within any range specified by the query, or within any other subset of the data satisfying a query predicate. We present an extensive experimental study of distinct sampling. Using synthetic and real-world data sets, we show that distinct sampling gives distinct-values estimates to within 0%–10% relative error, whereas previous methods typically incur 50%–250% relative error. Next, we show how distinct sampling can provide fast, highly accurate approximate answers for “report” queries in high-volume, session-based event recording environments, such as IP networks, customer service call centers, etc. 
For a commercial call center environment, we show that a 1% Distinct Sample", "We introduce the problem of sampling from a moving window of recent items from a data stream and develop two algorithms for this problem. The first algorithm, \"chain-sample\", extends reservoir sampling to deal with the expiration of data elements from the sample. The expected memory usage of our algorithm is O(k) when maintaining a sample of size k over a window of the n most recent elements from the data stream, and with high probability the algorithm requires no more than O(k log n) memory. When the number of elements in the window is variable, as is the case when the size of the window is defined as a time duration rather than as a fixed number of data elements, the sampling problem becomes harder. Our second algorithm, \"priority-sample\", works even when the number of elements in the window can vary dynamically over time. With high probability, the \"priority-sample\" algorithm uses no more than O(k log n) memory.", "We study the problem of maintaining a sketch of recent elements of a data stream. Motivated by applications involving network data, we consider streams that are asynchronous, in which the observed order of data is not the same as the time order in which the data was generated. The notion of recent elements of a stream is modeled by the sliding timestamp window, which is the set of elements with timestamps that are close to the current time. We design algorithms for maintaining sketches of all elements within the sliding timestamp window that can give provably accurate estimates of two basic aggregates, the sum and the median, of a stream of numbers. The space taken by the sketches, the time needed for querying the sketch, and the time for inserting new elements into the sketch are all polylogarithmic with respect to the maximum window size. 
Our sketches can be easily combined in a lossless and compact way, making them useful for distributed computations over data streams. Previous works on sketching recent elements of a data stream have all considered the more restrictive scenario of synchronous streams, where the observed order of data is the same as the time order in which the data was generated. Our notion of recency of elements is more general than that studied in previous work, and thus our sketches are more robust to network delays and asynchrony.", "We introduce fast algorithms for selecting a random sample of n records without replacement from a pool of N records, where the value of N is unknown beforehand. The main result of the paper is the design and analysis of Algorithm Z; it does the sampling in one pass using constant space and in O(n(1 + log(N/n))) expected time, which is optimum, up to a constant factor. Several optimizations are studied that collectively improve the speed of the naive version of the algorithm by an order of magnitude. We give an efficient Pascal-like implementation that incorporates these modifications and that is suitable for general use. Theoretical and empirical results indicate that Algorithm Z outperforms current methods by a significant margin.", "Massive data sets often arise as physically distributed, parallel data streams. We present algorithms for estimating simple functions on the union of such data streams, while using only logarithmic space per stream. Each processor observes only its own stream, and communicates with the other processors only after observing its entire stream. This models the set-up in current network monitoring products. Our algorithms employ a novel coordinated sampling technique to extract a sample of the union; this sample can be used to estimate aggregate functions on the union. 
The technique can also be used to estimate aggregate functions over the distinct “labels” in one or more data streams, e.g., to determine the zeroth frequency moment (i.e., the number of distinct labels) in one or more data streams. Our space and time bounds are the best known for these problems, and our logarithmic space bounds for coordinated sampling contrast with polynomial lower bounds for independent sampling. We relate our distributed streams model to previously studied non-distributed (i.e., merged) streams models, presenting tight bounds on the gap between the distributed and merged models for deterministic algorithms.", "", "", "This paper presents algorithms for estimating aggregate functions over a \"sliding window\" of the N most recent data items in one or more streams. Our results include: For a single stream, we present the first ε-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first ε-approximation scheme for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. In contrast, we show that any deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires Ω(N) space. We present the first randomized (ε, δ)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (ε, δ)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. 
Our results are obtained using a novel family of synopsis data structures.", "Random sampling is an appealing approach to build synopses of large data streams because random samples can be used for a broad spectrum of analytical tasks. Users are often interested in analyzing only the most recent fraction of the data stream in order to avoid outdated results. In this paper, we focus on sampling schemes that sample from a sliding window over a recent time interval; such windows are a popular and highly comprehensible method to model recency. In this setting, the main challenge is to guarantee an upper bound on the space consumption of the sample while using the allotted space efficiently at the same time. The difficulty arises from the fact that the number of items in the window is unknown in advance and may vary significantly over time, so that the sampling fraction has to be adjusted dynamically. We consider uniform sampling schemes, which produce each sample of the same size with equal probability, and stratified sampling schemes, in which the window is divided into smaller strata and a uniform sample is maintained per stratum. For uniform sampling, we prove that it is impossible to guarantee a minimum sample size in bounded space. We then introduce a novel sampling scheme called bounded priority sampling (BPS), which requires only bounded space. We derive a lower bound on the expected sample size and show that BPS quickly adapts to changing data rates. For stratified sampling, we propose a merge-based stratification scheme (MBS), which maintains strata of approximately equal size. Compared to naive stratification, MBS has the advantage that the sample is evenly distributed across the window, so that no part of the window is over- or underrepresented. We conclude the paper with a feasibility study of our algorithms on large real-world datasets." ] }
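The sampling abstracts above (Vitter's Algorithm Z, chain-sample, priority-sample) all build on the same invariant: after seeing n items, each item is in the sample with probability k/n. A minimal Python sketch of the basic one-pass scheme (Vitter's Algorithm R; the skip-ahead optimizations that make Algorithm Z run in O(k(1 + log(N/k))) expected time are omitted here) illustrates the idea:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of k items from a stream of unknown length.

    Basic Algorithm R: item i (0-indexed) is kept with probability k/(i+1),
    replacing a uniformly chosen slot, which preserves uniformity inductively.
    """
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Keep item i with probability k/(i+1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Chain-sample and priority-sample extend this invariant to sliding windows, where sampled items can also expire and must be replaced.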
1904.04126
2952397892
We consider message-efficient continuous random sampling from a distributed stream, where the probability of inclusion of an item in the sample is proportional to a weight associated with the item. The unweighted version, where all weights are equal, is well studied, and admits tight upper and lower bounds on message complexity. For weighted sampling with replacement, there is a simple reduction to unweighted sampling with replacement. However, in many applications the stream has only a few heavy items which may dominate a random sample when chosen with replacement. Weighted sampling without replacement (weighted SWOR) eludes this issue, since such heavy items can be sampled at most once. In this work, we present the first message-optimal algorithm for weighted SWOR from a distributed stream. Our algorithm also has optimal space and time complexity. As an application of our algorithm for weighted SWOR, we derive the first distributed streaming algorithms for tracking residual heavy hitters. Here the goal is to identify stream items that contribute significantly to the residual stream, once the heaviest items are removed. Residual heavy hitters generalize the notion of @math heavy hitters and are important in streams that have a skewed distribution of weights. In addition to the upper bound, we also provide a lower bound on the message complexity that is nearly tight up to a @math factor. Finally, we use our weighted sampling algorithm to improve the message complexity of distributed @math tracking, also known as count tracking, which is a widely studied problem in distributed streaming. We also derive a tight message lower bound, which closes the message complexity of this fundamental problem.
The sequential version of weighted reservoir sampling was considered by Efraimidis and Spirakis @cite_34 , who presented a one-pass @math algorithm for weighted SWOR. @cite_28 presented another sequential algorithm for weighted SWOR, using a reduction to sampling with replacement through a "cascade sampling" algorithm. Unweighted random sampling from distributed streams has been considered in prior works @cite_5 @cite_10 @cite_33 , which have yielded matching upper and lower bounds on sampling without replacement. Continuous random sampling for distinct elements from a data stream in a distributed setting has been considered in @cite_4 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_28", "@cite_5", "@cite_34", "@cite_10" ], "mid": [ "1593168739", "", "", "2139076222", "1982682305", "2346474025" ], "abstract": [ "We consider continuous maintenance of a random sample of distinct elements from a massive data stream, whose input elements are observed at multiple distributed sites that communicate via a central coordinator. At any point, when a query is received at the coordinator, it responds with a random sample from the set of all distinct elements observed at the different sites so far. We present the first algorithms for distinct random sampling from a distributed stream. We also present a lower bound on the expected number of messages that must be transmitted by any distributed algorithm, showing that our algorithm is message optimal to within a factor of four. We present extensions to sliding windows, and experimental results showing the performance of our algorithm on real-world data sets.", "", "", "A fundamental problem in data management is to draw and maintain a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The main challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol on the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this article, we present communication-efficient protocols for continuously maintaining a sample (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most recent elements, or arrivals within the last w time units. 
We show that our protocols are optimal (up to logarithmic factors), not just in terms of the communication used, but also the time and space costs for each participant.", "In this work, a new algorithm for drawing a weighted random sample of size m from a population of n weighted items, where m ≤ n, is presented. The algorithm can generate a weighted random sample in one-pass over unknown populations.", "We present a simple, message-optimal algorithm for maintaining a random sample from a large data stream whose input elements are distributed across multiple sites that communicate via a central coordinator. At any point in time, the set of elements held by the coordinator represent a uniform random sample from the set of all the elements observed so far. When compared with prior work, our algorithms asymptotically improve the total number of messages sent in the system. We present a matching lower bound, showing that our protocol sends the optimal number of messages up to a constant factor with large probability. We also consider the important case when the distribution of elements across different sites is non-uniform, and show that for such inputs, our algorithm significantly outperforms prior solutions." ] }
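The one-pass Efraimidis–Spirakis scheme cited above (@cite_34) is compact enough to sketch: each item draws the key u^(1/w) with u uniform in (0, 1), and the m items with the largest keys form the weighted sample without replacement. This is only the sequential building block, not the message-optimal distributed protocol of the paper itself:

```python
import heapq
import random

def weighted_swor(stream, m, rng=random):
    """One-pass weighted sampling without replacement (Efraimidis-Spirakis).

    stream yields (value, weight) pairs with weight > 0. Each item draws the
    key u**(1/weight); keeping the m largest keys yields a weighted SWOR
    sample, so heavy items are favored but appear at most once.
    """
    heap = []  # min-heap of (key, tiebreak, value); smallest key evicted first
    for i, (value, weight) in enumerate(stream):
        key = rng.random() ** (1.0 / weight)
        if len(heap) < m:
            heapq.heappush(heap, (key, i, value))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, i, value))
    return [value for _, _, value in heap]
```

With a single item of overwhelming weight among many unit-weight items, that item is selected with probability essentially one, yet never more than once, which is exactly the property the surrounding text contrasts with sampling with replacement.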
1904.04126
2952397892
We consider message-efficient continuous random sampling from a distributed stream, where the probability of inclusion of an item in the sample is proportional to a weight associated with the item. The unweighted version, where all weights are equal, is well studied, and admits tight upper and lower bounds on message complexity. For weighted sampling with replacement, there is a simple reduction to unweighted sampling with replacement. However, in many applications the stream has only a few heavy items which may dominate a random sample when chosen with replacement. Weighted sampling without replacement (weighted SWOR) eludes this issue, since such heavy items can be sampled at most once. In this work, we present the first message-optimal algorithm for weighted SWOR from a distributed stream. Our algorithm also has optimal space and time complexity. As an application of our algorithm for weighted SWOR, we derive the first distributed streaming algorithms for tracking residual heavy hitters. Here the goal is to identify stream items that contribute significantly to the residual stream, once the heaviest items are removed. Residual heavy hitters generalize the notion of @math heavy hitters and are important in streams that have a skewed distribution of weights. In addition to the upper bound, we also provide a lower bound on the message complexity that is nearly tight up to a @math factor. Finally, we use our weighted sampling algorithm to improve the message complexity of distributed @math tracking, also known as count tracking, which is a widely studied problem in distributed streaming. We also derive a tight message lower bound, which closes the message complexity of this fundamental problem.
There has been a significant body of research on algorithms and lower bounds in the continuous distributed streaming model. This includes algorithms for frequency moments @cite_23 @cite_29 , entropy estimation @cite_18 @cite_36 , heavy hitters and quantiles @cite_24 , distributed counts @cite_12 , and lower bounds on various statistical and graph aggregates @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_29", "@cite_3", "@cite_24", "@cite_23", "@cite_12" ], "mid": [ "1521238083", "2260790865", "2107443258", "2570634592", "1997642935", "2123430048", "2045480089" ], "abstract": [ "The notion of distributed functional monitoring was recently introduced by Cormode, Muthukrishnan and Yi to initiate a formal study of the communication cost of certain fundamental problems arising in distributed systems, especially sensor networks. In this model, each of k sites reads a stream of tokens and is in communication with a central coordinator, who wishes to continuously monitor some function f of σ, the union of the k streams. The goal is to minimize the number of bits communicated by a protocol that correctly monitors f(σ), to within some small error. As in previous work, we focus on a threshold version of the problem, where the coordinator's task is simply to maintain a single output bit, which is 0 whenever f(σ) ≤ τ(1 − ε) and 1 whenever f(σ) ≥ τ. Following prior work, we term this the (k, f, τ, ε) functional monitoring problem. In previous work, some upper and lower bounds were obtained for this problem, with f being a frequency moment function, e.g., F0, F1, F2. Importantly, these functions are monotone. Here, we further advance the study of such problems, proving three new classes of results. First, we provide nontrivial monitoring protocols when f is either H, the empirical Shannon entropy of a stream, or any of a related class of entropy functions (Tsallis entropies). These are the first nontrivial algorithms for distributed monitoring of non-monotone functions. Second, we study the effect of non-monotonicity of f on our ability to give nontrivial monitoring protocols, by considering f = Fp with deletions allowed, as well as f = H. 
Third, we prove new lower bounds on this problem when f = Fp, for several values of p.", "Modern data management systems often need to deal with massive, dynamic and inherently distributed data sources. We collect the data using a distributed network, and at the same time try to maintain a global view of the data at a central coordinator using a minimal amount of communication. Such applications have been captured by the distributed monitoring model which has attracted a lot of attention in recent years. In this paper we investigate the monitoring of the entropy functions, which are very useful in network monitoring applications such as detecting distributed denial-of-service attacks. Our results improve the previous best results in ICALP (1): 95–106 (2009). Our technical contribution also includes implementing the celebrated AMS sampling method (J Comput Syst Sci 58(1): 137–147, 1999) in the distributed monitoring model, which could be of independent interest.", "Emerging large-scale monitoring applications require continuous tracking of complex data-analysis queries over collections of physically-distributed streams. Effective solutions have to be simultaneously space/time efficient (at each remote monitor site), communication efficient (across the underlying communication network), and provide continuous, guaranteed-quality approximate query answers. In this paper, we propose novel algorithmic solutions for the problem of continuously tracking a broad class of complex aggregate queries in such a distributed-streams setting. Our tracking schemes maintain approximate query answers with provable error guarantees, while simultaneously optimizing the storage space and processing time at each remote site, and the communication cost across the network. 
They rely on tracking general-purpose randomized sketch summaries of local streams at remote sites along with concise prediction models of local site behavior in order to produce highly communication- and space/time-efficient solutions. The result is a powerful approximate query tracking framework that readily incorporates several complex analysis queries (including distributed join and multi-join aggregates, and approximate wavelet representations), thus giving the first known low-overhead tracking solution for such queries in the distributed-streams model.", "We consider a number of fundamental statistical and graph problems in the message-passing model, where we have k machines (sites), each holding a piece of data, and the machines want to jointly solve a problem defined on the union of the k data sets. The communication is point-to-point, and the goal is to minimize the total communication among the k machines. This model captures all point-to-point distributed computational models with respect to minimizing communication costs. Our analysis shows that exact computation of many statistical and graph problems in this distributed setting requires a prohibitively large amount of communication, and often one cannot improve upon the communication of the simple protocol in which all machines send their data to a centralized server. Thus, in order to obtain protocols that are communication-efficient, one has to allow approximation, or investigate the distribution or layout of the data sets.", "We consider the problem of tracking heavy hitters and quantiles in the distributed streaming model. The heavy hitters and quantiles are two important statistics for characterizing a data distribution. Let A be a multiset of elements, drawn from the universe U = {1, …, u}. 
For a given 0 ≤ φ ≤ 1, the φ-heavy hitters are those elements of A whose frequency in A is at least φ|A|; the φ-quantile of A is an element x of U such that at most φ|A| elements of A are smaller than x and at most (1 − φ)|A| elements of A are greater than x. Suppose the elements of A are received at k remote sites over time, and each of the sites has a two-way communication channel to a designated coordinator, whose goal is to track the set of φ-heavy hitters and the φ-quantile of A approximately at all times with minimum communication. We give tracking algorithms with worst-case communication cost O((k/ε)·log n) for both problems, where n is the total number of items in A, and ε is the approximation error. This substantially improves upon the previous known algorithms. We also give matching lower bounds on the communication costs for both problems, showing that our algorithms are optimal. We also consider a more general version of the problem where we simultaneously track the φ-quantiles for all 0 ≤ φ ≤ 1.", "We study what we call functional monitoring problems. We have k players each tracking their inputs, say player i tracking a multiset Ai(t) up until time t, and communicating with a central coordinator. The coordinator's task is to monitor a given function f computed over the union of the inputs ∪iAi(t), continuously at all times t. The goal is to minimize the number of bits communicated between the players and the coordinator. A simple example is when f is the sum, and the coordinator is required to alert when the sum of a distributed set of values exceeds a given threshold τ. Of interest is the approximate version where the coordinator outputs 1 if f ≥ τ and 0 if f ≤ (1 − ε)τ. This defines the (k, f, τ, ε) distributed, functional monitoring problem. 
Functional monitoring problems are fundamental in distributed systems, in particular sensor networks, where we must minimize communication; they also connect to problems in communication complexity, communication theory, and signal processing. Yet few formal bounds are known for functional monitoring. We give upper and lower bounds for the (k, f, τ, ε) problem for some of the basic f's. In particular, we study frequency moments (F0, F1, F2). For F0 and F1, we obtain continuously monitoring algorithms with costs almost the same as their one-shot computation algorithms. However, for F2 the monitoring problem seems much harder. We give a carefully constructed multi-round algorithm that uses \"sketch summaries\" at multiple levels of detail and solves the (k, F2, τ, ε) problem with communication O(k^2/ε + (√k/ε)^3). Since frequency moment estimation is central to other problems, our results have immediate applications to histograms, wavelet computations, and others. Our algorithmic techniques are likely to be useful for other functional monitoring problems as well.", "We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter ni that gets incremented over time, and the goal is to track an ε-approximation of their sum n = ∑i ni continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is Θ((k/ε)·log N), where N is the final value of n when the tracking finishes, we show that with randomization, the communication cost can be reduced to Θ((√k/ε)·log N). Our algorithm is simple and uses only O(1) space at each player, while the lower bound holds even assuming each player has infinite computing power. 
Then, we extend our techniques to two related distributed tracking problems: frequency-tracking and rank-tracking, and obtain similar improvements over previous deterministic algorithms. Both problems are of central importance in large data monitoring and analysis, and have been extensively studied in the literature." ] }
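As context for the count-tracking bounds quoted above, the deterministic Θ((k/ε)·log N) upper bound comes from a very simple idea: each site stays silent until its local count grows by a (1 + ε) factor over the last value it reported, so each site sends O(log N / ε) messages and the coordinator's sum is always within a (1 + ε) factor of the truth. A hedged Python sketch of that baseline (class and method names are illustrative, not from the cited papers; the randomized √k improvement is not shown):

```python
class Coordinator:
    """Holds the last reported count per site; the estimate is their sum."""
    def __init__(self):
        self.last = {}      # site_id -> last reported local count
        self.messages = 0

    def update(self, site_id, count):
        self.last[site_id] = count
        self.messages += 1

    def estimate(self):
        return sum(self.last.values())


class Site:
    """Reports to the coordinator only when its count grows by a (1+eps) factor."""
    def __init__(self, coordinator, site_id, eps):
        self.coordinator = coordinator
        self.site_id = site_id
        self.eps = eps
        self.count = 0
        self.reported = 0

    def increment(self):
        self.count += 1
        if self.count > (1 + self.eps) * self.reported:
            self.coordinator.update(self.site_id, self.count)
            self.reported = self.count
```

Since each site guarantees count ≤ (1 + ε)·reported at all times, the coordinator's estimate E satisfies n/(1 + ε) ≤ E ≤ n, where n is the true total.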
1904.04144
2953405240
Depth estimation from a single image represents a fascinating, yet challenging problem with countless applications. Recent works proved that this task could be learned without direct supervision from ground truth labels leveraging image synthesis on sequences or stereo pairs. Focusing on this second case, in this paper we leverage stereo matching in order to improve monocular depth estimation. To this aim we propose monoResMatch, a novel deep architecture designed to infer depth from a single input image by synthesizing features from a different point of view, horizontally aligned with the input image, performing stereo matching between the two cues. In contrast to previous works sharing this rationale, our network is the first trained end-to-end from scratch. Moreover, we show how obtaining proxy ground truth annotation through traditional stereo algorithms, such as Semi-Global Matching, enables more accurate monocular depth estimation still countering the need for expensive depth labels by keeping a self-supervised approach. Exhaustive experimental results prove how the synergy between i) the proposed monoResMatch architecture and ii) proxy-supervision attains state-of-the-art for self-supervised monocular depth estimation. The code is publicly available at this https URL.
Finally, relevant to our work is Single View Stereo matching (SVS) @cite_46 , processing a single image to obtain a second synthetic view using Deep3D @cite_66 and then computing a disparity map between the two using DispNetC @cite_54 . However, these two architectures are trained independently. Moreover, DispNetC is supervised with ground truth labels from synthetic @cite_54 and real domains @cite_2 . Differently, the framework we are going to introduce requires no ground truth at all and is elegantly trained in an end-to-end manner, outperforming SVS by a notable margin.
{ "cite_N": [ "@cite_46", "@cite_54", "@cite_66", "@cite_2" ], "mid": [ "", "2259424905", "2336968928", "1921093919" ], "abstract": [ "", "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.", "As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations.", "This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. 
Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods." ] }
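The self-supervised stereo losses discussed in this record reduce, at their core, to one operation: reconstruct one view by horizontally resampling the other with the predicted disparity, then penalize the photometric difference, so no depth labels are needed. A minimal NumPy sketch of that warping step (function names and the plain linear-interpolation/L1 choices are illustrative assumptions, not the implementation of any cited paper):

```python
import numpy as np

def warp_right_to_left(right, disparity):
    """Reconstruct the left view by sampling the right image at x - d(x, y).

    right:      H x W float image
    disparity:  H x W left-view disparity in pixels
    Uses linear interpolation; out-of-range samples clamp to the border.
    """
    h, w = right.shape
    xs = np.arange(w)[None, :] - disparity          # source x-coordinates
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    frac = np.clip(xs - x0, 0.0, 1.0)
    rows = np.arange(h)[:, None]
    return (1.0 - frac) * right[rows, x0] + frac * right[rows, x1]

def photometric_l1(left, right, disparity):
    """Self-supervision signal: mean L1 between the left image and its reconstruction."""
    return float(np.abs(left - warp_right_to_left(right, disparity)).mean())
```

When the predicted disparity is correct, the reconstruction matches the real left image and the loss vanishes; gradients of this loss with respect to the disparity are what train the network in the stereo-supervised setting.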
1904.04144
2953405240
Depth estimation from a single image represents a fascinating, yet challenging problem with countless applications. Recent works proved that this task could be learned without direct supervision from ground truth labels leveraging image synthesis on sequences or stereo pairs. Focusing on this second case, in this paper we leverage stereo matching in order to improve monocular depth estimation. To this aim we propose monoResMatch, a novel deep architecture designed to infer depth from a single input image by synthesizing features from a different point of view, horizontally aligned with the input image, performing stereo matching between the two cues. In contrast to previous works sharing this rationale, our network is the first trained end-to-end from scratch. Moreover, we show how obtaining proxy ground truth annotation through traditional stereo algorithms, such as Semi-Global Matching, enables more accurate monocular depth estimation still countering the need for expensive depth labels by keeping a self-supervised approach. Exhaustive experimental results prove how the synergy between i) the proposed monoResMatch architecture and ii) proxy-supervision attains state-of-the-art for self-supervised monocular depth estimation. The code is publicly available at this https URL.
Since for most tasks ground truth labels are difficult and expensive to source, some works recently enquired about the possibility to replace them with easier to obtain proxy labels. Tonioni et al. @cite_43 proposed to adapt deep stereo networks to unseen environments leveraging traditional stereo algorithms and confidence measures @cite_31 , Tosi et al. @cite_59 learned confidence estimation selecting positive and negative matches by means of traditional confidence measures, Makansi et al. @cite_44 and Liu et al. @cite_8 generated proxy labels for training optical flow networks using conventional methods. Specifically relevant to monocular depth estimation are the works proposed by Yang et al. @cite_35 , using stereo visual odometry to train monocular depth estimation, by Klodt and Vedaldi @cite_45 , leveraging structure from motion algorithms and by Guo et al. @cite_15 , obtaining labels from a deep network trained with supervision to infer disparity maps from stereo pairs.
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_15", "@cite_44", "@cite_43", "@cite_45", "@cite_59", "@cite_31" ], "mid": [ "2830339951", "", "2886322387", "2886245013", "2779124836", "2895192073", "2894436088", "2777632355" ], "abstract": [ "Monocular visual odometry approaches that purely rely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. In this paper, we propose to leverage deep monocular depth prediction to overcome limitations of geometry-based monocular visual odometry. To this end, we incorporate deep depth predictions into Direct Sparse Odometry (DSO) as direct virtual stereo measurements. For depth prediction, we design a novel deep network that refines predicted depth from a single image in a two-stage process. We train our network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from Stereo DSO. Our deep predictions excel state-of-the-art approaches for monocular depth on the KITTI benchmark. Moreover, our Deep Virtual Stereo Odometry clearly exceeds previous monocular and deep-learning based methods in accuracy. It even achieves comparable performance to the state-of-the-art stereo methods, while only relying on a single camera.", "", "Monocular depth estimation aims at estimating a pixelwise depth map for a single image, which has wide applications in scene understanding and autonomous driving. Existing supervised and unsupervised methods face great challenges. Supervised methods require large amounts of depth measurement data, which are generally difficult to obtain, while unsupervised methods are usually limited in estimation accuracy. Synthetic data generated by graphics engines provide a possible solution for collecting large amounts of depth data. However, the large domain gaps between synthetic and realistic data make directly training with them challenging. 
In this paper, we propose to use the stereo matching network as a proxy to learn depth from synthetic data and use predicted stereo disparity maps for supervising the monocular depth estimation network. Cross-domain synthetic data could be fully utilized in this novel framework. Different strategies are proposed to ensure learned depth perception capability well transferred across different domains. Our extensive experiments show state-of-the-art results of monocular depth estimation on KITTI dataset.", "Recent work has shown that convolutional neural networks (CNNs) can be used to estimate optical flow with high quality and fast runtime. This makes them preferable for real-world applications. However, such networks require very large training datasets. Engineering the training data is difficult and or laborious. This paper shows how to augment a network trained on an existing synthetic dataset with large amounts of additional unlabelled data. In particular, we introduce a selection mechanism to assemble from multiple estimates a joint optical flow field, which outperforms that of all input methods. The latter can be used as proxy-ground-truth to train a network on real-world data and to adapt it to specific domains of interest. Our experimental results show that the performance of networks improves considerably, both, in cross-domain and in domain-specific scenarios. As a consequence, we obtain state-of-the-art results on the KITTI benchmarks.", "Recent ground-breaking works have shown that deep neural networks can be trained end-to-end to regress dense disparity maps directly from image pairs. Computer generated imagery is deployed to gather the large data corpus required to train such networks, an additional fine-tuning allowing to adapt the model to work well also on real and possibly diverse environments. Yet, besides a few public datasets such as Kitti, the ground-truth needed to adapt the network to a new scenario is hardly available in practice. 
In this paper we propose a novel unsupervised adaptation approach that enables to fine-tune a deep learning stereo model without any ground-truth information. We rely on off-the-shelf stereo algorithms together with state-of-the-art confidence measures, the latter able to ascertain upon correctness of the measurements yielded by former. Thus, we train the network based on a novel loss-function that penalizes predictions disagreeing with the highly confident disparities provided by the algorithm and enforces a smoothness constraint. Experiments on popular datasets (KITTI 2012, KITTI 2015 and Middlebury 2014) and other challenging test images demonstrate the effectiveness of our proposal.", "Recent work has demonstrated that it is possible to learn deep neural networks for monocular depth and ego-motion estimation from unlabelled video sequences, an interesting theoretical development with numerous advantages in applications. In this paper, we propose a number of improvements to these approaches. First, since such self-supervised approaches are based on the brightness constancy assumption, which is valid only for a subset of pixels, we propose a probabilistic learning formulation where the network predicts distributions over variables rather than specific values. As these distributions are conditioned on the observed image, the network can learn which scene and object types are likely to violate the model assumptions, resulting in more robust learning. We also propose to build on dozens of years of experience in developing handcrafted structure-from-motion (SFM) algorithms. We do so by using an off-the-shelf SFM system to generate a supervisory signal for the deep neural network. 
While this signal is also noisy, we show that our probabilistic formulation can learn and account for the defects of SFM, helping to integrate different sources of information and boosting the overall performance of the network.", "", "Confidence measures aim at detecting unreliable depth measurements and play an important role for many purposes and in particular, as recently shown, to improve stereo accuracy. This topic has been thoroughly investigated by Hu and Mordohai in 2010 (and 2012), considering 17 confidence measures and two local algorithms on the two datasets available at that time. However, since then major breakthroughs have happened in this field: the availability of much larger and more challenging datasets, novel and more effective stereo algorithms including ones based on deep learning, and confidence measures leveraging machine learning techniques. Therefore, this paper aims at providing an exhaustive and updated review and quantitative evaluation of 52 (actually, 76 considering variants) state-of-the-art confidence measures - focusing on recent ones mostly based on random forests and deep learning - with three algorithms on the challenging datasets available today. Moreover, we deal with problems inherently induced by learning-based confidence measures. How are these methods able to generalize to new data? How does specific training improve their effectiveness? How can more effective confidence measures actually improve overall stereo accuracy?" ] }
1904.03620
2934890652
Sketching is more fundamental to human cognition than speech. Deep Neural Networks (DNNs) have achieved the state-of-the-art in speech-related tasks but have not made significant progress in generating stroke-based sketches, a.k.a. sketches in vector format. Though there are Variational Auto-Encoders (VAEs) for generating sketches in vector format, there is no Generative Adversarial Network (GAN) architecture for the same. In this paper, we propose a standalone GAN architecture, SkeGAN, and a VAE-GAN architecture, VASkeGAN, for sketch generation in vector format. SkeGAN is a stochastic policy in Reinforcement Learning (RL), capable of generating both multidimensional continuous and discrete outputs. VASkeGAN hybridizes a VAE and a GAN, in order to couple the efficient representation of data by a VAE with the powerful generative capabilities of a GAN, to produce visually appealing sketches. We also propose a new metric called the Ske-score which quantifies the quality of vector sketches. We have validated that SkeGAN and VASkeGAN generate visually appealing sketches using a Human Turing Test and the Ske-score.
There are very few approaches to sketch generation that use stochastic techniques such as Hidden Markov Models (HMMs) @cite_15 , and others that use pure image processing techniques such as @cite_3 . There is a considerable body of DL work relating to human-drawn sketches in general, such as recognition @cite_23 @cite_7 @cite_13 , eye-fixation or saliency @cite_19 , guessing a sketch being drawn @cite_12 and parsing @cite_27 . Specifically, @cite_24 uses GANs to generate sketches of human faces given digital portraits of their faces. There are also works such as @cite_0 @cite_4 which discuss approaches to convert rasterized sketches into realistic images. One commonality amongst all of these is that they work with sketches in the raster format.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_24", "@cite_19", "@cite_27", "@cite_0", "@cite_23", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2884466206", "2249139947", "2018548648", "2964356534", "2594648402", "2963914894", "2963561004", "2949877834", "2160333382", "2063559668", "2963647718" ], "abstract": [ "In this paper we investigate image generation guided by hand sketch. When the input sketch is badly drawn, the output of common image-to-image translation follows the input edges due to the hard condition imposed by the translation process. Instead, we propose to use sketch as weak constraint, where the output edges do not necessarily follow the input edges. We address this problem using a novel joint image completion approach, where the sketch provides the image context for completing, or generating the output image. We train a generated adversarial network, i.e, contextual GAN to learn the joint distribution of sketch and the corresponding image by using joint images. Our contextual GAN has several advantages. First, the simple joint image representation allows for simple and effective learning of joint distribution in the same image-sketch space, which avoids complicated issues in cross-domain learning. Second, while the output is related to its input overall, the generated features exhibit more freedom in appearance and do not strictly align with the input features as previous conditional GANs do. Third, from the joint image’s point of view, image and sketch are of no difference, thus exactly the same deep joint image completion network can be used for image-to-sketch generation. Experiments evaluated on three different datasets show that our contextual GAN can generate more realistic images than state-of-the-art conditional GANs on challenging inputs and generalize well on common categories.", "Studies from neuroscience show that part-mapping computations are employed by human visual system in the process of object recognition. 
In this paper, we present an approach for analyzing semantic-part characteristics of object category representations. For our experiments, we use category-epitome, a recently proposed sketch-based spatial representation for objects. To enable part-importance analysis, we first obtain semantic-part annotations of hand-drawn sketches originally used to construct the epitomes. We then examine the extent to which the semantic-parts are present in the epitomes of a category and visualize the relative importance of parts as a word cloud. Finally, we show how such word cloud visualizations provide an intuitive understanding of category-level structural trends that exist in the category-epitome object representations. Our method is general in applicability and can also be used to analyze part-based visual object representations for other depiction methods such as photographic images.", "We describe Paul, a robotic installation that produces observational face drawings of people. Paul is a naive drawer: it does not have highlevel knowledge of the structures constitutive of the human face (such as the mouth, nose, eyes) nor the capability of learning expertise based on experience as a human would. However, Paul is able to draw using the equivalent of an artist's stylistic signature based on a number of processes mimicking drawing skills and technique, which together form a drawing cycle. Furthermore, we present here our first efforts in implementing two different versions of visual feedback to permit the robot to iteratively augment and improve a drawing which initially is built from a process of salient lines recovery. The first form of visual feedback we study we refer to as computational as it involves a purely internal (memory-based) representation of regions to render via shading by the robot. The second version we call physical as it involves the use of a camera as an 'eye' taking new snapshots of the artefact in progress. 
This is then analysed to take decisions on where and how to render shading next. A main point we emphasise in this work is the issue of embodiment of graphical systems, in our case in a robotic platform. We present our arguments in favour of such a position for the graphics community to reflect upon. Finally, we emphasise that the drawings produced by Paul have been considered of interest by fine art professionals in recent international art fairs and exhibitions, as well as by the public at large. One drawing is now in the Victoria and Albert museum collection. We identify a number of factors that may account for such perceived qualities of the produced drawings. Graphical abstract: Paul drawing at the Merge Festival Gallery, London, October 2012. Paul is a robotic installation that produces observational face drawings of people using the equivalent of an artist's stylistic signature, forming a drawing cycle with visual feedback. Highlights: Paul is a robotic system which integrates graphics with vision for face portraying; the NPR community can explore the embodiment of artistic skills jointly with software; we design, implement and test computational (software-only) visual feedback; we also design, implement and test physical (hardware and software) visual feedback; the production of Paul is being exhibited internationally in well-known art venues.", "", "The study of eye gaze fixations on photographic images is an active research area. In contrast, the image sub-category of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories.
This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In this paper, we show that the multi-level consistency in the fixation data can be exploited to 1) predict a test sketch’s category given only its fixation sequence and 2) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images.", "The ability to semantically interpret hand-drawn line sketches, although very challenging, can pave way for novel applications in multimedia. We propose SKETCHPARSE, the first deep-network architecture for fully automatic parsing of freehand object sketches. SKETCHPARSE is configured as a two-level fully convolutional network. The first level contains shared layers common to all object categories. The second level contains a number of expert sub-networks. Each expert specializes in parsing sketches from object categories which contain structurally similar parts. Effectively, the two-level configuration enables our architecture to scale up efficiently as additional categories are added. We introduce a router layer which (i) relays sketch features from shared layers to the correct expert (ii) eliminates the need to manually specify object category during inference. To bypass laborious part-level annotation, we sketchify photos from semantic object-part image datasets and use them for training. Our architecture also incorporates object pose prediction as a novel auxiliary task which boosts overall performance while providing supplementary information regarding the sketch. We demonstrate SKETCHPARSE's abilities (i) on two challenging large-scale sketch datasets (ii) in parsing unseen, semantically related object categories (iii) in improving fine-grained sketch-based image retrieval. 
As a novel application, we also outline how SKETCHPARSE's output can be used to generate caption-style descriptions for hand-drawn sketches.", "Synthesizing realistic images from human drawn sketches is a challenging problem in computer graphics and vision. Existing approaches either need exact edge maps, or rely on retrieval of existing photographs. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes plausible images from 50 categories including motorcycles, horses and couches. We demonstrate a data augmentation technique for sketches which is fully automatic, and we show that the augmented data is helpful to our task. We introduce a new network building block suitable for both the generator and discriminator which improves the information flow by injecting the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.", "Freehand sketching is an inherently sequential process. Yet, most approaches for hand-drawn sketch recognition either ignore this sequential aspect or exploit it in an ad-hoc manner. In our work, we propose a recurrent neural network architecture for sketch object recognition which exploits the long-term sequential and structural regularities in stroke data in a scalable manner. Specifically, we introduce a Gated Recurrent Unit based framework which leverages deep sketch features and weighted per-timestep loss to achieve state-of-the-art results on a large database of freehand object sketches across a large number of object categories. The inherently online nature of our framework is especially suited for on-the-fly recognition of objects as they are being drawn. 
Thus, our framework can enable interesting applications such as camera-equipped robots playing the popular party game Pictionary with human players and generating sparsified yet recognizable sketches of objects.", "We present a system for generating 2D illustrations from hand drawn outlines consisting of only curve strokes. A user can draw a coarse sketch and the system would automatically augment the shape, thickness, color and surrounding texture of the curves making up the sketch. The styles for these refinements are learned from examples whose semantics have been pre-classified. There can be several styles applicable on a curve and the system automatically identifies which one to use and how to use it based on a curve's shape and its context in the illustration. Our approach is based on a Hierarchical Hidden Markov Model. We present a two level hierarchy in which the refinement process is applied at: the curve level and the scene level.", "As a form of visual representation, freehand line sketches are typically studied as an end product of the sketching process. However, from a recognition point of view, one can also study various orderings and properties of the primitive strokes that compose the sketch. Studying sketches in this manner has enabled us to create novel sparse yet discriminative sketch-based representations for object categories which we term category-epitomes. Concurrently, the epitome construction provides a natural measure for quantifying the sparseness underlying the original sketch, which we term epitome-score. We analyze category-epitomes and epitome-scores for hand-drawn sketches from a sketch dataset of 160 object categories commonly encountered in daily life. Our analysis provides a novel viewpoint for examining the complexity of representation for visual object categories.", "" ] }
1904.03620
2934890652
Sketching is more fundamental to human cognition than speech. Deep Neural Networks (DNNs) have achieved the state-of-the-art in speech-related tasks but have not made significant progress in generating stroke-based sketches, a.k.a. sketches in vector format. Though there are Variational Auto-Encoders (VAEs) for generating sketches in vector format, there is no Generative Adversarial Network (GAN) architecture for the same. In this paper, we propose a standalone GAN architecture, SkeGAN, and a VAE-GAN architecture, VASkeGAN, for sketch generation in vector format. SkeGAN is a stochastic policy in Reinforcement Learning (RL), capable of generating both multidimensional continuous and discrete outputs. VASkeGAN hybridizes a VAE and a GAN, in order to couple the efficient representation of data by a VAE with the powerful generative capabilities of a GAN, to produce visually appealing sketches. We also propose a new metric called the Ske-score which quantifies the quality of vector sketches. We have validated that SkeGAN and VASkeGAN generate visually appealing sketches using a Human Turing Test and the Ske-score.
All the architectures mentioned above for sketch generation in vector format are VAEs. A well-known disadvantage of VAEs is that they tend to produce blurred images in the case of raster images. Since there is no concept of blurring in the vector format, the vector images produced by VAEs like sketch-rnn @cite_10 instead suffer from a mode-collapse-like situation wherein the pen is not lifted to draw at another location, but stays on the paper and continues to scribble. We call this the . Figure shows this effect in the sketches of "yoga poses" and "mosquitos". Since VAEs assume the prior to be Gaussian, they need to be trained for a very large number of iterations so that the weights of the decoder are adjusted to generate samples close to the distribution of the data. In the case of @cite_10 , training is done for 10 million iterations. Also, GANs have performed outstandingly well on raster images for the variety of tasks mentioned in Section . In order to alleviate these disadvantages of VAEs and harness the power of GANs, we propose a standalone GAN called SkeGAN and another GAN called VASkeGAN, with which we compare SkeGAN.
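The scribble effect described above can be made concrete with the stroke-3 point format popularized by sketch-rnn, where each point is (dx, dy, pen_lift) and pen_lift = 1 means the pen is raised after drawing to that point. The `pen_lift_ratio` below is a hypothetical diagnostic for the effect, not the Ske-score proposed in the paper.

```python
def pen_lift_ratio(strokes):
    """Fraction of stroke-3 points at which the pen is lifted off the paper."""
    if not strokes:
        return 0.0
    lifts = sum(1 for (_, _, pen_lift) in strokes if pen_lift == 1)
    return lifts / len(strokes)

# A sketch exhibiting the scribble effect never lifts the pen:
scribble = [(1.0, 0.5, 0), (0.3, -0.2, 0), (-0.4, 0.1, 0)]
normal = [(1.0, 0.5, 0), (0.3, -0.2, 1), (2.0, 1.0, 0), (0.1, 0.1, 1)]

print(pen_lift_ratio(scribble))  # 0.0
print(pen_lift_ratio(normal))    # 0.5
```

A ratio near zero over many generated sketches would flag the degenerate single-scribble behaviour.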
{ "cite_N": [ "@cite_10" ], "mid": [ "2606712314" ], "abstract": [ "We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects. The model is trained on a dataset of human-drawn images representing many different classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format." ] }
1904.03848
2926569033
Unsupervised deep learning for optical flow computation has achieved promising results. Most existing deep-net based methods rely on image brightness consistency and local smoothness constraint to train the networks. Their performance degrades at regions where repetitive textures or occlusions occur. In this paper, we propose Deep Epipolar Flow, an unsupervised optical flow method which incorporates global geometric constraints into network learning. In particular, we investigate multiple ways of enforcing the epipolar constraint in flow estimation. To alleviate a "chicken-and-egg" type of problem encountered in dynamic scenes where multiple motions may be present, we propose a low-rank constraint as well as a union-of-subspaces constraint for training. Experimental results on various benchmarking datasets show that our method achieves competitive performance compared with supervised methods and outperforms state-of-the-art unsupervised deep-learning methods.
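For a static scene, a correspondence (x1, x2) induced by the flow must satisfy the epipolar constraint x2^T F x1 = 0 for the fundamental matrix F. The NumPy sketch below computes the algebraic residual per correspondence; the function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def epipolar_residuals(F, pts1, flow):
    """Algebraic epipolar residual |x2^T F x1| for each correspondence.

    F:    (3, 3) fundamental matrix
    pts1: (N, 2) pixel coordinates in image 1
    flow: (N, 2) flow vectors; the matched point is x2 = x1 + flow
    """
    pts2 = pts1 + flow
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])  # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    return np.abs(np.einsum('ni,ij,nj->n', x2, F, x1))

# For a rectified pair (pure horizontal baseline), F has this closed form,
# and purely horizontal flow satisfies the constraint exactly:
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
pts1 = np.array([[10.0, 5.0]])
print(epipolar_residuals(F, pts1, np.array([[3.0, 0.0]])))  # [0.]
print(epipolar_residuals(F, pts1, np.array([[3.0, 2.0]])))  # [2.]
```

A training loss built on such residuals penalizes flow vectors that leave the epipolar line, which is the kind of global geometric constraint the method exploits.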
Supervised deep optical flow. Recently, end-to-end learning based deep optical flow approaches have shown their superiority in learning optical flow. Given a large number of training samples, optical flow estimation is formulated as learning the regression between an image pair and the corresponding optical flow. These approaches achieve performance comparable to state-of-the-art conventional methods on several benchmarks while being significantly faster. FlowNet @cite_2 pioneered this direction, but needs a large synthetic dataset to supervise network learning. FlowNet2 @cite_14 greatly extends FlowNet by stacking multiple encoder-decoder networks one after the other, achieving results comparable to conventional methods on various benchmarks. Recently, PWC-Net @cite_43 combines sophisticated conventional strategies such as pyramids, warping and cost volumes into the network design and sets the state-of-the-art performance on KITTI @cite_28 @cite_0 and MPI Sintel @cite_23 . These supervised deep optical flow methods are hampered by the need for large-scale training data with ground truth optical flow, which also limits their generalization ability.
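As background for the warping strategy used by FlowNet2 and PWC-Net, here is a minimal scalar NumPy sketch of backward warping with bilinear interpolation: the second image (or its features) is sampled at positions displaced by the current flow estimate. Real implementations run on batched GPU feature tensors, so this is illustrative only.

```python
import numpy as np

def warp(img, flow):
    """Backward-warp img (H, W) by flow (H, W, 2): warped[y, x] = img[y + v, x + u]."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            sx = x + flow[y, x, 0]  # source column
            sy = y + flow[y, x, 1]  # source row
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < W - 1 and 0 <= y0 < H - 1:
                ax, ay = sx - x0, sy - y0  # bilinear weights
                out[y, x] = ((1 - ax) * (1 - ay) * img[y0, x0]
                             + ax * (1 - ay) * img[y0, x0 + 1]
                             + (1 - ax) * ay * img[y0 + 1, x0]
                             + ax * ay * img[y0 + 1, x0 + 1])
    return out

img = np.arange(9, dtype=float).reshape(3, 3)
flow = np.zeros((3, 3, 2))
flow[..., 0] = 1.0            # shift sampling one pixel to the right
print(warp(img, flow)[1, 0])  # 4.0 == img[1, 1]
```

Under a correct flow, the warped second image aligns with the first; residual differences after warping are what the stacked refinement networks and cost volumes then work on.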
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_0", "@cite_43", "@cite_23", "@cite_2" ], "mid": [ "2560474170", "2150066425", "1921093919", "2963782415", "1513100184", "764651262" ], "abstract": [ "The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.", "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. 
Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti", "This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.", "We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. 
Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 × 436) images. Our models are available on our project website.", "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition.
Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps." ] }
1904.03568
2935549553
Eating is an essential activity of daily living (ADL) for staying healthy and living at home independently. Although numerous assistive devices have been introduced, many people with disabilities are still restricted from independent eating due to the devices' physical or perceptual limitations. In this work, we introduce a new meal-assistance system using a general-purpose mobile manipulator, a Willow Garage PR2, which has the potential to serve as a versatile form of assistive technology. Our active feeding framework enables the robot to autonomously deliver food to the user's mouth. In detail, our web-based user interface, visually-guided behaviors, and safety tools allow people with severe motor impairments to benefit from the robotic assistance. We evaluated our system with 10 able-bodied participants and 9 people with motor impairments. Both groups of participants successfully ate various foods using the system and reported high rates of success for the system's autonomous behaviors in a laboratory environment. Then, we performed an in-home evaluation with Henry Evans, a person with quadriplegia, at his house in California, USA. In general, Henry and the other people who operated the system reported that it was comfortable, safe, and easy to use. We discuss lessons learned and design insights through user evaluations.
Fixed-base assistive robots are often placed near a user or a targeted workspace. Researchers mounted early assistive robots on desktops for assistance with feeding, cosmetics, and hygiene. The professional vocational assistive robot (ProVAR) is a representative desktop manipulator placed in an office workspace @cite_25 . Handy-1 is another adjustable table-mounted manipulator for ADLs such as eating, drinking, and washing applications @cite_23 . These mounted robots were designed to perform various ADLs using a general-purpose manipulator. However, the limited workspaces of the robots restrict the range of available activities. Alternatively, researchers have introduced various wheelchair-mounted robotic arms (WMRAs). For meal assistance, showed that people with disabilities can feed themselves using a manually controlled JACO arm mounted on a wheelchair @cite_67 . showed drinking assistance using a 7-DoF KUKA arm @cite_11 . For object fetching, introduced the UCF-MANUS robot, consisting of a wheelchair-mounted manipulator and interface @cite_45 .
{ "cite_N": [ "@cite_67", "@cite_45", "@cite_23", "@cite_25", "@cite_11" ], "mid": [ "2111956227", "2053963447", "", "2114788085", "1718442951" ], "abstract": [ "Many activities of daily living, such as picking up glasses, holding a fork or opening a door, which most people do without thinking, can become insurmountable for people who have upper extremity disabilities. The alternative to asking for human help is to use some assistive devices to compensate their loss of mobility; however, many of those devices are limited in terms of functionality. Robotics may provide a better approach for the development of assistive devices, by allowing greater functionality. In this paper, we present results of a study (n=31) which objectives were to evaluate the efficacy of a new joystick-controlled seven-degree of freedom robotic manipulator and assess its potential economic benefits. Results show that JACO is easy to use as the majority of the participants were able to accomplish the testing tasks on their first attempt. The economic model results inferred that the use of the JACO arm system could potentially reduce caregiving time by 41 . These study results are expected to provide valuable data for interested parties, such as individuals with disabilities, their family or caregivers.", "This paper reports on the system design for integrating the various processes needed for end-to-end implementation of a smart assistive robotic manipulator. Specifically, progress is reported in the empowerment of the UCF-MANUS system with a suite of sensory, computational, and multimodal interface capabilities so that its autonomy can be made accessible to users with a wide range of disabilities. Laboratory experiments are reported to demonstrate the ability of the system prototype to successfully and efficiently complete object retrieval tasks. 
Benchmarking of the impact of the various interface modalities on user performance is performed via empirical studies with healthy subjects operating the robot in a simulated instrumental activities of daily living tasks setup. It is seen through an analysis of the collected quantitative data that the prototype is interface neutral and shows robustness to variations in the tasks and the environment. It is also seen that the prototype autonomous system is quantitatively superior to Cartesian control for all tested tasks under a “number of commands” metric; however, under a “time to task completion” metric, the system is seen to be superior for “hard” tasks but not for “easy” tasks.
Currently, there is a great interest in developing robotic assistants controlled via brain-machine interfaces (BMIs) to restore the ability to perform such tasks. This paper describes an autonomous robotic assistant for liquid intake. The components of the system include autonomous online detection both of the cup to be grasped and of the mouth of the user. It plans motions of the robot arm under the constraints that the cup stays upright while moving towards the mouth and that the cup stays in direct contact with the user's mouth while the robot tilts it during the drinking phase. To achieve this, our system also includes a technique for online estimation of the location of the user's mouth even under partial occlusions by the cup or robot arm. We tested our system in a real environment and in a shared-control setting using frequency-specific modulations recorded by electroencephalography (EEG) from the brain of the user. Our experiments demonstrate that our BMI-controlled robotic system enables a reliable liquid intake. We believe that our approach can easily be extended to other useful tasks including food intake and object manipulation." ] }
1904.03568
2935549553
Eating is an essential activity of daily living (ADL) for staying healthy and living at home independently. Although numerous assistive devices have been introduced, many people with disabilities are still restricted from independent eating due to the devices' physical or perceptual limitations. In this work, we introduce a new meal-assistance system using a general-purpose mobile manipulator, a Willow Garage PR2, which has the potential to serve as a versatile form of assistive technology. Our active feeding framework enables the robot to autonomously deliver food to the user's mouth. Specifically, our web-based user interface, visually-guided behaviors, and safety tools allow people with severe motor impairments to benefit from the robotic assistance. We evaluated our system with 10 able-bodied participants and 9 people with motor impairments. Both groups of participants successfully ate various foods using the system and reported high rates of success for the system's autonomous behaviors in a laboratory environment. Then, we performed an in-home evaluation with Henry Evans, a person with quadriplegia, at his house in California, USA. In general, Henry and the other people who operated the system reported that it was comfortable, safe, and easy to use. We discuss lessons learned and design insights from the user evaluations.
The absence of mobility is an important issue in robotic assistance. Prior work found that movement of a mobile manipulator's base was needed to provide assistance with a shaving task, since the PR2 that was used could not otherwise reach the relevant locations @cite_26 . In feeding, a fixed robot base often requires caregivers to relocate the robot or the user at the beginning of or during the task, and it restricts the scope of assistive tasks @cite_64 . Without mobility, robots are limited to a narrow set of tasks and cannot leave the immediate vicinity of the human to provide assistance elsewhere. Recent studies have introduced general-purpose mobile manipulators for various assistive tasks, including shaving @cite_46 @cite_39 , dressing @cite_66 @cite_2 @cite_40 , fetch-and-carry @cite_19 @cite_12 @cite_59 @cite_50 , and guiding @cite_4 . Our meal-assistance system has a mobile base, which has the potential to enhance the quality of feeding assistance.
{ "cite_N": [ "@cite_26", "@cite_64", "@cite_4", "@cite_39", "@cite_19", "@cite_40", "@cite_59", "@cite_2", "@cite_50", "@cite_46", "@cite_66", "@cite_12" ], "mid": [ "2537120235", "2196984578", "2145358043", "", "2148070927", "", "", "", "", "2156324385", "2068872940", "" ], "abstract": [ "Human-scale mobile robots with arms have the potential to assist people with a variety of tasks. We present a proof-of-concept system that has enabled a person with severe quadriplegia named Henry Evans to shave himself in his own home using a general purpose mobile manipulator (PR2 from Willow Garage). The robot primarily provides assistance by holding a tool (e.g., an electric shaver) at user-specified locations around the user's head, while he/she moves his/her head against it. If the robot detects forces inappropriate for the task (e.g., shaving), it withdraws the tool. The robot also holds a mirror with its other arm, so that the user can see what he/she is doing. For all aspects of the task, the robot and the human work together. The robot uses a series of distinct semi-autonomous subsystems during the task to navigate to poses next to the wheelchair, attain initial arm configurations, register a 3D model of the person's head, move the tool to coarse semantically-labeled tool poses (e.g., “Cheek”), and finely position the tool via incremental movements. Notably, while moving the tool near the user's head, the robot uses an ellipsoidal coordinate system attached to the 3D head model. In addition to describing the complete robotic system, we report results from Henry Evans using it to shave both sides of his face while sitting in his wheelchair at home. He found the process to be long (54 minutes) and the interface unintuitive. Yet, he also found the system to be comfortable to use, felt safe while using it, was satisfied with it, and preferred it to a human caregiver.", "When a mobile manipulator functions as an assistive device, the robot's initial configuration and the configuration of the environment can impact the robot's ability to provide effective assistance. Selecting initial configurations for assistive tasks can be challenging due to the high number of degrees of freedom of the robot, the environment, and the person, as well as the complexity of the task. In addition, rapid selection of initial conditions can be important, so that the system will be responsive to the user and will not require the user to wait a long time while the robot makes a decision. To address these challenges, we present Task-centric initial Configuration Selection (TCS), which unlike previous work uses a measure of task-centric manipulability to accommodate state estimation error, considers various environmental degrees of freedom, and can find a set of configurations from which a robot can perform a task. TCS performs substantial offline computation, so that it can rapidly provide solutions at run time. At run time, the system performs an optimization over candidate initial configurations using a utility function that can include factors such as movement costs for the robot's mobile base. To evaluate TCS, we created models of 11 activities of daily living (ADLs) and evaluated TCS's performance with these 11 assistive tasks in a computer simulation of a PR2, a robotic bed, and a model of a human body. TCS performed as well or better than a baseline algorithm in all of our tests against state estimation error.", "Robotic systems have been widely used in many areas to assist human beings. Mobile manipulators are among the most popular choices. This paper investigates human assistance systems using a mobile manipulator, for example, to guide the blind and to transport objects. Distinct from existing systems, an integrated dynamic model and controller of the mobile manipulator are designed. Singularity, manipulability and safety are all considered in the system design. Furthermore, two human assistance modes - Robot-Human mode and Teleoperator-Robot-Human mode - are designed and analysed. The Teleoperator-Robot-Human mode can integrate human intelligence into the assistance system to further enhance the system efficiency and safety. The experimental results implemented on a mobile manipulator demonstrated the effectiveness of the designed systems.", "", "In this paper the mechanism, design, and control system of a new humanoid-type hand with human-like manipulation abilities is discussed. The hand is designed for the humanoid robot which has to work autonomously or interactively in cooperation with humans. The ideal end effector for such a humanoid would be able to use the tools and objects that a person uses when working in the same environment. Therefore, a new hand is designed for anatomical consistency with the human hand. This includes the number of fingers and the placement and motion of the thumb, the proportions of the link lengths, and the shape of the palm. The hand can perform most human grasping types. In this paper, particular attention is dedicated to measurement analysis, technical characteristics, and functionality of the hand prototype. Furthermore, first experience gained from using hand prototypes on a humanoid robot is outlined.", "", "", "", "", "Assistive mobile manipulators (AMMs) have the potential to one day serve as surrogates and helpers for people with disabilities, giving them the freedom to perform tasks such as scratching an itch, picking up a cup, or socializing with their families.", "This paper describes dressing assistance by an autonomous robot. We especially focus on a dressing action that is particularly problematic for disabled people: the pulling of a bottom along the legs. To avoid injuring the subject's legs, the robot should recognize the state of the manipulated clothing. Therefore, while handling the clothing, the robot is supplied with both visual and force sensory information. Based on them, dressing failure is detected and recovery from the failure is planned automatically. The effectiveness of the proposed approach is implemented and validated in a life-sized humanoid robot.", "" ] }
1904.03568
2935549553
Eating is an essential activity of daily living (ADL) for staying healthy and living at home independently. Although numerous assistive devices have been introduced, many people with disabilities are still restricted from independent eating due to the devices' physical or perceptual limitations. In this work, we introduce a new meal-assistance system using a general-purpose mobile manipulator, a Willow Garage PR2, which has the potential to serve as a versatile form of assistive technology. Our active feeding framework enables the robot to autonomously deliver food to the user's mouth. Specifically, our web-based user interface, visually-guided behaviors, and safety tools allow people with severe motor impairments to benefit from the robotic assistance. We evaluated our system with 10 able-bodied participants and 9 people with motor impairments. Both groups of participants successfully ate various foods using the system and reported high rates of success for the system's autonomous behaviors in a laboratory environment. Then, we performed an in-home evaluation with Henry Evans, a person with quadriplegia, at his house in California, USA. In general, Henry and the other people who operated the system reported that it was comfortable, safe, and easy to use. We discuss lessons learned and design insights from the user evaluations.
In terms of delivering food, most robotic systems use executions in which the robot conveys food to a predefined location, typically in front of the user's mouth. These systems depend on the user's upper-body movement to reach the food. Takahashi and Suzukawa, on the other hand, introduced an interface that enables a user with quadriplegia to manually adjust feeding locations @cite_34 . Similar to our work, an adaptive drinking-assistance robot has been proposed that finds the user's mouth location with an external vision system @cite_11 . We leverage such visual input to detect gestures and anomalies as well as the user's mouth. In addition, researchers have adapted feeding-task movements to users' preferences by incrementally updating movement primitives @cite_69 @cite_70 . Table compares the features of currently available meal-assistance robots that provide both food grasping and feeding functions.
{ "cite_N": [ "@cite_70", "@cite_69", "@cite_34", "@cite_11" ], "mid": [ "", "2528826962", "1976398473", "1718442951" ], "abstract": [ "", "The deployment of robots at home must involve robots with pre-defined skills and the capability of personalizing their behavior by non-expert users. A framework to tackle this personalization is presented and applied to an automatic feeding task. The personalization involves the caregiver providing several examples of feeding using Learning-by-Demonstration, and a ProMP formalism to compute an overall trajectory and the variance along the path. Experiments show the validity of the approach in generating different feeding motions to adapt to user’s preferences, automatically extracting the relevant task parameters. The importance of the nature of the demonstrations is also assessed, and two training strategies are compared.", "This paper proposes an easy human interface device to operate an assist robot for a severely handicapped person. A quadriplegic patient can move a mouse pointer on a personal computer display using a head space pointer, and can click it with the newly designed input device which uses the patient's cheek movement. The input device uses a fiber sensor to detect the patient's cheek movement. The human interface was applied to operate an eating assist robot. The evaluation experiments have been conducted in cooperation with a quadriplegic patient.", "Stroke and neurodegenerative diseases, among a range of other neurologic disorders, can cause chronic paralysis. Patients suffering from paralysis may remain unable to achieve even basic everyday tasks such as liquid intake. Currently, there is a great interest in developing robotic assistants controlled via brain-machine interfaces (BMIs) to restore the ability to perform such tasks. This paper describes an autonomous robotic assistant for liquid intake. The components of the system include autonomous online detection both of the cup to be grasped and of the mouth of the user. It plans motions of the robot arm under the constraints that the cup stays upright while moving towards the mouth and that the cup stays in direct contact with the user's mouth while the robot tilts it during the drinking phase. To achieve this, our system also includes a technique for online estimation of the location of the user's mouth even under partial occlusions by the cup or robot arm. We tested our system in a real environment and in a shared-control setting using frequency-specific modulations recorded by electroencephalography (EEG) from the brain of the user. Our experiments demonstrate that our BMI-controlled robotic system enables a reliable liquid intake. We believe that our approach can easily be extended to other useful tasks including food intake and object manipulation." ] }
1904.03735
2935344270
Virtual and augmented reality (VR/AR) systems are emerging technologies requiring data rates of multiple Gbps. Existing high-quality VR headsets require connections through HDMI cables to a computer rendering rich graphic content to meet the extremely high data transfer rate requirement. Such a cable connection limits the VR user's mobility and interferes with the VR experience. Current wireless technologies such as WiFi cannot support the multi-Gbps graphics data transfer. Instead, we propose to use visible light communication (VLC) for establishing high-speed wireless links between a rendering computer and a VR headset. However, VLC transceivers are highly directional with narrow beams and require constant maintenance of line-of-sight (LOS) alignment between the transmitter and the receiver. Thus, we present a novel multi-detector hemispherical VR headset design to tackle the beam misalignment problem caused by the VR user's random head orientation. We provide a detailed analysis of how the number of detectors on the headset can be minimized while maintaining the required beam alignment and providing a high-quality VR experience.
VLC systems typically use white LEDs as transmitters. The modulation bandwidth of such LEDs is a few MHz, which limits the transmission rate of a VLC system @cite_25 . To achieve the desired high data rates, different methods have been proposed, such as frequency-domain equalization @cite_35 , optical filters, modulation techniques like orthogonal frequency division multiplexing (OFDM) @cite_22 @cite_15 , multiple-input multiple-output (MIMO) @cite_30 @cite_17 , and red-green-blue (RGB) LEDs @cite_33 . In @cite_27 , a data rate of 3.4 Gbps was achieved using wavelength division multiplexing (WDM) and RGB LEDs. In @cite_18 @cite_9 , RGB laser diodes (RGB-LDs) were used instead of LEDs, since LDs can provide modulation bandwidths of multiple GHz. It has been shown in @cite_4 @cite_12 that a combination of visible RGB lasers along with a diffuser can generate white light with properties comparable to white LED sources. @cite_9 has shown that data rates as high as 25 Gbps can be achieved using RGB-LDs.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_4", "@cite_22", "@cite_33", "@cite_9", "@cite_27", "@cite_15", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2163770389", "2149302746", "2139864908", "2079170079", "2149130381", "2024951505", "2517765861", "1969564620", "2561529697", "1503835506", "1964604939", "2027009445" ], "abstract": [ "Solid-state lighting is a rapidly growing area of research and applications, due to the reliability and predicted high efficiency of these devices. The white LED sources that are typically used for general illumination can also be used for data transmission, and Visible Light Communications (VLC) is a rapidly growing area of research. One of the key challenges is the limited modulation bandwidth of sources, typically several MHz. However, as a room or coverage space would typically be illuminated by an array of LEDs there is the potential for parallel data transmission, and using optical MIMO techniques is potentially attractive for achieving high data rates. In this paper we investigate non-imaging and imaging MIMO approaches: a non-imaging optical MIMO system does not perform properly at all receiver positions due to symmetry, but an imaging based system can operate under all foreseeable circumstances. Simulations show such systems can operate at several hundred Mbit/s, and up to Gbit/s in many circumstances.", "This letter describes a high-speed visible light communications link that uses a white-light light-emitting diode (LED). Such devices have bandwidths of a few megahertz, severely limiting the data rates of any communication system. Here, we demonstrate that by detecting only the blue component of the LED, and using a simple first-order analogue equalizer, a data rate of 100 Mb/s can be achieved using on-off keying nonreturn-to-zero modulation.", "Over the past decade, visible light communication (VLC) systems have typically operated between 50 Mbps and 3.4 Gbps. In this paper, we propose and evaluate mobile VLC systems that operate at 10 Gbps. The enhancements in channel bandwidth and data rate are achieved by the introduction of laser diodes (LDs), angle diversity receivers (ADR), imaging receivers, relay nodes, and delay adaptation techniques. We propose three mobile VLC systems: an ADR relay assisted LD-VLC, an imaging relay assisted LD-VLC (IMGR-LD), and select-the-best imaging relay assisted LD-VLC. The ADR and imaging receiver are proposed for the VLC system to mitigate the intersymbol interference, maximize the signal-to-noise ratio (SNR), and reduce the impact of multipath dispersion due to mobility. The combination of IMGR-LD with a delay adaptation technique adds a degree of freedom to the link design, which results in a VLC system that has the ability to provide high data rates under mobility. The proposed IMGR-LD system achieves significant improvements in the SNR over other systems in the worst case scenario in the considered real indoor environment.", "Solid-state lighting is currently based on light-emitting diodes (LEDs) and phosphors. Solid-state lighting based on lasers would offer significant advantages including high potential efficiencies at high current densities. Light emitted from lasers, however, has a much narrower spectral linewidth than light emitted from LEDs or phosphors. Therefore it is a common belief that white light produced by a set of lasers of different colors would not be of high enough quality for general illumination. We tested this belief experimentally, and found the opposite to be true. This result paves the way for the use of lasers in solid-state lighting.", "Visible light LEDs are being used for indoor optical wireless systems as well as room lighting. In indoor diffuse optical wireless links multipath dispersion limits the maximum transmission data rates. In this paper we investigate an OFDM system where multipath-induced intersymbol interference (ISI) is reduced, enabling higher data rates. Pilot signals are uniformly inserted into data symbols (subcarriers) and are extracted at the receiver for channel estimation. Predicted and simulated results for the symbol error rate (SER) for an OFDM employing BPSK, QPSK and M-QAM for line of sight (LOS) and diffuse links are presented for a range of pilot signals.", "We report the first Gigabit-range visible light link based on off-the-shelf RGB-type white LEDs. By application of WDM and DMT modulation an aggregate rate of 1.25 Gbit/s within the FEC 2×10−3 limit has been reached at illumination levels recommended by the lighting standard for the working environment.", "Visible light communication (VLC) systems have typically operated at data rates below 20 Gbps and operation at this data rate was shown to be feasible by using laser diodes (LDs), beam steering, imaging receivers and delay adaptation techniques. However, an increase in the computational cost is incurred. In this paper, we introduce fast computer generated holograms (FCGHs) to speed up the adaptation process. The new, fast and efficient fully adaptive VLC system can improve the receiver signal to noise ratio (SNR) and reduce the required time to estimate the position of the VLC receiver. In addition, an imaging receiver and a delay adaptation technique are used to reduce the effect of inter symbol interference (ISI) and multipath dispersion. Significant enhancements in the SNR, with VLC channel bandwidths of more than 36 GHz are obtained resulting in a compact impulse response and a VLC system that is able to achieve higher data rates (25 Gbps) with full mobility in the considered indoor environment.", "In this paper, we experimentally realized a gigabit-class indoor visible light communication system using a commercially available RGB white LED and exploiting an optimized DMT modulation. We achieved a data rate of 1.5 Gbit/s with a single channel and 3.4 Gbit/s by implementing WDM transmission at standard illumination levels. In both experiments, the resulting bit error ratios were below the FEC limit. To the best of our knowledge, these values are the highest ever achieved in VLC systems.", "Multiple-input multiple-output (MIMO) is a natural and effective way to increase the capacity of white light-emitting diode (LED) based visible light communication (VLC) systems. Orthogonal frequency division multiplexing (OFDM) using high-order modulation is another widely used technique in VLC systems. Due to the intensity modulation and direct detection nature of VLC systems, Hermitian symmetry is usually imposed in OFDM so as to obtain a real-valued signal. In this paper, we investigate a non-Hermitian symmetry OFDM (NHS-OFDM) scheme for MIMO-VLC systems. By transmitting the real and imaginary parts of a complex-valued OFDM signal via a pair of white LEDs, NHS-OFDM circumvents the constraint of Hermitian symmetry. We evaluate the performance of an indoor 2×2 MIMO-VLC system using conventional Hermitian symmetry-based OFDM (HS-OFDM) and NHS-OFDM, where both a non-imaging receiver and an imaging receiver are considered. Analytical results show that the system using NHS-OFDM achieves superior bit error rate (BER) performance to that using HS-OFDM, with lower or nearly the same computational complexity. The superior BER performance of NHS-OFDM-based MIMO-VLC is further verified by experiments. The experimental results demonstrate that, in a 400 Mb/s 2×2 MIMO-VLC system with an imaging receiver, NHS-OFDM improves the communication coverage by about 30% compared with conventional HS-OFDM for a target BER of 3.8×10−3.", "Detailing a systems approach, Optical Wireless Communications: System and Channel Modelling with MATLAB, is a self-contained volume that concisely and comprehensively covers the theory and technology of optical wireless communications systems (OWC) in a way that is suitable for undergraduate and graduate-level students, as well as researchers and professional engineers. Incorporating MATLAB throughout, the authors highlight past and current research activities to illustrate optical sources, transmitters, detectors, receivers, and other devices used in optical wireless communications. They also discuss both indoor and outdoor environments, discussing how different factors, including various channel models, affect system performance and mitigation techniques. In addition, this book broadly covers crucial aspects of OWC systems: fundamental principles of OWC; devices and systems; modulation techniques and schemes (including polarization shift keying); channel models and system performance analysis; emerging visible light communications; terrestrial free space optics communication; and use of infrared in indoor OWC. One entire chapter explores the emerging field of visible light communications, and others describe techniques for using theoretical analysis and simulation to mitigate channel impact on system performance. Additional topics include wavelet denoising, artificial neural networks, and spatial diversity. Content also covers different challenges encountered in OWC, as well as outlining possible solutions and current research trends. A major attraction of the book is the presentation of MATLAB simulations and codes, which enable readers to execute extensive simulations and better understand OWC in general.", "Laser-based white lighting offers a viable option as an efficient and color-stable high-power solid-state white light source. We show that white light generation is possible using blue or near-UV laser diodes in combination with yellow-emitting cerium-substituted yttrium aluminum garnet (YAG:Ce) or a mixture of red-, green-, and blue-emitting phosphors. A variety of correlated color temperatures (CCT) are achieved, ranging from cool white light with a CCT of 4400 K using a blue laser diode to a warm white light with a CCT of 2700 K using a near-UV laser diode, with respective color rendering indices of 57 and 95. The luminous flux of these devices is measured to be 252 lm and 53 lm with luminous efficacies of 76 lm/W and 19 lm/W, respectively. An estimation of the maximum efficacy of a device comprising a blue laser diode in combination with YAG:Ce is calculated and the results are used to optimize the device.", "Multiple-input multiple-output (MIMO) systems using multiple light emitting diode (LED) sources and photodiode (PD) detectors are attractive for visible light communication (VLC) as they offer a capacity gain proportional to the number of parallel single-input single-output (SISO) channels. MIMO VLC systems exploit the high signal-to-noise ratio (SNR) of a SISO channel offered due to typical illumination requirements to overcome the capacity constraints due to limited modulation bandwidth of LEDs. In this work, a modified singular value decomposition VLC (SVD-VLC) MIMO system is proposed. This system maximizes the data rate while maintaining the target illumination and allowing the channel matrix to vary in order to support mobility in a practical indoor VLC deployment. The upper bound on capacity of the proposed SVD-VLC MIMO system is calculated assuming an imaging receiver. The relationship between the proposed system performance and the system parameters (total power constraint, lens aperture, and random receiver locations) is described." ] }
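The Hermitian-symmetry construction mentioned in the OFDM abstracts above (mirroring conjugate data across the subcarriers so that the IFFT output is real-valued, as intensity-modulated VLC links require) can be sketched briefly. The FFT size and the QPSK alphabet below are illustrative assumptions, not parameters taken from any of the cited works.

```python
import numpy as np

def hermitian_ofdm_symbol(data, n_fft=64):
    """Map complex data onto subcarriers with Hermitian symmetry so the
    IFFT output is real-valued (needed for intensity-modulated VLC)."""
    assert len(data) == n_fft // 2 - 1
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_fft // 2] = data                    # data subcarriers 1..N/2-1
    X[n_fft // 2 + 1:] = np.conj(data[::-1])  # mirrored conjugates N/2+1..N-1
    x = np.fft.ifft(X)
    return x.real                             # imaginary part is ~0 by construction

rng = np.random.default_rng(0)
# Hypothetical QPSK payload for one OFDM symbol (31 data subcarriers).
qpsk = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=31)
symbol = hermitian_ofdm_symbol(qpsk)
```

An FFT at the receiver recovers the payload from subcarriers 1 to N/2-1; DCO-OFDM-style systems would additionally add a DC bias so the LED drive signal stays non-negative.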
1904.03735
2935344270
Virtual and augmented reality (VR/AR) systems are emerging technologies requiring data rates of multiple Gbps. Existing high-quality VR headsets require connections through HDMI cables to a computer rendering rich graphic content to meet the extremely high data transfer rate requirement. Such a cable connection limits the VR user's mobility and interferes with the VR experience. Current wireless technologies such as WiFi cannot support the multi-Gbps graphics data transfer. Instead, we propose to use visible light communication (VLC) for establishing high-speed wireless links between a rendering computer and a VR headset. However, VLC transceivers are highly directional with narrow beams and require constant maintenance of line-of-sight (LOS) alignment between the transmitter and the receiver. Thus, we present a novel multi-detector hemispherical VR headset design to tackle the beam misalignment problem caused by the VR user's random head orientation. We provide a detailed analysis of how the number of detectors on the headset can be minimized while maintaining the required beam alignment and providing a high-quality VR experience.
An indoor VLC network has multiple attocells to provide full wireless communication coverage. Inter-cell interference (ICI) becomes an issue in such networks due to the overlapping of optical transmission beams. Different approaches have been proposed to mitigate ICI, for example, frequency-division-based carrier allocation @cite_7 , a dynamic interference-constrained sub-carrier reuse algorithm @cite_36 , and angle diversity receivers (ADRs). In @cite_11 , an ADR was proposed with a hemispherical base where the PDs were placed in one or two layers. In @cite_2 @cite_19 , different ADR designs were presented in which the PDs are placed at a given inclination angle on the same horizontal plane. Such ADRs can help maintain a VLC link in stationary scenarios or during yaw movement, but not in the presence of roll and/or pitch movement of the receiver. In this paper, we present a multi-detector VR-headset design that can provide connectivity for any orientation of the VR user's head, including roll, pitch, and yaw (Fig. ).
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_19", "@cite_2", "@cite_11" ], "mid": [ "2077185787", "2133972154", "1525277621", "2805610301", "2061597183" ], "abstract": [ "It is necessary in visible light communication where different information from adjacent cells is distinguishable without inter-cell interference for indoor localization-based services and positioning systems. In order to mitigate inter-cell interference, a carrier allocation-visible light communication system, adopting various carriers among the system's respective cells, is suggested. The effects of inter-cell interference for a single channel and the proposed systems are investigated by experimentation with simultaneous transmissions of QPSK signals.", "Discrete multi-tone (DMT) modulation is known to be an efficient single-transmitter technique for visible-light communication. However, the use of this technique in a multiple transmitter environment requires effective subcarrier and power allocation design in order to exploit the full potential of spatial multiple-transmitter diversity. Spatial reuse of the subcarriers in the presence of interference and power constraints increases the efficiency of multiple access (MA) DMT communication. In this paper, we propose an algorithm that manages interference-constrained subcarrier reuse between different transmitters and power redistribution between different subcarriers in a heuristic manner. The algorithm simulation shows an improvement in the average bit-rate as compared with a conventional DMT method. Furthermore, the effectiveness of the proposed MA-DMT scheme increases with the number of users.", "This paper proposes two novel and practical designs of angle diversity receivers to achieve multiple-input-multiple-output (MIMO) capacity for indoor visible light communications (VLC). Both designs are easy to construct and suitable for small mobile devices. By using light emitting diodes for both illumination and data transmission, our receiver designs consist of multiple photodetectors (PDs), which are oriented with different inclination angles to achieve high-rank MIMO channels and can be closely packed without the requirement of spatial separation. Due to the orientations of the PDs, the proposed receiver designs are named pyramid receiver (PR) and hemispheric receiver (HR). In a PR, the normal vectors of PDs are chosen the same as the normal vectors of the triangle faces of a pyramid with equilateral @math -gon base. On the other hand, the idea behind HR is to evenly distribute the PDs on a hemisphere. Through analytical investigation, simulations and experiments, the channel capacity and bit-error-rate (BER) performance under various settings are presented to show that our receiver designs are practical and promising for enabling VLC-MIMO. In comparison to induced link-blocked receiver, our designs do not require any hardware adjustment at the receiver from location to location so that they can support user mobility. Besides, their channel capacities and BER performance are quite close to that of link-blocked receiver. Meanwhile, they substantially outperform spatially-separated receiver. This study reveals that using angle diversity to build VLC-MIMO system is very promising.", "Due to severe inter-cell interference (ICI), the signal-to- interference-and-noise ratio (SINR) suffers from very significant fluctuation in indoor multi-cell visible light communication (VLC) systems, which greatly limits the overall system performance. In this paper, we propose and evaluate a generalized angle diversity receiver (ADR) structure to reduce SINR fluctuation in indoor multi-cell VLC systems. The generalized ADR consists of one top detector and multiple inclined side detectors. In an indoor multi-cell VLC system using the generalized ADR, the inclination angle of the side detectors is optimized in order to minimize the SINR fluctuation over the receiving plane, where the impact of receiver random rotation is considered. An optimized ADR with different numbers of detectors is analyzed with different LED layouts and diversity combining techniques in a typical indoor environment. Analytical results show that the SINR fluctuation is gradually reduced when more detectors are equipped in the optimized ADR. In an indoor two-cell VLC system, much more significant SINR fluctuation reduction is achieved by using select-best combining (SBC). However, nearly the same SINR fluctuations are obtained in an indoor four-cell VLC system when using SBC and maximal-ratio combining (MRC). By applying the optimized ADR, up to 15.9 and 32.4-dB SINR fluctuation reductions can be achieved in comparison to a conventional single-element receiver (SER) in indoor two-cell and four-cell VLC systems, respectively.", "In this paper, we investigate the benefits of an angle diversity receiver (ADR) in an indoor cellular optical wireless communications (OWC) network. As the ADR consists of multiple photodiodes (PDs), a proper signal combining scheme is essential to optimise the system performance. Therefore, three different combination schemes, the equal gain combining (EGC), the select best combining (SBC) and the maximum ratio combining (MRC), are investigated. The results indicate that the ADR significantly outperforms the single-PD receiver in terms of both signal to interference plus noise ratio (SINR) and area spectral efficiency (ASE). In particular, the ADR implementing the MRC scheme achieves the best performance, where over 40 dB SINR improvement is attained compared to the single-PD receiver." ] }
1904.03735
2935344270
Virtual and augmented reality (VR/AR) systems are emerging technologies requiring data rates of multiple Gbps. Existing high-quality VR headsets require connections through HDMI cables to a computer rendering rich graphics content to meet the extremely high data-transfer-rate requirement. Such a cable connection limits the VR user's mobility and interferes with the VR experience. Current wireless technologies such as WiFi cannot support multi-Gbps graphics data transfer. Instead, we propose to use visible light communication (VLC) to establish high-speed wireless links between a rendering computer and a VR headset. However, VLC transceivers are highly directional with narrow beams and require constant maintenance of line-of-sight (LOS) alignment between the transmitter and the receiver. Thus, we present a novel multi-detector hemispherical VR-headset design to tackle the beam misalignment problem caused by the VR user's random head orientation. We provide a detailed analysis of how the number of detectors on the headset can be minimized while maintaining the required beam alignment and providing a high-quality VR experience.
Existing wireless VR systems like Samsung Gear or Google Cardboard rely on smartphones and cannot process rich graphics content. VR systems like Zotac require the user to carry a processing unit in a backpack. High-quality VR systems like Oculus Rift and HTC Vive are not wireless and require HDMI and USB cable connections to a computer for rendering the rich graphics content. To provide high-quality wireless VR, 60 GHz communication is being explored as a possible solution. In @cite_3 , a proof-of-concept VR system with programming capability on the headset is proposed that uses WiGig modules for the wireless connection. The authors in @cite_28 developed a mmWave reflector that helps maintain the connection with a VR headset in the event of blockage. In @cite_10 , the authors proposed using free-space optics (FSO), a.k.a. OWC, to replace the HDMI cable on a VR headset. They proposed a mechanical steering mechanism for both a ceiling-mounted transmitter and a receiver on the headset to maintain line-of-sight alignment.
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_3" ], "mid": [ "", "2888940161", "2750802720" ], "abstract": [ "", "In near future, we expect the Virtual Reality (VR) headsets to be wireless and to be demanding data-rate up to Tbps, thus pushing RF technology to the limit. In this work, we explore the possibility of using steerable Free Space Optics (FSO) to create a wireless link between a wall-mounted static transceiver and a VR headset in motion with its user. We describe our architecture, develop a link emulator using off-the-shelf optical devices, analyze the response time of the system and its effect on tolerable motion of the VR headset. We describe an in-band feedback mechanism to track the laser beam between the transceivers in motion. We lay out additional challenges to tackle, so as to unleash the full potential of FSO communications for wearable devices like VR headset.", "All existing high-quality virtual reality (VR) systems (e.g., HTC Vive and Oculus Rift) are tethered, requiring an HDMI cable to connect the head mounted display (HMD) to a PC for rendering rich graphic contents. Such a tethered design not only limits user mobility but also imposes hazards to users. To get rid of the cable, \"cable replacement\" solutions have been proposed but without any programmability at the HMD side. In this paper, we explore how to build a programmable wireless high-quality VR system using commodity hardware. With programmability at both the PC side and the HMD side, our system provides extensibility and flexibility for exploring various new ideas and software-based techniques in high-quality VR. We present our system design, describe challenges, explore possible solutions to cut the wire, and compare the performance of different approaches for transmitting the high-volume graphics data over a wireless link. We share our experience and report preliminary findings. Experimental results show that building a wireless high-quality VR system is very challenging, and needs extensive effort on both the software and hardware sides in order to meet the performance requirements." ] }
1904.03513
2935468715
In this paper, we describe our submission to SemEval-2019 Task 4 on Hyperpartisan News Detection. Our system relies on a variety of engineered features originally used to detect propaganda. This is based on the assumption that biased messages are propagandistic in the sense that they promote a particular political cause or viewpoint. We trained a logistic regression model with features ranging from simple bag-of-words to vocabulary richness and text readability features. Our system achieved 72.9% accuracy on the test data that is annotated manually and 60.8% on the test data that is annotated with distant supervision. Additional experiments showed that significant performance improvements can be achieved with better feature pre-processing.
Similarly to , we believe that there is an inherent style in propaganda, regardless of the source publishing it. Many stylistic features have been proposed for authorship identification, i.e., the task of predicting whether a piece of text has been written by a particular author. One of the most successful representations for this task is character-level @math -grams @cite_21 , and they turn out to be among our most important stylistic features.
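Character-level n-grams are simple to extract: slide a window of n characters over the raw text and count occurrences. A minimal sketch (the choice n=3 and the absence of any preprocessing are illustrative, not the configuration of the cited system):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Count character-level n-grams over the raw text, a widely used
    stylistic feature for authorship attribution."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def ngram_freqs(text, n=3):
    """Relative n-gram frequencies, suitable as input features for a
    linear model such as logistic regression."""
    counts = char_ngrams(text, n)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}
```

For example, `char_ngrams("banana", 2)` yields counts of 2 for "an" and "na" and 1 for "ba"; the normalized frequencies can then be fed directly to a logistic regression model alongside the other engineered features.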
{ "cite_N": [ "@cite_21" ], "mid": [ "2126631960" ], "abstract": [ "Authorship attribution supported by statistical or computational methods has a long history starting from the 19th century and is marked by the seminal study of Mosteller and Wallace (1964) on the authorship of the disputed “Federalist Papers.” During the last decade, this scientific field has been developed substantially, taking advantage of research advances in areas such as machine learning, information retrieval, and natural language processing. The plethora of available electronic texts (e.g., e-mail messages, online forum messages, blogs, source code, etc.) indicates a wide variety of applications of this technology, provided it is able to handle short and noisy text from multiple candidate authors. In this article, a survey of recent advances of the automated approaches to attributing authorship is presented, examining their characteristics for both text representation and text classification. The focus of this survey is on computational requirements and settings rather than on linguistic or literary issues. We also discuss evaluation methodologies and criteria for authorship attribution studies and list open questions that will attract future work in this area. © 2009 Wiley Periodicals, Inc." ] }
1904.03513
2935468715
In this paper, we describe our submission to SemEval-2019 Task 4 on Hyperpartisan News Detection. Our system relies on a variety of engineered features originally used to detect propaganda. This is based on the assumption that biased messages are propagandistic in the sense that they promote a particular political cause or viewpoint. We trained a logistic regression model with features ranging from simple bag-of-words to vocabulary richness and text readability features. Our system achieved 72.9% accuracy on the test data that is annotated manually and 60.8% on the test data that is annotated with distant supervision. Additional experiments showed that significant performance improvements can be achieved with better feature pre-processing.
More details about research on fact-checking and the spread of fake news online can be found in @cite_25 @cite_10 @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_25" ], "mid": [ "2963567867", "2790166049", "2791544114" ], "abstract": [ "", "We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise 126,000 stories tweeted by 3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95% to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.", "The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials." ] }
1904.03797
2934198733
We present FoveaBox, an accurate, flexible and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize the predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations for each input image. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance of 42.1 AP on the standard COCO detection benchmark. Especially for objects with arbitrary aspect ratios, FoveaBox brings in significant improvement compared to the anchor-based detectors. More surprisingly, when it is challenged by the stretched testing images, FoveaBox shows great robustness and generalization ability to the changed distribution of bounding box shapes. The code will be made publicly available.
: Prior to the success of deep CNNs, widely used detection systems were based on combinations of independent components (HOG @cite_4 , SIFT @cite_20 , etc.). DPM @cite_40 and its variants helped extend object detectors to more general object categories and produced leading results for many years @cite_36 . The sliding-window approach was the dominant detection paradigm for searching for objects of interest in classic object detection frameworks.
{ "cite_N": [ "@cite_36", "@cite_40", "@cite_4", "@cite_20" ], "mid": [ "2031489346", "2168356304", "2161969291", "" ], "abstract": [ "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "" ] }
1904.03797
2934198733
We present FoveaBox, an accurate, flexible and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize the predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations for each input image. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance of 42.1 AP on the standard COCO detection benchmark. Especially for objects with arbitrary aspect ratios, FoveaBox brings in significant improvement compared to the anchor-based detectors. More surprisingly, when it is challenged by the stretched testing images, FoveaBox shows great robustness and generalization ability to the changed distribution of bounding box shapes. The code will be made publicly available.
: Modern object detectors are generally grouped into two factions: two-stage, proposal-driven detectors and one-stage, proposal-free methods. For two-stage detectors, the first stage generates a sparse set of object proposals, and the second stage classifies the proposals and refines their coordinates in a sliding-window manner. The effectiveness of this pipeline was first demonstrated by R-CNN @cite_30 , and it is widely adopted in later two-stage methods @cite_35 @cite_19 . In Faster R-CNN @cite_15 , the first stage (RPN) simultaneously predicts object bounds and objectness scores at each pre-defined sliding-window anchor with a light-weight network. Several attempts have been made to boost detector performance, including feature pyramids @cite_6 @cite_34 @cite_1 , multi-scale schemes @cite_39 @cite_33 , and object relations @cite_8 , etc.
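The coordinate refinement in two-stage detectors is usually learned as offsets from a reference box. A minimal sketch of the standard R-CNN-style box-regression parameterization (box representation and values are illustrative; real implementations operate on tensors of boxes):

```python
import math

def encode(box, ref):
    """Standard R-CNN box-regression targets: offsets of `box` relative to
    the reference box `ref`. Boxes are (cx, cy, w, h) tuples."""
    bx, by, bw, bh = box
    rx, ry, rw, rh = ref
    # Center offsets are normalized by the reference size; sizes use log ratios
    return ((bx - rx) / rw, (by - ry) / rh,
            math.log(bw / rw), math.log(bh / rh))

def decode(deltas, ref):
    """Invert encode(): apply predicted offsets to the reference box."""
    dx, dy, dw, dh = deltas
    rx, ry, rw, rh = ref
    return (rx + dx * rw, ry + dy * rh, rw * math.exp(dw), rh * math.exp(dh))
```

Normalizing by the reference size and using log-space for width/height keeps the regression targets roughly scale-invariant, which is why this parameterization appears throughout the R-CNN family.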
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_33", "@cite_8", "@cite_1", "@cite_6", "@cite_39", "@cite_19", "@cite_15", "@cite_34" ], "mid": [ "2102605133", "2179352600", "2903269730", "2964080601", "2888728082", "2565639579", "2768489488", "", "2613718673", "2962992847" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "This paper describes AutoFocus, an efficient multi-scale inference algorithm for deep-learning based object detectors. Instead of processing an entire image pyramid, AutoFocus adopts a coarse to fine approach and only processes regions which are likely to contain small objects at finer scales. This is achieved by predicting category agnostic segmentation maps for small objects at coarser scales, called FocusPixels. FocusPixels can be predicted with high recall, and in many cases, they only cover a small fraction of the entire image. To make efficient use of FocusPixels, an algorithm is proposed which generates compact rectangular FocusChips which enclose FocusPixels. The detector is only applied inside FocusChips, which reduces computation while processing finer scales. Different types of error can arise when detections from FocusChips of multiple scales are combined, hence techniques to correct them are proposed. AutoFocus obtains an mAP of 47.9% (68.3% at 50% overlap) on the COCO test-dev set while processing 6.4 images per second on a Titan X (Pascal) GPU. This is 2.5X faster than our multi-scale baseline detector and matches its mAP. The number of pixels processed in the pyramid can be reduced by 5X with a 1% drop in mAP. AutoFocus obtains more than 10 mAP gain compared to RetinaNet but runs at the same speed with the same ResNet-101 backbone.", "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.", "State-of-the-art object detectors usually learn multi-scale representations to get better results by employing feature pyramids. However, the current designs for feature pyramids are still inefficient to integrate the semantic information over different scales. In this paper, we begin by investigating current feature pyramids solutions, and then reformulate the feature pyramid construction as the feature reconfiguration process. Finally, we propose a novel reconfiguration architecture to combine low-level representations with high-level semantic features in a highly-nonlinear yet efficient way. In particular, our architecture which consists of global attention and local reconfigurations, is able to gather task-oriented features across different spatial locations and scales, globally and locally. Both the global attention and local reconfiguration are lightweight, in-place, and end-to-end trainable. Using this method in the basic SSD system, our models achieve consistent and significant boosts compared with the original model and its other variations, without losing real-time processing speed.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "An analysis of different techniques for recognizing and detecting objects under extreme scale variation is presented. Scale specific and scale invariant design of detectors are compared by training them with different configurations of input data. By evaluating the performance of different network architectures for classifying small objects on ImageNet, we show that CNNs are not robust to changes in scale. Based on this analysis, we propose to train and test detectors on the same scales of an image-pyramid. Since small and large objects are difficult to recognize at smaller and larger scales respectively, we present a novel training scheme called Scale Normalization for Image Pyramids (SNIP) which selectively back-propagates the gradients of object instances of different sizes as a function of the image scale. On the COCO dataset, our single model performance is 45.7% and an ensemble of 3 networks obtains an mAP of 48.3%. We use off-the-shelf ImageNet-1000 pre-trained models and only train with bounding box supervision. Our submission won the Best Student Entry in the COCO 2017 challenge. Code will be made available at http://bit.ly/2yXVg4c.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "Almost all of the current top-performing object detection networks employ region proposals to guide the search for object instances. State-of-the-art region proposal methods usually need several thousand proposals to get high recall, thus hurting the detection efficiency. Although the latest Region Proposal Network method gets promising detection accuracy with several hundred proposals, it still struggles in small-size object detection and precise localization (e.g., large IoU thresholds), mainly due to the coarseness of its feature maps. In this paper, we present a deep hierarchical network, namely HyperNet, for handling region proposal generation and object detection jointly. Our HyperNet is primarily based on an elaborately designed Hyper Feature which aggregates hierarchical feature maps first and then compresses them into a uniform space. The Hyper Features well incorporate deep but highly semantic, intermediate but really complementary, and shallow but naturally high-resolution features of the image, thus enabling us to construct HyperNet by sharing them both in generating proposals and detecting objects via an end-to-end joint training strategy. For the deep VGG16 model, our method achieves completely leading recall and state-of-the-art object detection accuracy on PASCAL VOC 2007 and 2012 using only 100 proposals per image. It runs with a speed of 5 fps (including all steps) on a GPU, thus having the potential for real-time processing." ] }
1904.03797
2934198733
We present FoveaBox, an accurate, flexible and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize the predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited to the design of anchors. Instead, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations for each input image. Without bells and whistles, FoveaBox achieves state-of-the-art single model performance of 42.1 AP on the standard COCO detection benchmark. Especially for objects with arbitrary aspect ratios, FoveaBox brings in significant improvement compared to the anchor-based detectors. More surprisingly, when it is challenged by the stretched testing images, FoveaBox shows great robustness and generalization ability to the changed distribution of bounding box shapes. The code will be made publicly available.
Compared to two-stage approaches, the one-stage pipeline skips object proposal generation and predicts bounding boxes and class scores in a single evaluation. Most top one-stage detectors rely on anchor boxes to enumerate the possible locations of target objects (e.g. SSD @cite_44 , DSSD @cite_13 , YOLOv2/v3 @cite_43 @cite_2 , and RetinaNet @cite_31 ). In CornerNet @cite_5 , the authors propose to detect an object bounding box as a pair of keypoints. CornerNet adopts the Associative Embedding @cite_25 technique to separate different instances. Some prior works share similarities with our work, and we discuss them in more detail later.
{ "cite_N": [ "@cite_44", "@cite_43", "@cite_2", "@cite_5", "@cite_31", "@cite_13", "@cite_25" ], "mid": [ "", "", "2796347433", "2886335102", "2884561390", "2579985080", "2555751471" ], "abstract": [ "", "", "We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL", "We propose CornerNet, a new approach to object detection where we detect an object bounding box as a pair of keypoints, the top-left corner and the bottom-right corner, using a single convolution neural network. By detecting objects as paired keypoints, we eliminate the need for designing a set of anchor boxes commonly used in prior single-stage detectors. In addition to our novel formulation, we introduce corner pooling, a new type of pooling layer that helps the network better localize corners. Experiments show that CornerNet achieves a 42.2 AP on MS COCO, outperforming all existing one-stage detectors.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. 
We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https: github.com facebookresearch Detectron.", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. 
A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets." ] }
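The focal loss that the RetinaNet abstract above describes — cross entropy reshaped so that well-classified examples are down-weighted — is compact enough to sketch directly. A minimal plain-Python version for the binary case (gamma=2 and alpha=0.25 are the defaults reported in the paper; the variable names are ours):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p is the predicted probability of the positive class and y is the
    label in {0, 1}. With gamma=0 and alpha=0.5 this reduces to half
    the ordinary cross-entropy loss."""
    p_t = p if y == 1 else 1.0 - p             # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# The (1 - p_t)**gamma factor makes easy, well-classified examples (the
# vast number of easy negatives in dense detection) nearly free, while
# hard, misclassified examples keep a large loss:
easy_negative = focal_loss(0.01, 0)   # confident and correct -> tiny loss
hard_positive = focal_loss(0.01, 1)   # confident and wrong   -> large loss
```

In a dense detector the loss is summed over all anchor (or grid) positions, so this down-weighting is what keeps the many easy negatives from overwhelming the few positives during training.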
1904.03704
2930880760
Purpose: Given the multitude of challenges surgeons face during mitral valve repair surgery, they should have a high confidence in handling of instruments and in the application of surgical techniques before they enter the operating room. Unfortunately, opportunities for surgical training of minimally invasive repair are very limited, leading to a situation where most surgeons undergo a steep learning curve while operating the first patients.
Existing work has already dealt with direct 3D-printing of the mitral valve using acrylonitrile butadiene styrene (ABS) plastic material @cite_8 . Such stiff material showed benefits for surgery planning and surgical teaching, but it is certainly inadequate for dexterity training. @cite_10 also used direct printing, but employed multi-material elastomeric TangoPlus materials (Stratasys, Eden Prairie, Minnesota, USA), which were compared to freshly harvested porcine leaflet tissue. All TangoPlus varieties had tensile elastic moduli below the maximum measured for porcine mitral valve tissue. Furthermore, mesh creation required substantial manual input, and the chordae tendineae were not included in the final rapid-prototyping models.
{ "cite_N": [ "@cite_10", "@cite_8" ], "mid": [ "2467610204", "2037834527" ], "abstract": [ "As catheter-based structural heart interventions become increasingly complex, the ability to effectively model patient-specific valve geometry as well as the potential interaction of an implanted device within that geometry will become increasingly important. Our aim with this investigation was to combine the technologies of high-spatial resolution cardiac imaging, image processing software, and fused multi-material 3D printing, to demonstrate that patient-specific models of the mitral valve apparatus could be created to facilitate functional evaluation of novel trans-catheter mitral valve repair strategies. Clinical 3D transesophageal echocardiography and computed tomography images were acquired for three patients being evaluated for a catheter-based mitral valve repair. Target anatomies were identified, segmented and reconstructed into 3D patient-specific digital models. For each patient, the mitral valve apparatus was digitally reconstructed from a single or fused imaging data set. Using multi-material 3D printing methods, patient-specific anatomic replicas of the mitral valve were created. 3D print materials were selected based on the mechanical testing of elastomeric TangoPlus materials (Stratasys, Eden Prairie, Minnesota, USA) and were compared to freshly harvested porcine leaflet tissue. The effective bending modulus of healthy porcine MV tissue was significantly less than the bending modulus of TangoPlus (p 0.95). We have demonstrated that patient-specific mitral valve models can be reconstructed from multi-modality imaging datasets and fabricated using the multi-material 3D printing technology and we provide two examples to show how catheter-based repair devices could be evaluated within specific patient 3D printed valve geometry. 
However, we recognize that the use of 3D printed models for the development of new therapies, or for specific procedural training has yet to be defined.", "Purpose Advances in mitral valve repair and adoption have been partly attributed to improvements in echocardiographic imaging technology. To educate and guide repair surgery further, we have developed a methodology for fast production of physical models of the valve using novel three-dimensional (3D) echocardiographic imaging software in combination with stereolithographic printing. Description Quantitative virtual mitral valve shape models were developed from 3D transesophageal echocardiographic images using software based on semiautomated image segmentation and continuous medial representation algorithms. These quantitative virtual shape models were then used as input to a commercially available stereolithographic printer to generate a physical model of the each valve at end systole and end diastole. Evaluation Physical models of normal and diseased valves (ischemic mitral regurgitation and myxomatous degeneration) were constructed. There was good correspondence between the virtual shape models and physical models. Conclusions It was feasible to create a physical model of mitral valve geometry under normal, ischemic, and myxomatous valve conditions using 3D printing of 3D echocardiographic data. Printed valves have the potential to guide surgical therapy for mitral valve disease." ] }
1904.03704
2930880760
Purpose: Given the multitude of challenges surgeons face during mitral valve repair surgery, they should have a high confidence in handling of instruments and in the application of surgical techniques before they enter the operating room. Unfortunately, opportunities for surgical training of minimally invasive repair are very limited, leading to a situation where most surgeons undergo a steep learning curve while operating the first patients.
@cite_0 @cite_9 proposed a method for creating 3D-printable molds, which can be filled with an elastic material such as silicone to create flexible and tear-resistant pediatric leaflet models. Beyond that, they compared molding against direct printing with flexible materials. Molding requires more time and labor than direct printing, but permits materials that better simulate real tissue and that are more economical at scale; direct printing of flexible material, moreover, usually requires expensive 3D printers and materials. Based on their personal experience, surgeons reported good tissue properties for the cast valves (realistic flexibility; cuts and holds sutures well without tearing) and considered the silicone models useful for surgical planning and training. However, their valve models did not include the subvalvular apparatus.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2591595642", "2770469499" ], "abstract": [ "PURPOSE: Patient-specific heart and valve models have shown promise as training and planning tools for heart surgery, but physically realistic valve models remain elusive. Available proprietary, simulation-focused heart valve models are generic adult mitral valves and do not allow for patient-specific modeling as may be needed for rare diseases such as congenitally abnormal valves. We propose creating silicone valve models from a 3D-printed plastic mold as a solution that can be adapted to any individual patient and heart valve at a fraction of the cost of direct 3D-printing using soft materials. METHODS: Leaflets of a pediatric mitral valve, a tricuspid valve in a patient with hypoplastic left heart syndrome, and a complete atrioventricular canal valve were segmented from ultrasound images. A custom software was developed to automatically generate molds for each valve based on the segmentation. These molds were 3D-printed and used to make silicone valve models. The models were designed with cylindrical rims of different sizes surrounding the leaflets, to show the outline of the valve and add rigidity. Pediatric cardiac surgeons practiced suturing on the models and evaluated them for use as surgical planning and training tools. RESULTS: Five out of six surgeons reported that the valve models would be very useful as training tools for cardiac surgery. In this first iteration of valve models, leaflets were felt to be unrealistically thick or stiff compared to real pediatric leaflets. A thin tube rim was preferred for valve flexibility. CONCLUSION: The valve models were well received and considered to be valuable and accessible tools for heart valve surgery training. Further improvements will be made based on surgeons’ feedback.", "Mastering the technical skills required to perform pediatric cardiac valve surgery is challenging in part due to limited opportunity for practice. 
Transformation of 3D echocardiographic (echo) images of congenitally abnormal heart valves to realistic physical models could allow patient-specific simulation of surgical valve repair. We compared materials, processes, and costs for 3D printing and molding of patient-specific models for visualization and surgical simulation of congenitally abnormal heart valves. Pediatric atrioventricular valves (mitral, tricuspid, and common atrioventricular valve) were modeled from transthoracic 3D echo images using semi-automated methods implemented as custom modules in 3D Slicer. Valve models were then both 3D printed in soft materials and molded in silicone using 3D printed “negative” molds. Using pre-defined assessment criteria, valve models were evaluated by congenital cardiac surgeons to determine suitability for simulation. Surgeon assessment indicated that the molded valves had superior material properties for the purposes of simulation compared to directly printed valves (p < 0.01). Patient-specific, 3D echo-derived molded valves are a step toward realistic simulation of complex valve repairs but require more time and labor to create than directly printed models. Patient-specific simulation of valve repair in children using such models may be useful for surgical training and simulation of complex congenital cases." ] }
1904.03704
2930880760
Purpose: Given the multitude of challenges surgeons face during mitral valve repair surgery, they should have a high confidence in handling of instruments and in the application of surgical techniques before they enter the operating room. Unfortunately, opportunities for surgical training of minimally invasive repair are very limited, leading to a situation where most surgeons undergo a steep learning curve while operating the first patients.
In a very recent medical review, @cite_17 describe general guidance for creating dynamic valve models usable in a flow simulator. The authors give an overview of the production pipeline (imaging, segmentation, 3D printing, material casting) and discuss viable options for each step. Moreover, they present an approach with two noteworthy features. First, they printed only the lower impression of the valve rather than a full mold: silicone was painted onto the printed surface in a thin layer, which then cured to the shape of the valve. Second, they incorporated braided nylon fishing line into the leaflets to mimic some strings of the chordae tendineae. With these steps, it seems difficult to obtain consistent results when multiple valves are produced from the same mold (e.g. two models could have different thicknesses or varying chordae attachments), and the phantoms do not appear to include papillary muscles.
{ "cite_N": [ "@cite_17" ], "mid": [ "2752964856" ], "abstract": [ "Medical imaging has advanced enormously over the last few decades, revolutionizing patient diagnostics and care. At the same time, additive manufacturing has emerged as a means of reproducing physical shapes and models previously not possible. In combination, they have given rise to 3-dimensional (3D) modeling, an entirely new technology for physicians. In an era in which 3D imaging has become a standard for aiding in the diagnosis and treatment of cardiac disease, this visualization now can be taken further by bringing the patient’s anatomy into physical reality as a model. The authors describe the generalized process of creating a model of cardiac anatomy from patient images and their experience creating patient-specific dynamic mitral valve models. This involves a combination of image processing software and 3D printing technology. In this article, the complexity of 3D modeling is described and the decision-making process for cardiac anesthesiologists is summarized. The management of cardiac disease has been altered with the emergence of 3D echocardiography, and 3D modeling represents the next paradigm shift." ] }
1904.03713
2952487631
Artificial intelligence is revolutionizing formal education, fueled by innovations in learning assessment, content generation, and instructional delivery. Informal, lifelong learning settings have been the subject of less attention. We provide a proof-of-concept for an embodied book discussion companion, designed to stimulate conversations with readers about particularly creative metaphors in fiction literature. We collect ratings from 26 participants, each of whom discuss Jane Austen's "Pride and Prejudice" with the robot across one or more sessions, and find that participants rate their interactions highly. This suggests that companion robots could be an interesting entryway for the promotion of lifelong learning and cognitive exercise in future applications.
Educational scenarios to which social robots have been deployed have primarily been formal settings with child learners @cite_9 @cite_11 @cite_8 @cite_6 @cite_24 @cite_13 @cite_27 . Our focus is on a different setting: informal, conversational lifelong learning. Lifelong learning is the process of acquiring knowledge and/or exercising cognitive faculties across the human lifespan, outside of traditional academic contexts. Research involving social robots in lifelong learning scenarios has been scarce, with most work deploying social robots to adult populations focusing instead on psychological or physical healthcare needs @cite_19 @cite_18 @cite_10 @cite_28 . However, @cite_2 designed a human-robot music guessing game to stimulate cognition in older adults suffering from dementia, and @cite_26 created a social robot to scaffold motivation in adult second-language learners. @cite_21 also explored second-language learning in adults, although their robot's behaviors were originally designed with children in mind. A common theme across these systems is the absence of open-ended conversation: all of them use buttons and multiple-choice answers as their input. The inability to converse naturally limits a robot's potential to engage in cognitively meaningful interactions, particularly when dealing with more subjective topics like literature or metaphor interpretation.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_28", "@cite_10", "@cite_9", "@cite_21", "@cite_6", "@cite_24", "@cite_19", "@cite_27", "@cite_2", "@cite_13", "@cite_11" ], "mid": [ "2140527237", "2809451070", "2192319054", "2793990128", "2761596890", "2809463637", "2592042377", "2001862949", "2809232049", "2898938475", "2110437072", "2096153636", "2291350754", "2204977270" ], "abstract": [ "Human-robot interaction (HRI) is now well enough understood to allow us to build useful systems that can function outside of the laboratory. We are studying long-term interaction in natural user environments and describe the implementation of a robot designed to help individuals effect behavior change while dieting. Our robotic weight loss coach is compared to a standalone computer and a paper log in a controlled study. We describe the software model used to create successful long-term HRI. We summarize the experimental design, analysis, and results of our study, the first where a sociable robot interacts with a user to achieve behavior change. Results show that participants track their calorie consumption and exercise for nearly twice as long when using the robot than with the other methods and develop a closer relationship with the robot. Both are indicators of longer-term success at weight loss and maintenance and show the effectiveness of sociable robots for long-term HRI.", "", "Effective tutoring requires personalization of the interaction to each student. Continuous and efficient assessment of the student's skills are a prerequisite for such personalization. We developed a Bayesian active-learning algorithm that continuously and efficiently assesses a child's word-reading skills and implemented it in a social robot. We then developed an integrated experimental paradigm in which a child plays a novel story-creation tablet game with the robot. 
The robot is portrayed as a younger peer who wishes to learn to read, framing the assessment of the child's word-reading skills as well as empowering the child. We show that our algorithm results in an accurate representation of the child's word-reading skills for a large age range, 4-8 year old children, and large initial reading skill range. We also show that employing child-specific assessment-based tutoring results in an age- and initial reading skill-independent learning, compared to random tutoring. Finally, our integrated system enables us to show that implementing the same learning algorithm on the robot's reading skills results in knowledge that is comparable to what the child thinks the robot has learned. The child's perception of the robot's knowledge is age-dependent and may facilitate an indirect assessment of the development of theory-of-mind.", "In this paper we present the results of a qualitative study with therapists to inform social robotics and human robot interaction (HRI) for engagement in rehabilitative therapies. Our results add to growing evidence that socially assistive robots (SARs) could play a role in addressing patients' low engagement with self-directed exercise programmes. Specifically, we propose how SARs might augment or offer more pro-active assistance over existing technologies such as smartphone applications, computer software and fitness trackers also designed to tackle this issue. In addition, we present a series of design implications for such SARs based on therapists' expert knowledge and best practices extracted from our results. This includes an initial set of SAR requirements and key considerations concerning personalised and adaptive interaction strategies.", "Lack of companionship, loneliness, and social isolation are often experienced by older adults diagnosed with clinical depression living independently within the community. 
Socially assistive robots (SARs), a relatively new concept within recreational therapy, may be one treatment modality that can address each one of these concerns. This exploratory study consisted of interviews with community mental health professionals, including a recreational therapist, to determine if they perceived SARs as an appropriate interdisciplinary clinical intervention for older adults diagnosed with clinical depression. Results indicated that SARs, especially those which can provide companionship and social interaction similar to animal assisted therapy, are an appropriate interdisciplinary intervention for this population and may have an impact on improving overall quality of life by decreasing loneliness and social isolation associated with clinical depression.", "LEGO Mindstorms robots are a popular educational tool for teaching programming concepts to young learners. However, learners working with these robots often lack sufficient feedback on their programs, which makes it difficult for them to reflect on domain concepts and may decrease their motivation. We see an opportunity to introduce feedback into LEGO Mindstorms programming environments by having the robot itself deliver feedback, leveraging research on learning companions to transform the programmable robot into a social actor. Our robot, ROBIN, provides learners with automated reflection prompts based on a domain model and the student’s current program, along with social encouragement based on a theory of instructional immediacy. We hypothesize that by having the robot itself provide cognitive and social feedback, students will both reflect more on their misconceptions and persist more with the activity. This paper describes the design and implementation of ROBIN and discusses how this approach can benefit students.", "In this paper, we present an approach to adaptive language tutoring in child-robot interaction. 
The approach is based on a dynamic probabilistic model that represents the interrelations between the learner's skills, her observed behaviour in tutoring interaction, and the tutoring action taken by the system. Being implemented in a robot language tutor, the model enables the robot tutor to trace the learner's knowledge and to decide which skill to teach next and how to address it in a game-like tutoring interaction. Results of an evaluation study are discussed demonstrating how participants in the adaptive tutoring condition successfully learned foreign language words. CCS Concepts. Computing methodologies → Probabilistic reasoning; Cognitive robotics;. Applied computing → Interactive learning environments;. Human-centered computing → Empirical studies in HCI;", "This article presents a novel robotic partner which children can teach handwriting. The system relies on the learning by teaching paradigm to build an interaction, so as to stimulate meta-cognition, empathy and increased self-esteem in the child user. We hypothesise that use of a humanoid robot in such a system could not just engage an unmotivated student, but could also present the opportunity for children to experience physically-induced benefits encountered during human-led handwriting interventions, such as motor mimicry. By leveraging simulated handwriting on a synchronised tablet display, a NAO humanoid robot with limited fine motor capabilities has been configured as a suitably embodied handwriting partner. Statistical shape models derived from principal component analysis of a dataset of adult-written letter trajectories allow the robot to draw purposefully deformed letters. By incorporating feedback from user demonstrations, the system is then able to learn the optimal parameters for the appropriate shape models. Preliminary in situ studies have been conducted with primary school classes to obtain insight into children's use of the novel system. 
Children aged 6-8 successfully engaged with the robot and improved its writing to a level which they were satisfied with. The validation of the interaction represents a significant step towards an innovative use for robotics which addresses a widespread and socially meaningful challenge in education.", "Driven by the latest technologies in artificial intelligence (e.g., natural language processing and emotion recognition), we design a novel robot system, called smart learning partner, to provide a more pleasurable learning experience and better motivate learners. The self-determination theory is used as the guideline to design its human-robot interaction. The large-scale deployment of SLP in local schools and families would bring both research and commercial opportunities.", "The ability to engage the user in a conversation and the credibility of the system are two fundamental characteristics of virtual coaches. In this paper, we present the architecture of a conversational e-coach for promoting healthy lifestyles in older age, developed in the context of the NESTORE H2020 EU project. The proposed system allows multiple access points to a conversational agent via different interaction modalities: a tangible companion that embodies the virtual coach will leverage voice and other non-verbal cues in the domestic environment, while a mobile app will integrate a text-based chat for a ubiquitous intervention. In both cases, the conversational agent will deliver personalized interventions based on behavior change models and will promote trust by means of emotionally rich conversations.", "In contrast to conventional teaching agents (including robots) that were designed to play the role of human teachers or caregivers, we propose the opposite scenario in which robots receive instruction or care from children. 
We hypothesize that by using this care-receiving robot, we may construct a new educational framework whose goal is to promote children's spontaneous learning by teaching through their teaching the robot. In this paper, we describe the introduction of a care-receiving robot into a classroom at an English language school for Japanese children (3--6 years of age) and then conduct an experiment to evaluate if the care-receiving robot can promote their learning using English verbs. The results suggest that the idea of a care-receiving robot is feasible and that the robot can help children learn new English verbs efficiently. In addition, we report on investigations into several forms of teaching performed by children, which were revealed through observations of the children, parent interviews, and other useful knowledge. These can be used to improve the design of care-receiving robots for educational purposes.", "Currently the 2 percent growth rate for the world's older population exceeds the 1.2 rate for the world's population as a whole. By 2050, the number of individuals over the age 85 is projected to be three times more than there is today. Most of these individuals will need physical, emotional, and cognitive assistance. In this paper, we present a new adaptive robotic system based on the socially assistive robotics (SAR) technology that tries to provide a customized help protocol through motivation, encouragements, and companionship to users suffering from cognitive changes related to aging and or Alzheimer's disease. Our results show that this approach can engage the patients and keep them interested in interacting with the robot, which, in turn, increases their positive behavior.", "To date, the majority of learning technologies only afford virtual interactions on desktops or tablets, despite evidence that students learn through physical manipulation of their environment. 
We implemented a tangible system that allows students to solve coordinate geometry problems by interacting in a physical space with digitally augmented devices, using a teachable agent framing. We describe our system and the results from a pilot involving students using our system to teach a virtual agent. Students used a variety of strategies to solve problems that included embodied behaviors, and the majority did feel they were teaching their agent. We discuss the implications of our findings with respect to the design of adaptive tangible teachable systems.", "Building on existing work on artificial tutors with human-like capabilities, we describe the EMOTE project approach to harnessing benefits of an artificial embodied tutor in a shared physical space. Embodied in robotic platforms or through virtual agents, EMOTE aims to capture some of the empathic and human elements characterising a traditional teacher. As such, empathy and engagement, abilities key to influencing student learning, are at the core of the EMOTE approach. We present non-verbal and adaptive dialogue challenges for such embodied tutors as a foundation for researchers investigating the potential for empathic tutors that will be accepted by students and teachers." ] }
1904.03713
2952487631
Artificial intelligence is revolutionizing formal education, fueled by innovations in learning assessment, content generation, and instructional delivery. Informal, lifelong learning settings have been the subject of less attention. We provide a proof-of-concept for an embodied book discussion companion, designed to stimulate conversations with readers about particularly creative metaphors in fiction literature. We collect ratings from 26 participants, each of whom discuss Jane Austen's "Pride and Prejudice" with the robot across one or more sessions, and find that participants rate their interactions highly. This suggests that companion robots could be an interesting entryway for the promotion of lifelong learning and cognitive exercise in future applications.
Although virtual avatars could implement the same methods as robots in most of these cases, they may fall short of achieving the same goals. Research has demonstrated that physically embodied robots elicit longer conversations and more positive perceptions than computer agents @cite_12 , and are better able to influence people than virtual avatars or videos of the same robots @cite_16 . In accordance with these findings, we implement our system using a physically embodied robot to maximize its anticipated utility.
{ "cite_N": [ "@cite_16", "@cite_12" ], "mid": [ "2245843004", "2082173922" ], "abstract": [ "The effects of physical embodiment and physical presence were explored through a survey of 33 experimental works comparing how people interacted with physical robots and virtual agents. A qualitative assessment of the direction of quantitative effects demonstrated that robots were more persuasive and perceived more positively when physically present in a user's environment than when digitally-displayed on a screen either as a video feed of the same robot or as a virtual character analog; robots also led to better user performance when they were collocated as opposed to shown via video on a screen. However, participants did not respond differently to physical robots and virtual agents when both were displayed digitally on a screen - suggesting that physical presence, rather than physical embodiment, characterizes people's responses to social robots. Implications for understanding psychological response to physical and virtual agents and for methodological design are discussed. Survey identified 33 works exploring user responses to physical robots and virtual agents. Robot agents had greater influence when physically present than telepresent. No differences were found between physical robots displayed on a screen and virtual agents that looked similar. Physical presence, but not physical embodiment alone, resulted in more favorable responses from participants.", "HRI researchers interested in social robots have made large investments in humanoid robots. There is still sparse evidence that people's responses to robots differ from their responses to computer agents, suggesting that agent studies might serve to test HRI hypotheses. 
To help us understand the difference between people's social interactions with an agent and a robot, we experimentally compared people's responses in a health interview with (a) a computer agent projected either on a computer monitor or life-size on a screen, (b) a remote robot projected life-size on a screen, or (c) a collocated robot in the same room. We found a few behavioral and large attitude differences across these conditions. Participants forgot more and disclosed least with the collocated robot, next with the projected remote robot, and then with the agent. They spent more time with the collocated robot and their attitudes were most positive toward that robot. We discuss tradeoffs for HRI research of using collocated robots, remote robots, and computer agents as proxies of robots." ] }
1904.03646
2931903779
Monte Carlo Tree Search (MCTS) algorithms perform simulation-based search to improve policies online. During search, the simulation policy is adapted to explore the most promising lines of play. MCTS has been used by state-of-the-art programs for many problems; however, a disadvantage of MCTS is that it estimates the values of states with Monte Carlo averages, stored in a search tree, and this does not scale to games with very high branching factors. We propose an alternative simulation-based search method, Policy Gradient Search (PGS), which adapts a neural network simulation policy online via policy gradient updates, avoiding the need for a search tree. In Hex, PGS achieves comparable performance to MCTS, and an agent trained using Expert Iteration with PGS was able to defeat MoHex 2.0, the strongest open-source Hex agent, in 9x9 Hex.
Previous works have applied model-free reinforcement learning algorithms to train networks to play Hex, albeit never trained tabula rasa. NeuroHex @cite_2 used deep Q-learning, while @cite_27 proposed several variants of policy gradient algorithms for alternating-move games. These networks are strong: they can win some games against MoHex 2.0 without test-time tree search, but they remain significantly weaker than MoHex. @cite_27 also showed that combining their neural network with MoHex 2.0 resulted in a tree search algorithm stronger than MoHex 2.0.
{ "cite_N": [ "@cite_27", "@cite_2" ], "mid": [ "2786478526", "2342981176" ], "abstract": [ "Policy gradient reinforcement learning has been applied to two-player alternate-turn zero-sum games, e.g., in AlphaGo, self-play REINFORCE was used to improve the neural net model after supervised learning. In this paper, we emphasize that two-player zero-sum games with alternating turns, which have been previously formulated as Alternating Markov Games (AMGs), are different from standard MDP because of their two-agent nature. We exploit the difference in associated Bellman equations, which leads to different policy iteration algorithms. As policy gradient method is a kind of generalized policy iteration, we show how these differences in policy iteration are reflected in policy gradient for AMGs. We formulate an adversarial policy gradient and discuss potential possibilities for developing better policy gradient methods other than self-play REINFORCE. The core idea is to estimate the minimum rather than the mean for the “critic”. Experimental results on the game of Hex show the modified Monte Carlo policy gradient methods are able to learn better pure neural net policies than the REINFORCE variants. To apply learned neural weights to multiple board sizes Hex, we describe a board-size independent neural net architecture. We show that when combined with search, using a single neural net model, the resulting program consistently beats MoHex 2.0, the state-of-the-art computer Hex player, on board sizes from 9×9 to 13×13.", "DeepMind's recent spectacular success in using deep convolutional neural nets and machine learning to build superhuman level agents --- e.g. for Atari games via deep Q-learning and for the game of Go via Reinforcement Learning --- raises many questions, including to what extent these methods will succeed in other domains. 
In this paper we consider DQL for the game of Hex: after supervised initialization, we use selfplay to train NeuroHex, an 11-layer CNN that plays Hex on the 13x13 board. Hex is the classic two-player alternate-turn stone placement game played on a rhombus of hexagonal cells in which the winner is whomever connects their two opposing sides. Despite the large action and state space, our system trains a Q-network capable of strong play with no search. After two weeks of Q-learning, NeuroHex achieves win-rates of 20.4% as first player and 2.1% as second player against a 1-second move version of MoHex, the current ICGA Olympiad Hex champion. Our data suggests further improvement might be possible with more training time." ] }
1904.03816
2928044548
We tackle the problem of automatic portrait matting on mobile devices. The proposed model is aimed at attaining real-time inference on mobile devices with minimal degradation of model performance. Our model MMNet, based on multi-branch dilated convolution with linear bottleneck blocks, outperforms the state-of-the-art model and is orders of magnitude faster. The model can be accelerated four times to attain 30 FPS on a Xiaomi Mi 5 device with a moderate increase in the gradient error. Under the same conditions, our model has an order of magnitude fewer parameters and is faster than Mobile DeepLabv3 while maintaining comparable performance. The accompanying implementation can be found at this https URL .
Many works on image matting have focused mainly on achieving higher accuracy rather than on real-time inference. Recently, however, researchers have shifted their focus to networks that support real-time inference. In particular, @cite_14 studied real-time portrait matting on mobile devices, which is directly comparable to our results.
{ "cite_N": [ "@cite_14" ], "mid": [ "2740994353" ], "abstract": [ "Image matting plays an important role in image and video editing. However, the formulation of image matting is inherently ill-posed. Traditional methods usually employ interaction to deal with the image matting problem with trimaps and strokes, and cannot run on the mobile phone in real-time. In this paper, we propose a real-time automatic deep matting approach for mobile devices. By leveraging the densely connected blocks and the dilated convolution, a light full convolutional network is designed to predict a coarse binary mask for portrait image. And a feathering block, which is edge-preserving and matting adaptive, is further developed to learn the guided filter and transform the binary mask into alpha matte. Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting with 15 fps. The experiments show that the proposed approach achieves comparable results with the state-of-the-art matting solvers." ] }
1904.03816
2928044548
We tackle the problem of automatic portrait matting on mobile devices. The proposed model is aimed at attaining real-time inference on mobile devices with minimal degradation of model performance. Our model MMNet, based on multi-branch dilated convolution with linear bottleneck blocks, outperforms the state-of-the-art model and is orders of magnitude faster. The model can be accelerated four times to attain 30 FPS on a Xiaomi Mi 5 device with a moderate increase in the gradient error. Under the same conditions, our model has an order of magnitude fewer parameters and is faster than Mobile DeepLabv3 while maintaining comparable performance. The accompanying implementation can be found at this https URL .
Since the introduction of fully convolutional networks (FCNs), they have been widely used in various segmentation tasks @cite_40 @cite_6 . Many semantic segmentation networks adopt an encoder-decoder structure @cite_0 . The proposed model uses skip connections to concatenate the output of an encoder block to the corresponding decoder block, which is known to improve the results of semantic pixel-wise segmentation tasks @cite_27 .
{ "cite_N": [ "@cite_0", "@cite_27", "@cite_40", "@cite_6" ], "mid": [ "2963881378", "1901129140", "2124592697", "" ], "abstract": [ "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. 
These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet/ .", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net .", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. 
To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "" ] }
1904.03816
2928044548
We tackle the problem of automatic portrait matting on mobile devices. The proposed model is aimed at attaining real-time inference on mobile devices with minimal degradation of model performance. Our model MMNet, based on multi-branch dilated convolution with linear bottleneck blocks, outperforms the state-of-the-art model and is orders of magnitude faster. The model can be accelerated four times to attain 30 FPS on a Xiaomi Mi 5 device with a moderate increase in the gradient error. Under the same conditions, our model has an order of magnitude fewer parameters and is faster than Mobile DeepLabv3 while maintaining comparable performance. The accompanying implementation can be found at this https URL .
The DeepLab architecture @cite_31 @cite_3 makes extensive use of the ASPP module, which addresses efficient upsampling and the handling of objects at multiple scales. Our model adopts a multi-branch structure from the Inception network @cite_37 , together with dilated convolutions of different dilation rates, which resembles the ASPP module.
{ "cite_N": [ "@cite_31", "@cite_37", "@cite_3" ], "mid": [ "2630837129", "2097117768", "" ], "abstract": [ "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3 system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "" ] }
1904.03632
2932069071
Humans can easily recognize the importance of people in social event images, and they always focus on the most important individuals. However, learning to learn the relation between people in an image, and inferring the most important person based on this relation, remains undeveloped. In this work, we propose a deep imPOrtance relatIon NeTwork (POINT) that combines both relation modeling and feature learning. In particular, we infer two types of interaction modules: the person-person interaction module that learns the interaction between people and the event-person interaction module that learns to describe how a person is involved in the event occurring in an image. We then estimate the importance relations among people from both interactions and encode the relation feature from the importance relations. In this way, POINT automatically learns several types of relation features in parallel, and we aggregate these relation features and the person's feature to form the importance feature for important people classification. Extensive experimental results show that our method is effective for important people detection and verify the efficacy of learning to learn relations for important people detection.
Recently, the importance of generic object categories and persons has attracted increased attention and has been studied by several researchers @cite_16 @cite_6 @cite_19 @cite_0 @cite_7 @cite_10 @cite_4 @cite_17 . @cite_4 focused on studying the relative importance between a pair of faces, either in the same image or in separate images, and developed a regression model for predicting the importance of faces. The authors designed customized features containing spatial and saliency information of faces for important face detection. In addition, @cite_18 trained an attention-based model with event recognition labels to assign attention importance scores to all detected individuals, measuring how relevant each was to basketball game videos. More specifically, they proposed using spatial and appearance features of persons, together with temporal information, to infer the importance scores of all detected persons. Recently, @cite_17 modeled all detected people in a hybrid interaction graph by organizing the interactions among persons sequentially and developed PersonRank, a graphical model that ranks people by inferring their importance scores from person-person interactions built on four types of features pretrained for other tasks.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_6", "@cite_0", "@cite_19", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2963274633", "1906419545", "2059952380", "2029163572", "2106229755", "2137084491", "2067816745", "2071711566", "2963920562" ], "abstract": [ "Multi-person event recognition is a challenging task, often with many people active in the scene but only a small subset contributing to an actual event. In this paper, we propose a model which learns to detect events in such videos while automatically \"attending\" to the people responsible for the event. Our model does not use explicit annotations regarding who or where those people are during training and testing. In particular, we track people in videos and use a recurrent neural network (RNN) to represent the track features. We learn time-varying attention weights to combine these features at each time-instant. The attended features are then processed using another RNN for event detection classification. Since most video datasets with multiple people are restricted to a small number of videos, we also collected a new basketball dataset comprising 257 basketball games with 14K event annotations corresponding to 11 event classes. Our model outperforms state-of-the-art methods for both event classification and detection on this new dataset. Additionally, we show that the attention mechanism is able to consistently localize the relevant players.", "People preserve memories of events such as birthdays, weddings, or vacations by capturing photos, often depicting groups of people. Invariably, some individuals in the image are more important than others given the context of the event. This paper analyzes the concept of the importance of individuals in group photographs. We address two specific questions - Given an image, who are the most important individuals in it? Given multiple images of a person, which image depicts the person in the most important role? 
We introduce a measure of importance of people in images and investigate the correlation between importance and visual saliency. We find that not only can we automatically predict the importance of people from purely visual cues, incorporating this predicted importance results in significant improvement in applications such as im2text (generating sentences that describe images of groups of people).", "How important is a particular object in a photograph of a complex scene? We propose a definition of importance and present two methods for measuring object importance from human observers. Using this ground truth, we fit a function for predicting the importance of each object directly from a segmented image; our function combines a large number of object-related and image-related features. We validate our importance predictions on 2,841 objects and find that the most important objects may be identified automatically. We find that object position and size are particularly informative, while a popular measure of saliency is not.", "We introduce an approach to image retrieval and auto-tagging that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results better preserve the aspects a human may find most worth mentioning. We evaluate our approach on three datasets using either keyword tags or natural language descriptions, and quantify results with both ground truth parameters as well as direct tests with human subjects. 
Our results show clear improvements over approaches that either rely on image features alone, or that use words and image features but ignore the implied importance cues. Overall, our work provides a novel way to incorporate high-level human perception of scenes into visual representations for enhanced image search.", "We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.", "The wide availability of large scale databases requires more efficient and scalable tools for data understanding and knowledge discovery. In this paper, we present a method to find important people who have appeared repeatedly in a certain time period from large news video databases. Specifically, we investigate two issues: how to group similar faces to find dominant groups and how to label these groups by the corresponding names for identification. 
These are challenging problems because, firstly, people can appear with large appearance variations, such as hair styles, illumination conditions and poses, that make comparing similar faces more difficult; secondly, the unknown number of people and their occurrence frequencies makes finding dominant and useful groups more complicated; and finally, the fact that in news video faces and names usually do not appear together can cause trouble in aligning faces and names. To handle the above problems, we propose using the relevant set correlation based clustering model, which can efficiently handle datasets of millions of objects represented in thousands or even millions of dimensions, to find groups of similar faces from the large and noisy face dataset. Then, in order to identify faces in clusters, names extracted from the transcripts are filtered and used to find the best correspondences by using methods developed in the statistical machine translation literature. Experiments on large video datasets containing hundreds of hours showed that our system can efficiently find important people by not only their appearance but also their identification.", "What do people care about in an image? To drive computational visual recognition toward more human-centric outputs, we need a better understanding of how people perceive and judge the importance of content in images. In this paper, we explore how a number of factors relate to human perception of importance. Proposed factors fall into 3 broad types: 1) factors related to composition, e.g. size, location, 2) factors related to semantics, e.g. category of object or scene, and 3) contextual factors related to the likelihood of attribute-object, or object-scene pairs. We explore these factors using what people describe as a proxy for importance. 
Finally, we build models to predict what will be described about an image given either known image content, or image content estimated automatically by recognition systems.", "We present a video summarization approach for egocentric or \"wearable\" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.", "Always, some individuals in images are more important or attractive than the others in some events such as presentation, basketball game or speech. 
However, it is challenging to find important people among all individuals in an image directly based on their spatial or appearance information, due to the existence of diverse variations of pose, action, appearance of persons and various changes of occasions. We overcome this challenge by constructing a multiple HyperInteraction Graph that treats each individual in an image as a node and inferring the most active node from the interactions estimated by using various types of cues. We model a pairwise interaction between people as an edge message communicated between nodes, resulting in a bidirectional pairwise-interaction graph. To enrich the person-person interaction estimation, we further introduce a unidirectional hyper-interaction graph that models the consensus of interactions between a focal person and any person in his/her local region. Finally, we modify the PageRank algorithm to infer the activeness of people on the multiple Hybrid-Interaction Graph (HIG), the union of the pairwise-interaction and hyper-interaction graphs, and we call our algorithm the PersonRank. In order to provide public datasets for evaluation, we have contributed a new dataset called Multi-scene Important People Image Dataset and gathered a NCAA Basketball Image Dataset from sports game sequences. We have demonstrated that the proposed PersonRank outperforms related methods clearly and substantially. Our code and datasets are available at https://weihonglee.github.io/Projects/PersonRank.htm." ] }
1904.03632
2932069071
Humans can easily recognize the importance of people in social event images, and they always focus on the most important individuals. However, learning to learn the relation between people in an image, and inferring the most important person based on this relation, remains undeveloped. In this work, we propose a deep imPOrtance relatIon NeTwork (POINT) that combines both relation modeling and feature learning. In particular, we infer two types of interaction modules: the person-person interaction module that learns the interaction between people and the event-person interaction module that learns to describe how a person is involved in the event occurring in an image. We then estimate the importance relations among people from both interactions and encode the relation feature from the importance relations. In this way, POINT automatically learns several types of relation features in parallel, and we aggregate these relation features and the person's feature to form the importance feature for important people classification. Extensive experimental results show that our method is effective for important people detection and verify the efficacy of learning to learn relations for important people detection.
Relation modeling is not limited to important people detection and has broad applications, such as object detection @cite_8 , AI gaming @cite_3 , image captioning @cite_14 , video classification @cite_11 , and few-shot recognition @cite_13 . Related to our method, @cite_8 proposed adapting the attention module by embedding a new geometric weight and applying it in a typical object detection CNN model to enhance the features for object classification and duplicate removal. @cite_3 exploited the attention module to iteratively identify the relations between entities in a scene and to guide a model-free policy in a novel navigation and planning task called Box-World.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_3", "@cite_13", "@cite_11" ], "mid": [ "1514535095", "2964080601", "2807340089", "2964105864", "" ], "abstract": [ "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.", "We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning. 
It uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy. Our results show that in a novel navigation and planning task called Box-World, our agent finds interpretable solutions that improve upon baselines in terms of sample complexity, ability to generalize to more complex scenes than experienced during training, and overall performance. In the StarCraft II Learning Environment, our agent achieves state-of-the-art performance on six mini-games -- surpassing human grandmaster performance on four. By considering architectural inductive biases, our work opens new directions for overcoming important, but stubborn, challenges in deep RL.", "We present a conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only few examples from each. Our method, called the Relation Network (RN), is trained end-to-end from scratch. During meta-learning, it learns to learn a deep distance metric to compare a small number of images within episodes, each of which is designed to simulate the few-shot setting. Once trained, a RN is able to classify images of new classes by computing relation scores between query images and the few examples of each new class without further updating the network. Besides providing improved performance on few-shot learning, our framework is easily extended to zero-shot learning. Extensive experiments on five benchmarks demonstrate that our simple approach provides a unified and effective approach for both of these two tasks.", "" ] }
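The relation modules referenced in this record (object relation module, relational RL agent) share a common core: scaled dot-product self-attention over entity features. The following is a minimal, generic sketch of that core operation, not any specific paper's module; the random projections stand in for learned weights, and all shapes are illustrative.

```python
import numpy as np

def relation_module(features, d_k=4, seed=0):
    """One attention head over a set of entities: each entity's output
    is an attention-weighted sum of projected features of all entities."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    # Random query/key/value projections stand in for learned weights.
    Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
    Q, K, V = features @ Wq, features @ Wk, features @ Wv
    # Scaled dot-product attention logits, then a row-wise softmax.
    logits = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights

feats = np.arange(12, dtype=float).reshape(3, 4)  # 3 entities, 4-dim each
out, attn = relation_module(feats)
# Each row of `attn` is a distribution over the other entities.
```

Stacking such a module and iterating it over the entity set is what lets these models reason about pairwise relations without explicit graph supervision.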
1904.03498
2958885478
Traditional 3D Convolutional Neural Networks (CNNs) are computationally expensive, memory intensive, prone to overfit, and most importantly, there is a need to improve their feature learning capabilities. To address these issues, we propose the Rectified Local Phase Volume (ReLPV) block, an efficient alternative to the standard 3D convolutional layer. The ReLPV block extracts the phase in a 3D local neighborhood (e.g., 3x3x3) of each position of the input map to obtain the feature maps. The phase is extracted by computing the 3D Short Term Fourier Transform (STFT) at multiple fixed low frequency points in the 3D local neighborhood of each position. These feature maps at different frequency points are then linearly combined after passing them through an activation function. The ReLPV block provides significant parameter savings of at least 3^3 to 13^3 times compared to the standard 3D convolutional layer with filter sizes 3x3x3 to 13x13x13, respectively. We show that the feature learning capabilities of the ReLPV block are significantly better than those of the standard 3D convolutional layer. Furthermore, it produces consistently better results across different 3D data representations. We achieve state-of-the-art accuracy on the volumetric ModelNet10 and ModelNet40 datasets while utilizing only 11% of the parameters of the current state-of-the-art. We also improve the state-of-the-art on the UCF-101 split-1 action recognition dataset by 5.68% (when trained from scratch) while using only 15% of the parameters of the state-of-the-art. The project webpage is available at this https URL.
Recently, 2D CNNs have achieved state-of-the-art results on most computer vision problems @cite_45 . They have also made significant progress in complementary areas such as network compression @cite_16 @cite_25 , binarization @cite_28 @cite_35 @cite_24 @cite_9 , quantization @cite_50 @cite_6 , and regularization @cite_0 @cite_39 @cite_15 @cite_30 . Not surprisingly, there have been many recent attempts to extend this success to 3D CNNs, e.g., for video classification @cite_46 , 3D object recognition @cite_38 @cite_19 and MRI volume segmentation @cite_51 @cite_13 . Unfortunately, 3D CNNs are computationally expensive and require large memory and disk space. Furthermore, they overfit very easily owing to the large number of parameters involved. Therefore, there has been recent interest in more efficient variants of 3D CNNs.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_38", "@cite_13", "@cite_28", "@cite_46", "@cite_9", "@cite_6", "@cite_39", "@cite_24", "@cite_0", "@cite_19", "@cite_45", "@cite_50", "@cite_15", "@cite_16", "@cite_51", "@cite_25" ], "mid": [ "2962980542", "2260663238", "2211722331", "2464708700", "2963114950", "2728972335", "2963551763", "2524428287", "2963743626", "2300242332", "2962684187", "2511691466", "", "", "2583938035", "2612445135", "2962914239", "" ], "abstract": [ "Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.", "", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. 
Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.", "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. 
In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.", "We propose local binary convolution (LBC), an efficient alternative to convolutional layers in standard convolutional neural networks (CNN). The design principles of LBC are motivated by local binary patterns (LBP). 
The LBC layer comprises a set of fixed sparse pre-defined binary convolutional filters that are not updated during the training process, a non-linear activation function and a set of learnable linear weights. The linear weights combine the activated filter responses to approximate the corresponding activated filter responses of a standard convolutional layer. The LBC layer affords significant parameter savings, 9x to 169x in the number of learnable parameters compared to a standard convolutional layer. Furthermore, the sparse and binary nature of the weights also results in up to 9x to 169x savings in model size compared to a standard convolutional layer. We demonstrate both theoretically and experimentally that our local binary convolution layer is a good approximation of a standard convolutional layer. Empirically, CNNs with LBC layers, called local binary convolutional neural networks (LBCNN), achieve performance parity with regular CNNs on a range of visual datasets (MNIST, SVHN, CIFAR-10, and ImageNet) while enjoying significant computational savings.", "We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well, which enables gradient computation using only bit-wise operations. 
Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online.", "Batch Normalization (BN) is capable of accelerating the training of deep models by centering and scaling activations within mini-batches. In this work, we propose Decorrelated Batch Normalization (DBN), which not just centers and scales activations but whitens them. We explore multiple whitening techniques, and find that PCA whitening causes a problem we call stochastic axis swapping, which is detrimental to learning. We show that ZCA whitening does not suffer from this problem, permitting successful learning. DBN retains the desirable qualities of BN and further improves BN's optimization efficiency and generalization ability. We design comprehensive experiments to show that DBN can improve the performance of BN on multilayer perceptrons and convolutional neural networks. Furthermore, we consistently improve the accuracy of residual networks on CIFAR-10, CIFAR-100, and ImageNet.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations (in terms of number of the high precision operations) and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. 
Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. Our code is available at: http://allenai.org/plato/xnornet.", "One major challenge in training Deep Neural Networks is preventing overfitting. Many techniques such as data augmentation and novel regularizers such as Dropout have been proposed to prevent overfitting without requiring a massive amount of training data. In this work, we propose a new regularizer called DeCov which leads to significantly reduced overfitting (as indicated by the difference between train and val performance), and better generalization. Our regularizer encourages diverse or non-redundant representations in Deep Neural Networks by minimizing the cross-covariance of hidden activations. This simple intuition has been explored in a number of past works but surprisingly has never been applied as a regularizer in supervised learning. Experiments across a range of datasets and network architectures show that this loss always reduces overfitting while almost always maintaining or increasing generalization performance and often improving performance over Dropout.", "When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. 
Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the state of the art for object classification.", "", "", "Deep convolutional networks have achieved successful performance in the data mining field. However, training large networks still remains a challenge, as the training data may be insufficient and the model can easily get overfitted. Hence the training process is usually combined with a model regularization. Typical regularizers include weight decay, Dropout, etc. In this paper, we propose a novel regularizer, named Structured Decorrelation Constraint (SDC), which is applied to the activations of the hidden layers to prevent overfitting and achieve better generalization. SDC impels the network to learn structured representations by grouping the hidden units and encouraging the units within the same group to have strong connections during the training procedure. Meanwhile, it forces the units in different groups to learn non-redundant representations by minimizing the cross-covariance between them. Compared with Dropout, SDC reduces the co-adaptions between the hidden units in an explicit way. Besides, we propose a novel approach called Reg-Conv that can help SDC to regularize the complex convolutional layers. Experiments on extensive datasets show that SDC significantly reduces overfitting and yields very meaningful improvements on classification performance (on CIFAR-10 a 6.22% accuracy promotion and on CIFAR-100 a 9.63% promotion).", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. 
MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.", "" ] }
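The ReLPV idea in this record, extracting the phase of a local 3D STFT coefficient at fixed low-frequency points, can be illustrated with a toy NumPy loop. This is an assumption-laden sketch (a single frequency point, a naive triple loop, an arbitrary window size), not the paper's vectorized layer.

```python
import numpy as np

def local_phase_3d(volume, freq, size=3):
    """Phase of the 3D short-term Fourier coefficient at frequency
    vector `freq`, computed in a (size x size x size) window around
    every interior voxel (valid convolution, no padding)."""
    r = size // 2
    # Complex exponential basis over the local window offsets.
    offs = np.arange(-r, r + 1)
    z, y, x = np.meshgrid(offs, offs, offs, indexing="ij")
    basis = np.exp(-2j * np.pi * (freq[0]*z + freq[1]*y + freq[2]*x) / size)
    D, H, W = volume.shape
    out = np.empty((D - 2*r, H - 2*r, W - 2*r))
    for i in range(r, D - r):
        for j in range(r, H - r):
            for k in range(r, W - r):
                win = volume[i-r:i+r+1, j-r:j+r+1, k-r:k+r+1]
                # Inner product with the basis gives one STFT coefficient;
                # keep only its phase, as the ReLPV block does.
                out[i-r, j-r, k-r] = np.angle((win * basis).sum())
    return out

vol = np.random.default_rng(1).random((6, 6, 6))
phase = local_phase_3d(vol, freq=(1, 0, 0))  # one low-frequency point
```

In the actual block several such frequency points are used, and the resulting phase maps are passed through a nonlinearity and combined with learnable linear weights, which is where the parameter savings over a full 3D convolution come from.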
1904.03693
1512719907
We present a legged motion planning approach for quadrupedal locomotion over challenging terrain. We decompose the problem into body action planning and footstep planning. We use a lattice representation together with a set of predefined body movement primitives for computing a body action plan. The lattice representation allows us to plan versatile movements that ensure feasibility for every possible plan. To this end, we propose a set of rules that define the footstep search regions and footstep sequence given a body action. We use Anytime Repairing A∗ (ARA∗) search, which guarantees bounded-suboptimal plans. Our main contribution is a planning approach that generates on-line versatile movements. Experimental trials demonstrate the performance of our planning approach in a set of challenging terrain conditions. The terrain information and plans are computed on-line and on-board.
Natural locomotion over rough terrain requires simultaneous computation of footstep sequences, body movements and locomotion behaviors (coupled planning) @cite_1 @cite_10 @cite_13 @cite_12 . One of the main problems with such approaches is that the search space quickly grows and searching becomes infeasible, especially for systems that require on-line solutions. In contrast, we can decompose the planning and control problem into a set of sub-problems, following a decoupled planning strategy. For example, the body path and the footstep planners can be separated, thus reducing the search space for each component @cite_16 @cite_11 @cite_8 . This can reduce the computation time at the expense of limiting the planning capabilities that are sometimes required for extremely rough terrain. There are two main approaches to decoupled planning: @cite_2 @cite_15 @cite_11 and @cite_14 @cite_16 . These approaches find a solution in motion space, which defines the possible motion of the robot.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_10", "@cite_1", "@cite_2", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2077673559", "2104588723", "2042408133", "2295899596", "2142468282", "2137496616", "2144587497", "2101340954", "276848478", "2172037318" ], "abstract": [ "We present a novel approach to legged locomotion over rough terrain that is thoroughly rooted in optimization. This approach relies on a hierarchy of fast, anytime algorithms to plan a set of footholds, along with the dynamic body motions required to execute them. Components within the planning framework coordinate to exchange plans, cost-to-go estimates, and “certificates” that ensure the output of an abstract high-level planner can be realized by deeper layers of the hierarchy. The burden of careful engineering of cost functions to achieve desired performance is substantially mitigated by a simple inverse optimal control technique. Robustness is achieved by real-time re-planning of the full trajectory, augmented by reflexes and feedback control. We demonstrate the successful application of our approach in guiding the LittleDog quadruped robot over a variety of rough terrains.", "We address the problem of foothold selection in robotic legged locomotion over very rough terrain. The difficulty of the problem we address here is comparable to that of human rock-climbing, where foot/hand-hold selection is one of the most critical aspects. Previous work in this domain typically involves defining a reward function over footholds as a weighted linear combination of terrain features. However, a significant amount of effort needs to be spent in designing these features in order to model more complex decision functions, and hand-tuning their weights is not a trivial task. We propose the use of terrain templates, which are discretized height maps of the terrain under a foothold on different length scales, as an alternative to manually designed features. 
We describe an algorithm that can simultaneously learn a small set of templates and a foothold ranking function using these templates, from expert-demonstrated footholds. Using the LittleDog quadruped robot, we experimentally show that the use of terrain templates can produce complex ranking functions with higher performance than standard terrain features, and improved generalization to unseen terrain.", "We present a motion synthesis framework capable of producing a wide variety of important human behaviors that have rarely been studied, including getting up from the ground, crawling, climbing, moving heavy objects, acrobatics (hand-stands in particular), and various cooperative actions involving two characters and their manipulation of the environment. Our framework is not specific to humans, but applies to characters of arbitrary morphology and limb configuration. The approach is fully automatic and does not require domain knowledge specific to each behavior. It also does not require pre-existing examples or motion capture data. At the core of our framework is the contact-invariant optimization (CIO) method we introduce here. It enables simultaneous optimization of contact and behavior. This is done by augmenting the search space with scalar variables that indicate whether a potential contact should be active in a given phase of the movement. These auxiliary variables affect not only the cost function but also the dynamics (by enabling and disabling contact forces), and are optimized together with the movement trajectory. Additional innovations include a continuation scheme allowing helper forces at the potential contacts rather than the torso, as well as a feature-based model of physics which is particularly well-suited to the CIO framework. 
We expect that CIO can also be used with a full physics model, but leave that extension for future work.", "We present a method for smoothing discontinuous dynamics involving contact and friction, thereby facilitating the use of local optimization techniques for control. The method replaces the standard Linear Complementarity Problem with a Stochastic Linear Complementarity Problem. The resulting dynamics are continuously differentiable, and the resulting controllers are robust to disturbances. We demonstrate our method on a simulated 6-dimensional manipulation task, which involves a finger learning to spin an anchored object by repeated flicking.", "This paper deals with the motion planning of a poly-articulated robotic system for which support contacts are allowed to occur between any part of the body and any part of the environment. Starting with a description of the environment and of a target, it computes a sequence of postures that allow our system to reach its target. We describe a very generic architecture of this planner, highly modular, as well as a first implementation of it. We then present our results, both simulations and real experiments, for a simple grasping task using the HRP-2 humanoid robot.", "Motion planning problems encountered in manipulation and legged locomotion have a distinctive multi-modal structure, where the space of feasible configurations consists of intersecting submanifolds, often of different dimensionalities. Such a feasible space does not possess expansiveness, a property that characterizes whether planning queries can be solved efficiently with traditional probabilistic roadmap (PRM) planners. In this paper we present a new PRM-based multi-modal planning algorithm for problems where the number of intersecting manifolds is finite. We also analyze the completeness properties of this algorithm. 
More specifically, we show that the algorithm converges quickly when each submanifold is individually expansive and establish a bound on the expected running time in that case. We also present an incremental variant of the algorithm that has the same convergence properties, but works better for problems with a large number of submanifolds by considering subsets of submanifolds likely to contain a solution path. These algorithms are demonstrated in geometric examples and in a legged locomotion planner.", "Legged robots have the potential to navigate a much larger variety of terrain than their wheeled counterparts. In this paper we present a hierarchical control architecture that enables a quadruped, the \"LittleDog\" robot, to walk over rough terrain. The controller consists of a high-level planner that plans a set of footsteps across the terrain, a low-level planner that plans trajectories for the robot's feet and center of gravity (COG), and a low-level controller that tracks these desired trajectories using a set of closed-loop mechanisms. We conduct extensive experiments to verify that the controller is able to robustly cross a wide variety of challenging terrains, climbing over obstacles nearly as tall as the robot's legs. In addition, we highlight several elements of the controller that we found to be particularly crucial for robust locomotion, and which are applicable to quadruped robots in general. In such cases we conduct empirical evaluations to test the usefulness of these elements.", "Direct methods for trajectory optimization are widely used for planning locally optimal trajectories of robotic systems. Many critical tasks, such as locomotion and manipulation, often involve impacting the ground or objects in the environment. Most state-of-the-art techniques treat the discontinuous dynamics that result from impacts as discrete modes and restrict the search for a complete path to a specified sequence through these modes. 
Here we present a novel method for trajectory planning of rigid-body systems that contact their environment through inelastic impacts and Coulomb friction. This method eliminates the requirement for a priori mode ordering. Motivated by the formulation of multi-contact dynamics as a Linear Complementarity Problem for forward simulation, the proposed algorithm poses the optimization problem as a Mathematical Program with Complementarity Constraints. We leverage Sequential Quadratic Programming to naturally resolve contact constraint forces while simultaneously optimizing a trajectory that satisfies the complementarity constraints. The method scales well to high-dimensional systems with large numbers of possible modes. We demonstrate the approach on four increasingly complex systems: rotating a pinned object with a finger, simple grasping and manipulation, planar walking with the Spring Flamingo robot, and high-speed bipedal running on the FastRunner platform.", "To plan dynamic, whole-body motions for robots, one conventionally faces the choice between a complex, full-body dynamic model containing every link and actuator of the robot, or a highly simplified model of the robot as a point mass. In this paper we explore a powerful middle ground between these extremes. We present an approach to generate whole-body motions using a simple dynamics model, which enforces that the linear and angular momentum of the robot be consistent with the external wrenches on the robot, and a full-body kinematics model that enforces rich geometric constraints, such as end-effector positioning or collision avoidance. We obtain a trajectory for the robot and profiles of contact wrenches by solving a nonlinear optimization problem (NLP). We further demonstrate that we can plan without pre-specifying the contact sequence by exploiting the complementarity conditions between contact forces and contact distance. 
We demonstrate that this algorithm is capable of generating highly-dynamic motion plans with examples of a humanoid robot negotiating obstacle course elements and gait optimization for a quadrupedal robot.", "We present a search-based planning approach for controlling a quadrupedal robot over rough terrain. Given a start and goal position, we consider the problem of generating a complete joint trajectory that will result in the legged robot successfully moving from the start to the goal. We decompose the problem into two main phases: an initial global planning phase, which results in a footstep trajectory; and an execution phase, which dynamically generates a joint trajectory to best execute the footstep trajectory. We show how R* search can be employed to generate high-quality global plans in the high-dimensional space of footstep trajectories. Results show that the global plans coupled with the joint controller result in a system robust enough to deal with a variety of terrains." ] }
1904.03563
2927809969
We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and/or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storage of the Hessian.
There is considerable recent work on algorithms for unconstrained smooth nonconvex optimization that have optimal worst-case iteration complexity for finding points that satisfy approximate first- and second-order optimality conditions. When applied to twice Lipschitz continuously differentiable functions, classical Newton-trust-region schemes @cite_20 require at most @math iterations @cite_23 to find a point satisfying these conditions. However, for problems of this class, the optimal iteration complexity for finding a first-order optimal point is @math @cite_18 . This iteration complexity was first achieved by cubic regularization of Newton's method when @math @cite_10 . Since 2016, numerous other algorithms have also been proposed that match this iteration bound; see for example @cite_8 @cite_25 @cite_26 @cite_6 @cite_12 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_8", "@cite_6", "@cite_23", "@cite_12", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2766533971", "2398500126", "2623997598", "", "2082877410", "2530585400", "2009941369", "2156005216", "" ], "abstract": [ "We prove lower bounds on the complexity of finding @math -stationary points (points @math such that @math ) of smooth, high-dimensional, and potentially non-convex functions @math . We consider oracle-based complexity measures, where an algorithm is given access to the value and all derivatives of @math at a query point @math . We show that for any (potentially randomized) algorithm @math , there exists a function @math with Lipschitz @math th order derivatives such that @math requires at least @math queries to find an @math -stationary point. Our lower bounds are sharp to within constants, and they show that gradient descent, cubic-regularized Newton's method, and generalized @math th order regularization are worst-case optimal within their natural function classes.", "We propose a trust region algorithm for solving nonconvex smooth optimization problems. For any ε̄ ∈ (0,∞), the algorithm requires at most O(ε̄^{-3/2}) iterations, function evaluations, and derivative evaluations to drive the norm of the gradient of the objective function below any ε ∈ (0, ε̄]. This improves upon the O(ε^{-2}) bound known to hold for some other trust region algorithms and matches the O(ε^{-3/2}) bound for the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, entitled TRACE, follows a trust region framework, but employs modified step acceptance criteria and a novel trust region update mechanism that allow the algorithm to achieve such a worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under similar assumptions as for other trust region algorithms. 
We also prove a worst-case upper bound on the number of iterations, function evaluations, and derivative evaluations that the algorithm requires to obtain an approximate second-order stationary point.", "Cubic-regularization and trust-region methods with worst-case first-order complexity @math and worst-case second-order complexity @math have been developed in the last few years. In this paper it is proved that the same complexities are achieved by means of a quadratic-regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction based descent. Asymptotic convergence and order of convergence results are also presented. Finally, some numerical experiments comparing the new algorithm with a well-established quadratic regularization method are shown.", "", "This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and (2010) [3] show that at most O(ε^{-3}) iterations may have to be performed for finding an iterate which is within ε of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm, which only uses one-dimensional global optimization of the cubic model and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods.", "In a recent paper, we introduced a trust-region method with variable norms for unconstrained minimization; we proved standard asymptotic convergence results, and we discussed the impact of this method in global optimization. 
Here we will show that, with a simple modification with respect to the sufficient descent condition and replacing the trust-region approach with a suitable cubic regularization, the complexity of this method for finding approximate first-order stationary points is O(ε^{-3/2}). We also prove a complexity result with respect to second-order stationarity. Some numerical experiments are also presented to illustrate the effect of the modification on practical performance.", "In this paper, we provide theoretical analysis for a cubic regularization of Newton method as applied to unconstrained minimization problem. For this scheme, we prove general local convergence results. However, the main contribution of the paper is related to global worst-case complexity bounds for different problem classes including some nonconvex cases. It is shown that the search direction can be computed by standard linear algebra technique.", "An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA 12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. 
Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.", "" ] }
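As an aside for readers of this entry, the cubic-regularization step analyzed in the papers above can be sketched concretely. The following is a minimal illustration, not taken from any of the cited papers: it solves the cubic model min_p g@p + 0.5*p@H@p + (sigma/3)*||p||^3 through its secular equation (H + lam*I) p = -g with lam = sigma*||p||, by bisection on lam (the so-called "hard case" of the subproblem is ignored), and then iterates the step on a toy nonconvex function.

```python
import numpy as np

def cubic_reg_step(g, H, sigma):
    """One cubic-regularized Newton step: minimize the model
    g@p + 0.5*p@H@p + (sigma/3)*||p||^3.  The minimizer satisfies
    (H + lam*I) p = -g with lam = sigma*||p||, lam >= max(0, -lambda_min(H)),
    so we bisect on lam (the 'hard case' is ignored in this sketch)."""
    n = len(g)
    lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    def norm_p(lam):
        return np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g))
    hi = lo + 1.0
    while norm_p(hi) > hi / sigma:   # grow until ||p(hi)|| <= hi/sigma
        hi *= 2.0
    for _ in range(100):             # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if norm_p(mid) > mid / sigma:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(H + lam * np.eye(n), -g)

# demo on a toy nonconvex function f(x, y) = x^4/4 - x^2/2 + y^2/2,
# with analytic gradient and Hessian; its minima are at x = +/-1, y = 0
def f_grad_hess(z):
    g = np.array([z[0]**3 - z[0], z[1]])
    H = np.diag([3.0 * z[0]**2 - 1.0, 1.0])
    return g, H

x = np.array([0.1, 1.0])             # start where the Hessian is indefinite
for _ in range(50):
    g, H = f_grad_hess(x)
    x = x + cubic_reg_step(g, H, sigma=1.0)
```

Starting near the indefinite region, the iterates settle at one of the minima; in practice ARC-style methods adapt sigma rather than fixing it as done here.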
1904.03563
2927809969
We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and/or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storage of the Hessian.
Some works also account for the computational cost of each iteration, thus yielding a bound on the overall computational complexity. Two independently proposed algorithms, respectively based on adapting accelerated gradient to the nonconvex setting @cite_11 and approximately solving the cubic regularization subproblem @cite_14 , require @math operations (with high probability, showing dependency only on @math ) to find a point @math that satisfies these conditions when @math . The difference of a factor of @math with the iteration complexity bounds arises from the cost of computing a negative curvature direction of @math and/or the cost of solving a linear system. The probabilistic nature of the bound is due to the introduction of randomness in the curvature estimation process. A complexity bound of the same type was also established for a variant of accelerated gradient based only on gradient calculations, that periodically adds a random perturbation to the iterate when the gradient norm is small @cite_21 .
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_11" ], "mid": [ "2609037894", "2963487351", "2546420264" ], "abstract": [ "We design a non-convex second-order optimization algorithm that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples. The time complexity of our algorithm to find an approximate local minimum is even faster than that of gradient descent to find a critical point. Our algorithm applies to a general class of optimization problems including training a neural network and other non-convex objectives arising in machine learning.", "Nesterov's accelerated gradient descent (AGD), an instance of the general family of 'momentum methods,' provably achieves faster convergence rate than gradient descent (GD) in the convex setting. While these methods are widely used in modern nonconvex applications, including training of deep neural networks, whether they are provably superior to GD in the nonconvex setting remains open. This paper studies a simple variant of Nesterov's AGD, and shows that it escapes saddle points and finds a second-order stationary point in O(1/ε^{7/4}) iterations, matching the best known convergence rate, which is faster than the O(1/ε^{2}) iterations required by GD. To the best of our knowledge, this is the first direct acceleration (single-loop) algorithm that is provably faster than GD in general nonconvex setting---all previous nonconvex accelerated algorithms rely on more complex mechanisms such as nested loops and proximal terms. Our analysis is based on two key ideas: (1) the use of a simple Hamiltonian function, inspired by a continuous-time perspective, which AGD monotonically decreases on each step even for nonconvex functions, and (2) a novel framework called improve or localize, which is useful for tracking the long-term behavior of gradient-based optimization algorithms. 
We believe that these techniques may deepen our understanding of both acceleration algorithms and nonconvex optimization.", "We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In a time @math , the method finds an @math -stationary point, meaning a point @math such that @math . The method improves upon the @math complexity of gradient descent and provides the additional second-order guarantee that @math for the computed @math . Furthermore, our method is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications." ] }
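To make the "perturb when the gradient is small" mechanism of @cite_21 concrete, here is a minimal, illustrative sketch. It is not the authors' accelerated method: it uses plain gradient descent with random kicks, on a toy function with a strict saddle at the origin, and the step size, tolerance, and kick radius are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy function with a strict saddle at the origin, minima at (0, +/-1/sqrt(2))
def f(z):
    x, y = z
    return x**2 - y**2 + y**4

def grad(z):
    x, y = z
    return np.array([2.0 * x, -2.0 * y + 4.0 * y**3])

def perturbed_gd(z0, eta=0.05, tol=1e-3, radius=1e-2, iters=2000):
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        g = grad(z)
        if np.linalg.norm(g) < tol:
            # near a critical point: a small random kick ensures the iterates
            # cannot be trapped at the saddle (where y = 0)
            z = z + rng.uniform(-radius, radius, size=2)
        else:
            z = z - eta * g
    return z

# plain GD from (1, 0) slides straight into the saddle along y = 0;
# the perturbed variant escapes toward one of the minima
z = perturbed_gd([1.0, 0.0])
```

Unperturbed gradient descent from (1, 0) keeps y = 0 forever, so the random kick is exactly what breaks the symmetry that traps it; the cited works replace this brute-force loop with momentum and obtain the faster rates quoted above.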
1904.03563
2927809969
We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and/or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storage of the Hessian.
In another line of work, @cite_0 developed a damped Newton algorithm which inexactly solves the Newton system by the method of conjugate gradients and requires at most @math operations to satisfy these conditions, to high probability. For purposes of computational complexity, this paper defines the unit of computation to be one Hessian-vector product or one gradient evaluation. We also adopt this definition here; it relies implicitly on the observation from computational/algorithmic differentiation @cite_5 that these two operations differ in cost only by a modest factor, independent of the dimension @math . In a followup to @cite_0 , the paper @cite_9 built on techniques from @cite_19 to create a modified CG method to solve the Newton system. This algorithm, which is a foundation of the method described in this paper, again finds a point satisfying these conditions in @math operations, to high probability, and requires the same number of operations to find an approximate first-order critical point deterministically.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_9", "@cite_19" ], "mid": [ "2963075511", "1585773866", "2963900797", "2613615043" ], "abstract": [ "There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposals, based chiefly on first-order methodology, have also been shown to enjoy optimal iteration complexity rates, while providing additional guarantees on computational cost. In this paper, we present an algorithm with favorable complexity properties that differs in two significant ways from other recently proposed methods. First, it is based on line searches only: Each step involves computation of a search direction, followed by a backtracking line search along that direction. Second, its analysis is rather straightforward, relying for the most part on the standard technique for demonstrating sufficient decrease in t...", "Algorithmic, or automatic, differentiation (AD) is a growing area of theoretical research and software development concerned with the accurate and efficient evaluation of derivatives for function evaluations given as computer programs. The resulting derivative values are useful for all scientific computations that are based on linear, quadratic, or higher order approximations to nonlinear scalar or vector functions. AD has been applied in particular to optimization, parameter identification, nonlinear equation solving, the numerical integration of differential equations, and combinations of these. Apart from quantifying sensitivities numerically, AD also yields structural dependence information, such as the sparsity pattern and generic rank of Jacobian matrices. 
The field opens up an exciting opportunity to develop new algorithms that reflect the true cost of accurate derivatives and to use them for improvements in speed and reliability. This second edition has been updated and expanded to cover recent developments in applications and theory, including an elegant NP completeness argument by Uwe Naumann and a brief introduction to scarcity, a generalization of sparsity. There is also added material on checkpointing and iterative differentiation. To improve readability the more detailed analysis of memory and complexity bounds has been relegated to separate, optional chapters.The book consists of three parts: a stand-alone introduction to the fundamentals of AD and its software; a thorough treatment of methods for sparse problems; and final chapters on program-reversal schedules, higher derivatives, nonsmooth problems and iterative processes. Each of the 15 chapters concludes with examples and exercises. Audience: This volume will be valuable to designers of algorithms and software for nonlinear computational problems. Current numerical software users should gain the insight necessary to choose and deploy existing AD software tools to the best advantage. 
Contents: Rules; Preface; Prologue; Mathematical Symbols; Chapter 1: Introduction; Chapter 2: A Framework for Evaluating Functions; Chapter 3: Fundamentals of Forward and Reverse; Chapter 4: Memory Issues and Complexity Bounds; Chapter 5: Repeating and Extending Reverse; Chapter 6: Implementation and Software; Chapter 7: Sparse Forward and Reverse; Chapter 8: Exploiting Sparsity by Compression; Chapter 9: Going beyond Forward and Reverse; Chapter 10: Jacobian and Hessian Accumulation; Chapter 11: Observations on Efficiency; Chapter 12: Reversal Schedules and Checkpointing; Chapter 13: Taylor and Tensor Coefficients; Chapter 14: Differentiation without Differentiability; Chapter 15: Implicit and Iterative Differentiation; Epilogue; List of Figures; List of Tables; Assumptions and Definitions; Propositions, Corollaries, and Lemmas; Bibliography; Index", "We consider minimization of a smooth nonconvex objective function using an iterative algorithm based on Newton’s method and the linear conjugate gradient algorithm, with explicit detection and use of negative curvature directions for the Hessian of the objective function. The algorithm tracks Newton-conjugate gradient procedures developed in the 1980s closely, but includes enhancements that allow worst-case complexity results to be proved for convergence to points that satisfy approximate first-order and second-order optimality conditions. The complexity results match the best known results in the literature for second-order methods.", "We develop and analyze a variant of Nesterov's accelerated gradient descent (AGD) for minimization of smooth non-convex functions. We prove that one of two cases occurs: either our AGD variant converges quickly, as if the function was convex, or we produce a certificate that the function is \"guilty\" of being non-convex. This non-convexity certificate allows us to exploit negative curvature and obtain deterministic, dimension-free acceleration of convergence for non-convex functions. 
For a function @math with Lipschitz continuous gradient and Hessian, we compute a point @math with @math in @math gradient and function evaluations. Assuming additionally that the third derivative is Lipschitz, we require only @math evaluations." ] }
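The cost model used above (one Hessian-vector product costs roughly one gradient evaluation) is easy to demonstrate: a Hessian-vector product can be formed matrix-free from gradients alone and fed to conjugate gradients, so a Newton system is solved without ever storing the Hessian. A minimal sketch follows; it uses a forward-difference approximation rather than exact algorithmic differentiation, and plain CG without the negative-curvature safeguards of the Capped CG method cited above.

```python
import numpy as np

def hessvec(grad, x, v, eps=1e-6):
    """Matrix-free Hessian-vector product:
    H(x) v ~= (grad(x + eps*v) - grad(x)) / eps,
    i.e. one extra gradient evaluation per product (exact AD-based
    products behave the same way, without the truncation error)."""
    return (grad(x + eps * v) - grad(x)) / eps

def cg(matvec, b, tol=1e-8, maxiter=200):
    """Conjugate gradients for A x = b, using only products v -> A v.
    Assumes A is positive definite (no negative-curvature handling here)."""
    x = np.zeros_like(b)
    r = b.copy()            # residual b - A x for x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# demo: one Newton step for the convex quadratic f(x) = 0.5*x@A@x - b@x;
# the solver only ever sees gradient calls, never the matrix A itself
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
x0 = np.zeros(2)
x1 = x0 + cg(lambda v: hessvec(grad, x0, v), -grad(x0))
```

For this quadratic the single Newton step lands on the minimizer, and the whole computation is expressed in the "gradient evaluations / Hessian-vector products" unit that the surrounding text counts.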
1904.03563
2927809969
We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storage of the Hessian.
A number of algorithms have also been proposed for constrained optimization problems which require at most @math iterations to find a point which satisfies some first-order (and sometimes second-order) optimality condition(s). Although the optimality conditions vary between papers, @cite_16 @cite_2 @cite_1 all achieve this iteration complexity bound for some first-order optimality condition by solving a constrained cubic regularization subproblem at each iteration. Another recent work finds a first-order point in @math iterations for linear equality and bound constraints through the use of an active set method @cite_13 . When optimizing on a single face of the polytope, this method also uses a cubic regularization model. However, these papers do not account for the cost of solving the subproblem at each iteration, noting either that this subproblem may be NP-hard, or suggesting that a simple first-order, gradient-based method can solve it reliably.
{ "cite_N": [ "@cite_13", "@cite_16", "@cite_1", "@cite_2" ], "mid": [ "2801742348", "2133068139", "2898652568", "2018738327" ], "abstract": [ "The main objective of this research is to introduce a practical method for smooth bound-constrained optimization that possesses worst-case evaluation complexity @math for finding an @math -approximate first-order stationary point when the Hessian of the objective function is Lipschitz continuous. As other well-established algorithms for optimization with box constraints, the algorithm proceeds visiting the different faces of the domain aiming to reduce the norm of an internal projected gradient and abandoning active constraints when no additional progress is expected in the current face. The introduced method emerges as a particular case of a method for minimization with linear constraints. Moreover, the linearly constrained minimization algorithm is an instance of a minimization algorithm with general constraints whose implementation may be unaffordable when the constraints are complicated. As a procedure for leaving faces, a different method is employed that may be regarded as a...", "The adaptive cubic overestimation algorithm described in Cartis, Gould and Toint (2007) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, but without any Lipschitz continuity requirement on the objective’s Hessian. A worst-case complexity analysis in terms of evaluations of the problem’s function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best known bound for general unconstrained problems to nonlinear problems with convex constraints.", "We provide sharp worst-case evaluation complexity bounds for nonconvex minimization problems with general inexpensive constraints, i.e. 
problems where the cost of evaluating/enforcing of the (possibly nonconvex or even disconnected) constraints, if any, is negligible compared to that of evaluating the objective function. These bounds unify, extend or improve all known upper and lower complexity bounds for unconstrained and convexly-constrained problems. It is shown that, given an accuracy level @math , a degree of highest available Lipschitz continuous derivatives @math and a desired optimality order @math between one and @math , a conceptual regularization algorithm requires no more than @math evaluations of the objective function and its derivatives to compute a suitably approximate @math -th order minimizer. With an appropriate choice of the regularization, a similar result also holds if the @math -th derivative is merely Hölder rather than Lipschitz continuous. We provide an example that shows that the above complexity bound is sharp for unconstrained and a wide class of constrained problems, we also give reasons for the optimality of regularization methods from a worst-case complexity point of view, within a large class of algorithms that use the same derivative information.", "When solving the general smooth nonlinear and possibly nonconvex optimization problem involving equality and/or inequality constraints, an approximate first-order critical point of accuracy @math can be obtained by a second-order method using cubic regularization in at most @math evaluations of problem functions, the same order bound as in the unconstrained case. This result is obtained by first showing that the same result holds for inequality constrained nonlinear least-squares. As a consequence, the presence of (possibly nonconvex) equality/inequality constraints does not affect the complexity of finding approximate first-order critical points in nonconvex optimization. This result improves on the best known ( @math ) evaluation-complexity bound for solving general nonconvexly constrained optimization problems."
] }
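For contrast with the constrained cubic-regularization subproblem solvers surveyed above, the simplest baseline one can run on a bound-constrained problem is projected gradient, whose per-iteration cost is just one gradient and a clip. The sketch below is illustrative only (not from the cited papers); it also evaluates the standard first-order criticality measure ||x - proj(x - grad f(x))|| for the nonnegative orthant.

```python
import numpy as np

def projected_gradient(grad, x0, eta=0.1, iters=500):
    """Projected gradient for min f(x) s.t. x >= 0:
    step along -grad, then clip back onto the nonnegative orthant."""
    x = np.maximum(x0, 0.0)
    for _ in range(iters):
        x = np.maximum(x - eta * grad(x), 0.0)
    return x

# demo: f(x) = 0.5*||x - c||^2 with c = (1, -1); the constrained
# minimizer is (1, 0), where the second component hits its bound
c = np.array([1.0, -1.0])
grad = lambda x: x - c
x = projected_gradient(grad, np.array([0.5, 0.5]))

# first-order criticality measure for bound constraints: zero iff x is
# a stationary point of the projected problem
crit = np.linalg.norm(x - np.maximum(x - grad(x), 0.0))
```

Such first-order steps are exactly what the surveyed papers suggest as an inexpensive inner solver; their complexity analyses concern how many such (or cubic-model) iterations are needed in the worst case.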
1904.03563
2927809969
We describe an algorithm based on a logarithmic barrier function, Newton's method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of the objective function and/or a matrix-vector multiplication involving the Hessian of the objective. The approach can be implemented without explicit calculation or storage of the Hessian.
In this paper, we adapt the Newton-CG method of @cite_9 for unconstrained optimization to the problem of minimizing the primal log-barrier function, for a small, fixed value of @math . We target the optimality conditions, which avoid enforcing tighter conditions on Hessian and gradient components that correspond to components of @math that are far from zero at optimality. This change allows us to solve a preconditioned Newton system of linear equations at each iteration in which the norm of the matrix can be bounded by a constant independent of iteration number. The Capped CG method developed in @cite_9 is used to solve this system, returning a useful search direction in a reasonable number of iterations. Our algorithm finds a point satisfying these conditions in @math iterations and @math gradient evaluations/Hessian-vector products when @math , making our algorithm the first method for bound-constrained optimization with this overall computational complexity. Further, our algorithm has the appealing practical feature that it puts minimal restrictions on the step size, allowing the line search to take steps that are much closer to the boundary than the current iterate.
{ "cite_N": [ "@cite_9" ], "mid": [ "2963900797" ], "abstract": [ "We consider minimization of a smooth nonconvex objective function using an iterative algorithm based on Newton’s method and the linear conjugate gradient algorithm, with explicit detection and use of negative curvature directions for the Hessian of the objective function. The algorithm tracks Newton-conjugate gradient procedures developed in the 1980s closely, but includes enhancements that allow worst-case complexity results to be proved for convergence to points that satisfy approximate first-order and second-order optimality conditions. The complexity results match the best known results in the literature for second-order methods." ] }
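A minimal numerical sketch of the overall scheme described in this record: minimize a log-barrier function for bound constraints with a damped Newton iteration and a fraction-to-the-boundary step-size rule. This is not the paper's method (no Capped CG, no complexity guarantees); the toy objective, the value of MU, and the damping fraction are illustrative assumptions.

```python
import numpy as np

MU = 1e-3                       # fixed (small) barrier parameter
c = np.array([1.0, -1.0])       # toy objective f(x) = 0.5*||x - c||^2

def barrier_newton(x0, iters=50, frac=0.995):
    """Newton iteration on phi(x) = f(x) - MU*sum(log x_i), over x > 0.
    f's Hessian is the identity here, so the barrier Hessian is diagonal.
    The step is damped only by a fraction-to-the-boundary rule, so the
    iterates are allowed to come very close to the bound x_i = 0."""
    x = x0.copy()
    for _ in range(iters):
        g = (x - c) - MU / x            # gradient of the barrier function
        h = 1.0 + MU / x**2             # diagonal Hessian of the barrier
        d = -g / h                      # Newton direction
        t = 1.0
        neg = d < 0
        if np.any(neg):                 # keep x + t*d >= (1 - frac)*x > 0
            t = min(1.0, frac * np.min(-x[neg] / d[neg]))
        x = x + t * d
    return x

x = barrier_newton(np.array([0.5, 0.5]))
# x approaches the barrier minimizer: close to the bound-constrained
# solution (1, 0), with the second component of order MU
```

Because the damping rule only forbids crossing the boundary, the second coordinate is driven to within O(MU) of zero in a handful of steps, which is the practical behavior the abstract above highlights.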
1904.03373
2953139336
High-density object counting in surveillance scenes is challenging mainly due to the drastic variation of object scales. The prevalence of deep learning has largely boosted the object counting accuracy on several benchmark datasets. However, does the global counts really count? Armed with this question we dive into the predicted density map whose summation over the whole regions reports the global counts for more in-depth analysis. We observe that the object density map generated by most existing methods usually lacks of local consistency, i.e., counting errors in local regions exist unexpectedly even though the global count seems to well match with the ground-truth. Towards this problem, in this paper we propose a constrained multi-stage Convolutional Neural Networks (CNNs) to jointly pursue locally consistent density map from two aspects. Different from most existing methods that mainly rely on the multi-column architectures of plain CNNs, we exploit a stacking formulation of plain CNNs. Benefited from the internal multi-stage learning process, the feature map could be repeatedly refined, allowing the density map to approach the ground-truth density distribution. For further refinement of the density map, we also propose a grid loss function. With finer local-region-based supervisions, the underlying model is constrained to generate locally consistent density values to minimize the training errors considering both the global and local counts accuracy. Experiments on two widely-tested object counting benchmarks with overall significant results compared with state-of-the-art methods demonstrate the effectiveness of our approach.
Traditional detection-based counting methods mainly rely on the performance of object detection algorithms @cite_16 @cite_13 @cite_15 and are usually fragile, especially in crowded scenes with limited object sizes and severe occlusions. Alternatively, early regression approaches directly learn a mapping function from foreground representations to the corresponding counts @cite_26 @cite_4 , avoiding explicit delineation of individuals. However, this global regression approach ignores the useful spatial information. Towards this end, a novel framework is proposed in the seminal work @cite_9 , which formulates object counting as a spatial density map prediction problem. With a continuously-valued density assigned to every single pixel, the final object count can be obtained by summation of the pixel values over the whole density map. Enabling the utilization of spatial information, counting by density map prediction has been a widely-adopted paradigm for later counting approaches @cite_10 @cite_27 . However, the limited representation ability of hand-crafted features restricts the performance of those methods, especially in more challenging situations with severe occlusions and drastic object scale variations. Following our earlier analysis, our solution to the local inconsistency problem is related to those counting methods that handle object scale variation, which we discuss in detail below.
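The density-map formulation described above is easy to make concrete: place one unit-mass Gaussian kernel per dot annotation, so that the map sums to the global count and the sum over any region gives a local count. A minimal sketch follows; the kernel width, image size, and point coordinates are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def density_map(points, shape, sigma=3.0):
    """Ground-truth density map from dot annotations: one Gaussian per
    annotated object, each normalized to unit mass on the image grid,
    so the whole map sums to the object count."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for (py, px) in points:
        k = np.exp(-((yy - py)**2 + (xx - px)**2) / (2.0 * sigma**2))
        dmap += k / k.sum()      # normalize so each object contributes 1
    return dmap

pts = [(20, 30), (25, 32), (40, 10)]    # hypothetical (row, col) annotations
dmap = density_map(pts, (64, 64))
count = dmap.sum()                      # global count, = number of points
region_count = dmap[0:32, :].sum()      # local count over the top half
```

Summing sub-regions of the map, as in `region_count`, is exactly the local-consistency notion the surrounding text builds on: a predictor can match `count` globally while being badly wrong on many such region sums.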
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_10", "@cite_9", "@cite_27", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2121864252", "1976959044", "1542079534", "2145983039", "2207893099", "2413409465", "2101611399", "2138948290" ], "abstract": [ "This paper describes a viewpoint invariant learning-based method for counting people in crowds from a single camera. Our method takes into account feature normalization to deal with perspective projection and different camera orientation. The training features include edge orientation and blob size histograms resulted from edge detection and background subtraction. A density map that measures the relative size of individuals and a global scale measuring camera orientation are estimated and used for feature normalization. The relationship between the feature histograms and the number of pedestrians in the crowds is learned from labeled training data. Experimental results from different sites with different camera orientation demonstrate the performance and the potential of our method", "This paper presents a multi-output regression model for crowd counting in public scenes. Existing counting by regression methods either learn a single model for global counting, or train a large number of separate regressors for localised density estimation. In contrast, our single regression model based approach is able to estimate people count in spatially localised regions and is more scalable without the need for training a large number of regressors proportional to the number of local regions. In particular, the proposed model automatically learns the functional mapping between interdependent low-level features and multi-dimensional structured outputs. The model is able to discover the inherent importance of different features for people counting at different spatial locations. 
Extensive evaluations on an existing crowd analysis benchmark dataset and a new more challenging dataset demonstrate the effectiveness of our approach.", "Following [Lempitsky and Zisserman, 2010], we seek to count objects by integrating over an object density map that is predicted from an input image. In contrast to that work, we propose to estimate the object density map by averaging over structured, namely patch-wise, predictions. Using an ensemble of randomized regression trees that use dense features as input, we obtain results that are of similar quality, at a fraction of the training time, and with low implementation effort. An open source implementation will be provided in the framework of http://ilastik.org.", "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. 
Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "This paper presents a patch-based approach for crowd density estimation in public scenes. We formulate the problem of estimating density in a structured learning framework applied to random decision forests. Our approach learns the mapping between patch features and relative locations of all objects inside each patch, which contribute to generate the patch density map through Gaussian kernel density estimation. We build the forest in a coarse-to-fine manner with two split node layers, and further propose a crowdedness prior and an effective forest reduction method to improve the estimation accuracy and speed. Moreover, we introduce a semi-automatic training method to learn the estimator for a specific scene. We achieved state-of-the-art results on the public Mall dataset and UCSD dataset, and also proposed two potential applications in traffic counts and scene understanding with promising results.", "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN (, 2014) 1 on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonly-used strategy. 
The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods.", "This paper describes a vision based pedestrian detection and tracking system which is able to count people in very crowded situations like escalator entrances in underground stations. The proposed system uses motion to compute regions of interest and prediction of movements, extracts shape information from the video frames to detect individuals, and applies texture features to recognize people. A search strategy creates trajectories and new pedestrian hypotheses and then filters and combines those into accurate counting events. We show that counting accuracies up to 98 can be achieved.", "We propose a shape-based, hierarchical part-template matching approach to simultaneous human detection and segmentation combining local part-based and global shape-template-based schemes. The approach relies on the key idea of matching a part-template tree to images hierarchically to detect humans and estimate their poses. For learning a generic human detector, a pose-adaptive feature computation scheme is developed based on a tree matching approach. Instead of traditional concatenation-style image location-based feature encoding, we extract features adaptively in the context of human poses and train a kernel-SVM classifier to separate human nonhuman patterns. Specifically, the features are collected in the local context of poses by tracing around the estimated shape boundaries. 
We also introduce an approach to multiple occluded human detection and segmentation based on an iterative occlusion compensation scheme. The output of our learned generic human detector can be used as an initial set of human hypotheses for the iterative optimization. We evaluate our approaches on three public pedestrian data sets (INRIA, MIT-CBCL, and USC-B) and two crowded sequences from Caviar Benchmark and Munich Airport data sets." ] }
1904.03373
2953139336
High-density object counting in surveillance scenes is challenging mainly due to the drastic variation of object scales. The prevalence of deep learning has largely boosted object counting accuracy on several benchmark datasets. However, do the global counts really count? Armed with this question, we dive into the predicted density map, whose summation over the whole region reports the global count, for more in-depth analysis. We observe that the object density map generated by most existing methods usually lacks local consistency, i.e., counting errors exist in local regions even though the global count seems to match the ground truth well. To address this problem, in this paper we propose constrained multi-stage Convolutional Neural Networks (CNNs) to jointly pursue a locally consistent density map from two aspects. Different from most existing methods that mainly rely on multi-column architectures of plain CNNs, we exploit a stacking formulation of plain CNNs. Benefiting from the internal multi-stage learning process, the feature map is repeatedly refined, allowing the density map to approach the ground-truth density distribution. For further refinement of the density map, we also propose a grid loss function. With finer, local-region-based supervision, the underlying model is constrained to generate locally consistent density values that minimize the training errors for both global and local count accuracy. Experiments on two widely tested object counting benchmarks, with significant overall improvements over state-of-the-art methods, demonstrate the effectiveness of our approach.
To the best of our knowledge, most existing methods for handling object scale variation rely on multi-scale features, obtained either with a multi-column architecture @cite_21 or from multi-resolution input @cite_24 . In this paper, we start from our observation of the local inconsistency problem and propose a joint solution from two aspects. First, we adopt a formulation completely different from previous methods, stacking multiple plain CNNs to handle scale variation. Benefiting from the internal multi-stage inference mechanism @cite_22 , features are repeatedly evaluated for refinement and correction, allowing the estimated density map to gradually approach the ground-truth density distribution with locally consistent density values. Second, we propose a grid loss function that further constrains the model to adjust density values inconsistent with local object counts. The multi-stage mechanism has proven effective in various computer vision tasks such as face detection @cite_2 , semantic segmentation @cite_14 , and pose estimation @cite_22 . In this paper, we exploit the multi-stage mechanism together with the proposed grid loss for locally consistent object counting. Our model is trained end-to-end efficiently, with effectiveness validated on two publicly available object counting datasets.
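A grid-style loss of the kind described can be sketched as follows: in addition to the global count error, each cell of a coarse grid over the density map contributes its own local count error, so a prediction that gets the total right but places density in the wrong regions is still penalized. This is an illustrative reconstruction under stated assumptions (grid size, weighting, and L1 error norm are choices of this sketch), not the authors' exact loss.

```python
import numpy as np

def grid_loss(pred, gt, grid=4, lam=1.0):
    """Illustrative grid loss: global count error plus the mean of
    per-cell count errors over a grid x grid partition of the map."""
    h, w = pred.shape
    global_err = abs(pred.sum() - gt.sum())
    cell_errs = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            cell_errs.append(abs(pred[ys, xs].sum() - gt[ys, xs].sum()))
    return global_err + lam * float(np.mean(cell_errs))

gt = np.zeros((8, 8)); gt[0, 0] = 1.0; gt[7, 7] = 1.0
# A prediction with the correct global count but wrong local placement:
pred = np.zeros((8, 8)); pred[0, 0] = 2.0
print(grid_loss(pred, gt, grid=2))  # 0.5: global counts agree, local cells do not
```

A purely global L1 count loss would be zero for this prediction; the per-cell terms are what expose the local inconsistency.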
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_21", "@cite_24", "@cite_2" ], "mid": [ "2963971305", "2307770531", "2141125852", "2519281173", "2473640056" ], "abstract": [ "Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible. While incorporating structure into the model should improve prediction quality, doing so is challenging – manually specifying the form of structural constraints may be impractical and inference often becomes intractable even if structural constraints are given. We sidestep this problem by reducing structured prediction to a sequence of unconstrained prediction problems and demonstrate that this approach is capable of automatically discovering priors on shape, contiguity of region predictions and smoothness of region contours from data without any a priori specification. On the instance segmentation task, this method outperforms the state-of-the-art, achieving a mean APr of 63:6 at 50 overlap and 43:3 at 70 overlap.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. 
Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.", "In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. 
The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training." ] }
1904.03219
2925520559
We consider a new problem of designing a network with small @math - @math effective resistance. In this problem, we are given an undirected graph @math , two designated vertices @math , and a budget @math . The goal is to choose a subgraph of @math with at most @math edges to minimize the @math - @math effective resistance. This problem is an interpolation between the shortest path problem and the minimum cost flow problem and has applications in electrical network design. We present several algorithmic and hardness results for this problem and its variants. On the hardness side, we show that the problem is NP-hard, and the weighted version is hard to approximate within a factor smaller than two assuming the small-set expansion conjecture. On the algorithmic side, we analyze a convex programming relaxation of the problem and design a constant factor approximation algorithm. The key of the rounding algorithm is a randomized path-rounding procedure based on the optimality conditions and a flow decomposition of the fractional solution. We also use dynamic programming to obtain a fully polynomial time approximation scheme when the input graph is a series-parallel graph, with better approximation ratio than the integrality gap of the convex program for these graphs.
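For intuition, the s-t effective resistance of a subgraph with unit-resistance edges can be computed from the Moore-Penrose pseudoinverse of its graph Laplacian, R_eff(s, t) = (e_s - e_t)^T L^+ (e_s - e_t). The brute-force sketch below evaluates candidate subgraphs this way; it is only an illustration of the objective, not the paper's rounding algorithm.

```python
import numpy as np

def effective_resistance(n, edges, s, t):
    """s-t effective resistance of a graph with unit-resistance edges,
    via R_eff(s, t) = (e_s - e_t)^T L^+ (e_s - e_t)."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)  # pseudoinverse of the (singular) Laplacian
    x = np.zeros(n); x[s], x[t] = 1.0, -1.0
    return x @ Lp @ x

# Triangle: a 1-ohm edge in parallel with a 2-ohm series path gives 2/3
print(effective_resistance(3, [(0, 1), (1, 2), (0, 2)], 0, 1))
```

Enumerating all subgraphs with at most k edges and keeping the one minimizing this quantity gives an exact (exponential-time) baseline against which approximation algorithms can be checked on tiny instances.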
In the survivable network design problem, we are given an undirected graph and a connectivity requirement @math for every pair of vertices @math , and the goal is to find a minimum-cost subgraph such that there are at least @math edge-disjoint paths for all @math . This problem has been extensively studied and captures many interesting special cases @cite_35 @cite_34 @cite_14 @cite_27 . The best approximation algorithm for this problem is due to Jain @cite_21 , who introduced the technique of iterative rounding to design a @math -approximation algorithm. His result has been extended in various directions, including element-connectivity @cite_30 @cite_4 , directed graphs @cite_6 @cite_27 , and degree constraints @cite_25 @cite_24 @cite_3 @cite_40 .
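As a concrete check of such a requirement: by Menger's theorem, the number of edge-disjoint u-v paths in an undirected graph equals the maximum u-v flow with unit edge capacities, so feasibility of a candidate subgraph can be verified with any max-flow routine. The Edmonds-Karp sketch below is illustrative only and is not part of the cited algorithms.

```python
from collections import deque

def max_unit_flow(n, edges, s, t):
    """Number of edge-disjoint s-t paths (Menger): max flow with unit
    capacities on an undirected multigraph, via Edmonds-Karp."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
        cap[v][u] += 1
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        v = t
        while v != s:  # augment along the BFS path by one unit
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# A 4-cycle has exactly two edge-disjoint paths between opposite vertices
print(max_unit_flow(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 0, 2))  # 2
```

A subgraph is feasible for SNDP exactly when this value is at least the requirement r_uv for every pair, which is how candidate solutions can be validated on small instances.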
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_4", "@cite_21", "@cite_6", "@cite_3", "@cite_24", "@cite_27", "@cite_40", "@cite_34", "@cite_25" ], "mid": [ "2146146025", "1989224473", "2001139415", "1976459503", "2172955861", "2110257828", "2174245573", "2054728000", "1992614365", "2126340073", "2011823863", "2161190897" ], "abstract": [ "In the survivable network design problem (SNDP), given an undirected graph and values r_ij for each pair of vertices i and j, we attempt to find a minimum-cost subgraph such that there are r_ij disjoint paths between vertices i and j. In the edge connected version of this problem (EC-SNDP), these paths must be edge-disjoint. In the vertex connected version of the problem (VC-SNDP), the paths must be vertex disjoint. K. (1999) propose a version of the problem intermediate in difficulty to these two, called the element connectivity problem (ELC-SNDP, or ELC). These variants of SNDP are all known to be NP-hard. The best known approximation algorithm for the EC-SNDP has performance guarantee of 2 (K. Jain, 2001), and iteratively rounds solutions to a linear programming relaxation of the problem. ELC has a primal-dual O(log k) approximation algorithm, where k = max_{i,j} r_ij. VC-SNDP is not known to have a non-trivial approximation algorithm; however, recently L. Fleischer (2001) has shown how to extend the technique of K. Jain (2001) to give a 2-approximation algorithm in the case that r_ij ∈ {0, 1, 2}. She also shows that the same techniques will not work for VC-SNDP for more general values of r_ij. The authors show that these techniques can be extended to a 2-approximation algorithm for ELC. This gives the first constant approximation algorithm for a general survivable network design problem which allows node failures.", "", "We present a general approximation technique for a large class of graph problems. 
Our technique mostly applies to problems of covering, at minimum cost, the vertices of a graph with trees, cycles or paths satisfying certain requirements. In particular, many basic combinatorial optimization problems fit in this framework, including the shortest path, minimum-cost spanning tree, minimum-weight perfect matching, traveling salesman and Steiner tree problems. Our technique produces approximation algorithms that run in @math time and come within a factor of 2 of optimal for most of these problems. For instance, we obtain a 2-approximation algorithm for the minimum-weight perfect matching problem under the triangle inequality. Our running time of @math time compares favorably with the best strongly polynomial exact algorithms running in @math time for dense graphs. A similar result is obtained for the 2-matching problem and its variants. We also derive the first approximation algorithms for many NP-complete problems, including the non-fixed point-to-point connection problem, the exact path partitioning problem and complex location-design problems. Moreover, for the prize-collecting traveling salesman or Steiner tree problems, we obtain 2-approximation algorithms, therefore improving the previously best-known performance guarantees of 2.5 and 3, respectively [Math. Programming, 59 (1993), pp. 413--420].", "A typical problem in network design is to find a minimum-cost sub-network H of a given network G such that H satisfies some prespecified connectivity requirements. Our focus is on approximation algorithms for designing networks that satisfy vertex connectivity requirements. Our main tool is a linear programming relaxation of the following setpair formulation due to Frank and Jordan: a setpair consists of two subsets of vertices (of the given network G); each setpair has an integer requirement, and the goal is to find a minimum-cost subset of the edges of G sucht hat each setpair is covered by at least as many edges as its requirement. 
We introduce the notion of skew bisupermodular functions and use it to prove that the basic solutions of the linear program are characterized by “non-crossing families” of setpairs. This allows us to apply Jain’s iterative rounding method to find approximately optimal integer solutions. We give two applications. (1) In the k-vertex connectivity problem we are given a (directed or undirected) graph G=(V,E) with non-negative edge costs, and the task is to find a minimum-cost spanning subgraph H such that H is k-vertex connected. Let n=|V|, and let e<1 be a positive number such that k≤(1−e)n. We give an @math -approximation algorithm for both problems (directed or undirected), improving on the previous best approximation guarantees for k in the range @math . (2)We give a 2-approximation algorithm for the element connectivity problem, matching the previous best approximation guarantee due to Fleischer, Jain and Williamson.", "", "We discuss extensions of Jain’s framework for network design [8] that go beyond undirected graphs. The main problem is approximating a minimum cost set of directed edges that covers a crossing supermodular function. We show that iterated rounding gives a factor 3 approximation, where factor 4 was previously known and factor 2 was conjectured. Our bound is tight for the simplest interpretation of iterated rounding. We also show that (the simplest version of) iterated rounding has unbounded approximation ratio when the problem is extended to mixed graphs.", "We consider the problem of finding a minimum edge cost subgraph of a graph satisfying both given node-connectivity requirements and degree upper bounds on nodes. We present an iterative rounding algorithm of the biset linear programming relaxation for this problem. 
For directed graphs and @math -out-connectivity requirements from a root, our algorithm computes a solution that is a 2-approximation on the cost, and the degree of each node @math in the solution is at most @math , where @math is the degree upper bound on @math . For undirected graphs and element-connectivity requirements with maximum connectivity requirement @math , our algorithm computes a solution that is a @math -approximation on the cost, and the degree of each node @math in the solution is at most @math . These ratios improve the previous @math -approximation on the cost and @math -approximation on the degrees. Our algorithms can be used to improve approximation ratios for other node-connectivity problems such as undirected $k...", "We consider degree bounded network design problems with element and vertex connectivity requirements. In the degree bounded Survivable Network Design (SNDP) problem, the input is an undirected graph G = (V, E) with weights w(e) on the edges and degree bounds b(v) on the vertices, and connectivity requirements r(uv) for each pair uv of vertices. The goal is to select a minimum-weight subgraph H of G that meets the connectivity requirements and it satisfies the degree bounds on the vertices: for each pair uv of vertices, H has r(uv) disjoint paths between u and v; additionally, each vertex v is incident to at most b(v) edges in H. We give the first (O(1), O(1) · b(v)) bicriteria approximation algorithms for the degree-bounded SNDP problem with element connectivity requirements and for several degree-bounded SNDP problems with vertex connectivity requirements. Our algorithms construct a subgraph H whose weight is at most O(1) times the optimal such that each vertex v is incident to at most O(1) · b(v) edges in H. 
We can also extend our approach to network design problems in directed graphs with out-degree constraints to obtain (O(1), O(1) · b+(v)) bicriteria approximation.", "The smallest k-ECSS problem is, given a graph along with an integer k, find a spanning subgraph that is k-edge connected and contains the fewest possible number of edges. We examine a natural approximation algorithm based on rounding an LP solution. A tight bound on the approximation ratio is 1 + 3/k for undirected graphs with k > 1 odd, 1 + 2/k for undirected graphs with k even, and 1 + 2/k for directed graphs with k arbitrary. Using iterated rounding improves the first upper bound to 1 + 2/k. On the hardness side we show that for some absolute constant c > 0, for any integer k ≥ 2 (k ≥ 1), a polynomial-time algorithm approximating the smallest k-ECSS on undirected (directed) multigraphs to within ratio 1 + c/k would imply P = NP. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009", "We present an approximation algorithm for the minimum bounded degree Steiner network problem that returns a Steiner network of cost at most two times the optimal and the degree of each vertex v is at most min{b_v + 3 r_max, 2 b_v + 2}, where r_max is the maximum connectivity requirement and b_v is the given degree bound on v. This unifies, simplifies, and improves the previous results for this problem.", "We give the first approximation algorithm for the generalized network Steiner problem, a problem in network design. An instance consists of a network with link-costs and, for each pair @math of nodes, an edge-connectivity requirement @math . The goal is to find a minimum-cost network using the available links and satisfying the requirements. Our algorithm outputs a solution whose cost is within @math of optimal, where @math is the highest requirement value. 
In the course of proving the performance guarantee, we prove a combinatorial min-max approximate equality relating minimum-cost networks to maximum packings of certain kinds of cuts. As a consequence of the proof of this theorem, we obtain an approximation algorithm for optimally packing these cuts; we show that this algorithm has application to estimating the reliability of a probabilistic network.", "We present algorithmic and hardness results for network design problems with degree or order constraints. The first problem we consider is the Survivable Network Design problem with degree constraints on vertices. The objective is to find a minimum cost subgraph which satisfies connectivity requirements between vertices and also degree upper bounds @math on the vertices. This includes the well-studied Minimum Bounded Degree Spanning Tree problem as a special case. Our main result is a @math -approximation algorithm for the edge-connectivity Survivable Network Design problem with degree constraints, where the cost of the returned solution is at most twice the cost of an optimum solution (satisfying the degree bounds) and the degree of each vertex @math is at most @math . This implies the first constant factor (bicriteria) approximation algorithms for many degree constrained network design problems, including the Minimum Bounded Degree Steiner Forest problem. Our results also extend to directed graphs and provide the first constant factor (bicriteria) approximation algorithms for the Minimum Bounded Degree Arborescence problem and the Minimum Bounded Degree Strongly @math -Edge-Connected Subgraph problem. In contrast, we show that the vertex-connectivity Survivable Network Design problem with degree constraints is hard to approximate, even when the cost of every edge is zero. A striking aspect of our algorithmic result is its simplicity. It is based on the iterative relaxation method, which is an extension of Jain's iterative rounding method. 
This provides an elegant and unifying algorithmic framework for a broad range of network design problems. We also study the problem of finding a minimum cost @math -edge-connected subgraph with at least @math vertices, which we call the @math -subgraph problem. This generalizes some well-studied classical problems such as the @math -MST and the minimum cost @math -edge-connected subgraph problems. We give a polylogarithmic approximation for the @math -subgraph problem. However, by relating it to the Densest @math -Subgraph problem, we provide evidence that the @math -subgraph problem might be hard to approximate for arbitrary @math ." ] }
1904.03219
2925520559
We consider a new problem of designing a network with small @math - @math effective resistance. In this problem, we are given an undirected graph @math , two designated vertices @math , and a budget @math . The goal is to choose a subgraph of @math with at most @math edges to minimize the @math - @math effective resistance. This problem is an interpolation between the shortest path problem and the minimum cost flow problem and has applications in electrical network design. We present several algorithmic and hardness results for this problem and its variants. On the hardness side, we show that the problem is NP-hard, and the weighted version is hard to approximate within a factor smaller than two assuming the small-set expansion conjecture. On the algorithmic side, we analyze a convex programming relaxation of the problem and design a constant factor approximation algorithm. The key of the rounding algorithm is a randomized path-rounding procedure based on the optimality conditions and a flow decomposition of the fractional solution. We also use dynamic programming to obtain a fully polynomial time approximation scheme when the input graph is a series-parallel graph, with better approximation ratio than the integrality gap of the convex program for these graphs.
Other combinatorial connectivity requirements have also been considered. A natural variation is to require @math internally vertex-disjoint paths for every pair of vertices @math . This problem is much harder to approximate @cite_10 @cite_2 , but there are good approximation algorithms for global connectivity @cite_22 @cite_17 and when the maximum connectivity requirement is small @cite_39 @cite_36 . Another natural problem is to require a path of length at most @math between every pair of vertices @math . This problem is also hard to approximate in general, but there are better approximation algorithms when every edge has the same cost and the same length @cite_23 .
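The bounded pairwise-distance requirement mentioned above is easy to verify for a candidate subgraph: run a BFS from every vertex and check that all shortest-path distances are at most d (this minimal sketch assumes unit edge lengths).

```python
from collections import deque

def within_distance(n, edges, d):
    """Check that every pair of vertices is within hop distance d
    (unit edge lengths), via a BFS from each vertex."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if any(x == -1 or x > d for x in dist):
            return False  # some vertex unreachable or too far from s
    return True

# A 5-cycle has diameter 2
cyc = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(within_distance(5, cyc, 2))  # True
print(within_distance(5, cyc, 1))  # False
```

The design problem itself asks for the cheapest set of added links making this check pass, which is what makes it hard; the feasibility check is the easy direction.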
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_39", "@cite_23", "@cite_2", "@cite_10", "@cite_17" ], "mid": [ "2138536745", "", "", "2022092881", "2963693432", "2052687843", "2178196238" ], "abstract": [ "We present an O(log n · log k)-approximation algorithm for the problem of finding a k-vertex connected spanning subgraph of minimum cost, where n is the number of vertices in the input graph, and k is the connectivity requirement. Our algorithm works for both directed and undirected graphs. The best known approximation guarantees for these problems are O(ln k · min{√k, (n/(n−k)) ln k}) by Kortsarz and Nutov, and O(ln k) in the case of undirected graphs where n ≥ 6k^2 by Cheriyan, Vempala, and Vetta. Our algorithm is the first that has a polylogarithmic guarantee for all values of k. Combining our algorithm with the algorithm of Kortsarz and Nutov in case of small k, e.g., k", "", "", "We study the following network design problem: Given a communication network, find a minimum cost subset of missing links such that adding these links to the network makes every pair of points within distance at most d from each other. The problem has been studied earlier [17] under the assumption that all link costs as well as link lengths are identical, and was shown to be Ω(log n)-hard for every d ≥ 4. We present a novel linear programming based approach to obtain an O(log n · log d) approximation algorithm for the case of uniform link lengths and costs. We also extend the Ω(log n) hardness to d ∈ {2, 3}. On the other hand, if link costs can vary, we show that the problem is Ω(2^{log^{1−ε} n})-hard for d ≥ 3. This version of our problem can be viewed as a special case of the minimum cost d-spanner problem and thus our hardness result applies there as well. For d = 2, however, we show that the problem continues to be O(log n) approximable by giving an O(log n)-approximation to the more general minimum cost 2-spanner problem. 
An Ω(2^{log^{1−ε} n})-hardness result also holds when all link costs are identical but link lengths may vary (applies even when all lengths are 1 or 2). Our reduction from the label cover problem [3] also applies to another well-studied network design problem. We show that the directed generalized Steiner network problem [6] is Ω(2^{log^{1−ε} n})-hard, significantly improving upon the Ω(log n) hardness known prior to our work. We also present an O(n log d) approximation algorithm for our problem under arbitrary link costs and polynomially bounded link lengths. The same result holds for the minimum cost d-spanner problem. Finally, all our positive results extend to the case where each pair (u, v) of nodes has a distinct distance requirement, say d(u, v). The approximation guarantees above hold provided d is replaced by max_{u,v} d(u, v). All our algorithmic as well as hardness results hold for both undirected and directed versions of the problem.", "Optimizing parameters of Two-Prover-One-Round Game (2P1R) is an important task in PCPs literature as it would imply a smaller PCP with the same or stronger soundness. While this is a basic question in PCPs community, the connection between the parameters of PCPs and hardness of approximations is sometimes obscure to approximation algorithm community. In this paper, we investigate the connection between the parameters of 2P1R and the hardness of approximating the class of so-called connectivity problems, which includes as subclasses the survivable network design and (multi)cut problems. Based on recent development on 2P1R by Chan (STOC 2013) and several techniques in PCPs literature, we improve hardness results of some connectivity problems that are of the form k^σ, for some (very) small constant σ > 0, to hardness results of the form k^c for some explicit constant c, where k is a connectivity parameter. 
In addition, we show how to convert these hardness results into hardness results of the form D^c', where D is the number of demand pairs (or the number of terminals).", "In the survivable network design problem (SNDP), the goal is to find a minimum-cost spanning subgraph satisfying certain connectivity requirements. We study the vertex-connectivity variant of SNDP in which the input specifies, for each pair of vertices, a required number of vertex-disjoint paths connecting them. We give the first strong lower bound on the approximability of SNDP, showing that the problem admits no efficient @math ratio approximation for any fixed @math , unless @math . We show hardness of approximation results for some important special cases of SNDP, and we exhibit the first lower bound on the approximability of the related classical NP-hard problem of augmenting the connectivity of a graph using edges from a given set.", "We present a 6-approximation algorithm for the minimum-cost @math -node connected spanning subgraph problem, assuming that the number of nodes is at least @math . We apply a combinatorial preprocessing, based on the Frank--Tardos algorithm for @math -outconnectivity, to transform any input into an instance such that the iterative rounding method gives a 2-approximation guarantee. This is the first constant factor approximation algorithm even in the asymptotic setting of the problem, that is, the restriction to instances where the number of nodes is lower bounded by a function of @math ." ] }
1904.03219
2925520559
We consider a new problem of designing a network with small @math - @math effective resistance. In this problem, we are given an undirected graph @math , two designated vertices @math , and a budget @math . The goal is to choose a subgraph of @math with at most @math edges to minimize the @math - @math effective resistance. This problem is an interpolation between the shortest path problem and the minimum cost flow problem and has applications in electrical network design. We present several algorithmic and hardness results for this problem and its variants. On the hardness side, we show that the problem is NP-hard, and the weighted version is hard to approximate within a factor smaller than two assuming the small-set expansion conjecture. On the algorithmic side, we analyze a convex programming relaxation of the problem and design a constant factor approximation algorithm. The key of the rounding algorithm is a randomized path-rounding procedure based on the optimality conditions and a flow decomposition of the fractional solution. We also use dynamic programming to obtain a fully polynomial time approximation scheme when the input graph is a series-parallel graph, with better approximation ratio than the integrality gap of the convex program for these graphs.
Spectral connectivity requirements were also studied, including spectral gap @cite_1 @cite_11 (closely related to graph expansion), total effective resistances @cite_8 , and mixing time @cite_13 . Some of the earlier works only proposed convex programming relaxations and heuristic algorithms. Approximation guarantees are only obtained in two recent papers for the more general experimental design problem. When every edge has the same cost, there is a @math -approximation algorithm for minimizing the total effective resistance when the budget is at least @math @cite_31 , and there is a @math -approximation algorithm for maximizing the spectral gap when the budget is at least @math @cite_5 . For our problem, the interesting regime is when @math is much smaller than @math , where the techniques in @cite_5 @cite_31 do not apply. We have developed a set of new techniques for analyzing and rounding the solutions to the convex program that will hopefully find applications for solving related problems.
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_5", "@cite_31", "@cite_13", "@cite_11" ], "mid": [ "1987717935", "2116673134", "2769899835", "2788976390", "2134711723", "2014255080" ], "abstract": [ "The effective resistance between two nodes of a weighted graph is the electrical resistance seen between the nodes of a resistor network with branch conductances given by the edge weights. The effective resistance comes up in many applications and fields in addition to electrical network analysis, including, for example, Markov chains and continuous-time averaging networks. In this paper we study the problem of allocating edge weights on a given graph in order to minimize the total effective resistance, i.e., the sum of the resistances between all pairs of nodes. We show that this is a convex optimization problem and can be solved efficiently either numerically or, in some cases, analytically. We show that optimal allocation of the edge weights can reduce the total effective resistance of the graph (compared to uniform weights) by a factor that grows unboundedly with the size of the graph. We show that among all graphs with @math nodes, the path has the largest value of optimal total effective resistance and the complete graph has the least.", "The algebraic connectivity of a graph is the second smallest eigenvalue of the graph Laplacian, and is a measure of how well-connected the graph is. We study the problem of adding edges (from a set of candidate edges) to a graph so as to maximize its algebraic connectivity. This is a difficult combinatorial optimization, so we seek a heuristic for approximately solving the problem. The standard convex relaxation of the problem can be expressed as a semidefinite program (SDP); for modest sized problems, this yields a cheaply computable upper bound on the optimal value, as well as a heuristic for choosing the edges to be added. We describe a new greedy heuristic for the problem. 
The heuristic is based on the Fiedler vector, and therefore can be applied to very large graphs.", "The experimental design problem concerns the selection of k points from a potentially large design pool of p-dimensional vectors, so as to maximize the statistical efficiency regressed on the selected k design points. Statistical efficiency is measured by optimality criteria, including A(verage), D(eterminant), T(race), E(igen), V(ariance) and G-optimality. Except for the T-optimality, exact optimization is NP-hard. We propose a polynomial-time regret minimization framework to achieve a @math approximation with only @math design points, for all the optimality criteria above. In contrast, to the best of our knowledge, before our work, no polynomial-time algorithm achieves @math approximations for D/E/G-optimality, and the best poly-time algorithm achieving @math -approximation for A/V-optimality requires @math design points.", "We study the optimal design problem where the goal is to choose a set of linear measurements to obtain the most accurate estimate of an unknown vector in @math dimensions. We study the @math -optimal design variant where the objective is to minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. The problem also finds applications in sensor placement in wireless networks, sparse least squares regression, feature selection for @math -means clustering, and matrix approximation. In this paper, we introduce proportional volume sampling to obtain improved approximation algorithms for @math -optimal design. Our main result is to obtain improved approximation algorithms for the @math -optimal design problem by introducing the proportional volume sampling algorithm. Our results give nearly optimal bounds in the asymptotic regime when the number of measurements done, @math , is significantly more than the dimension @math .
We also give the first approximation algorithms when @math is small, including when @math . The proportional volume-sampling algorithm also gives approximation algorithms for other optimal design objectives such as @math -optimal design and the generalized ratio objective, matching or improving previous best known results. Interestingly, we show that a similar guarantee cannot be obtained for the @math -optimal design problem. We also show that the @math -optimal design problem is NP-hard to approximate within a fixed constant when @math .", "We consider a symmetric random walk on a connected graph, where each edge is labeled with the probability of transition between the two adjacent vertices. The associated Markov chain has a uniform equilibrium distribution; the rate of convergence to this distribution, i.e., the mixing rate of the Markov chain, is determined by the second largest eigenvalue modulus (SLEM) of the transition probability matrix. In this paper we address the problem of assigning probabilities to the edges of the graph in such a way as to minimize the SLEM, i.e., the problem of finding the fastest mixing Markov chain on the graph. We show that this problem can be formulated as a convex optimization problem, which can in turn be expressed as a semidefinite program (SDP). This allows us to easily compute the (globally) fastest mixing Markov chain for any graph with a modest number of edges (say, @math ) using standard numerical methods for SDPs. Larger problems can be solved by exploiting various types of symmetry and structure in the problem, and far larger problems (say, 100,000 edges) can be solved using a subgradient method we describe. We compare the fastest mixing Markov chain to those obtained using two commonly used heuristics: the maximum-degree method, and the Metropolis--Hastings algorithm. For many of the examples considered, the fastest mixing Markov chain is substantially faster than those obtained using these heuristic methods.
We derive the Lagrange dual of the fastest mixing Markov chain problem, which gives a sophisticated method for obtaining (arbitrarily good) bounds on the optimal mixing rate, as well as the optimality conditions. Finally, we describe various extensions of the method, including a solution of the problem of finding the fastest mixing reversible Markov chain, on a fixed graph, with a given equilibrium distribution.", "We consider a variation of the spectral sparsification problem where we are required to keep a subgraph of the original graph. Formally, given a union of two weighted graphs G and W and an integer k, we are asked to find a k-edge weighted graph Wk such that G+Wk is a good spectral sparsifier of G+W. We will refer to this problem as the subgraph (spectral) sparsification. We present a nontrivial condition on G and W such that a good sparsifier exists and give a polynomial-time algorithm to find the sparsifier. As an application of our technique, we show that for each positive integer k, every n-vertex weighted graph has an (n-1+k)-edge spectral sparsifier with relative condition number at most (n/k) log n · Õ(log log n), where Õ(·) hides lower order terms. Our bound nearly settles a question left open by Spielman and Teng about ultrasparsifiers. We also present another application of our technique to spectral optimization in which the goal is to maximize the algebraic connectivity of a graph (e.g. turn it into an expander) with a limited number of edges." ] }
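The s-t effective resistance minimized in the record above has a compact closed form via the Laplacian pseudoinverse: R(s,t) = (e_s - e_t)^T L^+ (e_s - e_t). A minimal NumPy sketch of that formula follows; the function name is ours, and unit edge conductances are an illustrative assumption:

```python
import numpy as np

def effective_resistance(n, edges, s, t):
    """s-t effective resistance of an undirected graph with unit-conductance
    edges, computed via the Moore-Penrose pseudoinverse of the Laplacian."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse L^+
    chi = np.zeros(n)
    chi[s], chi[t] = 1.0, -1.0      # indicator vector e_s - e_t
    return float(chi @ Lp @ chi)

# A 3-node path 0-1-2: two unit resistors in series between 0 and 2.
r = effective_resistance(3, [(0, 1), (1, 2)], 0, 2)   # r ≈ 2.0
```

Choosing a k-edge subgraph to minimize this quantity (the problem studied in the paper) is the hard combinatorial part; the sketch only evaluates the objective for a fixed edge set.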
1904.03283
2929710725
Recent literature has demonstrated the improved data discovery and delivery efficiency gained through applying the named data networking (NDN) to a variety of information-centric Internet of things (IoT) applications. However, from the data security perspective, the development of NDN-IoT raises several new authentication challenges. Particularly, NDN-IoT authentication requires per-packet-level signatures, thus imposing intolerably high computational and time costs on the resource-poor IoT end devices. This paper proposes an effective solution by seamlessly integrating the lightweight and unforgeable physical-layer identity (PHY-ID) into the existing NDN signature scheme for the mobile edge computing (MEC)-enabled NDN-IoT networks. The PHY-ID generation exploits the inherent per-signal-level device-specific radio-frequency imperfections of IoT devices, including the in-phase quadrature-phase imbalance, and thereby avoids adding any implementation complexity to the constrained IoT devices. We derive the offline maximum entropy-based quantization rule and propose an online two-step authentication scheme to improve the accuracy of the authentication decision-making. Consequently, a cooperative MEC device can securely execute the costly signing task on behalf of the authenticated IoT device in an optimal manner. The evaluation results demonstrate 1) elevated authentication time efficiency, 2) robustness to several impersonation attacks including the replay attack and the computation-based spoofing attack, and 3) increased differentiation rate and correct authentication probability through applying our integration design in MEC-enabled NDN-IoT networks.
In this case, @cite_7 highlighted the prospects of investigating a secure collaboration scheme that allows trusted proxies to sign on behalf of the constrained devices. To this end, offloading the signing task from the constrained IoT device to the MECD can be a promising research direction for MEC-enabled NDN-IoT networks. In @cite_0 @cite_19 @cite_26 , several collaborative task offloading schemes that exploit the CPU resources between MECDs and end users were investigated, but none of them took into account the PHY authentication in their offloading designs. Given that the signature in NDN-IoT is used for data provenance verification, the "task provider" authentication at the MECD becomes especially necessary before the execution of any signing task offloading.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_26", "@cite_7" ], "mid": [ "2801399598", "2753328985", "2962912298", "2495924227" ], "abstract": [ "This letter considers a mobile edge computing system with one access point (AP) serving multiple users over a multicarrier channel, in the presence of a malicious eavesdropper. In this system, each user can execute the respective computation tasks by partitioning them into two parts, which are computed locally and offloaded to AP, respectively. We exploit the physical-layer security to secure the multiuser computation offloading from being overheard by the eavesdropper. Under this setup, we minimize the weighted sum-energy consumption for these users, subject to the newly imposed secrecy offloading rate constraints and the computation latency constraints, by jointly optimizing their computation and communication resource allocations. We propose an efficient algorithm to solve this problem.", "Scavenging the idling computation resources at the enormous number of mobile devices, ranging from small IoT devices to powerful laptop computers, can provide a powerful platform for local mobile cloud computing. The vision can be realized by peer-to-peer cooperative computing between edge devices, referred to as co-computing . This paper exploits the non-causal helper’s CPU-state information to design energy-efficient co-computing policies for scavenging time-varying spare computation resources at peer mobiles. Specifically, we consider a co-computing system where a user offloads computation of input data to a helper. The helper controls the offloading process for the objective of minimizing the user’s energy consumption based on a predicted helper’s CPU-idling profile that specifies the amount of available computation resource for co-computing. Consider the scenario that the user has one-shot input-data arrival and the helper buffers offloaded bits. 
The problem for energy-efficient co-computing is formulated as two sub-problems: the slave problem corresponding to adaptive offloading and the master one to data partitioning. Given a fixed offloaded data size, the adaptive offloading aims at minimizing the energy consumption for offloading by controlling the offloading rate under the deadline and buffer constraints. By deriving the necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policies and propose algorithms for computing the policies. Furthermore, we show that the problem of optimal data partitioning for offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Finally, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user accounting for data causality constraints. Simulation results verify the effectiveness of the proposed algorithms.", "Designing mobile edge computing (MEC) systems by jointly optimizing communication and computation resources, which can help increase mobile batteries' lifetime and improve quality of experience for computation-intensive and latency-sensitive applications, has received significant interest. In this paper, we consider energy-efficient resource allocation schemes for a multi-user mobile edge computing system with inelastic computation tasks and non-negligible task execution durations. First, we establish a mathematical model to characterize the offloading of a computation task from a mobile to the base station (BS) equipped with MEC servers. This computation-offloading model consists of three stages, i.e., task uploading, task executing, and computation result downloading, and allows parallel transmissions and executions for different tasks. 
Then, we formulate the weighted sum energy consumption minimization problem to optimally allocate the task operation sequence, the uploading and downloading time durations as well as the starting times for uploading, executing and downloading, which is a challenging mixed discrete-continuous optimization problem and is NP-hard in general. We propose a method to obtain an optimal solution and develop a low-complexity algorithm to obtain a suboptimal solution, by connecting the optimization problem to a three-stage flow-shop scheduling problem and utilizing Johnson's algorithm as well as convex optimization techniques. Finally, numerical results show that the proposed sub-optimal solution outperforms existing comparison schemes.", "The technological advances in mobile devices are pushing cloud computing to the network edge, where services such as data storage and processing can be offered by mobile devices locally with improved quality. In this letter, we identify named data networking (NDN) as a key enabler to support “by design” the peculiarities of decentralized edge clouds at the network layer. We extend NDN beyond its original scope of content retrieval facilitator, by letting names address not only “contents” but also “cloud services,” and enhancing the semantics of NDN primitives to efficiently and reliably support both the provider discovery and the service provisioning phases. An early evaluation is performed to showcase the benefits of the proposal, and also when compared with a traditional TCP/IP-based approach." ] }
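The flow-shop connection in the scheduling abstract above builds on Johnson's classic rule; a sketch of the two-machine version is below (the paper's three-stage variant is more involved). Function names and the example job data are our own:

```python
def johnsons_rule(jobs):
    """Johnson's rule for the two-machine flow shop: jobs with p1 <= p2 go
    first in increasing p1, the rest go last in decreasing p2. Returns a
    job order minimizing the makespan. jobs = list of (p1, p2) times."""
    front = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 <= p2),
                   key=lambda i: jobs[i][0])
    back = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 > p2),
                  key=lambda i: -jobs[i][1])
    return front + back

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for i in order:
        t1 += jobs[i][0]                # machine 1 runs jobs back to back
        t2 = max(t2, t1) + jobs[i][1]   # machine 2 waits for machine 1
    return t2

jobs = [(3, 2), (1, 4), (2, 5)]   # (machine-1 time, machine-2 time)
order = johnsons_rule(jobs)       # -> [1, 2, 0], makespan 12
```

For this instance the makespan 12 matches the lower bound given by the shortest first-machine time plus all second-machine times (1 + 4 + 5 + 2), so the order is optimal.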
1904.03283
2929710725
Recent literature has demonstrated the improved data discovery and delivery efficiency gained through applying the named data networking (NDN) to a variety of information-centric Internet of things (IoT) applications. However, from the data security perspective, the development of NDN-IoT raises several new authentication challenges. Particularly, NDN-IoT authentication requires per-packet-level signatures, thus imposing intolerably high computational and time costs on the resource-poor IoT end devices. This paper proposes an effective solution by seamlessly integrating the lightweight and unforgeable physical-layer identity (PHY-ID) into the existing NDN signature scheme for the mobile edge computing (MEC)-enabled NDN-IoT networks. The PHY-ID generation exploits the inherent per-signal-level device-specific radio-frequency imperfections of IoT devices, including the in-phase quadrature-phase imbalance, and thereby avoids adding any implementation complexity to the constrained IoT devices. We derive the offline maximum entropy-based quantization rule and propose an online two-step authentication scheme to improve the accuracy of the authentication decision-making. Consequently, a cooperative MEC device can securely execute the costly signing task on behalf of the authenticated IoT device in an optimal manner. The evaluation results demonstrate 1) elevated authentication time efficiency, 2) robustness to several impersonation attacks including the replay attack and the computation-based spoofing attack, and 3) increased differentiation rate and correct authentication probability through applying our integration design in MEC-enabled NDN-IoT networks.
MECDs can access the local device-specific PHY information faster than the core content router @cite_18 and are equipped with more computational resources than IoT devices. It is anticipated that the MECDs can make accurate authentication decisions based on their knowledge of the local devices' PHY fingerprints and can further leverage PHY fingerprinting to strengthen the system robustness against the aforementioned computation-based impersonation attacks.
{ "cite_N": [ "@cite_18" ], "mid": [ "2963391656" ], "abstract": [ "Information-centric networking (ICN) replaces the widely used host-centric networking paradigm in communication networks (e.g., Internet and mobile ad hoc networks) with an information-centric paradigm, which prioritizes the delivery of named content, oblivious of the contents’ origin. Content and client security, provenance, and identity privacy are intrinsic by design in the ICN paradigm as opposed to the current host centric paradigm where they have been instrumented as an after-thought. However, given its nascency, the ICN paradigm has several open security and privacy concerns. In this paper, we survey the existing literature in security and privacy in ICN and present open questions. More specifically, we explore three broad areas: 1) security threats; 2) privacy risks; and 3) access control enforcement mechanisms. We present the underlying principle of the existing works, discuss the drawbacks of the proposed approaches, and explore potential future research directions. In security, we review attack scenarios, such as denial of service, cache pollution, and content poisoning. In privacy, we discuss user privacy and anonymity, name and signature privacy, and content privacy. ICN’s feature of ubiquitous caching introduces a major challenge for access control enforcement that requires special attention. We review existing access control mechanisms including encryption-based, attribute-based, session-based, and proxy re-encryption-based access control schemes. We conclude the survey with lessons learned and scope for future work." ] }
1904.03339
2927167868
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. JESSI is a combination of two sentence encoders: (a) one using multiple pre-trained word embeddings learned from log-bilinear regression (GloVe) and translation (CoVe) models, and (b) one on top of word encodings from a pre-trained deep bidirectional transformer (BERT). We include a domain adversarial training module when training for out-of-domain samples. Our experiments show that while BERT performs exceptionally well for in-domain samples, several runs of the model show that it is unstable for out-of-domain samples. The problem is mitigated tremendously by (1) combining BERT with a non-BERT encoder, and (2) using an RNN-based classifier on top of BERT. Our final models obtained second place with 77.78 F-Score on Subtask A (i.e. in-domain) and achieved an F-Score of 79.59 on Subtask B (i.e. out-of-domain), even without using any additional external data.
In text classification, training and test data distributions can be different, and thus domain adaptation techniques are used. These include non-neural methods that map the semantics between domains by aligning the vocabulary @cite_10 @cite_0 and generating labeled samples @cite_24 @cite_12 . Neural methods include the use of stacked denoising autoencoders @cite_30 and variational autoencoders @cite_8 @cite_21 . Our model uses a domain adversarial training module @cite_15 , an elegant way to effectively transfer knowledge between domains by training a separate domain classifier using an adversarial objective.
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_21", "@cite_0", "@cite_24", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "22861983", "2594718649", "2963326042", "2153353890", "2167660864", "1731081199", "1603678606", "2567698949" ], "abstract": [ "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "It is important to apply models trained on a large number of labeled samples to different domains because collecting many labeled samples in various domains is expensive. To learn discriminative representations for the target domain, we assume that artificially labeling the target samples can result in a good representation. Tri-training leverages three classifiers equally to provide pseudo-labels to unlabeled samples; however, the method does not assume labeling samples generated from a different domain. In this paper, we propose the use of an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train the neural networks as if they are true labels. 
In our work, we use three networks asymmetrically, and by asymmetric, we mean that two networks are used to label unlabeled target samples, and one network is trained by the pseudo-labeled samples to obtain target-discriminative representations. Our proposed method was shown to achieve a state-of-the-art performance on the benchmark digit recognition datasets for domain adaptation.", "", "Sentiment classification aims to automatically predict sentiment polarity (e.g., positive or negative) of users publishing sentiment data (e.g., reviews, blogs). Although traditional classification algorithms can be used to train sentiment classifiers from manually labeled text data, the labeling work can be time-consuming and expensive. Meanwhile, users often use some different words when they express sentiment in different domains. If we directly apply a classifier trained in one domain to other domains, the performance will be very low due to the differences between these domains. In this work, we develop a general solution to sentiment classification when we do not have any labels in a target domain but have some labeled data in a different domain, regarded as source domain. In this cross-domain sentiment classification setting, to bridge the gap between the domains, we propose a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, with the help of domain-independent words as a bridge. In this way, the clusters can be used to reduce the gap between domain-specific words of the two domains, which can be used to train sentiment classifiers in the target domain accurately. Compared to previous approaches, SFA can discover a robust representation for cross-domain data by fully exploiting the relationship between the domain-specific and domain-independent words via simultaneously co-clustering them in a common latent space. 
We perform extensive experiments on two real world datasets, and demonstrate that SFA significantly outperforms previous approaches to cross-domain sentiment classification.", "The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. Machine translation services are used for eliminating the language gap between the training set and test set, and English features and Chinese features are considered as two independent views of the classification problem. We propose a cotraining approach to making use of unlabeled Chinese data. Experimental results show the effectiveness of the proposed approach, which can outperform the standard inductive classifiers and the transductive classifiers.", "We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. 
We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.", "Recent work on the transfer of semantic information across languages has been recently applied to the development of resources annotated with Frame information for different non-English European languages. These works are based on the assumption that parallel corpora annotated for English can be used to transfer the semantic information to the other target languages. In this paper, a robust method based on a statistical machine translation step augmented with simple rule-based post-processing is presented. It alleviates problems related to preprocessing errors and the complex optimization required by syntax-dependent models of the cross-lingual mapping. Different alignment strategies are here investigated against the Europarl corpus. Results suggest that the quality of the derived annotations is surprisingly good and well suited for training semantic role labeling systems.", "" ] }
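The gradient reversal layer described in the abstract above (identity in the forward pass, negated gradient in the backward pass) is small enough to sketch directly. Below is a minimal NumPy sketch with manual forward/backward hooks; the class name and the lam scaling factor are illustrative assumptions, not the paper's code:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer (Ganin & Lempitsky style): acts as the
    identity on the forward pass and multiplies the incoming gradient by
    -lam on the backward pass, so the feature extractor is pushed to make
    the domain classifier's task harder."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # identity: features pass unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out   # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0])
y = grl.forward(x)                    # same values as x
g = grl.backward(np.ones(2))          # [-0.5, -0.5]
```

In a full DANN setup this layer would sit between the shared encoder and the domain classifier, with lam typically annealed over training; autograd frameworks implement it as a custom backward function rather than explicit hooks.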
1904.03339
2927167868
This paper describes our system, Joint Encoders for Stable Suggestion Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online Reviews and Forums. JESSI is a combination of two sentence encoders: (a) one using multiple pre-trained word embeddings learned from log-bilinear regression (GloVe) and translation (CoVe) models, and (b) one on top of word encodings from a pre-trained deep bidirectional transformer (BERT). We include a domain adversarial training module when training for out-of-domain samples. Our experiments show that while BERT performs exceptionally well for in-domain samples, several runs of the model show that it is unstable for out-of-domain samples. The problem is mitigated tremendously by (1) combining BERT with a non-BERT encoder, and (2) using an RNN-based classifier on top of BERT. Our final models obtained second place with 77.78 F-Score on Subtask A (i.e. in-domain) and achieved an F-Score of 79.59 on Subtask B (i.e. out-of-domain), even without using any additional external data.
Inspired by the computer vision field, where ImageNet @cite_6 is used to pretrain models for other tasks @cite_17 , many recent efforts in the NLP community have successfully used language modeling as a pretraining step, both to extract feature representations @cite_20 and to fine-tune NLP models @cite_5 @cite_18 . BERT @cite_18 is the most recent addition to these models; it uses a deep bidirectional transformer trained with masked language modeling and next-sentence prediction objectives. Its authors reported that BERT yields significant improvements on many NLP tasks, and subsequent studies have shown that BERT is also effective on harder tasks such as open-domain question answering @cite_23 , multiple relation extraction @cite_29 , and table question answering @cite_19 , among others. In this paper, we also use BERT as an encoder, show its weakness on out-of-domain samples, and mitigate the problem using multiple tricks.
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_6", "@cite_19", "@cite_23", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "2963341956", "2911327180", "2108598243", "2912624765", "2912817604", "", "2962739339", "" ], "abstract": [ "", "Most approaches to extraction multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive and this makes difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple relation extraction by encoding the paragraph only once (one-pass). We build our solution on the pre-trained self-attentive (Transformer) models, where we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding to capture multiple relational information associated with each entity with an entity-aware attention technique. We show that our approach is not only scalable but can also perform state-of-the-art on the standard benchmark ACE 2005.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. 
Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "WikiSQL is the task of mapping a natural language question to a SQL query given a table from a Wikipedia article. We first show that learning highly context- and table-aware word representations is arguably the most important consideration for achieving a high accuracy in the task. We explore three variants of BERT-based architecture and our best model outperforms the previous state of the art by 8.2 and 2.5 in logical form and execution accuracy, respectively. We provide a detailed analysis of the models to guide how word contextualization can be utilized in a such semantic parsing task. We then argue that this score is near the upper bound in WikiSQL, where we observe that the most of the evaluation errors are due to wrong annotations. We also measure human accuracy on a portion of the dataset and show that our model exceeds the human performance, at least by 1.4 execution accuracy.", "We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. 
We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.", "", "We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.", "" ] }
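The combination of a BERT encoder with a non-BERT encoder described above is, at its simplest, a late fusion: concatenate the two sentence embeddings and classify the result. A toy sketch under that assumption (the 2-d embeddings, the linear weights `w`, and the bias `b` are all hypothetical stand-ins for trained parameters, not JESSI's actual architecture):

```python
import math


def fuse_and_classify(emb_a, emb_b, w, b):
    """Late fusion of two sentence encoders: concatenate their
    embeddings and apply a linear layer + sigmoid to score the
    sentence (e.g., the probability that it is a suggestion)."""
    z = emb_a + emb_b  # list concatenation, not element-wise addition
    score = sum(wi * zi for wi, zi in zip(w, z)) + b
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid


# Hypothetical 2-d sentence embeddings from two different encoders:
p = fuse_and_classify([0.5, -0.2], [1.0, 0.3],
                      w=[1.0, 0.0, 1.0, 0.0], b=0.0)
```

The intuition for the stability fix is visible even in this toy form: if one encoder's embedding is unreliable on out-of-domain input, the classifier can still lean on the other encoder's half of the concatenated vector.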
1904.03326
2932480350
360° images represent scenes captured in all possible viewing directions. They enable viewers to navigate freely around the scene and thus provide an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only some parts of the scenes are observed, and valuable information about the surroundings is lost. In this paper, we propose a learning-based approach that reconstructs the scene in 360° x 180° from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360° panoramic output. The proposed method overcomes the difficulty of learning-based approaches in synthesizing high-resolution images (up to 512 x 1024). Experimental results demonstrate that the proposed method produces 360° panoramas with reasonable quality. It is shown that it outperforms the alternative method and can be generalized to non-panorama scenes and images captured by a smartphone camera.
Inpainting methods interpolate missing or occluded regions of an input image by filling these regions with plausible pixels. Most algorithms rely on neighboring pixels to propagate pixel information into the target regions @cite_24 @cite_29 . These methods generally handle images with narrow holes and do not perform well on large holes @cite_13 @cite_9 . Liu et al. @cite_22 recently proposed to handle arbitrary holes by training convolutional neural networks (CNNs); they use partial convolutions with a binary mask as a prior for the missing regions. Iizuka et al. @cite_2 proposed local and global discriminator networks to maintain consistent results in the missing regions. However, these methods operate on images with known boundaries and are not suitable for extrapolating beyond the field of view.
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_29", "@cite_24", "@cite_2", "@cite_13" ], "mid": [ "2798365772", "", "", "2100415658", "2738588019", "1993120651" ], "abstract": [ "Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. We show qualitative and quantitative comparisons with other methods to validate our approach.", "", "", "A variational approach for filling-in regions of missing data in digital images is introduced. The approach is based on joint interpolation of the image gray levels and gradient isophotes directions, smoothly extending in an automatic fashion the isophote lines into the holes of missing data. This interpolation is computed by solving the variational problem via its gradient descent flow, which leads to a set of coupled second order partial differential equations, one for the gray-levels and one for the gradient orientations. The process underlying this approach can be considered as an interpretation of the Gestaltist's principle of good continuation. No limitations are imposed on the topology of the holes, and all regions of missing data can be simultaneously processed, even if they are surrounded by completely different structures. 
Applications of this technique include the restoration of old photographs and removal of superimposed text like dates, subtitles, or publicity. Examples of these applications are given. We conclude the paper with a number of theoretical results on the proposed variational approach and its corresponding gradient descent flow.", "We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. 
However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods." ] }
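The partial convolution in @cite_22 aggregates only valid (unmasked) pixels, rescales by the fraction of the window that is valid, and updates the mask so that any window containing at least one valid pixel becomes valid. A single-window sketch of that rule (the renormalization follows the paper's formula, but this is an illustrative reimplementation, not the authors' code):

```python
def partial_conv_window(patch, mask, kernel, bias=0.0):
    """One output location of a partial convolution: convolve only over
    valid (mask == 1) pixels, rescale by window_size / num_valid, and
    emit the updated mask bit (1 if any input pixel was valid)."""
    h, w = len(patch), len(patch[0])
    valid = sum(mask[i][j] for i in range(h) for j in range(w))
    if valid == 0:
        return 0.0, 0  # the window lies entirely inside the hole
    s = sum(kernel[i][j] * patch[i][j] * mask[i][j]
            for i in range(h) for j in range(w))
    return s * (h * w) / valid + bias, 1


# A 2x2 window where one pixel is missing (mask == 0):
out, m = partial_conv_window([[1, 2], [3, 4]],
                             [[1, 0], [1, 1]],
                             [[1, 1], [1, 1]])
```

Applied layer by layer, the updated mask shrinks the hole from the outside in, which is how the method copes with irregularly shaped missing regions.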
1904.03326
2932480350
360° images represent scenes captured in all possible viewing directions. They enable viewers to navigate freely around the scene and thus provide an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only some parts of the scenes are observed, and valuable information about the surroundings is lost. In this paper, we propose a learning-based approach that reconstructs the scene in 360° x 180° from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360° panoramic output. The proposed method overcomes the difficulty of learning-based approaches in synthesizing high-resolution images (up to 512 x 1024). Experimental results demonstrate that the proposed method produces 360° panoramas with reasonable quality. It is shown that it outperforms the alternative method and can be generalized to non-panorama scenes and images captured by a smartphone camera.
Novel view synthesis generates images of a scene from different viewing directions. The task includes generating different poses under a limited rotation @cite_6 . Dosovitskiy et al. @cite_19 proposed a learning-based approach to synthesize diverse view variations; the method is capable of rendering different input models. Zhou et al. @cite_5 proposed an appearance flow method to synthesize objects under extreme view variations, but it is limited to a single object on a homogeneous background. These existing studies handle objects with limited shape variance and inward-looking views, whereas our work handles outward-looking views of relatively diverse scenes.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_6" ], "mid": [ "1893585201", "2348664362", "2263714001" ], "abstract": [ "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.", "We address the problem of novel view synthesis: given an input image, synthesizing new images of the same object or scene observed from arbitrary viewpoints. We approach this as a learning task but, critically, instead of learning to synthesize pixels from scratch, we learn to copy them from the input image. Our approach exploits the observation that the visual appearance of different views of the same instance is highly correlated, and such correlation could be explicitly learned by training a convolutional neural network (CNN) to predict appearance flows – 2-D coordinate vectors specifying which pixels in the input view could be used to reconstruct the target view. Furthermore, the proposed framework easily generalizes to multiple input views by learning how to optimally combine single-view predictions. We show that for both objects and scenes, our approach is able to synthesize novel views of higher perceptual quality than previous CNN-based techniques.", "We present a convolutional network capable of generating images of a previously unseen object from arbitrary viewpoints given a single image of this object. 
The input to the network is a single image and the desired new viewpoint; the output is a view of the object from this desired viewpoint. The network is trained on renderings of synthetic 3D models. It learns an implicit 3D representation of the object class, which allows it to transfer shape knowledge from training instances to a new object instance. Beside the color image, the network can also generate the depth map of an object from arbitrary viewpoints. This allows us to predict 3D point clouds from a single image, which can be fused into a surface mesh. We experimented with cars and chairs. Even though the network is trained on artificial data, it generalizes well to objects in natural images without any modifications." ] }
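The appearance flow of Zhou et al. @cite_5 synthesizes the target view by predicting, for each output pixel, a 2-D coordinate in the input image and copying pixels via bilinear sampling. The sampling operator itself is simple; a pure-Python sketch for single-channel images (the flow field here is hand-written, whereas the paper predicts it with a CNN):

```python
import math


def bilinear_sample(img, y, x):
    """Sample a 2-D image at fractional (y, x) with bilinear interpolation."""
    y0, x0 = int(math.floor(y)), int(math.floor(x))
    y1 = min(y0 + 1, len(img) - 1)      # clamp to image bounds
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = y - y0, x - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy


def warp(img, flow):
    """flow[i][j] = (y, x) source coordinate for target pixel (i, j)."""
    return [[bilinear_sample(img, y, x) for (y, x) in row] for row in flow]
```

Because sampling copies colors from the input rather than generating them, the synthesized view inherits the input's texture fidelity, which is the key argument of the appearance-flow approach.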
1904.03326
2932480350
360° images represent scenes captured in all possible viewing directions. They enable viewers to navigate freely around the scene and thus provide an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only some parts of the scenes are observed, and valuable information about the surroundings is lost. In this paper, we propose a learning-based approach that reconstructs the scene in 360° x 180° from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360° panoramic output. The proposed method overcomes the difficulty of learning-based approaches in synthesizing high-resolution images (up to 512 x 1024). Experimental results demonstrate that the proposed method produces 360° panoramas with reasonable quality. It is shown that it outperforms the alternative method and can be generalized to non-panorama scenes and images captured by a smartphone camera.
FrameBreak @cite_23 yields impressive results in generating partial panoramas from images with a small field of view. The method requires manually selecting a reference image to be aligned with the input image; guided patch-based texture synthesis is then used to generate the missing pixels. The process requires reference images that are highly similar to the input.
{ "cite_N": [ "@cite_23" ], "mid": [ "2131585809" ], "abstract": [ "We significantly extrapolate the field of view of a photograph by learning from a roughly aligned, wide-angle guide image of the same scene category. Our method can extrapolate typical photos into complete panoramas. The extrapolation problem is formulated in the shift-map image synthesis framework. We analyze the self-similarity of the guide image to generate a set of allowable local transformations and apply them to the input image. Our guided shift-map method reserves to the scene layout of the guide image when extrapolating a photograph. While conventional shift-map methods only support translations, this is not expressive enough to characterize the self-similarity of complex scenes. Therefore we additionally allow image transformations of rotation, scaling and reflection. To handle this increase in complexity, we introduce a hierarchical graph optimization method to choose the optimal transformation at each output pixel. We demonstrate our approach on a variety of indoor, outdoor, natural, and man-made scenes." ] }
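The guided patch-based synthesis behind FrameBreak @cite_23 repeatedly fills missing regions with the best-matching patch under an appearance distance. Its core matching step is a sum-of-squared-differences search; a minimal sketch without the guidance term or the rotation/scale/reflection transformations (the candidate patches here are toy data):

```python
def best_patch(target, candidates):
    """Return the index of the candidate patch with the smallest
    sum-of-squared-differences (SSD) to the target patch."""
    def ssd(a, b):
        return sum((x - y) ** 2
                   for ra, rb in zip(a, b)
                   for x, y in zip(ra, rb))
    return min(range(len(candidates)),
               key=lambda k: ssd(target, candidates[k]))


# Toy 2x2 grayscale patches; the middle candidate is the closest match.
target = [[1, 1], [1, 1]]
cands = [[[0, 0], [0, 0]], [[1, 1], [1, 2]], [[5, 5], [5, 5]]]
idx = best_patch(target, cands)
```

The full method additionally restricts candidates by the guide image's layout, which is what keeps the extrapolated panorama globally plausible rather than merely locally smooth.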
1904.03326
2932480350
360° images represent scenes captured in all possible viewing directions. They enable viewers to navigate freely around the scene and thus provide an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only some parts of the scenes are observed, and valuable information about the surroundings is lost. In this paper, we propose a learning-based approach that reconstructs the scene in 360° x 180° from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360° panoramic output. The proposed method overcomes the difficulty of learning-based approaches in synthesizing high-resolution images (up to 512 x 1024). Experimental results demonstrate that the proposed method produces 360° panoramas with reasonable quality. It is shown that it outperforms the alternative method and can be generalized to non-panorama scenes and images captured by a smartphone camera.
Xiao et al. @cite_3 predicted the viewpoint of a given observation within a panorama. The prediction yields a rough panorama structure and a compass-like estimate of where the viewpoint lies in 360°. Georgoulis et al. @cite_20 estimated an environment map from the reflectance properties of input images, using the reflectance of a foreground object to estimate the background environment in a panoramic representation. In contrast to these works, ours does not rely on reference images as input, and it synthesizes actual panoramic imagery.
{ "cite_N": [ "@cite_20", "@cite_3" ], "mid": [ "2741645903", "2160398734" ], "abstract": [ "How much does a single image reveal about the environment it was taken in? In this paper, we investigate how much of that information can be retrieved from a foreground object, combined with the background (i.e. the visible part of the environment). Assuming it is not perfectly diffuse, the foreground object acts as a complexly shaped andfar-from-perfect mirror An additional challenge is that its appearance confounds the light coming from the environment with the unknown materials it is made of. We propose a learning-based approach to predict the environment from multiple reflectance maps that are computed from approximate surface normals. The proposed method allows us to jointly model the statistics of environments and material properties. We train our system from synthesized training data, but demonstrate its applicability to real-world data. Interestingly, our analysis shows that the information obtained from objects made out of multiple materials often is complementary and leads to better performance.", "We introduce the problem of scene viewpoint recognition, the goal of which is to classify the type of place shown in a photo, and also recognize the observer's viewpoint within that category of place. We construct a database of 360° panoramic images organized into 26 place categories. For each category, our algorithm automatically aligns the panoramas to build a full-view representation of the surrounding place. We also study the symmetry properties and canonical viewpoint of each place category. At test time, given a photo of a scene, the model can recognize the place category, produce a compass-like indication of the observer's most likely viewpoint within that place, and use this information to extrapolate beyond the available view, filling in the probable visual layout that would appear beyond the boundary of the photo." ] }
1904.03326
2932480350
360° images represent scenes captured in all possible viewing directions. They enable viewers to navigate freely around the scene and thus provide an immersive experience. Conversely, conventional images represent scenes in a single viewing direction with a small or limited field of view (FOV). As a result, only some parts of the scenes are observed, and valuable information about the surroundings is lost. In this paper, we propose a learning-based approach that reconstructs the scene in 360° x 180° from a sparse set of conventional images (typically 4 images). The proposed approach first estimates the FOV of input images relative to the panorama. The estimated FOV is then used as the prior for synthesizing a high-resolution 360° panoramic output. The proposed method overcomes the difficulty of learning-based approaches in synthesizing high-resolution images (up to 512 x 1024). Experimental results demonstrate that the proposed method produces 360° panoramas with reasonable quality. It is shown that it outperforms the alternative method and can be generalized to non-panorama scenes and images captured by a smartphone camera.
Image-to-image translation, proposed by Isola et al. @cite_7 , maps an input domain to another domain using paired input-output examples for training. This work was extended by Wang et al. @cite_15 to synthesize high-resolution images from input semantic labels. Despite the strong results, their work relies on semantic labels, which are not always available or practical for most inputs. Karras et al. @cite_11 obtained impressive results in generating high-resolution images with a GAN trained in an unconditional manner. The network can synthesize facial images in great detail, but it requires a large amount of computing power.
{ "cite_N": [ "@cite_15", "@cite_7", "@cite_11" ], "mid": [ "2963800363", "2963073614", "2962760235" ], "abstract": [ "We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. 
As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.", "We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset." ] }
1904.03239
2928684767
Instance segmentation aims to detect and segment individual objects in a scene. Most existing methods rely on precise mask annotations of every category. However, it is difficult and costly to segment objects in novel categories because a large number of mask annotations is required. We introduce ShapeMask, which learns the intermediate concept of object shape to address the problem of generalization in instance segmentation to novel categories. ShapeMask starts with a bounding box detection and gradually refines it by first estimating the shape of the detected object through a collection of shape priors. Next, ShapeMask refines the coarse shape into an instance level mask by learning instance embeddings. The shape priors provide a strong cue for object-like prediction, and the instance embeddings model the instance specific appearance information. ShapeMask significantly outperforms the state-of-the-art by 6.4 and 3.8 AP when learning across categories, and obtains competitive performance in the fully supervised setting. It is also robust to inaccurate detections, decreased model capacity, and small training data. Moreover, it runs efficiently with 150ms inference time and trains within 11 hours on TPUs. With a larger backbone model, ShapeMask increases the gap with state-of-the-art to 9.4 and 6.2 AP across categories. Code will be released.
Recently, @cite_15 @cite_30 @cite_29 @cite_13 have studied instance segmentation algorithms that can generalize to categories without mask annotations. @cite_29 leverages the idea that, given a bounding box for the target object, one can obtain a pseudo mask label from a grouping-based segmentation algorithm like GrabCut @cite_6 . @cite_15 studies open-set instance segmentation using a boundary detector followed by grouping, while @cite_30 learns instance segmentation from image-level supervision via deep activations. Although effective, these approaches do not take advantage of available instance mask labels to achieve better performance.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "2963311325", "2552414813", "2124351162", "2806845529", "809122546" ], "abstract": [ "Weakly supervised instance segmentation with image-level labels, instead of expensive pixel-level masks, remains unexplored. In this paper, we tackle this challenging problem by exploiting class peak responses to enable a classification network for instance mask extraction. With image labels supervision only, CNN classifiers in a fully convolutional manner can produce class response maps, which specify classification confidence at each image location. We observed that local maximums, i.e., peaks, in a class response map typically correspond to strong visual cues residing inside each instance. Motivated by this, we first design a process to stimulate peaks to emerge from a class response map. The emerged peaks are then back-propagated and effectively mapped to highly informative regions of each object instance, such as instance boundaries. We refer to the above maps generated from class peak responses as Peak Response Maps (PRMs). PRMs provide a fine-detailed instance-level representation, which allows instance masks to be extracted even with some off-the-shelf methods. To the best of our knowledge, we for the first time report results for the challenging image-level supervised instance segmentation task. Extensive experiments show that our method also boosts weakly supervised pointwise localization as well as semantic segmentation performance, and reports state-of-the-art results on popular benchmarks, including PASCAL VOC 2012 and MS COCO.", "Semantic labelling and instance segmentation are two tasks that require particularly costly annotations. Starting from weak supervision in the form of bounding box detection annotations, we propose a new approach that does not require modification of the segmentation training procedure. 
We show that when carefully designing the input labels from given bounding boxes, even a single round of training is enough to improve over previously reported weakly supervised results. Overall, our weak supervision approach reaches 95% of the quality of the fully supervised model, both for semantic labelling and instance segmentation.", "The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.", "This paper addresses the semantic instance segmentation task in the open-set conditions, where input images can contain known and unknown object classes. The training process of existing semantic instance segmentation methods requires annotation masks for all object instances, which is expensive to acquire or even infeasible in some realistic scenarios, where the number of categories may increase boundlessly. In this paper, we present a novel open-set semantic instance segmentation approach capable of segmenting all known and unknown object classes in images, based on the output of an object detector trained on known object classes. 
We formulate the problem using a Bayesian framework, where the posterior distribution is approximated with a simulated annealing optimization equipped with an efficient image partition sampler. We show empirically that our method is competitive with state-of-the-art supervised methods on known classes, but also performs well on unknown classes when compared with unsupervised methods.", "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation." ] }
1904.03239
2928684767
Instance segmentation aims to detect and segment individual objects in a scene. Most existing methods rely on precise mask annotations of every category. However, it is difficult and costly to segment objects in novel categories because a large number of mask annotations is required. We introduce ShapeMask, which learns the intermediate concept of object shape to address the problem of generalization in instance segmentation to novel categories. ShapeMask starts with a bounding box detection and gradually refines it by first estimating the shape of the detected object through a collection of shape priors. Next, ShapeMask refines the coarse shape into an instance level mask by learning instance embeddings. The shape priors provide a strong cue for object-like prediction, and the instance embeddings model the instance specific appearance information. ShapeMask significantly outperforms the state-of-the-art by 6.4 and 3.8 AP when learning across categories, and obtains competitive performance in the fully supervised setting. It is also robust to inaccurate detections, decreased model capacity, and small training data. Moreover, it runs efficiently with 150ms inference time and trains within 11 hours on TPUs. With a larger backbone model, ShapeMask increases the gap with state-of-the-art to 9.4 and 6.2 AP across categories. Code will be released.
In this paper, we focus on the instance segmentation problem @cite_13 , as opposed to the weakly-supervised setting @cite_29 @cite_30 . The main idea is to build a large-scale instance segmentation model by leveraging large datasets with bounding box annotations, e.g. @cite_27 , together with smaller ones with detailed mask annotations, e.g. @cite_34 . More specifically, the setup is that only box labels (not mask labels) are available for a subset of categories at training time, and the model is required to perform instance segmentation on these categories at test time. Mask @math R-CNN @cite_13 tackles the problem by learning to predict the weights of the mask segmentation branch from the box detection branch. This transfer-learning approach shows significant improvement over class-agnostic training, but a clear gap with the fully supervised system still remains.
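The weight-transfer idea can be sketched as a small learned function mapping box-branch weights to mask-branch weights. The shapes and the plain linear form of the transfer function below are illustrative assumptions, not the exact parameterization of @cite_13 :

```python
import numpy as np

rng = np.random.default_rng(0)
d_det, d_mask = 256, 128            # hypothetical per-class weight dimensions

# Per-class detection weights, learned with full box supervision.
w_det = rng.normal(size=(80, d_det))  # 80 classes, all with box labels

# Transfer function T(w_det; theta): here a plain linear map for illustration.
theta = rng.normal(size=(d_det, d_mask)) / np.sqrt(d_det)

def transfer(w_det, theta):
    """Predict mask-branch weights from detection-branch weights."""
    return w_det @ theta

# Mask weights for ALL classes, including those without mask labels.
w_mask = transfer(w_det, theta)
```

At training time, theta is optimized only on the subset of classes that do have mask labels; at test time the same theta produces mask weights for the box-only classes.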
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_27", "@cite_34", "@cite_13" ], "mid": [ "2963311325", "2552414813", "2277195237", "1861492603", "809122546" ], "abstract": [ "Weakly supervised instance segmentation with image-level labels, instead of expensive pixel-level masks, remains unexplored. In this paper, we tackle this challenging problem by exploiting class peak responses to enable a classification network for instance mask extraction. With image labels supervision only, CNN classifiers in a fully convolutional manner can produce class response maps, which specify classification confidence at each image location. We observed that local maximums, i.e., peaks, in a class response map typically correspond to strong visual cues residing inside each instance. Motivated by this, we first design a process to stimulate peaks to emerge from a class response map. The emerged peaks are then back-propagated and effectively mapped to highly informative regions of each object instance, such as instance boundaries. We refer to the above maps generated from class peak responses as Peak Response Maps (PRMs). PRMs provide a fine-detailed instance-level representation, which allows instance masks to be extracted even with some off-the-shelf methods. To the best of our knowledge, we for the first time report results for the challenging image-level supervised instance segmentation task. Extensive experiments show that our method also boosts weakly supervised pointwise localization as well as semantic segmentation performance, and reports state-of-the-art results on popular benchmarks, including PASCAL VOC 2012 and MS COCO.", "Semantic labelling and instance segmentation are two tasks that require particularly costly annotations. Starting from weak supervision in the form of bounding box detection annotations, we propose a new approach that does not require modification of the segmentation training procedure. 
We show that when carefully designing the input labels from given bounding boxes, even a single round of training is enough to improve over previously reported weakly supervised results. Overall, our weak supervision approach reaches 95% of the quality of the fully supervised model, both for semantic labelling and instance segmentation.", "Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. 
Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. 
At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation." ] }
1904.03377
2927862961
Deep learning based methods have dominated super-resolution (SR) field due to their remarkable performance in terms of effectiveness and efficiency. Most of these methods assume that the blur kernel during downsampling is predefined known (e.g., bicubic). However, the blur kernels involved in real applications are complicated and unknown, resulting in severe performance drop for the advanced SR methods. In this paper, we propose an Iterative Kernel Correction (IKC) method for blur kernel estimation in blind SR problem, where the blur kernels are unknown. We draw the observation that kernel mismatch could bring regular artifacts (either over-sharpening or over-smoothing), which can be applied to correct inaccurate blur kernels. Thus we introduce an iterative correction scheme -- IKC that achieves better results than direct kernel estimation. We further propose an effective SR network architecture using spatial feature transform (SFT) layers to handle multiple blur kernels, named SFTMD. Extensive experiments on synthetic and real-world images show that the proposed IKC method with SFTMD can provide visually favorable SR results and the state-of-the-art performance in blind SR problem.
Blind SR assumes that the degradation kernels are unavailable. In recent years, the community has paid relatively little research attention to the blind SR problem. Michaeli and Irani @cite_29 estimate the optimal blur kernel based on the property that small image patches re-appear within and across scales of natural images. There are also works that employ deep learning for the blind SR task. Yuan et al. @cite_1 propose to learn not only the SR mapping but also the degradation mapping using unsupervised learning. Shocher et al. @cite_19 exploit the internal recurrence of information inside an image and propose an unsupervised SR method to super-resolve images with different blur kernels. They train a small CNN on examples extracted from the input image itself; the trained image-specific CNN is then tailored to super-resolving that image. Different from these previous works, our method exploits the correlation between SR results and kernel mismatch: it uses intermediate SR results to iteratively correct the estimated blur kernel, thus providing artifact-free final SR results.
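The degradation model underlying (blind) SR, y = (x * k) downsampled by s, can be written out directly. The isotropic Gaussian kernel and the reflect padding below are illustrative choices; in the blind setting the true k is unknown, and kernel mismatch means the SR network assumes some k' != k.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.2):
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def conv2d_same(img, k):
    """'Same'-size 2D convolution with reflect padding (no SciPy needed)."""
    p = k.shape[0] // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * padded[i:i + h, j:j + w]
    return out

def degrade(x, k, scale):
    """y = (x blurred by k), downsampled by `scale`: the SR forward model."""
    return conv2d_same(x, k)[::scale, ::scale]

x = np.random.default_rng(0).random((64, 64))   # HR image
y = degrade(x, gaussian_kernel(7, 1.2), scale=4)  # 16x16 LR observation
```

A blind SR method must recover x from y without knowing which k produced it; IKC does so by reading the over-sharpening/over-smoothing artifacts of an intermediate SR result back into a correction of the kernel estimate.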
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_1" ], "mid": [ "2779113761", "2167191464", "2892111734" ], "abstract": [ "Deep Learning has led to a dramatic leap in Super-Resolution (SR) performance in the past few years. However, being supervised, these SR methods are restricted to specific training data, where the acquisition of the low-resolution (LR) images from their high-resolution (HR) counterparts is predetermined (e.g., bicubic downscaling), without any distracting artifacts (e.g., sensor noise, image compression, non-ideal PSF, etc). Real LR images, however, rarely obey these restrictions, resulting in poor SR results by SotA (State of the Art) methods. In this paper we introduce \"Zero-Shot\" SR, which exploits the power of Deep Learning, but does not rely on prior training. We exploit the internal recurrence of information inside a single image, and train a small image-specific CNN at test time, on examples extracted solely from the input image itself. As such, it can adapt itself to different settings per image. This allows to perform SR of real old photos, noisy images, biological data, and other images where the acquisition process is unknown or non-ideal. On such images, our method outperforms SotA CNN-based SR methods, as well as previous unsupervised SR methods. To the best of our knowledge, this is the first unsupervised CNN-based SR method.", "Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function 'PSF' of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for \"blind\" super resolution. In particular, we show that: (i) Unlike the common belief, the PSF of the camera is the wrong blur kernel to use in SR algorithms. (ii) We show how the correct SR blur kernel can be recovered directly from the low-resolution image. 
This is done by exploiting the inherent recurrence property of small natural image patches (either internally within the same image, or externally in a collection of other natural images). In particular, we show that recurrence of small patches across scales of the low-res image (which forms the basis for single-image SR), can also be used for estimating the optimal blur kernel. This leads to significant improvement in SR results.", "We consider the single image super-resolution problem in a more general case that the low- high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results as the state-of-the-art supervised models." ] }
1904.03441
2927893015
Batch Normalization (BN) is ubiquitously employed for accelerating neural network training and improving the generalization capability by performing standardization within mini-batches. Decorrelated Batch Normalization (DBN) further boosts the above effectiveness by whitening. However, DBN relies heavily on either a large batch size, or eigen-decomposition that suffers from poor efficiency on GPUs. We propose Iterative Normalization (IterNorm), which employs Newton's iterations for much more efficient whitening, while simultaneously avoiding the eigen-decomposition. Furthermore, we develop a comprehensive study to show IterNorm has better trade-off between optimization and generalization, with theoretical and experimental support. To this end, we exclusively introduce Stochastic Normalization Disturbance (SND), which measures the inherent stochastic uncertainty of samples when applied to normalization operations. With the support of SND, we provide natural explanations to several phenomena from the perspective of optimization, e.g., why group-wise whitening of DBN generally outperforms full-whitening and why the accuracy of BN degenerates with reduced batch sizes. We demonstrate the consistently improved performance of IterNorm with extensive experiments on CIFAR-10 and ImageNet over BN and DBN.
Batch Normalization (BN) @cite_16 performs normalization as a function over mini-batch data and back-propagates through the transformation. Multiple standardization options have been explored for normalizing mini-batch data, including L2 standardization @cite_16 , L1 standardization @cite_51 @cite_32 and @math -standardization @cite_32 . One critical issue with these methods, however, is that they normally require a reasonably large batch size for estimating the mean and variance. To address this issue, a significant number of standardization approaches have been proposed @cite_17 @cite_40 @cite_56 @cite_9 @cite_43 @cite_41 @cite_29 @cite_27 @cite_12 . Our work develops in a direction orthogonal to these approaches, and aims at improving BN with decorrelated activations.
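The mini-batch standardization these methods build on is, in a minimal numpy sketch (per-feature statistics over the batch axis; `gamma` and `beta` are BN's learnable affine parameters):

```python
import numpy as np

def batch_standardize(X, gamma=1.0, beta=0.0, eps=1e-5):
    """L2 (mean/variance) standardization over a mini-batch.

    X: (N, d) activations, one row per example.
    Returns activations with (approximately) zero mean and unit variance
    per feature, followed by BN's learnable affine transform.
    """
    mu = X.mean(axis=0)                     # per-feature batch mean
    var = X.var(axis=0)                     # per-feature batch variance
    X_hat = (X - mu) / np.sqrt(var + eps)   # standardized activations
    return gamma * X_hat + beta

X = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=(256, 8))
Y = batch_standardize(X)
```

With a tiny batch (say N = 2), the estimates of `mu` and `var` become so noisy that the normalization itself injects large stochasticity into training, which is the failure mode the batch-size-robust variants cited above address.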
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_29", "@cite_32", "@cite_56", "@cite_43", "@cite_40", "@cite_27", "@cite_16", "@cite_51", "@cite_12", "@cite_17" ], "mid": [ "2533142838", "2811135961", "2902302607", "2793416812", "2568343048", "2963300719", "", "2962949994", "1836465849", "", "2963304263", "" ], "abstract": [ "This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF - 1231216.", "We address a learning-to-normalize problem by proposing Switchable Normalization (SN), which learns to select different normalizers for different normalization layers of a deep neural network. SN employs three distinct scopes to compute statistics (means and variances) including a channel, a layer, and a minibatch. SN switches between them by learning their importance weights in an end-to-end manner. It has several good properties. First, it adapts to various network architectures and tasks (see Fig.1). Second, it is robust to a wide range of batch sizes, maintaining high performance even when small minibatch is presented (e.g. 2 images GPU). Third, SN does not have sensitive hyper-parameter, unlike group normalization that searches the number of groups as a hyper-parameter. Without bells and whistles, SN outperforms its counterparts on various challenging benchmarks, such as ImageNet, COCO, CityScapes, ADE20K, and Kinetics. Analyses of SN are also presented. We hope SN will help ease the usage and understand the normalization techniques in deep learning. The code of SN has been made available in this https URL.", "As an indispensable component, Batch Normalization (BN) has successfully improved the training of deep neural networks (DNNs) with mini-batches, by normalizing the distribution of the internal representation for each hidden layer. However, the effectiveness of BN would diminish with the scenario of micro-batch (e.g. 
less than 4 samples in a mini-batch), since the estimated statistics in a mini-batch are not reliable with insufficient samples. This limits BN's room in training larger models on segmentation, detection, and video-related problems, which require small batches constrained by memory consumption. In this paper, we present a novel normalization method, called Kalman Normalization (KN), for improving and accelerating the training of DNNs, particularly under the context of micro-batches. Specifically, unlike the existing solutions treating each hidden layer as an isolated system, KN treats all the layers in a network as a whole system, and estimates the statistics of a certain layer by considering the distributions of all its preceding layers, mimicking the merits of Kalman Filtering. On ResNet50 trained in ImageNet, KN has 3.4% lower error than its BN counterpart when using a batch size of 4; Even when using typical batch sizes, KN still maintains an advantage over BN while other BN variants suffer a performance degradation. Moreover, KN can be naturally generalized to many existing normalization variants to obtain gains, e.g. equipping Group Normalization with Group Kalman Normalization (GKN). KN can outperform BN and its variants for large scale object detection and segmentation task in COCO 2017.", "Over the past few years batch-normalization has been commonly used in deep networks, allowing faster training and high performance for a wide variety of applications. However, the reasons behind its merits remained unanswered, with several shortcomings that hindered its use for certain tasks. In this work we present a novel view on the purpose and function of normalization methods and weight-decay, as tools to decouple weights' norm from the underlying optimized objective. We also improve the use of weight-normalization and show the connection between practices such as normalization, weight decay and learning-rate adjustments. 
Finally, we suggest several alternatives to the widely used @math batch-norm, using normalization in @math and @math spaces that can substantially improve numerical stability in low-precision implementations as well as provide computational and memory benefits. We demonstrate that such methods enable the first batch-norm alternative to work for half-precision implementations.", "Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution.", "Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. 
We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency.", "", "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. 
Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.", "", "We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. @PARASPLIT We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization.", "" ] }
1904.03441
2927893015
Batch Normalization (BN) is ubiquitously employed for accelerating neural network training and improving the generalization capability by performing standardization within mini-batches. Decorrelated Batch Normalization (DBN) further boosts the above effectiveness by whitening. However, DBN relies heavily on either a large batch size, or eigen-decomposition that suffers from poor efficiency on GPUs. We propose Iterative Normalization (IterNorm), which employs Newton's iterations for much more efficient whitening, while simultaneously avoiding the eigen-decomposition. Furthermore, we develop a comprehensive study to show IterNorm has better trade-off between optimization and generalization, with theoretical and experimental support. To this end, we exclusively introduce Stochastic Normalization Disturbance (SND), which measures the inherent stochastic uncertainty of samples when applied to normalization operations. With the support of SND, we provide natural explanations to several phenomena from the perspective of optimization, e.g., why group-wise whitening of DBN generally outperforms full-whitening and why the accuracy of BN degenerates with reduced batch sizes. We demonstrate the consistently improved performance of IterNorm with extensive experiments on CIFAR-10 and ImageNet over BN and DBN.
Beyond standardization, Huang et al. @cite_14 propose DBN, which performs ZCA whitening via eigen-decomposition and back-propagates through the transformation. Our approach aims at a much more efficient approximation of the ZCA-whitening matrix used in DBN, and suggests that approximate whitening is more effective, based on the analysis shown in Section .
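The eigen-decomposition-free whitening at the heart of IterNorm is Newton's iteration for the inverse square root of the trace-normalized covariance. A minimal numpy sketch of the whitening step (forward pass only; the iteration count `T` and `eps` are illustrative):

```python
import numpy as np

def iternorm_whiten(X, T=10, eps=1e-5):
    """Approximate ZCA whitening via Newton's iterations (no eigen-decomposition).

    X: (N, d) mini-batch. Returns whitened activations whose covariance
    approaches the identity as T grows.
    """
    N, d = X.shape
    Xc = X - X.mean(axis=0)                   # center
    Sigma = Xc.T @ Xc / N + eps * np.eye(d)   # mini-batch covariance
    tr = np.trace(Sigma)
    Sigma_N = Sigma / tr                      # trace-normalize => convergence
    P = np.eye(d)
    for _ in range(T):                        # P converges to Sigma_N^{-1/2}
        P = 0.5 * (3.0 * P - P @ P @ P @ Sigma_N)
    W = P / np.sqrt(tr)                       # approx Sigma^{-1/2} (ZCA form)
    return Xc @ W

X = np.random.default_rng(0).normal(size=(2000, 4)) @ np.diag([1.0, 2.0, 0.5, 1.5])
Xw = iternorm_whiten(X)
```

Trace normalization bounds the eigenvalues of `Sigma_N` in (0, 1], which guarantees the Newton iteration converges; since each `P` is a polynomial in `Sigma_N`, the whole computation uses only matrix multiplications and is therefore GPU-friendly, unlike the eigen-decomposition in DBN.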
{ "cite_N": [ "@cite_14" ], "mid": [ "2963743626" ], "abstract": [ "Batch Normalization (BN) is capable of accelerating the training of deep models by centering and scaling activations within mini-batches. In this work, we propose Decorrelated Batch Normalization (DBN), which not just centers and scales activations but whitens them. We explore multiple whitening techniques, and find that PCA whitening causes a problem we call stochastic axis swapping, which is detrimental to learning. We show that ZCA whitening does not suffer from this problem, permitting successful learning. DBN retains the desirable qualities of BN and further improves BN's optimization efficiency and generalization ability. We design comprehensive experiments to show that DBN can improve the performance of BN on multilayer perceptrons and convolutional neural networks. Furthermore, we consistently improve the accuracy of residual networks on CIFAR-10, CIFAR-100, and ImageNet." ] }
1904.03441
2927893015
Batch Normalization (BN) is ubiquitously employed for accelerating neural network training and improving the generalization capability by performing standardization within mini-batches. Decorrelated Batch Normalization (DBN) further boosts the above effectiveness by whitening. However, DBN relies heavily on either a large batch size, or eigen-decomposition that suffers from poor efficiency on GPUs. We propose Iterative Normalization (IterNorm), which employs Newton's iterations for much more efficient whitening, while simultaneously avoiding the eigen-decomposition. Furthermore, we develop a comprehensive study to show IterNorm has better trade-off between optimization and generalization, with theoretical and experimental support. To this end, we exclusively introduce Stochastic Normalization Disturbance (SND), which measures the inherent stochastic uncertainty of samples when applied to normalization operations. With the support of SND, we provide natural explanations to several phenomena from the perspective of optimization, e.g., why group-wise whitening of DBN generally outperforms full-whitening and why the accuracy of BN degenerates with reduced batch sizes. We demonstrate the consistently improved performance of IterNorm with extensive experiments on CIFAR-10 and ImageNet over BN and DBN.
Our approach is also related to works that normalize the network weights, either through re-parameterization @cite_52 @cite_39 @cite_45 or weight regularization @cite_48 @cite_3 @cite_21 , and to works that specially design scaling coefficients and bias values @cite_42 or nonlinear functions @cite_22 so as to normalize activations implicitly @cite_53 . IterNorm differs from these works in that it is a data-dependent normalization, whereas the above approaches are independent of the data.
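The contrast with the data-independent family can be made concrete with weight normalization @cite_52 , which re-parameterizes each weight vector as w = g * v / ||v||, decoupling its length from its direction. A minimal sketch (the variable values are illustrative):

```python
import numpy as np

def weightnorm_forward(v, g, x):
    """Weight normalization: effective weight w = g * v / ||v||."""
    w = g * v / np.linalg.norm(v)
    return w @ x

v = np.array([3.0, 4.0])        # direction parameter, norm 5
g = 2.0                         # learned length parameter
x = np.array([1.0, 1.0])
y = weightnorm_forward(v, g, x) # effective weight [1.2, 1.6], so y = 2.8
```

Whatever value of v the optimizer reaches, the norm of the effective weight is always exactly g; nothing in the normalization depends on the data passing through the layer, which is the distinction drawn above.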
{ "cite_N": [ "@cite_22", "@cite_48", "@cite_53", "@cite_21", "@cite_42", "@cite_52", "@cite_3", "@cite_39", "@cite_45" ], "mid": [ "2963454111", "2144513243", "2795358072", "2962980542", "2962958829", "2963685250", "577198184", "2780149736", "2963543570" ], "abstract": [ "Deep Learning has revolutionized vision via convolutional neural networks (CNNs) and natural language processing via recurrent neural networks (RNNs). However, success stories of Deep Learning with standard feed-forward neural networks (FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot exploit many levels of abstract representations. We introduce self-normalizing neural networks (SNNs) to enable high-level abstract representations. While batch normalization requires explicit normalization, neuron activations of SNNs automatically converge towards zero mean and unit variance. The activation function of SNNs are \"scaled exponential linear units\" (SELUs), which induce self-normalizing properties. Using the Banach fixed-point theorem, we prove that activations close to zero mean and unit variance that are propagated through many network layers will converge towards zero mean and unit variance -- even under the presence of noise and perturbations. This convergence property of SNNs allows to (1) train deep networks with many layers, (2) employ strong regularization, and (3) to make learning highly robust. Furthermore, for activations not close to unit variance, we prove an upper and lower bound on the variance, thus, vanishing and exploding gradients are impossible. We compared SNNs on (a) 121 tasks from the UCI machine learning repository, on (b) drug discovery benchmarks, and on (c) astronomy tasks with standard FNNs and other machine learning methods such as random forests and support vector machines. 
For FNNs we considered (i) ReLU networks without normalization, (ii) batch normalization, (iii) layer normalization, (iv) weight normalization, (v) highway networks, (vi) residual networks. SNNs significantly outperformed all competing FNN methods at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and set a new record at an astronomy data set. The winning SNN architectures are often very deep.", "It has been observed in numerical simulations that a weight decay can improve generalization in a feed-forward neural network. This paper explains why. It is proven that a weight decay has two effects in a linear network. First, it suppresses any irrelevant components of the weight vector by choosing the smallest vector that solves the learning problem. Second, if the size is chosen right, a weight decay can suppress some of the effects of static noise on the targets, which improves generalization quite a lot. It is then shown how to extend these results to networks with hidden layers and non-linear units. Finally the theory is confirmed by some numerical simulations using the data from NetTalk.", "We address the problem of estimating statistics of hidden units in a neural network using a method of analytic moment propagation. These statistics are useful for approximate whitening of the inputs in front of saturating non-linearities such as a sigmoid function. This is important for initialization of training and for reducing the accumulated scale and bias dependencies (compensating covariate shift), which presumably eases the learning. In batch normalization, which is currently a very widely applied technique, sample estimates of statistics of hidden units over a batch are used. The proposed estimation uses an analytic propagation of mean and variance of the training set through the network. The result depends on the network structure and its current weights but not on the specific batch input. 
The estimates are suitable for initialization and normalization, efficient to compute and independent of the batch size. The experimental verification well supports these claims. However, the method does not share the generalization properties of BN, to which our experiments give some additional insight.", "Regularization is key for deep learning since it allows training more complex models while keeping lower levels of overfitting. However, the most prevalent regularizations do not leverage all the capacity of the models since they rely on reducing the effective number of parameters. Feature decorrelation is an alternative for using the full capacity of the models but the overfitting reduction margins are too narrow given the overhead it introduces. In this paper, we show that regularizing negatively correlated features is an obstacle for effective decorrelation and present OrthoReg, a novel regularization technique that locally enforces feature orthogonality. As a result, imposing locality constraints in feature decorrelation removes interferences between negatively correlated feature weights, allowing the regularizer to reach higher decorrelation bounds, and reducing the overfitting more effectively. In particular, we show that the models regularized with OrthoReg have higher accuracy bounds even when batch normalization and dropout are present. Moreover, since our regularization is directly performed on the weights, it is especially suitable for fully convolutional neural networks, where the weight space is constant compared to the feature map space. As a result, we are able to reduce the overfitting of state-of-the-art CNNs on CIFAR-10, CIFAR-100, and SVHN.", "While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks- Internal Covariate Shift- the current solution has certain drawbacks. 
For instance, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate due to shifting parameter values (especially during initial training epochs). Another fundamental problem with BN is that it cannot be used with batch-size 1 during training. We address these drawbacks of BN by proposing a non-adaptive normalization technique for removing covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.", "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. 
In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning.", "We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with respect to a path-wise regularizer related to max-norm regularization. Path-SGD is easy and efficient to implement and leads to empirical gains over SGD and Ada-Grad.", "Training deep neural networks is difficult for the pathological curvature problem. Re-parameterization is an effective way to relieve the problem by learning the curvature approximately or constraining the solutions of weights with good properties for optimization. This paper proposes to reparameterize the input weight of each neuron in deep neural networks by normalizing it with zero-mean and unit-norm, followed by a learnable scalar parameter to adjust the norm of the weight. This technique effectively stabilizes the distribution implicitly. Besides, it improves the conditioning of the optimization problem and thus accelerates the training of deep neural networks. It can be wrapped as a linear module in practice and plugged in any architecture to replace the standard linear module. We highlight the benefits of our method on both multi-layer perceptrons and convolutional neural networks, and demonstrate its scalability and efficiency on SVHN, CIFAR-10, CIFAR-100 and ImageNet datasets.", "" ] }
1904.03380
2934270752
Recently, convolutional neural networks (CNNs) have shown great success on the task of monocular depth estimation. A fundamental yet unanswered question is: how CNNs can infer depth from a single image. Toward answering this question, we consider visualization of inference of a CNN by identifying relevant pixels of an input image to depth estimation. We formulate it as an optimization problem of identifying the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate from the entire image. To cope with a difficulty with optimization through a deep CNN, we propose to use another network to predict those relevant image pixels in a forward computation. In our experiments, we first show the effectiveness of this approach, and then apply it to different depth estimation networks on indoor and outdoor scene datasets. The results provide several findings that help exploration of the above question.
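The optimization problem stated in the abstract can be illustrated on a toy model: find a sparse mask m such that the output on the masked input stays close to the output on the full input. Here a fixed linear scorer stands in for the depth CNN, and plain per-image gradient descent stands in for the predictor network the paper trains; the weights, penalty strength, and step size are all illustrative assumptions:

```python
import numpy as np

# f(x) = w . x is a stand-in "model"; only the first three inputs matter much.
w = np.full(16, 0.1)
w[:3] = [4.0, 3.0, 2.0]
x = np.ones(16)
full = w @ x                           # output on the unmasked input

m = np.full(16, 0.5)                   # soft mask, kept in [0, 1]
lam, lr = 0.5, 0.01                    # sparsity penalty and step size
for _ in range(2000):
    diff = w @ (m * x) - full
    grad = 2.0 * diff * (w * x) + lam  # gradient of (f(m*x)-f(x))^2 + lam*sum(m)
    m = np.clip(m - lr * grad, 0.0, 1.0)
# m keeps the three influential "pixels" and zeroes out the rest
```

The l1-style penalty plays the role of "smallest number of pixels" in the abstract's formulation: inputs whose contribution to the output is too small to justify the penalty are driven to zero.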
Many studies have attempted to interpret the inference of CNNs, most of them focusing on the task of image classification @cite_2 @cite_35 @cite_11 @cite_19 @cite_38 @cite_12 @cite_3 @cite_27 @cite_18 @cite_26 @cite_20 . However, only a few of these methods have been recognized as practically useful by the community @cite_40 @cite_23 @cite_41 .
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_18", "@cite_26", "@cite_41", "@cite_3", "@cite_19", "@cite_27", "@cite_40", "@cite_23", "@cite_2", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "2295107390", "2962680264", "2962981568", "2518775244", "2773497437", "2962858109", "2626639386", "2962961439", "2786715987", "2766047647", "2221625691", "2594633041", "2282821441", "2963382180" ], "abstract": [ "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.", "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcoming of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. 
We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).", "As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks “look” in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.", "Deconvolution is a popular method for visualizing deep convolutional neural networks; however, due to their heuristic nature, the meaning of deconvolutional visualizations is not entirely clear. In this paper, we introduce a family of reversed networks that generalizes and relates deconvolution, backpropagation and network saliency. We use this construction to thoroughly investigate and compare these methods in terms of quality and meaning of the produced images, and of what architectural choices are important in determining these properties. We also show an application of these generalized deconvolutional networks to weakly-supervised foreground object segmentation.", "DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. 
This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks.", "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. 
For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E.", "Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.", "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. 
Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.", "In the last years many accurate decision support systems have been constructed as black boxes, that is as systems that hide their internal logic to the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness sometimes at the cost of scarifying accuracy for interpretability. The applications in which black box decision systems can be used are various, and each approach is typically developed to provide a solution for a specific problem and, as a consequence, delineating explicitly or implicitly its own definition of interpretability and explanation. The aim of this paper is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation this survey should help the researcher to find the proposals more useful for his own work. The proposed classification of approaches to open black box models should also be useful for putting the many research open questions in perspective.", "Saliency methods aim to explain the predictions of deep neural networks. 
These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a mean shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy a input invariance property are unreliable and can lead to misleading and inaccurate attribution.", "While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to note that the human visual cortex generally contains more feedback than feedforward connections. In this paper, we will briefly introduce the background of feedbacks in the human visual cortex, which motivates us to develop a computational feedback mechanism in deep neural networks. In addition to the feedforward inference in traditional neural networks, a feedback loop is introduced to infer the activation status of hidden layer neurons according to the \"goal\" of the network, e.g., high-level semantic labels. We analogize this mechanism as \"Look and Think Twice.\" The feedback networks help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered background and multiple objects. Experiments on ImageNet dataset demonstrate its effectiveness in solving tasks such as image classification and object localization.", "We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. 
We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "" ] }
1904.03380
2934270752
Recently, convolutional neural networks (CNNs) have shown great success on the task of monocular depth estimation. A fundamental yet unanswered question is: how CNNs can infer depth from a single image. Toward answering this question, we consider visualization of inference of a CNN by identifying relevant pixels of an input image to depth estimation. We formulate it as an optimization problem of identifying the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate from the entire image. To cope with a difficulty with optimization through a deep CNN, we propose to use another network to predict those relevant image pixels in a forward computation. In our experiments, we first show the effectiveness of this approach, and then apply it to different depth estimation networks on indoor and outdoor scene datasets. The results provide several findings that help exploration of the above question.
Gradient-based methods @cite_19 @cite_26 @cite_20 compute a saliency map that visualizes the sensitivity of each pixel of the input image to the final prediction; the map is obtained by calculating the derivatives of the model's output with respect to each image pixel.
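For a tiny two-layer network f(x) = w2 . relu(W1 @ x), this amounts to one pass of backpropagation down to the input; the weights below are made-up numbers for illustration:

```python
import numpy as np

def saliency(W1, w2, x):
    """Gradient-based saliency: |df/dx| for f(x) = w2 . relu(W1 @ x)."""
    h = W1 @ x
    relu_mask = (h > 0).astype(float)   # derivative of ReLU at the hidden layer
    grad_x = W1.T @ (w2 * relu_mask)    # chain rule back to the input pixels
    return np.abs(grad_x)

W1 = np.array([[1.0, 0.0, 2.0],
               [0.0, -1.0, 0.0]])
w2 = np.array([1.0, 1.0])
x = np.array([0.5, 0.5, 0.5])
s = saliency(W1, w2, x)    # input 2 is twice as influential as input 0
```

Techniques such as SmoothGrad @cite_19 elaborate on this basic map, e.g. by averaging the gradient over several noisy copies of the input to reduce visual noise.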
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_20" ], "mid": [ "2626639386", "2518775244", "2594633041" ], "abstract": [ "Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.", "Deconvolution is a popular method for visualizing deep convolutional neural networks; however, due to their heuristic nature, the meaning of deconvolutional visualizations is not entirely clear. In this paper, we introduce a family of reversed networks that generalizes and relates deconvolution, backpropagation and network saliency. We use this construction to thoroughly investigate and compare these methods in terms of quality and meaning of the produced images, and of what architectural choices are important in determining these properties. We also show an application of these generalized deconvolutional networks to weakly-supervised foreground object segmentation.", "We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. 
Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better." ] }
1904.03380
2934270752
Recently, convolutional neural networks (CNNs) have shown great success on the task of monocular depth estimation. A fundamental yet unanswered question is: how CNNs can infer depth from a single image. Toward answering this question, we consider visualization of inference of a CNN by identifying relevant pixels of an input image to depth estimation. We formulate it as an optimization problem of identifying the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate from the entire image. To cope with a difficulty with optimization through a deep CNN, we propose to use another network to predict those relevant image pixels in a forward computation. In our experiments, we first show the effectiveness of this approach, and then apply it to different depth estimation networks on indoor and outdoor scene datasets. The results provide several findings that help exploration of the above question.
There are many methods that mask part of the input image and observe the effect on the output @cite_37 . General-purpose methods developed for interpreting the inference of machine learning models, such as LIME @cite_12 and Prediction Difference Analysis @cite_38 , may also be placed in this class when they are applied to CNNs that classify an input image.
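A minimal occlusion sweep of this kind slides a patch over the input, zeroes it out, and records the drop in the model score. In the sketch below, the 4x4 "image" and the linear scorer are illustrative stand-ins for a real image and a CNN:

```python
import numpy as np

def occlusion_map(score_fn, img, patch=2):
    """Zero out each patch in turn and record how much the score drops."""
    base = score_fn(img)
    heat = np.zeros_like(img)
    h, w = img.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - score_fn(masked)
    return heat

weights = np.zeros((4, 4))
weights[0, 0] = 3.0                 # only the top-left "pixel" matters
img = np.ones((4, 4))
heat = occlusion_map(lambda im: float((weights * im).sum()), img)
# the heat map is 3.0 on the patch covering (0, 0) and 0.0 elsewhere
```

Regions whose occlusion barely changes the score receive near-zero heat, so the map localizes the evidence the model actually uses.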
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_12" ], "mid": [ "2962680264", "1849277567", "2282821441" ], "abstract": [ "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcoming of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "Despite widespread adoption, machine learning models remain mostly black boxes. 
Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted." ] }
1904.03380
2934270752
Recently, convolutional neural networks (CNNs) have shown great success on the task of monocular depth estimation. A fundamental yet unanswered question is: how CNNs can infer depth from a single image. Toward answering this question, we consider visualization of inference of a CNN by identifying relevant pixels of an input image to depth estimation. We formulate it as an optimization problem of identifying the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate from the entire image. To cope with a difficulty with optimization through a deep CNN, we propose to use another network to predict those relevant image pixels in a forward computation. In our experiments, we first show the effectiveness of this approach, and then apply it to different depth estimation networks on indoor and outdoor scene datasets. The results provide several findings that help exploration of the above question.
Arguably the most reliable method to date for visualizing CNNs trained for classification is the class activation map (CAM) @cite_35 , which computes a linear combination of the activations of the last convolutional layer along the channel dimension. Its extension, Grad-CAM @cite_3 , is also widely used; it integrates the gradient-based approach with CAM, enabling its use with general network architectures that CAM itself cannot handle.
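The CAM computation described above can be sketched in a few lines of numpy. This is a minimal illustration, not tied to any specific framework; the function name and the toy array shapes are our own.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM sketch: linearly combine the last convolutional feature maps
    along the channel dimension, weighted by the final classifier
    weights of the target class.

    features:   (C, H, W) activations of the last conv layer
    fc_weights: (num_classes, C) weights of the final linear layer
    class_idx:  index of the class to explain
    """
    w = fc_weights[class_idx]                         # (C,)
    cam = np.tensordot(w, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

# Toy example with random activations
rng = np.random.default_rng(0)
features = rng.standard_normal((8, 7, 7))
fc_weights = rng.standard_normal((10, 8))
cam = class_activation_map(features, fc_weights, class_idx=3)
print(cam.shape)  # (7, 7)
```

Grad-CAM replaces the classifier weights `w` with channel-wise averages of the gradient of the target score with respect to `features`, which is what removes CAM's architectural restriction.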
{ "cite_N": [ "@cite_35", "@cite_3" ], "mid": [ "2295107390", "2962858109" ], "abstract": [ "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.", "We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, Grad- CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, without architectural changes or re-training. 
We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are more faithful to the underlying model, and (d) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep network from a ‘weaker’ one even when both make identical predictions. Our code is available at https: github.com ramprs grad-cam along with a demo on CloudCV [2] and video at youtu.be COjUB9Izk6E." ] }
1904.03380
2934270752
Recently, convolutional neural networks (CNNs) have shown great success on the task of monocular depth estimation. A fundamental yet unanswered question is: how CNNs can infer depth from a single image. Toward answering this question, we consider visualization of inference of a CNN by identifying relevant pixels of an input image to depth estimation. We formulate it as an optimization problem of identifying the smallest number of image pixels from which the CNN can estimate a depth map with the minimum difference from the estimate from the entire image. To cope with a difficulty with optimization through a deep CNN, we propose to use another network to predict those relevant image pixels in a forward computation. In our experiments, we first show the effectiveness of this approach, and then apply it to different depth estimation networks on indoor and outdoor scene datasets. The results provide several findings that help exploration of the above question.
However, the above methods, which were developed mainly for explaining classification, cannot be directly applied to CNNs performing depth estimation. In depth estimation, the output of the CNN is a two-dimensional map, not a score for a category. This immediately excludes gradient-based methods as well as CAM and its variants. Masking methods that employ fixed-shape masks @cite_38 or super-pixels obtained from low-level image features @cite_12 are not suited to our purpose either, since there is no guarantee that their shapes match the depth cues in input images that the CNNs utilize.
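To make the fixed-shape-mask idea concrete, the following minimal sketch slides a square occluder over the image and records how much a scalar model output changes; the `occlusion_relevance` helper and the toy model are hypothetical, introduced only to illustrate why the mask shape matters.

```python
import numpy as np

def occlusion_relevance(predict, image, patch=8, stride=8, baseline=0.0):
    """Fixed-shape masking sketch: slide a square patch over the image,
    replace the covered pixels with a baseline value, and record the
    absolute change of the model output. `predict` maps an image to a
    scalar score."""
    H, W = image.shape[:2]
    ref = predict(image)
    heat = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1))
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = baseline
            heat[i, j] = abs(ref - predict(masked))
    return heat

# Toy model: score = mean intensity of the top-left quadrant
img = np.zeros((32, 32)); img[:16, :16] = 1.0
score = lambda im: im[:16, :16].mean()
heat = occlusion_relevance(score, img)
print(heat.shape)  # (4, 4)
```

Only patches overlapping the top-left quadrant get nonzero relevance here; if the true evidence region were thin or irregular (as depth cues often are), the square patches would blur or miss it, which is the limitation noted above.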
{ "cite_N": [ "@cite_38", "@cite_12" ], "mid": [ "2962680264", "2282821441" ], "abstract": [ "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcoming of previous methods and provides great additional insight into the decision making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks).
We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted." ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
In this section, we will review the most important previous work on RSR with a particular emphasis on adversarial robustness. A comprehensive overview of the RSR problem in general is given in @cite_48 .
{ "cite_N": [ "@cite_48" ], "mid": [ "2963233224" ], "abstract": [ "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area." ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
One of the most important topics further explored in our work is the concept of stability constraints on models of data in the RSR problem. We borrow the concept of stability from previous work by @cite_25 @cite_50 @cite_48 . It is used to make a theoretical data model well-defined for an RSR dataset. To do this, it restricts the so-called alignment of the outliers (which measures, in some sense, how "low-dimensional" they are) and requires permeance of the inliers, which was mentioned earlier. A rigorous treatment of these concepts is given in Section III-A of @cite_48 . In this paper, we focus on the case where the magnitudes and directions of the outliers are unrestricted. Therefore, restricting the alignment of the outliers translates to restricting their number, as will be seen later.
{ "cite_N": [ "@cite_48", "@cite_25", "@cite_50" ], "mid": [ "2963233224", "2050058873", "2962685343" ], "abstract": [ "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.", "Consider a data set of vector-valued observations that consists of noisy inliers, which are explained well by a low-dimensional subspace, along with some number of outliers. This work describes a convex optimization problem, called reaper, that can reliably fit a low-dimensional model to this type of data. This approach parameterizes linear subspaces using orthogonal projectors and uses a relaxation of the set of orthogonal projectors to reach the convex formulation. The paper provides an efficient algorithm for solving the reaper problem, and it documents numerical experiments that confirm that reaper can dependably find linear structure in synthetic and natural data. In addition, when the inliers lie near a low-dimensional subspace, there is a rigorous theory that describes when reaper can approximate this subspace.", "" ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
Other RSR methods compute the top eigenspaces of robust covariance estimators. Many robust covariance estimators build on Maronna's original M-estimator of covariance @cite_33 , which was extended by @cite_18 @cite_45 @cite_34 @cite_26 @cite_10 @cite_44 , among many others. While some robust covariance estimators are unaffected by scale differences between inliers and outliers @cite_34 @cite_10 @cite_44 , we are not aware of any tractable RSR estimator based on a robust covariance that is guaranteed to be robust to adversarial outliers. In this work we use spherical PCA @cite_10 @cite_44 @cite_17 for initialization purposes; it is the simplest subspace estimator based on a robust covariance. It computes the top eigenspace of the spherized sample covariance matrix @math , which is also known as the "spatial sign matrix" @cite_44 . Specifically, the spherized sample covariance is given by
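The spherical PCA initialization described above is short enough to sketch directly. This minimal numpy illustration assumes the standard spherized covariance (the average of outer products of unit-normalized points); the function name and toy data are our own.

```python
import numpy as np

def spherical_pca(X, d):
    """Spherical PCA sketch: normalize each data point to the unit sphere,
    form the spherized sample covariance (a.k.a. the "spatial sign matrix"),
    and return an orthonormal basis of its top-d eigenspace.

    X: (N, D) data matrix; d: target subspace dimension.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Z = X / np.where(norms > 0, norms, 1.0)   # unit-normalized points
    C = Z.T @ Z / len(Z)                      # spherized covariance, (D, D)
    _, eigvecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    return eigvecs[:, -d:]                    # basis of the top-d eigenspace

# Inliers on a 2-D subspace of R^5, plus large-magnitude outliers that would
# dominate ordinary PCA but are neutralized by the normalization.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 5))               # rows span the true subspace
inliers = rng.standard_normal((200, 2)) @ A
outliers = 100.0 * rng.standard_normal((10, 5))
basis = spherical_pca(np.vstack([inliers, outliers]), d=2)
print(basis.shape)  # (5, 2)
```

Because every point is projected to the unit sphere before the covariance is formed, outlier magnitude is irrelevant; this is exactly why scale differences between inliers and outliers do not affect the estimator, while adversarially aligned outliers still can.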
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_33", "@cite_44", "@cite_45", "@cite_34", "@cite_10", "@cite_17" ], "mid": [ "2039892753", "2019740336", "1716364890", "1981469185", "20683584", "2008418230", "2063016434", "149129625" ], "abstract": [ "", "Let y1, y2,..., yn∈ Rq be independent, identically distributed random vectors with nonsingular covariance matrix Σ, and let S = S(y1,..., yn) be an estimator for Σ. A quantity of particular interest is the condition number of Σ-1 S. If the yi are Gaussian and S is the sample covariance matrix, the condition number of Σ-1 S, i.e. the ratio of its extreme eigenvalues, equals 1 + Op((q n)1 2) as q →∞ and q n → 0. The present paper shows that the same result can be achieved with two estimators based on Tyler's (1987, Ann. Statist., 15, 234-251) M-functional of scatter, assuming only elliptical symmetry of ℒ(yi) or less. The main tool is a linear expansion for this M-functional which holds uniformly in the dimension q. As a by-product we obtain continuous Frechet-differentiability with respect to weak convergence.", "Classical methods in multivariate analysis require the estimation of means and covariance matrices. Although the sample mean and covariance matrix are optimal estimates of multivariate location and scatter when the data are multivariate normal, a small fraction of atypical points in the data (outliers) suffices to drastically alter them. This article describes different approaches to the development of substitutes to sample means and covariances that are resistant to outliers. Keywords: M-estimates; S-estimates; MM-estimates; τ-estimates; Stahel–Donoho estimates; P-estimates; regularization; missing data; independent contamination; equivariance", "The robust estimation of multivariate location and shape is one of the most challenging problems in statistics and crucial in many application areas. 
The objective is to find highly efficient, robust, computable and affine equivariant location and covariance matrix estimates. In this paper, three different concepts of multivariate sign and rank are considered and their ability to carry information about the geometry of the underlying distribution (or data cloud) are discussed. New techniques for robust covariance matrix estimation based on different sign and rank concepts are proposed and algorithms for computing them outlined. In addition, new tools for evaluating the qualitative and quantitative robustness of a covariance estimator are proposed. The use of these tools is demonstrated on two rank-based covariance matrix estimates. Finally, to illustrate the practical importance of the problem, a signal processing example where robust covariance matrix estimates are needed is given.", "", "We establish the existence and uniqueness of a limiting form of a Huber-type M-estimator of multivariate scatter.", "A method for exploring the structure of populations of complex objects, such as images, is considered. The objects are summarized by feature vectors. The statistical backbone is Principal Component Analysis in the space of feature vectors. Visual insights come from representing the results in the original data space. In an ophthalmological example, endemic outliers motivate the development of a bounded influence approach to PCA.", "Classical statistical techniques fail to cope well with deviations from a standard distribution. Robust statistical methods take into account these deviations while estimating the parameters of parametric models, thus increasing the accuracy of the inference. Research into robust methods is flourishing, with new methods being developed and different applications considered. Robust Statistics sets out to explain the use of robust methods and their theoretical justification.
It provides an up-to-date overview of the theory and practical application of the robust statistical methods in regression, multivariate analysis, generalized linear models and time series. This unique book: Enables the reader to select and use the most appropriate robust method for their particular statistical model. Features computational algorithms for the core methods. Covers regression methods for data mining applications. Includes examples with real data and applications using the S-Plus robust statistics library. Describes the theoretical and operational aspects of robust methods separately, so the reader can choose to focus on one or the other. Supported by a supplementary website featuring time-limited S-Plus download, along with datasets and S-Plus code to allow the reader to reproduce the examples given in the book. Robust Statistics aims to stimulate the use of robust methods as a powerful tool to increase the reliability and accuracy of statistical modelling and data analysis. It is ideal for researchers, practitioners and graduate students of statistics, electrical, chemical and biochemical engineering, and computer vision. There is also much to benefit researchers from other sciences, such as biotechnology, who need to use robust statistical methods in their work." ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
Some works have given theoretical guarantees for algorithms under the assumption of general position outliers @cite_2 @cite_13 @cite_9 (one possible definition of this notion appears in Definition ). In this case, the outliers can have arbitrary magnitude and may lie close to low-dimensional subspaces, but they are not allowed to have linearly dependent structure. While these distributions can be approximately adversarial, these algorithms cannot deal with arbitrary outlier distributions, which may contain low-dimensional structure.
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_2" ], "mid": [ "2775726300", "2313824013", "2962933442" ], "abstract": [ "We consider the RANSAC algorithm in the context of subspace recovery and subspace clustering. We derive some theory and perform some numerical experiments. We also draw some correspondences with the methods of Hardt and Moitra (2013) and Chen and Lerman (2009b).", "This paper considers the problem of robust subspace recovery: given a set of N points in R^D, if many lie in a d-dimensional subspace, then can we recover the underlying subspace? We show that Tyler’s M-estimator can be used to recover the underlying subspace, if the percentage of the inliers is larger than d/D and the data points lie in general position. Empirically, Tyler’s M-estimator compares favorably with other convex subspace recovery algorithms in both simulations and experiments on real data sets.", "We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of m points in R^n, if many but not necessarily all of these points are contained in a d-dimensional subspace T can we find it? The points contained in T are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. This is a serious and persistent issue not just in this application, but for many other problems in unsupervised learning. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds T when it contains more than a d/n fraction of the points. Hence, for say d = n/2 this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers.
We prove that it is small set expansion hard to find T when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. In fact, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here." ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
An example of a method with guarantees for general position outliers is RandomizedFind, which searches for linearly dependent @math -subsets of points; linearly dependent outliers thus present a problem @cite_2 . A similar method is a RANSAC-type algorithm that sub-samples @math points until a linearly dependent subset is found @cite_9 . This procedure fails with adversarial outliers for a similar reason. Indeed, suppose that the outliers all lie on a 1-dimensional subspace. Then, if one samples a subset containing just two of these outliers, one obtains a corrupted estimate of the subspace. We will later discuss modifications of the RANSAC algorithm that work for these degenerate cases up to certain limits.
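A generic RANSAC-style subspace fit helps make the failure mode concrete. The sketch below is a hypothetical `ransac_subspace` helper, not the exact variant of either cited paper: it samples @math points, spans a candidate subspace, and keeps the candidate that (near-)contains the most points; if two sampled outliers happen to lie on a common line, the candidate they span is corrupted.

```python
import numpy as np

def ransac_subspace(X, d, trials=200, tol=1e-6, rng=None):
    """RANSAC-style sketch: repeatedly sample d points, span a candidate
    d-dimensional subspace, and keep the candidate that (near-)contains
    the most points."""
    rng = rng or np.random.default_rng(0)
    best_basis, best_count = None, -1
    for _ in range(trials):
        idx = rng.choice(len(X), size=d, replace=False)
        Q, _ = np.linalg.qr(X[idx].T)             # (D, d) orthonormal basis
        resid = np.linalg.norm(X - X @ Q @ Q.T, axis=1)
        count = int((resid < tol).sum())
        if count > best_count:
            best_basis, best_count = Q, count
    return best_basis, best_count

# 80 inliers on a 2-D subspace of R^4, 20 generic (general position) outliers
rng = np.random.default_rng(2)
inliers = rng.standard_normal((80, 2)) @ rng.standard_normal((2, 4))
outliers = rng.standard_normal((20, 4))
B, n_in = ransac_subspace(np.vstack([inliers, outliers]), d=2, rng=rng)
print(B.shape)  # (4, 2)
```

With generic outliers this recovers the inlier subspace; with outliers concentrated on a line, a sampled pair of outliers can produce a high-consensus but wrong candidate, which is the degenerate case discussed above.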
{ "cite_N": [ "@cite_9", "@cite_2" ], "mid": [ "2775726300", "2962933442" ], "abstract": [ "We consider the RANSAC algorithm in the context of subspace recovery and subspace clustering. We derive some theory and perform some numerical experiments. We also draw some correspondences with the methods of Hardt and Moitra (2013) and Chen and Lerman (2009b).", "We consider a fundamental problem in unsupervised learning called subspace recovery: given a collection of m points in R^n, if many but not necessarily all of these points are contained in a d-dimensional subspace T can we find it? The points contained in T are called inliers and the remaining points are outliers. This problem has received considerable attention in computer science and in statistics. Yet efficient algorithms from computer science are not robust to adversarial outliers, and the estimators from robust statistics are hard to compute in high dimensions. This is a serious and persistent issue not just in this application, but for many other problems in unsupervised learning. Are there algorithms for subspace recovery that are both robust to outliers and efficient? We give an algorithm that finds T when it contains more than a d/n fraction of the points. Hence, for say d = n/2 this estimator is both easy to compute and well-behaved when there are a constant fraction of outliers. We prove that it is small set expansion hard to find T when the fraction of errors is any larger, thus giving evidence that our estimator is an optimal compromise between efficiency and robustness. In fact, this basic problem has a surprising number of connections to other areas including small set expansion, matroid theory and functional analysis that we make use of here." ] }
1904.03275
2934848739
We study the problem of robust subspace recovery (RSR) in the presence of adversarial outliers. That is, we seek a subspace that contains a large portion of a dataset when some fraction of the data points are arbitrarily corrupted. We first examine a theoretical estimator that is intractable to calculate and use it to derive information-theoretic bounds of exact recovery. We then propose two tractable estimators: a variant of RANSAC and a simple relaxation of the theoretical estimator. The two estimators are fast to compute and achieve state-of-the-art theoretical performance in a noiseless RSR setting with adversarial outliers. The former estimator achieves better theoretical guarantees in the noiseless case, while the latter estimator is robust to small noise, and its guarantees significantly improve with non-adversarial models of outliers. We give a complete comparison of guarantees for the adversarial RSR problem, as well as a short discussion on the estimation of affine subspaces.
Recent algorithms from the literature on adversarial robustness can also be used to learn subspaces @cite_21 . For example, the Robustly Learning a Gaussian (RLG) algorithm can estimate an underlying low-rank covariance matrix up to @math error when there is an @math -fraction of outliers. However, this result requires the inlier data to be Gaussian. Another recent work by @cite_57 uses resilience to approximate low-rank matrices in the presence of adversarial outliers. This method, which we call Resilience Recovery (RR), outputs a matrix of higher rank that can approximate a low-rank matrix. For both of these methods, exact recovery of a subspace is not possible.
{ "cite_N": [ "@cite_57", "@cite_21" ], "mid": [ "2597655115", "1539726260" ], "abstract": [ "We introduce a criterion, resilience, which allows properties of a dataset (such as its mean or best low rank approximation) to be robustly computed, even in the presence of a large fraction of arbitrary additional data. Resilience is a weaker condition than most other properties considered so far in the literature, and yet enables robust estimation in a broader variety of settings. We provide new information-theoretic results on robust distribution learning, robust estimation of stochastic block models, and robust mean estimation under bounded kth moments. We also provide new algorithmic results on robust distribution learning, as well as robust mean estimation in p-norms. Among our proof techniques is a method for pruning a high-dimensional distribution with bounded 1st moments to a stable \"core\" with bounded 2nd moments, which may be of independent interest.", "We describe ways to define and calculate L 1 -norm signal subspaces that are less sensitive to outlying data than L 2 -calculated subspaces. We start with the computation of the L 1 maximum-projection principal component of a data matrix containing N signal samples of dimension D. We show that while the general problem is formally NP-hard in asymptotically large N, D, the case of engineering interest of fixed dimension D and asymptotically large sample size N is not. In particular, for the case where the sample size is less than the fixed dimension , we present in explicit form an optimal algorithm of computational cost 2 N . For the case N ≥ D, we present an optimal algorithm of complexity O(N D ). We generalize to multiple L 1 -max-projection components and present an explicit optimal L 1 subspace calculation algorithm of complexity O(N DK-K+1 ) where K is the desired number of L 1 principal components (subspace rank). 
We conclude with illustrations of L 1 -subspace signal processing in the fields of data dimensionality reduction, direction-of-arrival estimation, and image conditioning restoration." ] }
1904.03485
2926503272
Discriminative learning based image denoisers have achieved promising performance on synthetic noise such as the additive Gaussian noise. However, their performance on images with real noise is often not satisfactory. The main reason is that real noises are mostly spatially channel-correlated and spatial channel-variant. In contrast, the synthetic Additive White Gaussian Noise (AWGN) adopted in most previous work is pixel-independent. In this paper, we propose a novel approach to boost the performance of a real image denoiser which is trained only with synthetic pixel-independent noise data. First, we train a deep model that consists of a noise estimator and a denoiser with mixed AWGN and Random Value Impulse Noise (RVIN). We then investigate Pixel-shuffle Down-sampling (PD) strategy to adapt the trained model to real noises. Extensive experiments demonstrate the effectiveness and generalization ability of the proposed approach. Notably, our method achieves state-of-the-art performance on real sRGB images in the DND benchmark. Codes are available at this https URL.
Real noises of CCD cameras are complicated and are related to the optical sensors and the in-camera processing pipeline. Specifically, multiple noise sources such as photon noise and read-out noise, together with processing steps including demosaicing and color and gamma transformations, give rise to the main characteristics of real noises: spatial/channel correlation, spatial/channel variance, and signal-dependence. To approximate real noise, multiple types of synthetic noise have been explored in previous work, including Gaussian-Poisson @cite_22 @cite_14 , Gaussian Mixture Model (GMM) @cite_24 , in-camera process simulation @cite_20 @cite_10 and GAN-generated noises @cite_40 , to name a few. CBDNet @cite_18 first simulated real noise and trained a subnetwork for noise estimation, in which spatially variant noise is represented as spatial maps. Besides, multi-channel @cite_25 @cite_10 and multi-scale @cite_21 @cite_31 @cite_27 @cite_47 strategies were also investigated for adaptation. Different from all the aforementioned works, which focus on directly synthesizing or simulating noises for training, in this work we apply the AWGN-RVIN model and focus on a pixel-shuffle adaptation strategy to bridge the gap between pixel-independent synthetic noises and pixel-correlated real noises.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_47", "@cite_22", "@cite_21", "@cite_24", "@cite_40", "@cite_27", "@cite_31", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "2832157980", "", "", "2136035751", "1504409388", "2474817805", "2798278116", "", "", "", "2963315679", "2159736423" ], "abstract": [ "While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models are easy to overfit on the simplified AWGN model which deviates severely from the complicated real-world noise model. In order to improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and in-camera signal processing pipeline is considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy to rectify denoising result conveniently, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-arts in terms of quantitative metrics and visual quality. The code has been made available at this https URL.", "", "", "We present a simple and usable noise model for the raw-data of digital imaging sensors. 
This signal-dependent noise model, which gives the pointwise standard-deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model.", "Arguably several thousands papers are dedicated to image denoising. Most papers assume a fixed noise model, mainly white Gaussian or Poissonian. This assumption is only valid for raw images. Yet, in most images handled by the public and even by scientists, the noise model is imperfectly known or unknown. End users only dispose the result of a complex image processing chain effectuated by uncontrolled hardware and software (and sometimes by chemical means). For such images, recent progress in noise estimation permits to estimate from a single image a noise model, which is simultaneously signal and frequency dependent. We propose here a multiscale denoising algorithm adapted to this broad noise model. This leads to a blind denoising algorithm which we demonstrate on real JPEG images and on scans of old photographs for which the formation model is unknown. The consistency of this algorithm is also verified on simulated distorted images. This algorithm is finally compared with the unique state of the art previous blind denoising method.", "Traditional image denoising algorithms always assume the noise to be homogeneous white Gaussian distributed. However, the noise on real images can be much more complex empirically. 
This paper addresses this problem and proposes a novel blind image denoising algorithm which can cope with real-world noisy images even when the noise model is not provided. It is realized by modeling image noise with mixture of Gaussian distribution (MoG) which can approximate large varieties of continuous distributions. As the number of components for MoG is unknown practically, this work adopts Bayesian nonparametric technique and proposes a novel Low-rank MoG filter (LR-MoG) to recover clean signals (patches) from noisy ones contaminated by MoG noise. Based on LR-MoG, a novel blind image denoising approach is developed. To test the proposed method, this study conducts extensive experiments on synthesis and real images. Our method achieves the state-of the-art performance consistently.", "In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising.", "", "", "", "Most of the existing denoising algorithms are developed for grayscale images. It is not trivial to extend them for color image denoising since the noise statistics in R, G, and B channels can be very different for real noisy images. 
In this paper, we propose a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework. We concatenate the RGB patches to make use of the channel redundancy, and introduce a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics. The proposed MC-WNNM model does not have an analytical solution. We reformulate it into a linear equality-constrained problem and solve it via alternating direction method of multipliers. Each alternative updating step has a closed-form solution and the convergence can be guaranteed. Experiments on both synthetic and real noisy image datasets demonstrate the superiority of the proposed MC-WNNM over state-of-the-art denoising methods.", "Image denoising algorithms often assume an additive white Gaussian noise (AWGN) process that is independent of the actual RGB values. Such approaches cannot effectively remove color noise produced by today's CCD digital camera. In this paper, we propose a unified framework for two tasks: automatic estimation and removal of color noise from a single image using piecewise smooth image models. We introduce the noise level function (NLF), which is a continuous function describing the noise level as a function of image brightness. We then estimate an upper bound of the real NLF by fitting a lower envelope to the standard deviations of per-segment image variances. For denoising, the chrominance of color noise is significantly removed by projecting pixel values onto a line fit to the RGB values in each segment. Then, a Gaussian conditional random field (GCRF) is constructed to obtain the underlying clean image from the noisy input. Extensive experiments are conducted to test the proposed algorithm, which is shown to outperform state-of-the-art denoising algorithms." ] }
1904.03310
2949505205
In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo's contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) the trained ELMo embeddings systematically encode gender information and (3) ELMo unequally encodes gender information about male and female entities. Then, we show that a state-of-the-art coreference system that depends on ELMo inherits its bias and demonstrates significant bias on the WinoBias probing corpus. Finally, we explore two methods to mitigate such gender bias and show that the bias demonstrated on WinoBias can be eliminated.
Gender bias has been shown to affect several real-world applications relying on automatic language analysis, including online news @cite_14 , advertisements @cite_7 , abusive language detection @cite_8 , machine translation @cite_13 @cite_23 , and web search @cite_16 . In many cases, a model not only replicates bias in the training data but also amplifies it @cite_20 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_23", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2000632877", "2095932468", "2962990575", "2955868760", "2149252982", "2909620036", "2962787423" ], "abstract": [ "Feminist news researchers have long argued that in the macho culture of most newsrooms, journalists’ daily decisions about what is newsworthy remain firmly based on masculine news values. As such, issues and topics traditionally seen to be particularly relevant to women tend to be pushed to the margins of the news where the implicit assumption is that they are less important than those which interest men. In so doing, men’s views and voices are privileged over women’s, thereby contributing to the ongoing secondary status of women’s participation as citizens. In this article, we draw upon data we collected from the UK and the Republic of Ireland as part of the larger, 108-country study, which comprised the 2010 Global Media Monitoring Project (GMMP). We argue that while there have been some positive improvements in women’s representation as news actors, sources and journalists in the British and Irish news media since the first GMMP day of monitoring in 1995, women’s voices, experiences and expertise conti...", "A Google search for a person's name, such as “Trevon Jones”, may yield a personalized ad for public records about Trevon that may be neutral, such as “Looking for Trevon Jones? …”, or may be suggestive of an arrest record, such as “Trevon Jones, Arrested?...”. This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites. 
First names, previously identified by others as being assigned at birth to more black or white babies, are found predictive of race (88 black, 96 white), and those assigned primarily to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while those assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word \"arrest\" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. On the more ad trafficked website, a black-identifying name was 25 more likely to get an ad suggestive of an arrest record. A few names did not follow these patterns: Dustin, a name predominantly given to white babies, generated an ad suggestive of arrest 81 and 100 percent of the time. All ads return results for actual individuals and ads appear regardless of whether the name has an arrest record in the company’s database. Notwithstanding these findings, the company maintains Google received the same ad text for groups of last names (not first names), raising questions as to whether Google's advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness.", "", "Speakers of different languages must attend to and encode strikingly different aspects of the world in order to use their language correctly (Sapir, 1921; Slobin, 1996). One such difference is related to the way gender is expressed in a language. Saying “I am happy” in English, does not encode any additional knowledge of the speaker that uttered the sentence. However, many other languages do have grammatical gender systems and so such knowledge would be encoded. In order to correctly translate such a sentence into, say, French, the inherent gender information needs to be retained recovered. 
The same sentence would become either “Je suis heureux”, for a male speaker or “Je suis heureuse” for a female one. Apart from morphological agreement, demographic factors (gender, age, etc.) also influence our use of language in terms of word choices or even on the level of syntactic constructions (Tannen, 1991; , 2003). We integrate gender information into NMT systems. Our contribution is twofold: (1) the compilation of large datasets with speaker information for 20 language pairs, and (2) a simple set of experiments that incorporate gender information into NMT for multiple language pairs. Our experiments show that adding a gender feature to an NMT system significantly improves the translation quality for some language pairs.", "Information environments have the power to affect people's perceptions and behaviors. In this paper, we present the results of studies in which we characterize the gender bias present in image search results for a variety of occupations. We experimentally evaluate the effects of bias in image search results on the images people choose to represent those careers and on people's perceptions of the prevalence of men and women in each occupation. We find evidence for both stereotype exaggeration and systematic underrepresentation of women in search results. We also find that people rate search results higher when they are consistent with stereotypes for a career, and shifting the representation of gender in image search results can shift people's perceptions about real-world distributions. We also discuss tensions between desires for high-quality results and broader societ al goals for equality of representation in this space.", "Neural machine translation has significantly pushed forward the quality of the field. However, there are remaining big issues with the translations and one of them is fairness. Neural models are trained on large text corpora which contains biases and stereotypes. As a consequence, models inherit these social biases. 
Recent methods have shown results in reducing gender bias in other natural language processing applications such as word embeddings. We take advantage of the fact that word embeddings are used in neural machine translation to propose the first debiased machine translation system. Specifically, we propose, experiment and analyze the integration of two debiasing techniques over GloVe embeddings in the Transformer translation architecture. We evaluate our proposed system on a generic English-Spanish task, showing gains up to one BLEU point. As for the gender bias evaluation, we generate a test set of occupations and we show that our proposed system learns to equalize existing biases from the baseline system.", "" ] }
1904.03396
2953740644
Data-to-text generation can be conceptually divided into two parts: ordering and structuring the information (planning), and generating fluent language describing the information (realization). Modern neural generation systems conflate these two steps into a single end-to-end differentiable system. We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization. For training a plan-to-text generator, we present a method for matching reference texts to their corresponding text plans. For inference time, we describe a method for selecting high-quality text plans for new inputs. We implement and evaluate our approach on the WebNLG benchmark. Our results demonstrate that decoupling text planning from neural realization indeed improves the system's reliability and adequacy while maintaining fluent output. We observe improvements both in BLEU scores and in manual evaluations. Another benefit of our approach is the ability to output diverse realizations of the same input, paving the way to explicit control over the generated text structure.
More complex tasks, like RotoWire @cite_10 , also require modeling document-level planning. Recent work explored a method to explicitly model document planning using the attention mechanism.
{ "cite_N": [ "@cite_10" ], "mid": [ "2949417144" ], "abstract": [ "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate human-generated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copy- and reconstruction-based extensions lead to noticeable improvements." ] }
1904.03396
2953740644
Data-to-text generation can be conceptually divided into two parts: ordering and structuring the information (planning), and generating fluent language describing the information (realization). Modern neural generation systems conflate these two steps into a single end-to-end differentiable system. We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that focuses only on realization. For training a plan-to-text generator, we present a method for matching reference texts to their corresponding text plans. For inference time, we describe a method for selecting high-quality text plans for new inputs. We implement and evaluate our approach on the WebNLG benchmark. Our results demonstrate that decoupling text planning from neural realization indeed improves the system's reliability and adequacy while maintaining fluent output. We observe improvements both in BLEU scores and in manual evaluations. Another benefit of our approach is the ability to output diverse realizations of the same input, paving the way to explicit control over the generated text structure.
The neural text generation community has also recently been interested in "controllable" text generation @cite_24 , where various aspects of the text (often sentiment) are manipulated @cite_7 or transferred @cite_6 @cite_15 @cite_3 . In contrast, as in @cite_4 , here we focused on controlling either the content of a generation or the way it is expressed by manipulating the sentence plan used in realizing the generation.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_3", "@cite_6", "@cite_24", "@cite_15" ], "mid": [ "2951103768", "2735574368", "", "2963366196", "2735642330", "" ], "abstract": [ "While neural, encoder-decoder models have had significant empirical success in text generation, there remain several unaddressed problems with this style of generation. Encoder-decoder models are largely (a) uninterpretable, and (b) difficult to control in terms of their phrasing or content. This work proposes a neural generation system using a hidden semi-markov model (HSMM) decoder, which learns latent, discrete templates jointly with learning to generate. We show that this model learns useful templates, and that these templates make generation both more interpretable and controllable. Furthermore, we show that this approach scales to real data sets and achieves strong performance nearing that of encoder-decoder text generation models.", "Most work on neural natural language generation (NNLG) focus on controlling the content of the generated text. We experiment with controlling several stylistic aspects of the generated text, in addition to its content. The method is based on conditioned RNN language model, where the desired content as well as the stylistic parameters serve as conditioning contexts. We demonstrate the approach on the movie reviews domain and show that it is successful in generating coherent sentences corresponding to the required linguistic style and content.", "", "This paper focuses on style transfer on the basis of non-parallel text. This is an instance of a broad family of problems including machine translation, decipherment, and sentiment modification. The key challenge is to separate the content from other aspects such as style. We assume a shared latent content distribution across different text corpora, and propose a method that leverages refined alignment of latent representations to perform style transfer. 
The transferred sentences from one style should match example sentences from the other style as a population. We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.", "Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible natural language sentences, whose attributes are dynamically controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns highly interpretable representations from even only word annotations, and produces realistic sentences with desired attributes. Quantitative evaluation validates the accuracy of sentence and attribute generation.", "" ] }
1904.02969
2930360627
We present semantic attribute matching networks (SAM-Net) for jointly establishing correspondences and transferring attributes across semantically similar images, which intelligently weaves the advantages of the two tasks while overcoming their limitations. SAM-Net accomplishes this through an iterative process of establishing reliable correspondences by reducing the attribute discrepancy between the images and synthesizing attribute transferred images using the learned correspondences. To learn the networks using weak supervisions in the form of image pairs, we present a semantic attribute matching loss based on the matching similarity between an attribute transferred source feature and a warped target feature. With SAM-Net, the state-of-the-art performance is attained on several benchmarks for semantic matching and attribute transfer.
Most conventional methods for semantic correspondence that use handcrafted features and regularization methods @cite_29 @cite_19 @cite_39 @cite_14 @cite_52 have provided limited performance due to their low discriminative power. Recent approaches have used deep CNNs for extracting their features @cite_30 @cite_34 @cite_8 @cite_41 and regularizing correspondence fields @cite_45 @cite_11 @cite_2 . @cite_11 @cite_2 proposed deep architectures for estimating a geometric matching model, but these methods estimate only globally-varying geometric fields. To deal with locally-varying geometric deformations, methods such as UCN @cite_30 and CAT-FCSS @cite_15 were proposed based on STNs @cite_17 . Recently, PARN @cite_25 , NC-Net @cite_53 , and RTNs @cite_7 were proposed to estimate locally-varying transformation fields using a coarse-to-fine scheme @cite_25 , neighbourhood consensus @cite_53 , and an iteration technique @cite_7 , respectively. These methods @cite_25 @cite_53 @cite_7 presume that the attribute variations between source and target images are negligible in the deep feature space. However, in practice deep features often show limited ability to handle different attributes. @cite_3 presented a method to deal with the attribute variations between the images using a variant of instance normalization @cite_28 . However, that method does not have an explicit learnable module to reduce the attribute discrepancy, and thus yields limited performance.
{ "cite_N": [ "@cite_30", "@cite_41", "@cite_29", "@cite_3", "@cite_2", "@cite_15", "@cite_8", "@cite_52", "@cite_39", "@cite_17", "@cite_7", "@cite_28", "@cite_19", "@cite_34", "@cite_25", "@cite_14", "@cite_53", "@cite_45", "@cite_11" ], "mid": [ "2435623039", "2606149788", "", "2801249507", "", "2793003838", "", "2464606141", "209424029", "2951005624", "2891202958", "", "2124861766", "2474531669", "", "1926639317", "2890346701", "2612584387", "2604233003" ], "abstract": [ "A computer-implemented method for training a convolutional neural network (CNN) is presented. The method includes extracting coordinates of corresponding points in the first and second locations, identifying positive points in the first and second locations, identifying negative points in the first and second locations, training features that correspond to positive points of the first and second locations to move closer to each other, and training features that correspond to negative points in the first and second locations to move away from each other.", "Despite significant progress of deep learning in recent years, state-of-the-art semantic matching methods still rely on legacy features such as SIFT or HoG. We argue that the strong invariance properties that are key to the success of recent deep architectures on the classification task make them unfit for dense correspondence tasks, unless a large amount of supervision is used. In this work, we propose a deep network, termed AnchorNet, that produces image representations that are well-suited for semantic matching. It relies on a set of filters whose response is geometrically consistent across different object instances, even in the presence of strong intra-class, scale, or viewpoint variations. 
Trained only with weak image-level labels, the final representation successfully captures information about the object structure and improves results of state-of-the-art semantic matching methods such as the deformable spatial pyramid or the proposal flow methods. We show positive results on the cross-instance matching task where different instances of the same object category are matched as well as on a new cross-category semantic matching task aligning pairs of instances each from a different object class.", "", "A deep learning based method for sparse correspondence between pairs of objects that belong to different semantic categories and may differ drastically in their appearance, but contain semantically related parts.", "", "We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. Unlike traditional dense correspondence approaches for estimating depth or optical flow, semantic correspondence estimation poses additional challenges due to intra-class appearance and shape variations among different instances within the same object or scene category. To robustly match points across semantically similar images, we formulate FCSS using local self-similarity (LSS), which is inherently insensitive to intra-class appearance variations. LSS is incorporated through a proposed convolutional self-similarity (CSS) layer, where the sampling patterns and the self-similarity measure are jointly learned in an end-to-end and multi-scale manner. Furthermore, to address shape variations among different object instances, we propose a convolutional affine transformer (CAT) layer that estimates explicit affine transformation fields at each pixel to transform the sampling patterns and corresponding receptive fields. 
As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in most existing datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS significantly outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.", "", "We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common \"foreground\" regions in the two images allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation.", "Determining dense semantic correspondences across objects and scenes is a difficult problem that underpins many higher-level computer vision algorithms. Unlike canonical dense correspondence problems which consider images that are spatially or temporally adjacent, semantic correspondence is characterized by images that share similar high-level structures whose exact appearance and geometry may differ. 
Motivated by object recognition literature and recent work on rapidly estimating linear classifiers, we treat semantic correspondence as a constrained detection problem, where an exemplar LDA classifier is learned for each pixel. LDA classifiers have two distinct benefits: (i) they exhibit higher average precision than similarity metrics typically used in correspondence problems, and (ii) unlike exemplar SVM, can output globally interpretable posterior probabilities without calibration, whilst also being significantly faster to train. We pose the correspondence problem as a graphical model, where the unary potentials are computed via convolution with the set of exemplar classifiers, and the joint potentials enforce smoothly varying correspondence assignment.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "We present recurrent transformer networks (RTNs) for obtaining dense correspondences between semantically similar images. 
Our networks accomplish this through an iterative process of estimating spatial transformations between the input images and using these transformations to generate aligned convolutional activations. By directly estimating the transformations between an image pair, rather than employing spatial transformer networks to independently normalize each individual image, we show that greater accuracy can be achieved. This process is conducted in a recursive manner to refine both the transformation estimates and the feature representations. In addition, a technique is presented for weakly-supervised training of RTNs that is based on a proposed classification loss. With RTNs, state-of-the-art performance is attained on several benchmarks for semantic correspondence.", "", "We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents-ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the \"deformable\" aspect of our model overcomes the strict rigidity of traditional spatial pyramids. Results on Label Me and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and Patch-Match [2]), both in terms of accuracy and run time.", "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. 
For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and real-to-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.", "", "Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully-connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally-consistent correspondences can be read off the graph. 
Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.", "We address the problem of finding reliable dense correspondences between a pair of images. This is a challenging task due to strong appearance differences between the corresponding scene elements and ambiguities generated by repetitive patterns. The contributions of this work are threefold. First, inspired by the classic idea of disambiguating feature matches using semi-local constraints, we develop an end-to-end trainable convolutional neural network architecture that identifies sets of spatially consistent matches by analyzing neighbourhood consensus patterns in the 4D space of all possible correspondences between a pair of images without the need for a global geometric model. Second, we demonstrate that the model can be trained effectively from weak supervision in the form of matching and non-matching image pairs without the need for costly manual annotation of point to point correspondences. Third, we show the proposed neighbourhood consensus network can be applied to a range of matching tasks including both category- and instance-level matching, obtaining the state-of-the-art results on the PF Pascal dataset and the InLoc indoor visual localization benchmark.", "This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. 
It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.", "We address the problem of determining correspondences between two images in agreement with a geometric model such as an affine or thin-plate spline transformation, and estimating its parameters. The contributions of this work are three-fold. First, we propose a convolutional neural network architecture for geometric matching. The architecture is based on three main components that mimic the standard steps of feature extraction, matching and simultaneous inlier detection and model parameter estimation, while being trainable end-to-end. Second, we demonstrate that the network parameters can be trained from synthetically generated imagery without the need for manual annotation and that our matching layer significantly increases generalization capabilities to never seen before images. Finally, we show that the same model can perform both instance-level and category-level matching giving state-of-the-art results on the challenging Proposal Flow dataset." ] }
1904.02969
2930360627
We present semantic attribute matching networks (SAM-Net) for jointly establishing correspondences and transferring attributes across semantically similar images, which intelligently weaves the advantages of the two tasks while overcoming their limitations. SAM-Net accomplishes this through an iterative process of establishing reliable correspondences by reducing the attribute discrepancy between the images and synthesizing attribute-transferred images using the learned correspondences. To learn the networks using weak supervision in the form of image pairs, we present a semantic attribute matching loss based on the matching similarity between an attribute-transferred source feature and a warped target feature. With SAM-Net, state-of-the-art performance is attained on several benchmarks for semantic matching and attribute transfer.
There has been extensive work on the transfer of visual attributes, e.g., color, texture, and style, from one image to another, with most approaches tailored to a specific objective @cite_1 @cite_31 @cite_47 @cite_13 @cite_49 @cite_51 . Since our method represents and synthesizes deep features to transfer attributes between semantically similar images, neural style transfer @cite_55 @cite_35 @cite_26 @cite_38 is highly related to ours. In general, these approaches can be classified into parametric and non-parametric methods.
{ "cite_N": [ "@cite_35", "@cite_47", "@cite_26", "@cite_38", "@cite_55", "@cite_1", "@cite_49", "@cite_51", "@cite_31", "@cite_13" ], "mid": [ "2564755245", "1999360130", "2950689937", "2613099748", "2475287302", "2129112648", "1987474052", "2471440592", "2160530465", "2119798818" ], "abstract": [ "Artistic style transfer is an image synthesis problem where the content of an image is reproduced with the style of another. Recent works show that a visually appealing style transfer can be achieved by using the hidden activations of a pretrained convolutional neural network. However, existing methods either apply (i) an optimization procedure that works for any style image but is very expensive, or (ii) an efficient feedforward network that only allows a limited number of trained styles. In this work we propose a simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network. We show that our objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. Furthermore, we use 80,000 natural images and 80,000 paintings to train an inverse network that approximates the result of the optimization. This results in a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.", "We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. 
More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "The seminal work of demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. 
The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at this https URL.", "Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "We use a simple statistical analysis to impose one image's color characteristics on another. We can achieve color correction by choosing an appropriate source image and apply its characteristic to another image.", "Example-based stylization provides an easy way of making artistic effects for images and videos. However, most existing methods do not consider the content and style separately. In this paper, we propose a style transfer algorithm via a novel component analysis approach, based on various image processing techniques. First, inspired by the steps of drawing a picture, an image is decomposed into three components: draft, paint and edge, which describe the content, main style, and strengthened strokes along the boundaries. 
Then the style is transferred from the template image to the source image in the paint and edge components. Style transfer is formulated as a global optimization problem by using Markov random fields, and a coarse-to-fine belief propagation algorithm is used to solve the optimization problem. To combine the draft component and the obtained style information, the final artistic result can be achieved via a reconstruction step. Compared to other algorithms, our method not only synthesizes the style, but also preserves the image content well. We also extend our algorithm from single image stylization to video personalization, by maintaining the temporal coherence and identifying faces in video sequences. The results indicate that our approach performs excellently in stylization and personalization for images and videos.", "This paper presents a novel unsupervised method to transfer the style of an example image to a source image. The complex notion of image style is here considered as a local texture transfer, eventually coupled with a global color transfer. For the local texture transfer, we propose a new method based on an adaptive patch partition that captures the style of the example image and preserves the structure of the source image. More precisely, this example-based partition predicts how well a source patch matches an example patch. Results on various images show that our method outperforms the most recent techniques.", "We address the problem of regional color transfer between two natural images by probabilistic segmentation. We use a new expectation-maximization (EM) scheme to impose both spatial and color smoothness to infer natural connectivity among pixels. Unlike previous work, our method takes local color information into consideration, and segment image with soft region boundaries for seamless color transfer and compositing. 
Our modified EM method has two advantages in color manipulation: first, subject to different levels of color smoothness in image space, our algorithm produces an optimal number of regions upon convergence, where the color statistics in each region can be adequately characterized by a component of a Gaussian mixture model (GMM). Second, we allow a pixel to fall in several regions according to our estimated probability distribution in the EM step, resulting in a transparency-like ratio for compositing different regions seamlessly. Hence, natural color transition across regions can be achieved, where the necessary intra-region and inter-region smoothness are enforced without losing original details. We demonstrate results on a variety of applications including image deblurring, enhanced color transfer, and colorizing gray scale images. Comparisons with previous methods are also presented.", "The article presents an algorithm for texture transfer between images that is up to several orders of magnitude faster than current state-of-the-art techniques. I demonstrate how the technique can leverage self-similarity of complex images to increase resolution of some types of images and to create novel, artistic looking images from photographs without any prior artistic source. Compared to other alternatives, methods based on texture transfer are global in the sense that the user need not deal with details such as defining and painting individual brush strokes. Texture transfer methods are also more general since they don't need to emulate any particular artistic style (line drawing, hatching, realistic oil painting, and so on). Not surprisingly, there is a price to pay for this generality - an algorithm designed for a specific artistic style will most likely produce results superior to those presented in the paper for that particular case." ] }