aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1904.00818 | 2931376380 | The absence of large scale datasets with pixel-level supervisions is a significant obstacle for the training of deep convolutional networks for scene text segmentation. For this reason, synthetic data generation is normally employed to enlarge the training dataset. Nonetheless, synthetic data cannot reproduce the complexity and variability of natural images. In this paper, a weakly supervised learning approach is used to reduce the shift between training on real and synthetic data. Pixel-level supervisions for a text detection dataset (i.e. where only bounding-box annotations are available) are generated. In particular, the COCO-Text-Segmentation (COCO_TS) dataset, which provides pixel-level supervisions for the COCO-Text dataset, is created and released. The generated annotations are used to train a deep convolutional neural network for semantic segmentation. Experiments show that the proposed dataset can be used instead of synthetic data, allowing us to use only a fraction of the training samples and significantly improving the performances. | Image semantic segmentation aims at inferring the class of each pixel of an image. Recent semantic segmentation algorithms often convert existing CNN architectures, designed for image classification, to fully convolutional networks @cite_6 . These networks generally have an encoder--decoder structure. Moreover, the level of detail required by semantic segmentation inspired the use of dilated convolution to enlarge the receptive field without decreasing the resolution @cite_10 . In addition, different solutions have been proposed to deal with the presence of objects at different scales. The Pyramid Scene Parsing Network (PSPNet) @cite_15 applies a pyramid of pooling operations to collect contextual information at different scales. Instead, DeepLab @cite_26 employs atrous spatial pyramid pooling, which consists of parallel dilated convolutions with different rates. | {
"cite_N": [
"@cite_15",
"@cite_26",
"@cite_10",
"@cite_6"
],
"mid": [
"2560023338",
"2412782625",
"1764806478",
"1903029394"
],
"abstract": [
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Deep Convolutional Neural Networks (DCNNs) commonly use generic 'max-pooling' (MP) layers to extract deformation-invariant features, but we argue in favor of a more refined treatment. First, we introduce epitomic convolution as a building block alternative to the common convolution-MP cascade of DCNNs; while having identical complexity to MP, Epitomic Convolution allows for parameter sharing across different filters, resulting in faster convergence and better generalization. Second, we introduce a Multiple Instance Learning approach to explicitly accommodate global translation and scaling when training a DCNN exclusively with class labels. For this we rely on a 'patchwork' data structure that efficiently lays out all image scales and positions as candidates to a DCNN. Factoring global and local deformations allows a DCNN to 'focus its resources' on the treatment of non-rigid deformations and yields a substantial classification accuracy improvement. Third, further pursuing this idea, we develop an efficient DCNN sliding window object detector that employs explicit search over position, scale, and aspect ratio. We provide competitive image classification and localization results on the ImageNet dataset and object detection results on the Pascal VOC 2007 benchmark.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image."
]
} |
1904.00818 | 2931376380 | The absence of large scale datasets with pixel-level supervisions is a significant obstacle for the training of deep convolutional networks for scene text segmentation. For this reason, synthetic data generation is normally employed to enlarge the training dataset. Nonetheless, synthetic data cannot reproduce the complexity and variability of natural images. In this paper, a weakly supervised learning approach is used to reduce the shift between training on real and synthetic data. Pixel-level supervisions for a text detection dataset (i.e. where only bounding-box annotations are available) are generated. In particular, the COCO-Text-Segmentation (COCO_TS) dataset, which provides pixel-level supervisions for the COCO-Text dataset, is created and released. The generated annotations are used to train a deep convolutional neural network for semantic segmentation. Experiments show that the proposed dataset can be used instead of synthetic data, allowing us to use only a fraction of the training samples and significantly improving the performances. | Document image segmentation has a long history and was originally based on thresholding approaches (local, global or adaptive) @cite_1 @cite_21 @cite_17 . The application of these methods to scene text segmentation is quite challenging, due to the high variability of conditions that can be found in natural images. To cope with this variability, in @cite_22 , low-level features are used to identify the seed points of text and background and then to segment the text using semi--supervised learning. In @cite_14 , the binarization of scene text has been formulated as a Markov Random Field model optimization problem, where the optimal binarization is obtained iteratively with Graph Cuts. To improve the segmentation performance, a multilevel maximally stable extremal region approach, applied together with a text candidate selection algorithm based on hand--extracted text--specific features, has been presented in @cite_18 . Finally, in @cite_20 , a CNN approach to scene text segmentation is described, which employs three stages for extraction, refinement and classification. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_20",
"@cite_17"
],
"mid": [
"1979122072",
"1995790019",
"2084594934",
"2081868602",
"2133059825",
"2574887079",
"2022989269"
],
"abstract": [
"The segmentation of scene text from the image background has shown great importance in scene text recognition. In this paper, we propose a multi-level MSER technology that identifies the best-quality text candidates from a set of stable regions that are extracted from different color channel images. In order to identify the best-quality text candidates, a segmentation score is defined which exploits four measures to evaluate the text probability of each stable region including: 1) Stroke width that measures the small stroke width variation of the text, 2) Boundary curvature that measures the smoothness of the stable region boundary, 3) Character confidence that measures the likelihood of a stable region being text based on a pre-trained support vector classifier, 4) Color constancy that measures the global color consistency of each selected text candidate. Finally, the MSERs with the best segmentation score from each channel are combined to form the final segmentation. The proposed method is evaluated on the ICDAR2003 and SVT datasets and experiments show that it outperforms both popular document image binarization methods and state of the art scene text segmentation methods.",
"Inspired by the success of MRF models for solving object segmentation problems, we formulate the binarization problem in this framework. We represent the pixels in a document image as random variables in an MRF, and introduce a new energy (or cost) function on these variables. Each variable takes a foreground or background label, and the quality of the binarization (or labelling) is determined by the value of the energy function. We minimize the energy function, i.e. find the optimal binarization, using an iterative graph cut scheme. Our model is robust to variations in foreground and background colours as we use a Gaussian Mixture Model in the energy function. In addition, our algorithm is efficient to compute, and adapts to a variety of document images. We show results on word images from the challenging ICDAR 2003 dataset, and compare our performance with previously reported methods. Our approach shows significant improvement in pixel level accuracy as well as OCR accuracy.",
"Scene text extraction, i.e., segmenting text pixels from background, is an important step before the text can be recognized. It is a challenging problem due to the cluttered background and the variation of lighting. In this paper, we propose a seed-based segmentation method that can automatically judge the text polarity, extract seed points of text and background, and segment texts by semi-supervised learning (SSL). First, we estimate the text polarity and the stroke width using gradient local correlation. Then, all the points in the middle of stroke edge pairs satisfying the width and polarity are taken as foreground seeds, and the points in the middle of the edge pairs with opposite polarity are taken as background seeds. The whole image is then segmented into text and background using an SSL algorithm. Owing to the accurate estimate of text polarity and extraction of seed points, the proposed method yields good segmentation performance. Experimental results on the KAIST dataset demonstrate the superiority of the method.",
"This paper presents a new document image binarization technique that segments the text from badly degraded historical document images. The proposed technique makes use of the image contrast that is defined by the local image maximum and minimum. Compared with the image gradient, the image contrast evaluated by the local maximum and minimum has a nice property that it is more tolerant to the uneven illumination and other types of document degradation such as smear. Given a historical document image, the proposed technique first constructs a contrast image and then detects the high contrast image pixels which usually lie around the text stroke boundary. The document text is then segmented by using local thresholds that are estimated from the detected high contrast pixels within a local neighborhood window. The proposed technique has been tested over the dataset that is used in the recent Document Image Binarization Contest (DIBCO) 2009. Experiments show its superior performance.",
"",
"Scene text detection and segmentation are two important and challenging research problems in the field of computer vision. This paper proposes a novel method for scene text detection and segmentation based on cascaded convolution neural networks (CNNs). In this method, a CNN-based text-aware candidate text region (CTR) extraction model (named detection network, DNet) is designed and trained using both the edges and the whole regions of text, with which coarse CTRs are detected. A CNN-based CTR refinement model (named segmentation network, SNet) is then constructed to precisely segment the coarse CTRs into text to get the refined CTRs. With DNet and SNet, much fewer CTRs are extracted than with traditional approaches while more true text regions are kept. The refined CTRs are finally classified using a CNN-based CTR classification model (named classification network, CNet) to get the final text regions. All of these CNN-based models are modified from VGGNet-16. Extensive experiments on three benchmark data sets demonstrate that the proposed method achieves the state-of-the-art performance and greatly outperforms other scene text detection and segmentation approaches.",
"This paper describes a new algorithm for document binarization, building upon recent work in energy-based segmentation methods. It uses the Laplacian operator to assess the local likelihood of foreground and background labels, Canny edge detection to identify likely discontinuities, and a graph cut implementation to efficiently find the minimum energy solution of an objective function combining these concepts. The results of this algorithm place it near the top on both the DIBCO-09 and H-DIBCO assessments."
]
} |
1904.00767 | 2949362007 | Visual attention has shown usefulness in image captioning, with the goal of enabling a caption model to selectively focus on regions of interest. Existing models typically rely on top-down language information and learn attention implicitly by optimizing the captioning objectives. While somewhat effective, the learned top-down attention can fail to focus on correct regions of interest without direct supervision of attention. Inspired by the human visual system which is driven by not only the task-specific top-down signals but also the visual stimuli, we in this work propose to use both types of attention for image captioning. In particular, we highlight the complementary nature of the two types of attention and develop a model (Boosted Attention) to integrate them for image captioning. We validate the proposed approach with state-of-the-art performance across various evaluation metrics. | To boost the performance of image captioning models, a few works attempt to use human stimulus-based attention. Sugano @cite_4 utilize ground truth human gaze to split top-down attention for gazed and non-gazed regions. Cornia @cite_24 integrate human attention into a captioning model similar to that of @cite_4 but replace the human gaze with predicted saliency maps. In @cite_6 , Tavakoli analyze the effects of stimulus-based attention in captioning by substituting the top-down attention with stimulus-based attention. While these models suggest that human attention can have positive effects on image captioning, they either incorporate only stimulus-based attention or use stimulus-based attention to separate the top-down attention at different locations, resulting in relatively marginal improvement over corresponding baselines. | {
"cite_N": [
"@cite_24",
"@cite_4",
"@cite_6"
],
"mid": [
"2751076261",
"2510850444",
"2962781144"
],
"abstract": [
"Image and video captioning are important tasks in visual data analytics, as they concern the capability of describing visual content in natural language. They are the pillars of query answering systems, improve indexing and search and allow a natural form of human-machine interaction. Even though promising deep learning strategies are becoming popular, the heterogeneity of large image archives makes this task still far from being solved. In this paper we explore how visual saliency prediction can support image captioning. Recently, some forms of unsupervised machine attention mechanisms have been spreading, but the role of human attention prediction has never been examined extensively for captioning. We propose a machine attention model driven by saliency prediction to provide captions in images, which can be exploited for many services on cloud and on multimedia data. Experimental evaluations are conducted on the SALICON dataset, which provides groundtruths for both saliency and captioning, and on the large Microsoft COCO dataset, the most widely used for image captioning.",
"Gaze reflects how humans process visual scenes and is therefore increasingly used in computer vision systems. Previous works demonstrated the potential of gaze for object-centric tasks, such as object localization and recognition, but it remains unclear if gaze can also be beneficial for scene-centric tasks, such as image captioning. We present a new perspective on gaze-assisted image captioning by studying the interplay between human gaze and the attention mechanism of deep neural networks. Using a public large-scale gaze dataset, we first assess the relationship between state-of-the-art object and scene recognition models, bottom-up visual saliency, and human gaze. We then propose a novel split attention model for image captioning. Our model integrates human gaze information into an attention-based long short-term memory architecture, and allows the algorithm to allocate attention selectively to both fixated and non-fixated image regions. Through evaluation on the COCO SALICON datasets we show that our method improves image captioning performance and that gaze can complement machine attention for semantic scene understanding tasks.",
"To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating explicit bottom-up boosting does not help when the task is well learnt and tuned on a data, (4) a better generalization is, however, observed for the saliency-boosted model on unseen data."
]
} |
1904.00952 | 2928483034 | In order to complete tasks in a new environment, robots must be able to recognize unseen, unique objects. Fully supervised methods have made great strides on the object segmentation task, but require many examples of each object class and don't scale to unseen environments. In this work, we present a method that acquires pixelwise object labels for manipulable in-hand objects with no human supervision. Our two-step approach does a foreground-background segmentation informed by robot kinematics then uses a self-recognition network to segment the robot from the object in the foreground. We are able to achieve 49.4 mIoU performance on a difficult and varied assortment of items. | Many works rely on scene change over time, calling anything which violates the static scene assumption an object @cite_18 @cite_24 @cite_19 @cite_12 @cite_17 . The benefit of these methods is that they can segment many objects at the same time; however, they rely on the movement of objects over time which is not guaranteed. Also in this category, some methods push objects to create movement in the scene and group together pixels which move together @cite_15 @cite_21 @cite_20 . These methods have more control over object movement, but do not allow for full pose or environmental control. They also require a work surface and objects which allow for the pushing technique. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_15",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"2112073811",
"2078415683",
"2060775765",
"2082761562",
"1980076217",
"2008656436",
"2319436128",
"1490003351"
],
"abstract": [
"The performance of indoor robots that stay in a single environment can be enhanced by gathering detailed knowledge of objects that frequently occur in that environment. We use an inexpensive sensor providing dense color and depth, and fuse information from multiple sensing modalities to detect changes between two 3-D maps. We adapt a recent SLAM technique to align maps. A probabilistic model of sensor readings lets us reason about movement of surfaces. Our method handles arbitrary shapes and motions, and is robust to lack of texture. We demonstrate the ability to find whole objects in complex scenes by regularizing over surface patches.",
"Humans can effortlessly perceive an object they encounter for the first time in a possibly cluttered scene and memorize its appearance for later recognition. Such performance is still difficult to achieve with artificial vision systems because it is not clear how to define the concept of objectness in its full generality. In this paper we propose a paradigm that integrates the robot's manipulation and sensing capabilities to detect a new, previously unknown object and learn its visual appearance. By making use of the robot's manipulation capabilities and force sensing, we introduce additional information that can be utilized to reliably separate unknown objects from the background. Once an object has been identified, the robot can continuously manipulate it to accumulate more information about it and learn its complete visual appearance. We demonstrate the feasibility of the proposed approach by applying it to the problem of autonomous learning of visual representations for viewpoint-independent object recognition on a humanoid robot.",
"We build on recent fast and accurate 3-D reconstruction techniques to segment objects during scene reconstruction. We take object outline information from change detection to build 3-D models of rigid objects and represent the scene as static and dynamic components. Object models are updated online during mapping, and can integrate segmentation information from sources other than change detection.",
"In this paper, we present a system for automatically learning segmentations of objects given changes in dense RGB-D maps over the lifetime of a robot. Using recent advances in RGB-D mapping to construct multiple dense maps, we detect changes between mapped regions from multiple traverses by performing a 3-D difference of the scenes. Our method takes advantage of the free space seen in each map to account for variability in how the maps were created. The resulting changes from the 3-D difference are our discovered objects, which are then used to train multiple segmentation algorithms in the original map. The final objects can then be matched in other maps given their corresponding features and learned segmentation method. If the same object is discovered multiple times in different contexts, the features and segmentation method are refined, incorporating all instances to better learn objects over time. We verify our approach with multiple objects in numerous and varying maps.",
"",
"We present an approach for autonomous interactive object segmentation by a humanoid robot. The visual segmentation of unknown objects in a complex scene is an important prerequisite for e.g. object learning or grasping, but extremely difficult to achieve through passive observation only. Our approach uses the manipulative capabilities of humanoid robots to induce motion on the object and thus integrates the robots manipulation and sensing capabilities to segment previously unknown objects. We show that this is possible without any human guidance or pre-programmed knowledge, and that the resulting motion allows for reliable and complete segmentation of new objects in an unknown and cluttered environment. We extend our previous work, which was restricted to textured objects, by devising new methods for the generation of object hypotheses and the estimation of their motion after being pushed by the robot. These methods are mainly based on the analysis of motion of color annotated 3D points obtained from stereo vision, and allow the segmentation of textured as well as non-textured rigid objects. In order to evaluate the quality of the obtained segmentations, they are used to train a simple object recognizer. The approach has been implemented and tested on the humanoid robot ARMAR-III, and the experimental results confirm its applicability on a wide variety of objects even in highly cluttered scenes.",
"In this article, we present and evaluate a system, which allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. While other systems have demonstrated one of these elements, to our knowledge, we present the first system, which is capable of doing all of these things, all without human interaction, in normal indoor scenes. Our system detects objects to learn by modeling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.",
"We show how a robot can autonomously learn an ontology of objects to explain aspects of its sensor input from an unknown dynamic world. Unsupervised learning about objects is an important conceptual step in developmental learning, whereby the agent clusters observations across space and time to construct stable perceptual representations of objects. Our proposed unsupervised learning method uses the properties of allocentric occupancy grids to classify individual sensor readings as static or dynamic. Dynamic readings are clustered and the clusters are tracked over time to identify objects, separating them both from the background of the environment and from the noise of unexplainable sensor readings. Once trackable clusters of sensor readings (i.e., objects) have been identified, we build shape models where they are stable and consistent properties of these objects. However, the representation can tolerate, represent, and track amorphous objects as well as those that have well-defined shape. In the end, the learned ontology makes it possible for the robot to describe a cluttered dynamic world with symbolic object descriptions along with a static environment model, both models grounded in sensory experience, and learned without external supervision."
]
} |
1904.00952 | 2928483034 | In order to complete tasks in a new environment, robots must be able to recognize unseen, unique objects. Fully supervised methods have made great strides on the object segmentation task, but require many examples of each object class and don't scale to unseen environments. In this work, we present a method that acquires pixelwise object labels for manipulable in-hand objects with no human supervision. Our two-step approach does a foreground-background segmentation informed by robot kinematics then uses a self-recognition network to segment the robot from the object in the foreground. We are able to achieve 49.4 mIoU performance on a difficult and varied assortment of items. | Another class of work learns objects that are placed in uncluttered or known environments @cite_25 @cite_2 @cite_14 . Typically, the core focus of these works is not object segmentation, and their object isolation methods will not work in cluttered, real-world environments. However, they have made other contributions such as appearance modeling @cite_25 and joint modeling over vision and language @cite_2 . | {
"cite_N": [
"@cite_14",
"@cite_25",
"@cite_2"
],
"mid": [
"1970218068",
"2963678509",
"2949559657"
],
"abstract": [
"The task addressed in this paper is to plan iteratively a set of views in order to reconstruct an object using a mobile manipulator robot with an \"eye-in-hand\" sensor. The proposed method plans views directly in the configuration space avoiding the need of inverse kinematics. It is based on a fast evaluation and rejection of a set of candidate configurations. The main contributions are: a utility function to rank the views and an evaluation strategy implemented as a series of filters. Given that the candidate views are configurations, motion planning is solved using a rapidly-exploring random tree. The system is experimentally evaluated in simulation, contrasting it with previous work. We also present experiments with a real mobile manipulator robot, demonstrating the effectiveness of our method.",
"Robot warehouse automation has attracted significant interest in recent years, perhaps most visibly in the Amazon Picking Challenge (APC) [1]. A fully autonomous warehouse pick-and-place system requires robust vision that reliably recognizes and locates objects amid cluttered environments, self-occlusions, sensor noise, and a large variety of objects. In this paper we present an approach that leverages multiview RGB-D data and self-supervised, data-driven learning to overcome those difficulties. The approach was part of the MIT-Princeton Team system that took 3rd- and 4th-place in the stowing and picking tasks, respectively at APC 2016. In the proposed approach, we segment and label multiple views of a scene with a fully convolutional neural network, and then fit pre-scanned 3D object models to the resulting segmentation to get the 6D object pose. Training a deep neural network for segmentation typically requires a large amount of training data. We propose a self-supervised method to generate a large labeled dataset without tedious manual segmentation. We demonstrate that our system can reliably estimate the 6D pose of objects under a variety of scenarios. All code, data, and benchmarks are available at http://apc.cs.princeton.edu",
"As robots become more ubiquitous and capable, it becomes ever more important to enable untrained users to easily interact with them. Recently, this has led to study of the language grounding problem, where the goal is to extract representations of the meanings of natural language tied to perception and actuation in the physical world. In this paper, we present an approach for joint learning of language and perception models for grounded attribute induction. Our perception model includes attribute classifiers, for example to detect object color and shape, and the language model is based on a probabilistic categorial grammar that enables the construction of rich, compositional meaning representations. The approach is evaluated on the task of interpreting sentences that describe sets of objects in a physical workspace. We demonstrate accurate task performance and effective latent-variable concept induction in physical grounded scenes."
]
} |
1904.00952 | 2928483034 | In order to complete tasks in a new environment, robots must be able to recognize unseen, unique objects. Fully supervised methods have made great strides on the object segmentation task, but require many examples of each object class and don't scale to unseen environments. In this work, we present a method that acquires pixelwise object labels for manipulable in-hand objects with no human supervision. Our two-step approach does a foreground-background segmentation informed by robot kinematics then uses a self-recognition network to segment the robot from the object in the foreground. We are able to achieve 49.4 mIoU performance on a difficult and varied assortment of items. | The method of Da Costa Rocha @cite_6 is the first that we know of to "make use of the kinematic model of a robot in order to generate training labels". Our self-supervised manipulator recognition differs from theirs in our use of joint locations and depth sensor readings to initialize GrabCut, rather than a projection of a full geometric model of the gripper. Additionally, the primary focus of this work is on in-hand object isolation for object learning rather than robotic self-recognition. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2912317562"
],
"abstract": [
"Surgical tool segmentation in endoscopic images is the first step towards pose estimation and (sub-)task automation in challenging minimally invasive surgical operations. While many approaches in the literature have shown great results using modern machine learning methods such as convolutional neural networks, the main bottleneck lies in the acquisition of a large number of manually-annotated images for efficient learning. This is especially true in surgical context, where patient-to-patient differences impede the overall generalizability. In order to cope with this lack of annotated data, we propose a self-supervised approach in a robot-assisted context. To our knowledge, the proposed approach is the first to make use of the kinematic model of the robot in order to generate training labels. The core contribution of the paper is to propose an optimization method to obtain good labels for training despite an unknown hand-eye calibration and an imprecise kinematic model. The labels can subsequently be used for fine-tuning a fully-convolutional neural network for pixel-wise classification. As a result, the tool can be segmented in the endoscopic images without needing a single manually-annotated image. Experimental results on phantom and in vivo datasets obtained using a flexible robotized endoscopy system are very promising."
]
} |
1904.00758 | 2925657591 | Semantic scene segmentation has primarily been addressed by forming representations of single images both with supervised and unsupervised methods. The problem of semantic segmentation in dynamic scenes has begun to recently receive attention with video object segmentation approaches. What is not known is how much extra information the temporal dynamics of the visual scene carries that is complementary to the information available in the individual frames of the video. There is evidence that the human visual system can effectively perceive the scene from temporal dynamics information of the scene's changing visual characteristics without relying on the visual characteristics of individual snapshots themselves. Our work takes steps to explore whether machine perception can exhibit similar properties by combining appearance-based representations and temporal dynamics representations in a joint-learning problem that reveals the contribution of each toward successful dynamic scene segmentation. Additionally, we provide the MIT Driving Scene Segmentation dataset, which is a large-scale full driving scene segmentation dataset, densely annotated for every pixel and every one of 5,000 video frames. This dataset is intended to help further the exploration of the value of temporal dynamics information for semantic segmentation in video. | Overall, progress in semantic segmentation of still images has continued @cite_15 , and it is possible that any approach to semantic segmentation in video will eventually completely disregard the temporal dynamics of the scene, as it has for state-of-the-art tracking-by-detection approaches. However, this eventuality is far from guaranteed, and is currently one of the open problems of computer vision: how valuable is temporal dynamics information for scene understanding in video? Our work seeks to take steps toward answering this question. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2630837129"
],
"abstract": [
"In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed 'DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark."
]
} |
1904.00355 | 2966682325 | Deep part-based methods in recent literature have revealed the great potential of learning local part-level representation for pedestrian image in the task of person re-identification. However, global features that capture discriminative holistic information of human body are usually ignored or not well exploited. This motivates us to investigate joint learning global and local features from pedestrian images. Specifically, in this work, we propose a novel framework termed tree branch network (TBN) for person re-identification. Given a pedestrian image, the feature maps generated by the backbone CNN are partitioned recursively into several pieces, each of which is followed by a bottleneck structure that learns finer-grained features for each level in the hierarchical tree-like framework. In this way, representations are learned in a coarse-to-fine manner and finally assembled to produce more discriminative image descriptions. Experimental results demonstrate the effectiveness of the global and local feature learning method in the proposed TBN framework. We also show significant improvement in performance over state-of-the-art methods on three public benchmarks: Market-1501, CUHK-03 and DukeMTMC. | In @cite_23 @cite_20 @cite_29 , the images of pedestrians are divided horizontally into different local regions, from which body features of color, texture, and shape context are extracted. However, their robustness and discriminative power are still weak under practical lighting and pose changes. With the emergence of deep learning, Zhao @cite_29 proposed that images can be aligned macroscopically and microscopically by capturing semantic features from different body regions. In addition, the learned region features from different semantic regions are merged with a competitive scheme. Sun @cite_23 proposed a uniform partition strategy and conducted independent convolution with a refined part pooling. However, the network structure becomes relatively complex and can be damaged by the cumulative error of human body key point information. In addition, @cite_23 only considered the alignment of local information, ignoring the role of global information. Compared with methods that directly partition each part, methods that partition gradually can learn more discriminative features. Zhang @cite_28 proposed to use dynamic programming to find the shortest alignment distance between pedestrian images. Similarly, they also consider global information of pedestrian images and achieve promising accuracy with only global features. | {
"cite_N": [
"@cite_28",
"@cite_29",
"@cite_20",
"@cite_23"
],
"mid": [
"",
"2736410039",
"2755066373",
"2963842104"
],
"abstract": [
"",
"Person re-identification (ReID) is an important task in video surveillance and has various applications. It is non-trivial due to complex background clutters, varying illumination conditions, and uncontrollable camera settings. Moreover, the person body misalignment caused by detectors or pose variations is sometimes too severe for feature matching across images. In this study, we propose a novel Convolutional Neural Network (CNN), called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. It is the first time human body structure information is considered in a CNN framework to facilitate feature learning. The proposed Spindle Net brings unique advantages: 1) it separately captures semantic features from different body regions thus the macro-and micro-body features can be well aligned across images, 2) the learned region features from different semantic regions are merged with a competitive scheme and discriminative features can be well preserved. State of the art performance can be achieved on multiple datasets by large margins. We further demonstrate the robustness and effectiveness of the proposed Spindle Net on our proposed dataset SenseReID without fine-tuning.",
"The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. Targeting to solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set, and accelerate the online Re-ID procedure. Extensive experimental results show GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has potential to work better on person Re-ID tasks in real scenarios.",
"Employing part-level features offers fine-grained information for pedestrian image description. A prerequisite of part discovery is that each part should be well located. Instead of using external resources like pose estimator, we consider content consistency within each part for precise part location. Specifically, we target at learning discriminative part-informed features for person retrieval and make two contributions. (i) A network named Part-based Convolutional Baseline (PCB). Given an image input, it outputs a convolutional descriptor consisting of several part-level features. With a uniform partition strategy, PCB achieves competitive results with the state-of-the-art methods, proving itself as a strong convolutional baseline for person retrieval. (ii) A refined part pooling (RPP) method. Uniform partition inevitably incurs outliers in each part, which are in fact more similar to other parts. RPP re-assigns these outliers to the parts they are closest to, resulting in refined parts with enhanced within-part consistency. Experiment confirms that RPP allows PCB to gain another round of performance boost. For instance, on the Market-1501 dataset, we achieve (77.4+4.2) mAP and (92.3+1.5) rank-1 accuracy, surpassing the state of the art by a large margin. Code is available at: https: github.com syfafterzy PCB_RPP"
]
} |
1904.00355 | 2966682325 | Deep part-based methods in recent literature have revealed the great potential of learning local part-level representation for pedestrian image in the task of person re-identification. However, global features that capture discriminative holistic information of human body are usually ignored or not well exploited. This motivates us to investigate joint learning global and local features from pedestrian images. Specifically, in this work, we propose a novel framework termed tree branch network (TBN) for person re-identification. Given a pedestrian image, the feature maps generated by the backbone CNN are partitioned recursively into several pieces, each of which is followed by a bottleneck structure that learns finer-grained features for each level in the hierarchical tree-like framework. In this way, representations are learned in a coarse-to-fine manner and finally assembled to produce more discriminative image descriptions. Experimental results demonstrate the effectiveness of the global and local feature learning method in the proposed TBN framework. We also show significant improvement in performance over state-of-the-art methods on three public benchmarks: Market-1501, CUHK-03 and DukeMTMC. | Among these methods, mutual learning has achieved promising performance. The deep mutual learning proposed by Zhang @cite_12 is a typical model distillation method, which makes two deep networks learn from each other with the Kullback-Leibler divergence. As a very effective model distillation method, it improves the effectiveness of a single student network by using the relative entropy between the different student models as mutual losses. Cooperative learning between two models tends to yield better generalization. Following @cite_12 , Zhang @cite_28 adopted a mutual learning strategy to greatly improve the accuracy of pedestrian re-identification. In addition, Tong @cite_13 proposed to combine detection with identification learning, achieving good performance with an ID-discriminative embedding. | {
"cite_N": [
"@cite_28",
"@cite_13",
"@cite_12"
],
"mid": [
"",
"2963574614",
"2620998106"
],
"abstract": [
"",
"Existing person re-identification benchmarks and methods mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be searched from a gallery of whole scene images. To close the gap, we propose a new deep learning framework for person search. Instead of breaking it down into two separate tasks—pedestrian detection and person re-identification, we jointly handle both aspects in a single convolutional neural network. An Online Instance Matching (OIM) loss function is proposed to train the network effectively, which is scalable to datasets with numerous identities. To validate our approach, we collect and annotate a large-scale benchmark dataset for person search. It contains 18,184 images, 8,432 identities, and 96,143 pedestrian bounding boxes. Experiments show that our framework outperforms other separate approaches, and the proposed OIM loss function converges much faster and better than the conventional Softmax loss.",
"Model distillation is an effective and widely used technique to transfer knowledge from a teacher to a student network. The typical application is to transfer from a powerful large network or ensemble to a small network, in order to meet the low-memory or fast execution requirements. In this paper, we present a deep mutual learning (DML) strategy. Different from the one-way transfer between a static pre-defined teacher and a student in model distillation, with DML, an ensemble of students learn collaboratively and teach each other throughout the training process. Our experiments show that a variety of network architectures benefit from mutual learning and achieve compelling results on both category and instance recognition tasks. Surprisingly, it is revealed that no prior powerful teacher network is necessary - mutual learning of a collection of simple student networks works, and moreover outperforms distillation from a more powerful yet static teacher."
]
} |
1904.00605 | 2928920860 | As Deep Neural Networks (DNNs) have demonstrated superhuman performance in many computer vision tasks, there is an increasing interest in revealing the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective that precisely separates the positive and negative attributions. By identifying the fundamental causes of activation and the proper inversion of relevance, RAP allows each neuron to be assigned an actual contribution to the output. Furthermore, we devise pragmatic methods to handle the effect of bias and batch normalization properly in the attributing procedures. Therefore, our method makes it possible to interpret various kinds of very deep neural network models with clear and attentive visualizations of positive and negative attributions. By utilizing the region perturbation method and comparing the distribution of attributions for a quantitative evaluation, we verify the correctness of our RAP whether the positive and negative attributions correctly account for each meaning. The positive and negative attributions propagated by RAP show the characteristics of vulnerability and robustness to the distortion of the corresponding pixels, respectively. We apply RAP to DNN models; VGG-16, ResNet-50 and Inception-V3, demonstrating its generation of more intuitive and improved interpretation compared to the existing attribution methods. | There are several studies on understanding what a DNN model has learned. From the standpoint of interpreting a DNN model, the manner in which a DNN works can be visualized by maximizing the activation of hidden layers @cite_17 or generating salient feature maps @cite_13 @cite_16 @cite_0 @cite_25 @cite_8 @cite_29 . @cite_10 introduced the input switched affine network, which can decompose the contributions of previous characters to the current prediction, and @cite_19 proposed the influence function to understand model behavior, debug models, detect dataset errors, and even create visually indistinguishable training-set attacks. @cite_6 proposed LIME, an algorithm that explains the predictions of a classifier by learning an interpretable model locally around the prediction. | {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_29",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"1849277567",
"2963523513",
"2963081790",
"2282821441",
"2190008860",
"2597603852",
"2962851944",
"2963715038",
"2295107390",
""
],
"abstract": [
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.",
"",
"The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. In this work, we describe Network Dissection, a method that interprets networks by providing meaningful labels to their individual units. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and visual semantic concepts. By identifying the best alignments, units are given interpretable labels ranging from colors, materials, textures, parts, objects and scenes. The method reveals that deep representations are more transparent and interpretable than they would be under a random equivalently powerful basis. We apply our approach to interpret and compare the latent representations of several network architectures trained to solve a wide range of supervised and self-supervised tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initialization parameters, as well as networks depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a given CNN prediction for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into what hierarchical structures can learn.",
"Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.",
"Image representations, from SIFT and bag of visual words to convolutional neural networks (CNNs) are a crucial component of almost all computer vision systems. However, our understanding of them remains limited. In this paper we study several landmark representations, both shallow and deep, by a number of complementary visualization techniques. These visualizations are based on the concept of \"natural pre-image\", namely a natural-looking image whose representation has some notable property. We study in particular three such visualizations: inversion, in which the aim is to reconstruct an image from its representation, activation maximization, in which we search for patterns that maximally stimulate a representation component, and caricaturization, in which the visual patterns that a representation detects in an image are exaggerated. We pose these as a regularized energy-minimization framework and demonstrate its generality and effectiveness. In particular, we show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"How can we explain the predictions of a black-box model? In this paper, we use influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, therefore suitable for use in real-time systems. We test our approach on CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.",
""
]
} |
1904.00605 | 2928920860 | As Deep Neural Networks (DNNs) have demonstrated superhuman performance in many computer vision tasks, there is an increasing interest in revealing the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective that precisely separates the positive and negative attributions. By identifying the fundamental causes of activation and the proper inversion of relevance, RAP allows each neuron to be assigned an actual contribution to the output. Furthermore, we devise pragmatic methods to handle the effect of bias and batch normalization properly in the attributing procedures. Therefore, our method makes it possible to interpret various kinds of very deep neural network models with clear and attentive visualizations of positive and negative attributions. By utilizing the region perturbation method and comparing the distribution of attributions for a quantitative evaluation, we verify the correctness of our RAP whether the positive and negative attributions correctly account for each meaning. The positive and negative attributions propagated by RAP show the characteristics of vulnerability and robustness to the distortion of the corresponding pixels, respectively. We apply RAP to DNN models; VGG-16, ResNet-50 and Inception-V3, demonstrating its generation of more intuitive and improved interpretation compared to the existing attribution methods. | From the standpoint of explaining the decision of a DNN, the contributions of the input are propagated backward, resulting in a redistribution of relevance in the pixel space. Sensitivity analysis visualizes the sensitivities of input images classified by a DNN while explaining the factors that reduce increase the evidence for the predicted results @cite_32 . @cite_8 proposed a deconvolution method to identify the patterns of a predicted input image from a DNN. Layerwise relevance propagation (LRP) @cite_15 was introduced to backpropagate a relevance, which makes the network output become fully redistributed throughout the layers of a DNN. @cite_34 showed that the LRP algorithm qualitatively and quantitatively provides a better explanation than do either the sensitivity-based approach or the deconvolution method. | {
"cite_N": [
"@cite_15",
"@cite_34",
"@cite_32",
"@cite_8"
],
"mid": [
"1787224781",
"2240067561",
"2150165932",
"1849277567"
],
"abstract": [
"Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.",
"Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the “importance” of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.",
"After building a classifier with modern tools of machine learning we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question what is the most likely label of a given unseen data point. However, most methods will provide no answer why the model predicted a particular label for a single instance and what features were most influential for that particular instance. The only method that is currently able to provide such explanations are decision trees. This paper proposes a procedure which (based on a set of assumptions) allows to explain the decisions of any classification method.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
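The related-work passage in the row above describes layer-wise relevance propagation (LRP), which backpropagates a relevance signal so that the network output is redistributed across the layers down to the input pixels. The following is a minimal NumPy sketch of the epsilon-stabilized LRP rule for a single fully connected layer; the layer sizes, weights and the epsilon value are illustrative assumptions and are not taken from any of the cited papers.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Epsilon-stabilized LRP for a dense layer z = a @ W + b.

    Redistributes the relevance R_out of the layer outputs back to the
    layer inputs in proportion to each input's contribution to z."""
    z = a @ W + b                                   # pre-activations
    stabilizer = eps * np.where(z >= 0, 1.0, -1.0)  # keeps the division well behaved
    s = R_out / (z + stabilizer)
    return a * (s @ W.T)                            # relevance of each input neuron

# Toy example with assumed sizes: 4 input neurons, 3 output neurons.
rng = np.random.default_rng(0)
a = np.maximum(rng.normal(size=4), 0.0)   # activations from the previous layer
W = rng.normal(size=(4, 3))
b = rng.normal(size=3)

R_out = np.maximum(a @ W + b, 0.0)        # start from the ReLU outputs themselves
R_in = lrp_epsilon(a, W, b, R_out)
print("input relevances:", R_in)
print("sum in vs. out:", R_in.sum(), R_out.sum())  # approximately conserved
```

With eps = 0 and no bias the input relevances sum exactly to the output relevance, which is the conservation property the LRP papers rely on; the epsilon term trades a small amount of conservation for numerical stability.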
1904.00605 | 2928920860 | As Deep Neural Networks (DNNs) have demonstrated superhuman performance in many computer vision tasks, there is an increasing interest in revealing the complex internal mechanisms of DNNs. In this paper, we propose Relative Attributing Propagation (RAP), which decomposes the output predictions of DNNs with a new perspective that precisely separates the positive and negative attributions. By identifying the fundamental causes of activation and the proper inversion of relevance, RAP allows each neuron to be assigned an actual contribution to the output. Furthermore, we devise pragmatic methods to handle the effect of bias and batch normalization properly in the attributing procedures. Therefore, our method makes it possible to interpret various kinds of very deep neural network models with clear and attentive visualizations of positive and negative attributions. By utilizing the region perturbation method and comparing the distribution of attributions for a quantitative evaluation, we verify the correctness of our RAP whether the positive and negative attributions correctly account for each meaning. The positive and negative attributions propagated by RAP show the characteristics of vulnerability and robustness to the distortion of the corresponding pixels, respectively. We apply RAP to DNN models; VGG-16, ResNet-50 and Inception-V3, demonstrating its generation of more intuitive and improved interpretation compared to the existing attribution methods. | Guided BackProp @cite_2 and Integrated Gradients @cite_21 each compute the single and average partial derivatives of the output to attribute the prediction of a DNN. Deep Taylor Decomposition @cite_31 is an extension of LRP for interpreting the decision of a DNN by decomposing the activation of a neuron in terms of the contributions from its inputs. DeepLIFT @cite_18 decomposes the output prediction by assigning the differences of contribution scores between the activation of each neuron to its reference activation. @cite_22 approached the problem of the attribution value from a theoretical perspective and formally proved the conditions of equivalence and approximation between four attribution methods: Guided Input, Integrated Gradients, LRP and DeepLIFT. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_2",
"@cite_31"
],
"mid": [
"2605409611",
"2785760873",
"2594633041",
"2123045220",
"2195388612"
],
"abstract": [
"The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. Video tutorial: http: goo.gl qKb7pL, code: http: goo.gl RM8jvH.",
"Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gain increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n and test the gradient-based attribution methods alongside with a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures.",
"We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.",
"Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.",
"Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method called deep Taylor decomposition efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets. HighlightsA novel method to explain nonlinear classification decisions in terms of input variables is introduced.The method is based on Taylor expansions and decomposes the output of a deep neural network in terms of input variables.The resulting deep Taylor decomposition can be applied directly to existing neural networks without retraining.The method is tested on two large-scale neural networks for image classification: BVLC CaffeNet and GoogleNet."
]
} |
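The passage above also mentions Integrated Gradients, which attributes a prediction by averaging gradients along a straight path from a baseline to the input and scaling by the input difference. Below is a minimal, NumPy-only sketch; the toy logistic "model", the zero baseline and the number of steps are assumptions made for illustration, and the gradient is estimated numerically so no autodiff framework is needed.

```python
import numpy as np

def numerical_grad(f, x, h=1e-5):
    """Central-difference estimate of the gradient of a scalar function f."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def integrated_gradients(f, x, baseline, steps=64):
    """(x - baseline) times the average gradient along the straight-line
    path from the baseline to the input, approximated by a Riemann sum."""
    avg_grad = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        avg_grad += numerical_grad(f, baseline + alpha * (x - baseline))
    avg_grad /= steps
    return (x - baseline) * avg_grad

# Toy "network": a fixed logistic unit over three features (assumed weights).
w, b = np.array([2.0, -1.0, 0.5]), 0.1
f = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 0.5, -0.2])
baseline = np.zeros_like(x)
attributions = integrated_gradients(f, x, baseline)
print("attributions:", attributions)
print("completeness check:", attributions.sum(), "vs", f(x) - f(baseline))
```

The final print illustrates the completeness axiom discussed in the Integrated Gradients abstract: the attributions sum approximately to the difference between the prediction at the input and at the baseline.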
1904.00634 | 2927924433 | Deep learning methods have witnessed the great progress in image restoration with specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of the restored image is relatively subjective, and it is necessary for users to control the reconstruction result according to personal preferences or image characteristics, which cannot be done using existing deterministic networks. This motivates us to exquisitely design a unified interactive framework for general image restoration tasks. Under this framework, users can control continuous transition of different objectives, e.g., the perception-distortion trade-off of image super-resolution, the trade-off between noise reduction and detail preservation. We achieve this goal by controlling latent features of the designed network. To be specific, our proposed framework, named Controllable Feature Space Network (CFSNet), is entangled by two branches based on different objectives. Our model can adaptively learn the coupling coefficients of different layers and channels, which provides finer control of the restored image quality. Experiments on several typical image restoration tasks fully validate the effective benefits of the proposed method. | Deep learning methods have been widely used in image restoration. @cite_0 @cite_29 @cite_9 @cite_28 @cite_22 @cite_45 @cite_34 continuously deepen, widen or lighten the network structure, aiming at improving the super-resolution accuracy as much as possible. While @cite_23 @cite_5 @cite_33 @cite_12 paid more attention to the design of loss function in order to improve visual quality. Besides, @cite_49 @cite_43 @cite_38 explored the perception-distortion trade-off. In @cite_19 , Dong adopted ARCNN built with several stacked convolutional layers for JPEG image deblocking. Zhang @cite_39 proposed FFDNet to make image denoising more flexible and effective. Guo @cite_26 designed CBDNet to handle blind denoising of real images. Different from these task-specific methods, @cite_40 @cite_18 @cite_38 @cite_16 proposed some unified schemes that can be employed to different image restoration tasks. However, these fixed networks are not flexible enough to deal with volatile user needs and application requirements. | {
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_43",
"@cite_5",
"@cite_38",
"@cite_18",
"@cite_39",
"@cite_23",
"@cite_49",
"@cite_26",
"@cite_28",
"@cite_19",
"@cite_40",
"@cite_34",
"@cite_16",
"@cite_12",
"@cite_33",
"@cite_9",
"@cite_0",
"@cite_45"
],
"mid": [
"",
"",
"",
"",
"",
"",
"2764207251",
"2331128040",
"",
"2963725279",
"",
"2142683286",
"2508457857",
"",
"",
"",
"",
"",
"1885185971",
""
],
"abstract": [
"",
"",
"",
"",
"",
"",
"Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including: 1) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network; 2) the ability to remove spatially variant noise by specifying a non-uniform noise level map; and 3) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.",
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"",
"",
"",
"Lossy compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restores sharpened images that are accompanied with ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar \"easy to hard\" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low level vision problems. Our method shows superior performance than the state-of-the-arts both on the benchmark datasets and the real-world use cases (i.e. Twitter).",
"The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.",
"",
"",
"",
"",
"",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.",
""
]
} |
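The FFDNet abstract quoted in the row above attributes the method's flexibility to a tunable noise level map passed to the network alongside the noisy image. The snippet below is a small NumPy sketch of how such a conditioning input might be assembled for uniform and spatially variant noise; the image size, sigma values and channel layout are illustrative assumptions, and the sub-image downsampling used by the actual FFDNet is omitted.

```python
import numpy as np

def with_noise_level_map(noisy, sigma):
    """Stack a noise-level map onto an (H, W, C) image as an extra channel.

    `sigma` may be a scalar (uniform noise) or an (H, W) array
    (spatially variant noise)."""
    h, w, _ = noisy.shape
    sigma_map = np.broadcast_to(np.asarray(sigma, dtype=noisy.dtype), (h, w))
    return np.concatenate([noisy, sigma_map[..., None]], axis=-1)

rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
sigma = 25.0 / 255.0
noisy = clean + rng.normal(scale=sigma, size=clean.shape)

x_uniform = with_noise_level_map(noisy, sigma)               # one sigma everywhere
ramp = np.linspace(10, 50, 32)[None, :].repeat(32, 0) / 255.0
x_variant = with_noise_level_map(noisy, ramp)                 # stronger noise on the right
print(x_uniform.shape, x_variant.shape)                       # (32, 32, 4) twice
```

Feeding the noise level as data rather than baking it into the weights is what lets a single trained model cover a whole range of noise levels, which is the flexibility the related-work paragraph contrasts with fixed, task-specific networks.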
1904.00634 | 2927924433 | Deep learning methods have witnessed the great progress in image restoration with specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of the restored image is relatively subjective, and it is necessary for users to control the reconstruction result according to personal preferences or image characteristics, which cannot be done using existing deterministic networks. This motivates us to exquisitely design a unified interactive framework for general image restoration tasks. Under this framework, users can control continuous transition of different objectives, e.g., the perception-distortion trade-off of image super-resolution, the trade-off between noise reduction and detail preservation. We achieve this goal by controlling latent features of the designed network. To be specific, our proposed framework, named Controllable Feature Space Network (CFSNet), is entangled by two branches based on different objectives. Our model can adaptively learn the coupling coefficients of different layers and channels, which provides finer control of the restored image quality. Experiments on several typical image restoration tasks fully validate the effective benefits of the proposed method. | In high-level vision tasks, many technologies have been explored to implement controllable image transformation. @cite_2 and @cite_32 incorporated facial attribute vectors into the network to control facial appearance (e.g., gender, age, beard). In @cite_48 , deep feature interpolation was adopted to implement automatic high-resolution image transformation. @cite_27 also proposed a scheme that controls adaptive instance normalization (AdaIN) in feature space to adjust high-level attributes. Shoshan @cite_24 inserted some tuning blocks in the main network to allow interactive modification of the network. However, all of these methods are designed for high-level vision tasks and can not be directly applied to image restoration. To apply controllable image transformation to low-level vision tasks, Wang @cite_31 performed interpolation in the parameter space, but this method can not guarantee the optimality of the outputs, which inspires us to further explore fine-grain control of image restoration. | {
"cite_N": [
"@cite_48",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_31"
],
"mid": [
"2963870144",
"2798691622",
"2900981359",
"2962770929",
"2895749211",
"2962848800"
],
"abstract": [
"We propose Deep Feature Interpolation (DFI), a new data-driven baseline for automatic high-resolution image transformation. As the name suggests, DFI relies only on simple linear interpolation of deep convolutional features from pre-trained convnets. We show that despite its simplicity, DFI can perform high-level semantic transformations like make older younger, make bespectacled, add smile, among others, surprisingly well–sometimes even matching or outperforming the state-of-the-art. This is particularly unexpected as DFI requires no specialized network architecture or even any deep network to be trained for these tasks. DFI therefore can be used as a new baseline to evaluate more complex algorithms and provides a practical answer to the question of which image transformation tasks are still challenging after the advent of deep learning.",
"Given a tiny face image, existing face hallucination methods aim at super-resolving its high-resolution (HR) counterpart by learning a mapping from an exemplar dataset. Since a low-resolution (LR) input patch may correspond to many HR candidate patches, this ambiguity may lead to distorted HR facial details and wrong attributes such as gender reversal. An LR input contains low-frequency facial components of its HR version while its residual face image, defined as the difference between the HR ground-truth and interpolated LR images, contains the missing high-frequency facial details. We demonstrate that supplementing residual images or feature maps with additional facial attribute information can significantly reduce the ambiguity in face super-resolution. To explore this idea, we develop an attribute-embedded upsampling network, which consists of an upsampling network and a discriminative network. The upsampling network is composed of an autoencoder with skip-connections, which incorporates facial attribute vectors into the residual features of LR inputs at the bottleneck of the autoencoder and deconvolutional layers used for upsampling. The discriminative network is designed to examine whether super-resolved faces contain the desired attributes or not and then its loss is used for updating the upsampling network. In this manner, we can super-resolve tiny (16A—16 pixels) unaligned face images with a large upscaling factor of 8A— while reducing the uncertainty of one-to-many mappings remarkably. By conducting extensive evaluations on a large-scale dataset, we demonstrate that our method achieves superior face hallucination results and outperforms the state-of-the-art.",
"One of the key ingredients for successful optimization of modern CNNs is identifying a suitable objective. To date, the objective is fixed a-priori at training time, and any variation to it requires re-training a new network. In this paper we present a first attempt at alleviating the need for re-training. Rather than fixing the network at training time, we train a \"Dynamic-Net\" that can be modified at inference time. Our approach considers an \"objective-space\" as the space of all linear combinations of two objectives, and the Dynamic-Net can traverse this objective-space at test-time, without any further training. We show that this upgrades pre-trained networks by providing an out-of-learning extension, while maintaining the performance quality. The solution we propose is fast and allows a user to interactively modify the network, in real-time, in order to obtain the result he she desires. We show the benefits of such an approach via several different applications.",
"",
"We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to (1) handle unpaired training data because the training low high-res and high-res attribute images may not necessarily align with each other, and to (2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate high-quality results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces high-quality and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.",
""
]
} |
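The related-work passage in the row above contrasts control obtained by interpolating network parameters with control obtained by blending latent features of two branches trained for different objectives. The sketch below illustrates both ideas in a few lines of NumPy; the parameter names, shapes and the two "branches" are hypothetical stand-ins, not the actual CFSNet or parameter-interpolation implementations.

```python
import numpy as np

def interpolate_params(theta_a, theta_b, alpha):
    """Blend two weight sets that share one architecture (parameter-space control)."""
    return {k: (1.0 - alpha) * theta_a[k] + alpha * theta_b[k] for k in theta_a}

def interpolate_features(feat_a, feat_b, alpha):
    """Blend intermediate feature maps of two branches (feature-space control)."""
    return (1.0 - alpha) * feat_a + alpha * feat_b

# Hypothetical example: one conv-like weight tensor per branch.
rng = np.random.default_rng(0)
theta_mse = {"conv1": rng.normal(size=(3, 3, 8))}   # branch trained for low distortion
theta_gan = {"conv1": rng.normal(size=(3, 3, 8))}   # branch trained for perceptual quality

for alpha in (0.0, 0.5, 1.0):
    blended = interpolate_params(theta_mse, theta_gan, alpha)
    print(f"alpha={alpha}: mean conv1 weight {blended['conv1'].mean():+.4f}")

# The same control knob applied to features instead of weights.
feat_a, feat_b = rng.normal(size=(16, 16, 8)), rng.normal(size=(16, 16, 8))
print(interpolate_features(feat_a, feat_b, 0.3).shape)
```

Replacing the single global alpha with per-layer or per-channel coefficients is the finer-grained version of this knob that the CFSNet abstract describes as learned coupling coefficients.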
1904.00344 | 2909703150 | Deep Neural Networks have created a paradigm shift in our ability to comprehend raw data in various important fields ranging from computer vision and natural language processing to intelligence warfare and healthcare. While DNNs are increasingly deployed either in a white-box setting where the model internal is publicly known, or a black-box setting where only the model outputs are known, a practical concern is protecting the models against Intellectual Property (IP) infringement. We propose BlackMarks, the first end-to-end multi-bit watermarking framework that is applicable in the black-box scenario. BlackMarks takes the pre-trained unmarked model and the owner's binary signature as inputs and outputs the corresponding marked model with a set of watermark keys. To do so, BlackMarks first designs a model-dependent encoding scheme that maps all possible classes in the task to bit '0' and bit '1' by clustering the output activations into two groups. Given the owner's watermark signature (a binary string), a set of key image and label pairs are designed using targeted adversarial attacks. The watermark (WM) is then embedded in the prediction behavior of the target DNN by fine-tuning the model with generated WM key set. To extract the WM, the remote model is queried by the WM key images and the owner's signature is decoded from the corresponding predictions according to the designed encoding scheme. We perform a comprehensive evaluation of BlackMarks's performance on MNIST, CIFAR10, ImageNet datasets and corroborate its effectiveness and robustness. BlackMarks preserves the functionality of the original DNN and incurs negligible WM embedding runtime overhead as low as 2.054 . | Digital watermarks are invisible identifiers embedded as an integral part of the host design and have been widely adopted in the multimedia domain for IP protection @cite_4 @cite_34 . Conventional digital watermarking techniques consist of two phases: WM embedding and WM extraction. Figure shows the workflow of a typical constraint-based watermarking system. The original problem is used as the cover constraints to hide the owner's WM signature. To embed the WM, the IP designer creates the stego-key and a set of additional constraints that do not conflict with cover constraints. Combining these two constraints yields the stego-problem, which is solved to produce the stego-solution. Note that the stego-solution will simultaneously satisfy both the original constraints and the WM-specific constraints, thus enables the designer to extract the WM and claim the authorship. An effective watermarking method is required to meet a set of criteria including imperceptibility, robustness, verifiability, capacity, and low overhead @cite_3 . | {
"cite_N": [
"@cite_34",
"@cite_4",
"@cite_3"
],
"mid": [
"",
"1485029172",
"2184611825"
],
"abstract": [
"",
"This work explores the myriad of issues regarding multimedia security. It covers various issues, including perceptual fidelity analysis, image, audio, and 3D mesh object watermarking, medical watermarking, and error detection (authentication) and concealment.",
"The expansion of the Internet has frequently increased the availability of digital data such as audio, images and videos to the public. Digital watermarking is a technology being developed to ensure and facilitate data authentication, security and copyright protection of digital media. This paper incorporate the detail study watermarking definition, concept and the main contributions in this field such as categories of watermarking process that tell which watermarking method should be used. It starts with overview, classification, features, framework, techniques, application, challenges, limitations and performance metric of watermarking and a comparative analysis of some major watermarking techniques. In the survey our prime concern is image only."
]
} |
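The BlackMarks abstract in the row above describes extracting a multi-bit watermark by querying the deployed model with key images and decoding the owner's signature from the predicted classes through a class-to-bit encoding. The following is a schematic NumPy sketch of that decoding step only; the class-to-bit map, the signature and the predicted labels are made-up values, and the embedding side (targeted adversarial key generation and fine-tuning) is not shown.

```python
import numpy as np

def decode_signature(predicted_labels, class_to_bit):
    """Decode a binary signature from the labels a remote model predicts
    on the watermark key images, using the owner's class-to-bit encoding."""
    return np.array([class_to_bit[c] for c in predicted_labels], dtype=np.uint8)

def bit_error_rate(decoded, signature):
    """Fraction of signature bits that the decoded string gets wrong."""
    return float(np.mean(decoded != signature))

# Toy setup with assumed values: 10 classes split into two clusters (bit 0 / bit 1).
class_to_bit = {c: (0 if c < 5 else 1) for c in range(10)}
owner_signature = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

# Labels the (hypothetical) deployed model returns for the 8 key images.
predictions = [7, 2, 9, 6, 1, 4, 5, 3]
decoded = decode_signature(predictions, class_to_bit)
print("decoded:", decoded, "BER:", bit_error_rate(decoded, owner_signature))
```

A low bit error rate between the decoded string and the owner's signature is what supports the ownership claim in this black-box setting, mirroring the verifiability criterion listed for conventional watermarking.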
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | An enormous amount of literature exists on the topics of TSR and TSD, and several review papers are available @cite_32 @cite_2 . In general, it is very difficult to decide which approach gives better overall results, mainly due to the lack of a standard publicly available benchmark dataset that would contain an extensive set of various traffic-sign categories, as emphasized in several recent studies @cite_2 @cite_20 . Most authors evaluate their approaches on one of the many public datasets with a relatively limited number of traffic-sign categories: The German Traffic-Sign Detection Benchmark (GTSDB) @cite_1 : 3 super-categories, primarily intended for detection. | {
"cite_N": [
"@cite_1",
"@cite_32",
"@cite_20",
"@cite_2"
],
"mid": [
"2002427601",
"2126628495",
"2258105403",
"2435703814"
],
"abstract": [
"Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"In this paper, we provide a survey of the traffic sign detection literature, detailing detection systems for traffic sign recognition (TSR) for driver assistance. We separately describe the contributions of recent works to the various stages inherent in traffic sign detection: segmentation, feature extraction, and final sign detection. While TSR is a well-established research area, we highlight open research issues in the literature, including a dearth of use of publicly available image databases and the over-representation of European traffic signs. Furthermore, we discuss future directions of TSR research, including the integration of context and localization. We also introduce a new public database containing U.S. traffic signs.",
"In this paper, we present a computer vision based system for fast robust Traffic Sign Detection and Recognition (TSDR), consisting of three steps. The first step consists on image enhancement and thresholding using the three components of the Hue Saturation and Value (HSV) space. Then we refer to distance to border feature and Random Forests classifier to detect circular, triangular and rectangular shapes on the segmented images. The last step consists on identifying the information included in the detected traffic signs. We compare four features descriptors which include Histogram of Oriented Gradients (HOG), Gabor, Local Binary Pattern (LBP), and Local Self-Similarity (LSS). We also compare their different combinations. For the classifiers we have carried out a comparison between Random Forests and Support Vector Machines (SVMs). The best results are given by the combination HOG with LSS together with the Random Forest classifier. The proposed method has been tested on the Swedish Traffic Signs Data set and gives satisfactory results.",
"Developing real-time Advanced Driver Assistance Systems (ADAS) based on video aiming to extract reliable vehicle state information has attracted a lot of attention during the past decades. This ADAS system includes inter-vehicle communication, driver behavioral monitoring, and human-machine interactions. In these systems, robust and reliable traffic sign detection and recognition (TSDR) technique is a critical step for ensuring vehicle safety. This paper provides a comprehensive survey on traffic sign detection and recognition system based on image and video data. Our main focus is to present the current trends and challenges in the field of developing an efficient TSDR system followed by a detail comparative study between different renowned methods used by various researchers. Finally, conclusion followed by some future suggestion is provided to develop an efficient TSDR system is provided. This survey will hopefully lead to develop an effective traffic sign detection and recognition system which will ensure driver safety in future. Streszczenie. System ADAS (Advanced Driver Assistance System) obejmuje takze metody rozpoznawania znakow drogowych. W artykule przedstawiono przegląd metod detekcji i rozpoznawania znakow drogowych bazujących na obrazie video. W artykule dokonano oceny istniejących metod oraz zaproponowano środki poprawy ich efektywności. Studium porownawcze metod detekcji i rozpoznawania znakow drogowych"
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | The Mapping and Assessing the State of Traffic Infrastructure (MASTIF) @cite_15 : 9 original categories, extended to 31 categories @cite_12 , acquired for road maintenance assessment service in Croatia. | {
"cite_N": [
"@cite_15",
"@cite_12"
],
"mid": [
"2086002527",
"2317354861"
],
"abstract": [
"Geoinformation inventories are often employed as a tool for providing a comprehensive view onto the required state of traffic control infrastructure. They are especially important in road safety inspection where, in combination with georeferenced video, they enable repeatable off-line and off-site assessments as an attractive aternative to classic onsite inspection. Nevertheless, manual assessments are tedious and time-consuming even when performed off-line, and this seriously impairs the potential of the geoinformation inventory concept. This paper therefore researches a hypothesis that suitable georeferenced video processing techniques would allow reliable automation of the following operations: i) creation of the traffic inventory from the given video, and ii) assessing the video against the state in the inventory. Prominent computer vision approaches have been rigorously and systematically evaluated and the obtained results are presented. The results seem to support the hypothesis, although further work is required for a more definite answer.",
"This paper proposes a computationally efficient method for traffic sign recognition (TSR). This proposed method consists of two modules: 1) extraction of histogram of oriented gradient variant (HOGv) feature and 2) a single classifier trained by extreme learning machine (ELM) algorithm. The presented HOGv feature keeps a good balance between redundancy and local details such that it can represent distinctive shapes better. The classifier is a single-hidden-layer feedforward network. Based on ELM algorithm, the connection between input and hidden layers realizes the random feature mapping while only the weights between hidden and output layers are trained. As a result, layer-by-layer tuning is not required. Meanwhile, the norm of output weights is included in the cost function. Therefore, the ELM-based classifier can achieve an optimal and generalized solution for multiclass TSR. Furthermore, it can balance the recognition accuracy and computational cost. Three datasets, including the German TSR benchmark dataset, the Belgium traffic sign classification dataset and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) dataset, are used to evaluate this proposed method. Experimental results have shown that this proposed method obtains not only high recognition accuracy but also extremely high computational efficiency in both training and recognition processes in these three datasets."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | The Laboratory for Intelligent and Safe Automobiles (LISA) Dataset @cite_32 : 49 categories of traffic signs, acquired on the roads in the USA. | {
"cite_N": [
"@cite_32"
],
"mid": [
"2126628495"
],
"abstract": [
"In this paper, we provide a survey of the traffic sign detection literature, detailing detection systems for traffic sign recognition (TSR) for driver assistance. We separately describe the contributions of recent works to the various stages inherent in traffic sign detection: segmentation, feature extraction, and final sign detection. While TSR is a well-established research area, we highlight open research issues in the literature, including a dearth of use of publicly available image databases and the over-representation of European traffic signs. Furthermore, we discuss future directions of TSR research, including the integration of context and localization. We also introduce a new public database containing U.S. traffic signs."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | The Tsinghua-Tencent 100K dataset @cite_28 : 45 categories, large dataset with 10000 images containing at least one traffic sign and 90000 background images. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2479866714"
],
"abstract": [
"Although promising results have been achieved in the areas of traffic-sign detection and classification, few works have provided simultaneous solutions to these two tasks for realistic real world images. We make two contributions to this problem. Firstly, we have created a large traffic-sign benchmark from 100000 Tencent Street View panoramas, going beyond previous benchmarks. It provides 100000 images containing 30000 traffic-sign instances. These images cover large variations in illuminance and weather conditions. Each traffic-sign in the benchmark is annotated with a class label, its bounding box and pixel mask. We call this benchmark Tsinghua-Tencent 100K. Secondly, we demonstrate how a robust end-to-end convolutional neural network (CNN) can simultaneously detect and classify trafficsigns. Most previous CNN image processing solutions target objects that occupy a large proportion of an image, and such networks do not work well for target objects occupying only a small fraction of an image like the traffic-signs here. Experimental results show the robustness of our network and its superiority to alternatives. The benchmark, source code and the CNN model introduced in this paper is publicly available1."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | To enrich the set of considered traffic signs, some approaches sample images from multiple datasets to perform the evaluation @cite_26 @cite_5 . On the other hand, a vast number of authors use their own private datasets @cite_45 @cite_6 @cite_4 @cite_43 . To the best of our knowledge, the largest set of categories was considered in the private dataset of @cite_4 , distinguishing between 131 categories of non-text traffic signs from the roads of United Kingdom. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_6",
"@cite_43",
"@cite_45",
"@cite_5"
],
"mid": [
"401751405",
"2052533453",
"2074643422",
"2110086022",
"2066327668",
"2000353334"
],
"abstract": [
"Abstract The robust and accurate detection of traffic signs is a challenging problem due to the many issues that are often encountered in real traffic video capturing such as the various weather conditions, shadows and partial occlusion. To address such adverse factors, in this paper, we propose a new traffic sign detection method by integrating color invariants based image segmentation and pyramid histogram of oriented gradients (PHOG) features based shape matching. Given the target image, we first extract its color invariants in Gaussian color model, and then segment the image into different regions to get the candidate regions of interests (ROIs) by clustering on the color invariants. Next, PHOG is adopted to represent the shape features of ROIs and support vector machine is used to identify the traffic signs. The traditional PHOG is sensitive to the cluttered background of traffic sign when extracting the object contour. To boost the discriminative power of PHOG, we propose introducing Chromatic-edge to enhance object contour while suppress the noises. Extensive experiments demonstrate that our method can robustly detect traffic signs under varying weather, shadow, occlusion and complex background conditions.",
"This paper proposes a novel system for the automatic detection and recognition of traffic signs. The proposed system detects candidate regions as maximally stable extremal regions (MSERs), which offers robustness to variations in lighting conditions. Recognition is based on a cascade of support vector machine (SVM) classifiers that were trained using histogram of oriented gradient (HOG) features. The training data are generated from synthetic template images that are freely available from an online database; thus, real footage road signs are not required as training data. The proposed system is accurate at high vehicle speeds, operates under a range of weather conditions, runs at an average speed of 20 frames per second, and recognizes all classes of ideogram-based (nontext) traffic symbols from an online road sign database. Comprehensive comparative results to illustrate the performance of the system are presented.",
"Mobile mapping systems acquire massive amount of data under uncontrolled conditions and pose new challenges to the development of robust computer vision algorithms. In this work, we show how a combination of solid image analysis and pattern recognition techniques can be used to tackle the problem of traffic sign detection in mobile mapping data. Different from the majority of existing systems, our pipeline is based on interest regions extraction rather than sliding window detection. Thanks to the robustness of local features, the proposed pipeline can withstand great appearance variations, which typically occur in outdoor data, especially dramatic illumination and scale changes. The proposed approach has been specialized and tested in three variants, each aimed at detecting one of the three categories of mandatory, prohibitory and danger traffic signs, according to the experimental setup of the recent German Traffic Sign Detection Benchmark competition. Besides achieving very good performance in the on-line competition, our proposal has been successfully evaluated on a novel, more challenging dataset of Italian signs, thereby proving its robustness and suitability to automatic analysis of real-world mobile mapping data. HighlightsThe paper presents in detail the design of a Traffic Sign Detection pipeline.Interest regions are an effective tool to feed a Traffic Sign Detection pipeline.A context-aware and a traffic light filter can effectively prune false positives.Our algorithm obtains competitive results on a public benchmark dataset.Our pipeline achieves promising results on a challenging mobile mapping dataset.",
"In this paper we present two variant formulations of the well-known Histogram of Oriented Gradients (HOG) features and provide a comparison of these features on a large scale sign detection problem. The aim of this research is to find features capable of driving further improvements atop a preexisting detection framework used commercially to detect traffic signs on the scale of entire national road networks (1000's of kilometres of video). We assume the computationally efficient framework of a cascade of boosted weak classifiers. Rather than comparing features on the general problem of detection we compare their merits in the final stages of a cascaded detection problem where a feature's ability to reduce error is valued more highly than computational efficiency. Results show the benefit of the two new features on a New Zealand speed sign detection problem. We also note the importance of using non-sign training and validation instances taken from the same video data that contains the training and validation positives. This is attributed to the potential for the more powerful HOG features to overfit on specific local patterns which may be present in alternative video data.",
"Abstract Traffic signs are an essential part of any circulation system, and failure detection by the driver may significantly increase the accident risk. Currently, automatic traffic sign detection systems still have some performance limitations, specially for achromatic signs and variable lighting conditions. In this work, we propose an automatic traffic-sign detection method capable of detecting both chromatic and achromatic signs, while taking into account rotations, scale changes, shifts, partial deformations, and shadows. The proposed system is divided into three stages: (1) segmentation of chromatic and achromatic scene elements using L ⁎ a ⁎ b ⁎ and HSI spaces, where two machine learning techniques ( k -Nearest Neighbors and Support Vector Machines) are benchmarked; (2) post-processing in order to discard non-interest regions, to connect fragmented signs, and to separate signs located at the same post; and (3) sign-shape classification by using Fourier Descriptors, which yield significant advantage in comparison to other contour-based methods, and subsequent shape recognition with machine learning techniques. Experiments with two databases of real-world images captured with different cameras yielded a sign detection rate of about 97 with a false alarm rate between 3 and 4 , depending on the database. Our method can be readily used for maintenance, inventory, or driver support system applications.",
"In this paper, we design the color fused multiple features to describe a traffic sign, and we further implement this description method to detect traffic signs and to classify multi-class traffic signs. At the detection stage, we utilize the GentleAdaboost classifier to separate traffic signs from the background; at the classification stage, we implement the random forest classifier to classify multi-class traffic signs. We do the extensive experiments on the popular standard traffic sign datasets: the German Traffic Sign Recognition Benchmark and the Swedish Traffic Signs Dataset. We compare eight features which include the HOG feature, the LBP feature, the color cues and their different combinations. We also compare the popular classifiers for traffic sign recognition. The experimental results demonstrate that the color fused feature achieves better classification performance than the feature without color cues, and the GentleAdaboost classifier achieves the better comprehensive performance of the binary classification, and the random forest classifier achieves the best multi-class classification accuracy."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | Despite a large number of traffic-sign datasets, a comparison of traffic-sign detectors for large numbers of categories remains a challenging problem. In contrast to existing benchmarks that focus mostly on small numbers of super-categories (GTSDB @cite_1 ), or on small numbers of simple traffic signs (BTS @cite_27 , MASTIF @cite_15 , STSD @cite_38 , LISA @cite_32 ), our comprehensive dataset contains 200 traffic-sign categories, including a large number of categories with significant intra-category variability. The closest large-scale dataset is the Tsinghua-Tencent 100K dataset; however, their evaluation still focuses only on 45 simple traffic signs. On the other hand, our dataset enables a comprehensive analysis of detectors in the context of traffic-sign inventory management. | {
"cite_N": [
"@cite_38",
"@cite_1",
"@cite_32",
"@cite_27",
"@cite_15"
],
"mid": [
"1566134554",
"2002427601",
"2126628495",
"2169810643",
"2086002527"
],
"abstract": [
"Traffic sign recognition is important for the development of driver assistance systems and fully autonomous vehicles. Even though GPS navigator systems works well for most of the time, there will always be situations when they fail. In these cases, robust vision based systems are required. Traffic signs are designed to have distinct colored fields separated by sharp boundaries. We propose to use locally segmented contours combined with an implicit star-shaped object model as prototypes for the different sign classes. The contours are described by Fourier descriptors. Matching of a query image to the sign prototype database is done by exhaustive search. This is done efficiently by using the correlation based matching scheme for Fourier descriptors and a fast cascaded matching scheme for enforcing the spatial requirements. We demonstrated on a publicly available database state of the art performance.",
"Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"In this paper, we provide a survey of the traffic sign detection literature, detailing detection systems for traffic sign recognition (TSR) for driver assistance. We separately describe the contributions of recent works to the various stages inherent in traffic sign detection: segmentation, feature extraction, and final sign detection. While TSR is a well-established research area, we highlight open research issues in the literature, including a dearth of use of publicly available image databases and the over-representation of European traffic signs. Furthermore, we discuss future directions of TSR research, including the integration of context and localization. We also introduce a new public database containing U.S. traffic signs.",
"Several applications require information about street furniture. Part of the task is to survey all traffic signs. This has to be done for millions of km of road, and the exercise needs to be repeated every so often. A van with 8 roof-mounted cameras drove through the streets and took images every meter. The paper proposes a pipeline for the efficient detection and recognition of traffic signs. The task is challenging, as illumination conditions change regularly, occlusions are frequent, 3D positions and orientations vary substantially, and the actual signs are far less similar among equal types than one might expect. We combine 2D and 3D techniques to improve results beyond the state-of-the-art, which is still very much preoccupied with single view analysis.",
"Geoinformation inventories are often employed as a tool for providing a comprehensive view onto the required state of traffic control infrastructure. They are especially important in road safety inspection where, in combination with georeferenced video, they enable repeatable off-line and off-site assessments as an attractive aternative to classic onsite inspection. Nevertheless, manual assessments are tedious and time-consuming even when performed off-line, and this seriously impairs the potential of the geoinformation inventory concept. This paper therefore researches a hypothesis that suitable georeferenced video processing techniques would allow reliable automation of the following operations: i) creation of the traffic inventory from the given video, and ii) assessing the video against the state in the inventory. Prominent computer vision approaches have been rigorously and systematically evaluated and the obtained results are presented. The results seem to support the hypothesis, although further work is required for a more definite answer."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | Various methods have been employed in TSR and TSD. Traditionally hand-crafted features have been used, like histogram of oriented gradients (HOG) @cite_40 @cite_4 @cite_39 @cite_20 @cite_25 @cite_12 @cite_1 , scale invariant feature transform (SIFT) @cite_25 , local binary patterns (LBP) @cite_20 or integral channel features @cite_39 . A wide range of machine learning methods have also been employed, ranging from support vector machine (SVM) @cite_4 @cite_20 @cite_9 , logistic regression @cite_33 , and random forests @cite_20 @cite_9 , to artificial neural networks in the form of an extreme learning machine (ELM) @cite_12 . | {
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_9",
"@cite_1",
"@cite_39",
"@cite_40",
"@cite_12",
"@cite_25",
"@cite_20"
],
"mid": [
"2052533453",
"2052490028",
"2014852133",
"2002427601",
"1977610018",
"2042082479",
"2317354861",
"1597629222",
"2258105403"
],
"abstract": [
"This paper proposes a novel system for the automatic detection and recognition of traffic signs. The proposed system detects candidate regions as maximally stable extremal regions (MSERs), which offers robustness to variations in lighting conditions. Recognition is based on a cascade of support vector machine (SVM) classifiers that were trained using histogram of oriented gradient (HOG) features. The training data are generated from synthetic template images that are freely available from an online database; thus, real footage road signs are not required as training data. The proposed system is accurate at high vehicle speeds, operates under a range of weather conditions, runs at an average speed of 20 frames per second, and recognizes all classes of ideogram-based (nontext) traffic symbols from an online road sign database. Comprehensive comparative results to illustrate the performance of the system are presented.",
"Correlations in image sequences can be potentially useful for recovering feature representation and subsequently prompting classification performance, which are often neglected by traditional classification approaches. In this letter, we present a supervised low-rank matrix recovery model to leverage these correlations for classification tasks by introducing a supervised penalty term to the classic low-rank matrix recovery model. This allows us to not only exploit these correlations to recover the underlying feature representation from corrupted observation, but also preserve discriminative information for classification. Our model is evaluated on both real-world data and synthetic data, and experimental results show that our model obtains highly competitive performance with state-of-the-art algorithms and is especially robust to different levels of corruptions.",
"Traffic Sign Recognition (TSR) is an important component of Advanced Driver Assistance Systems (ADAS). The traffic signs enhance traffic safety by informing the driver of speed limits or possible dangers such as icy roads, imminent road works or pedestrian crossings. We present a three-stage real-time Traffic Sign Recognition system in this paper, consisting of a segmentation, a detection and a classification phase. We combine the color enhancement with an adaptive threshold to extract red regions in the image. The detection is performed using an efficient linear Support Vector Machine (SVM) with Histogram of Oriented Gradients (HOG) features. The tree classifiers, K-d tree and Random Forest, identify the content of the traffic signs found. A spatial weighting approach is proposed to improve the performance of the K-d tree. The Random Forest and Fisher's Criterion are used to reduce the feature space and accelerate the classification. We show that only a subset of about one third of the features is sufficient to attain a high classification accuracy on the German Traffic Sign Recognition Benchmark (GTSRB).",
"Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"Traffic sign recognition has been a recurring application domain for visual objects detection. The public datasets have only recently reached large enough size and variety to enable proper empirical studies. We revisit the topic by showing how modern methods perform on two large detection and classification datasets (thousand of images, tens of categories) captured in Belgium and Germany. We show that, without any application specific modification, existing methods for pedestrian detection, and for digit and face classification; can reach performances in the range of 95 99 of the perfect solution. We show detailed experiments and discuss the trade-off of different options. Our top performing methods use modern variants of HOG features for detection, and sparse representations for classification.",
"Traffic-sign recognition (TSR) is an essential component of a driver assistance system (DAS), providing drivers with safety and precaution information. In this paper, we evaluate the performance of k-d trees, random forests, and support vector machines (SVMs) for traffic-sign classification using different-sized histogram-of-oriented-gradient (HOG) descriptors and distance transforms (DTs). We also use the Fisher's criterion and random forests for the feature selection to reduce the memory requirements and enhance the performance. We use the German Traffic Sign Recognition Benchmark (GTSRB) data set containing 43 classes and more than 50 000 images.",
"This paper proposes a computationally efficient method for traffic sign recognition (TSR). This proposed method consists of two modules: 1) extraction of histogram of oriented gradient variant (HOGv) feature and 2) a single classifier trained by extreme learning machine (ELM) algorithm. The presented HOGv feature keeps a good balance between redundancy and local details such that it can represent distinctive shapes better. The classifier is a single-hidden-layer feedforward network. Based on ELM algorithm, the connection between input and hidden layers realizes the random feature mapping while only the weights between hidden and output layers are trained. As a result, layer-by-layer tuning is not required. Meanwhile, the norm of output weights is included in the cost function. Therefore, the ELM-based classifier can achieve an optimal and generalized solution for multiclass TSR. Furthermore, it can balance the recognition accuracy and computational cost. Three datasets, including the German TSR benchmark dataset, the Belgium traffic sign classification dataset and the revised mapping and assessing the state of traffic infrastructure (revised MASTIF) dataset, are used to evaluate this proposed method. Experimental results have shown that this proposed method obtains not only high recognition accuracy but also extremely high computational efficiency in both training and recognition processes in these three datasets.",
"In this work we developed a novel and fast traffic sign recognition system, a very important part for advanced driver assistance system and for autonomous driving. Traffic signs play a very vital role in safe driving and avoiding accident. We have used image processing and topic discovery model pLSA to tackle this challenging multiclass classification problem. Our algorithm is consist of two parts, shape classification and sign classification for improved accuracy. For processing and representation of image we have used bag of features model with SIFT local descriptor. Where a visual vocabulary of size 300 words are formed using k-means codebook formation algorithm. We exploited the concept that every image is a collection of visual topics and images having same topics will belong to same category. Our algorithm is tested on German traffic sign recognition benchmark (GTSRB) and gives very promising result near to existing state of the art techniques.",
"In this paper, we present a computer vision based system for fast robust Traffic Sign Detection and Recognition (TSDR), consisting of three steps. The first step consists on image enhancement and thresholding using the three components of the Hue Saturation and Value (HSV) space. Then we refer to distance to border feature and Random Forests classifier to detect circular, triangular and rectangular shapes on the segmented images. The last step consists on identifying the information included in the detected traffic signs. We compare four features descriptors which include Histogram of Oriented Gradients (HOG), Gabor, Local Binary Pattern (LBP), and Local Self-Similarity (LSS). We also compare their different combinations. For the classifiers we have carried out a comparison between Random Forests and Support Vector Machines (SVMs). The best results are given by the combination HOG with LSS together with the Random Forest classifier. The proposed method has been tested on the Swedish Traffic Signs Data set and gives satisfactory results."
]
} |
1904.00649 | 2935077499 | Automatic detection and recognition of traffic signs plays a crucial role in management of the traffic-sign inventory. It provides accurate and timely way to manage traffic-sign inventory with a minimal human effort. In the computer vision community the recognition and detection of traffic signs is a well-researched problem. A vast majority of existing approaches perform well on traffic signs needed for advanced drivers-assistance and autonomous systems. However, this represents a relatively small number of all traffic signs (around 50 categories out of several hundred) and performance on the remaining set of traffic signs, which are required to eliminate the manual labor in traffic-sign inventory management, remains an open question. In this paper, we address the issue of detecting and recognizing a large number of traffic-sign categories suitable for automating traffic-sign inventory management. We adopt a convolutional neural network (CNN) approach, the Mask R-CNN, to address the full pipeline of detection and recognition with automatic end-to-end learning. We propose several improvements that are evaluated on the detection of traffic signs and result in an improved overall performance. This approach is applied to detection of 200 traffic-sign categories represented in our novel dataset. Results are reported on highly challenging traffic-sign categories that have not yet been considered in previous works. We provide comprehensive analysis of the deep learning method for the detection of traffic signs with large intra-category appearance variation and show below 3 error rates with the proposed approach, which is sufficient for deployment in practical applications of traffic-sign inventory management. | Our proposed deep-learning-based approach differs from previous related works. In contrast to traditional approaches with hand-crafted features and machine learning @cite_40 @cite_4 , we propose full feature learning with end-to-end learning. Our approach also differs from other deep-learning-based traffic-sign detection methods. Our method, which is based on Mask R-CNN, uses region proposal network instead of using a separate method for generating region proposals as in @cite_30 , and in contrast to @cite_28 , we employ deeper networks based on the VGG16 @cite_46 and ResNet-50 @cite_0 architectures. As opposed to both @cite_30 and @cite_28 , we also employ network pre-trained on ImageNet, which significantly reduces the need for training samples. In addition, we have implemented several extensions leading to superior performance. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_28",
"@cite_0",
"@cite_40",
"@cite_46"
],
"mid": [
"2480140235",
"2052533453",
"2479866714",
"2194775991",
"2042082479",
"1686810756"
],
"abstract": [
"Detecting and recognizing traffic signs is a hot topic in the field of computer vision with lots of applications, e.g., safe driving, path planning, robot navigation etc. We propose a novel framework with two deep learning components including fully convolutional network (FCN) guided traffic sign proposals and deep convolutional neural network (CNN) for object classification. Our core idea is to use CNN to classify traffic sign proposals to perform fast and accurate traffic sign detection and recognition. Due to the complexity of the traffic scene, we improve the state-of-the-art object proposal method, EdgeBox, by incorporating with a trained FCN. The FCN guided object proposals can produce more discriminative candidates, which help to make the whole detection system fast and accurate. In the experiments, we have evaluated the proposed method on publicly available traffic sign benchmark, Swedish Traffic Signs Dataset (STSD), and achieved the state-of-the-art results.",
"This paper proposes a novel system for the automatic detection and recognition of traffic signs. The proposed system detects candidate regions as maximally stable extremal regions (MSERs), which offers robustness to variations in lighting conditions. Recognition is based on a cascade of support vector machine (SVM) classifiers that were trained using histogram of oriented gradient (HOG) features. The training data are generated from synthetic template images that are freely available from an online database; thus, real footage road signs are not required as training data. The proposed system is accurate at high vehicle speeds, operates under a range of weather conditions, runs at an average speed of 20 frames per second, and recognizes all classes of ideogram-based (nontext) traffic symbols from an online road sign database. Comprehensive comparative results to illustrate the performance of the system are presented.",
"Although promising results have been achieved in the areas of traffic-sign detection and classification, few works have provided simultaneous solutions to these two tasks for realistic real world images. We make two contributions to this problem. Firstly, we have created a large traffic-sign benchmark from 100000 Tencent Street View panoramas, going beyond previous benchmarks. It provides 100000 images containing 30000 traffic-sign instances. These images cover large variations in illuminance and weather conditions. Each traffic-sign in the benchmark is annotated with a class label, its bounding box and pixel mask. We call this benchmark Tsinghua-Tencent 100K. Secondly, we demonstrate how a robust end-to-end convolutional neural network (CNN) can simultaneously detect and classify trafficsigns. Most previous CNN image processing solutions target objects that occupy a large proportion of an image, and such networks do not work well for target objects occupying only a small fraction of an image like the traffic-signs here. Experimental results show the robustness of our network and its superiority to alternatives. The benchmark, source code and the CNN model introduced in this paper is publicly available1.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Traffic-sign recognition (TSR) is an essential component of a driver assistance system (DAS), providing drivers with safety and precaution information. In this paper, we evaluate the performance of k-d trees, random forests, and support vector machines (SVMs) for traffic-sign classification using different-sized histogram-of-oriented-gradient (HOG) descriptors and distance transforms (DTs). We also use the Fisher's criterion and random forests for the feature selection to reduce the memory requirements and enhance the performance. We use the German Traffic Sign Recognition Benchmark (GTSRB) data set containing 43 classes and more than 50 000 images.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
1904.00478 | 2925578274 | Many algorithmic steps require more than one statement to implement, but not big enough to be a method (e.g., add element, find the maximum, determine a value, etc.). These steps are generally implemented by loops. Internal comments for the loops often describe these intermediary steps, however, unfortunately a very small percentage of code is well documented to help new users coders. As a result, information at levels of abstraction between the individual statement and the whole method is not leveraged by current source code analyses, as that information is not easily available beyond any internal comments describing the code blocks. Hence, this project explores the generality of an approach to automatically determine the high level actions of loop constructs. The approach is to mine loop characteristics of a given loop structure over the repository of the Quorum language source code, map it to an (already developed for Java) action identification model, and thus identify the action performed by the specified loop. The results are promising enough to conclude that this approach could be applied to other programming languages too. | The first general and extensible approach to automatically identifying action units and abstracting them as high level action phrases without manually creating templates was developed by @cite_2 . The closest work is by who automatically generated high-level actions within methods @cite_10 , by using a small set of templates that were developed by manually examining code. | {
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"2166879716",
"2133795027"
],
"abstract": [
"One approach to easing program comprehension is to reduce the amount of code that a developer has to read. Describing the high level abstract algorithmic actions associated with code fragments using succinct natural language phrases potentially enables a newcomer to focus on fewer and more abstract concepts when trying to understand a given method. Unfortunately, such descriptions are typically missing because it is tedious to create them manually. We present an automatic technique for identifying code fragments that implement high level abstractions of actions and expressing them as a natural language description. Our studies of 1000 Java programs indicate that our heuristics for identifying code fragments implementing high level actions are widely applicable. Judgements of our generated descriptions by 15 experienced Java programmers strongly suggest that indeed they view the fragments that we identify as representing high level actions and our synthesized descriptions accurately express the abstraction.",
"Some high level algorithmic steps require more than one statement to implement, but are not large enough to be a method on their own. Specifically, many algorithmic steps (e.g., count, compare pairs of elements, find the maximum) are implemented as loop structures, which lack the higher level abstraction of the action being performed, and can negatively affect both human readers and automatic tools. Additionally, in a study of 14,317 projects, we found that less than 20 of loops are documented to help readers. In this paper, we present a novel automatic approach to identify the high level action implemented by a given loop. We leverage the available, large source of high-quality open source projects to mine loop characteristics and develop an action identification model. We use the model and feature vectors extracted from loop code to automatically identify the high level actions implemented by loops. We have evaluated the accuracy of the loop action identification and coverage of the model over 7159 open source programs. The results show great promise for this approach to automatically insert internal comments and provide additional higher level naming for loop actions to be used by tools such as code search."
]
} |
1904.00478 | 2925578274 | Many algorithmic steps require more than one statement to implement, but not big enough to be a method (e.g., add element, find the maximum, determine a value, etc.). These steps are generally implemented by loops. Internal comments for the loops often describe these intermediary steps, however, unfortunately a very small percentage of code is well documented to help new users coders. As a result, information at levels of abstraction between the individual statement and the whole method is not leveraged by current source code analyses, as that information is not easily available beyond any internal comments describing the code blocks. Hence, this project explores the generality of an approach to automatically determine the high level actions of loop constructs. The approach is to mine loop characteristics of a given loop structure over the repository of the Quorum language source code, map it to an (already developed for Java) action identification model, and thus identify the action performed by the specified loop. The results are promising enough to conclude that this approach could be applied to other programming languages too. | This project is related to generating internal comments for the identified high level actions. Most comment generation work is focused on creating summaries for methods or classes @cite_3 @cite_11 . However, mine question and answer sites for automatic comment generation @cite_12 . They extract code-description mappings from the question title and text, use heuristics to refine the descriptions, and use code clone detection to find source code snippets that are almost identical to the code-description mapping. | {
"cite_N": [
"@cite_12",
"@cite_3",
"@cite_11"
],
"mid": [
"2023925487",
"2082160726",
"2081749632"
],
"abstract": [
"Code comments improve software maintainability. To address the comment scarcity issue, we propose a new automatic comment generation approach, which mines comments from a large programming Question and Answer (Q&A) site. Q&A sites allow programmers to post questions and receive solutions, which contain code segments together with their descriptions, referred to as code-description mappings. We develop AutoComment to extract such mappings, and leverage them to generate description comments automatically for similar code segments matched in open-source projects. We apply AutoComment to analyze Java and Android tagged Q&A posts to extract 132,767 code-description mappings, which help AutoComment to generate 102 comments automatically for 23 Java and Android projects. The user study results show that the majority of the participants consider the generated comments accurate, adequate, concise, and useful in helping them understand the code.",
"Studies have shown that good comments can help programmers quickly understand what a method does, aiding program comprehension and software maintenance. Unfortunately, few software projects adequately comment the code. One way to overcome the lack of human-written summary comments, and guard against obsolete comments, is to automatically generate them. In this paper, we present a novel technique to automatically generate descriptive summary comments for Java methods. Given the signature and body of a method, our automatic comment generator identifies the content for the summary and generates natural language text that summarizes the method's overall actions. According to programmers who judged our generated comments, the summaries are accurate, do not miss important content, and are reasonably concise.",
"Most software engineering tasks require developers to understand parts of the source code. When faced with unfamiliar code, developers often rely on (internal or external) documentation to gain an overall understanding of the code and determine whether it is relevant for the current task. Unfortunately, the documentation is often absent or outdated. This paper presents a technique to automatically generate human readable summaries for Java classes, assuming no documentation exists. The summaries allow developers to understand the main goal and structure of the class. The focus of the summaries is on the content and responsibilities of the classes, rather than their relationships with other classes. The summarization tool determines the class and method stereotypes and uses them, in conjunction with heuristics, to select the information to be included in the summaries. Then it generates the summaries using existing lexicalization tools. A group of programmers judged a set of generated summaries for Java classes and determined that they are readable and understandable, they do not include extraneous information, and, in most cases, they are not missing essential information."
]
} |
1904.00386 | 2926332763 | With the rapid development of deep convolutional neural network, face detection has made great progress in recent years. WIDER FACE dataset, as a main benchmark, contributes greatly to this area. A large amount of methods have been put forward where PyramidBox designs an effective data augmentation strategy (Data-anchor-sampling) and context-based module for face detector. In this report, we improve each part to further boost the performance, including Balanced-data-anchor-sampling, Dual-PyramidAnchors and Dense Context Module. Specifically, Balanced-data-anchor-sampling obtains more uniform sampling of faces with different sizes. Dual-PyramidAnchors facilitate feature learning by introducing progressive anchor loss. Dense Context Module with dense connection not only enlarges receptive filed, but also passes information efficiently. Integrating these techniques, PyramidBox++ is constructed and achieves state-of-the-art performance in hard set. | Since the WIDER FACE dataset is built, large number of face detectors are proposed to locate faces under challenging environment, such as low- resolution imaging, tiny scale faces, large pose variations and occlusions in video surveillance. Wherein finding tiny faces is one of the research hotspots. S3FD @cite_41 and @cite_14 propose anchor matching strategy to improve the recall rate of tiny faces. Pyramidbox @cite_3 fully exploits the context information to provide extra supervision for small faces. The super-resolution based on GAN @cite_36 is introduced to face detection to make up the feature of low-resolution faces. Based on RefineDet @cite_27 , SRN @cite_6 investigates the effectiveness of cascade regression and classification on each level and find that two-step classification is used in shallow layers while two-step regression is used in deeper layers. DSFD @cite_35 improves several parts in Pyramidbox and achieves state-of-the-art performance. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_41",
"@cite_36",
"@cite_3",
"@cite_6",
"@cite_27"
],
"mid": [
"2895911141",
"2788627818",
"2750317406",
"2798748382",
"2963856926",
"2889877887",
"2770184303"
],
"abstract": [
"In this paper, we propose a novel face detection network with three novel contributions that address three key aspects of face detection, including better feature learning, progressive loss design and anchor assign based data augmentation, respectively. First, we propose a Feature Enhance Module (FEM) for enhancing the original feature maps to extend the single shot detector to dual shot detector. Second, we adopt Progressive Anchor Loss (PAL) computed by two different sets of anchors to effectively facilitate the features. Third, we use an Improved Anchor Matching (IAM) by integrating novel anchor assign strategy into data augmentation to provide better initialization for the regressor. Since these techniques are all related to the two-stream design, we name the proposed network as Dual Shot Face Detector (DSFD). Extensive experiments on popular benchmarks, WIDER FACE and FDDB, demonstrate the superiority of DSFD over the state-of-the-art face detectors.",
"This paper introduces a novel anchor design to support anchor-based face detection for superior scale-invariant performance, especially on tiny faces. To achieve this, we explicitly address the problem that anchor-based detectors drop performance drastically on faces with tiny sizes, e.g. less than 16x16 pixels. In this paper, we investigate why this is the case. We discover that current anchor design cannot guarantee high overlaps between tiny faces and anchor boxes, which increases the difficulty of training. The new Expected Max Overlapping (EMO) score is proposed which can theoretically explain the low overlapping issue and inspire several effective strategies of new anchor design leading to higher face overlaps, including anchor stride reduction with new network architectures, extra shifted anchors, and stochastic face shifting. Comprehensive experiments show that our proposed method significantly outperforms the baseline anchor-based detector, while consistently achieving state-of-the-art results on challenging face detection datasets with competitive runtime speed.",
"This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"Face detection has been well studied for many years and one of remaining challenges is to detect small, blurred and partially occluded faces in uncontrolled environment. This paper proposes a novel context-assisted single shot face detector, named PyramidBox to handle the hard face detection problem. Observing the importance of the context, we improve the utilization of contextual information in the following three aspects. First, we design a novel context anchor to supervise high-level contextual feature learning by a semi-supervised method, which we call it PyramidAnchors. Second, we propose the Low-level Feature Pyramid Network to combine adequate high-level context semantic feature and Low-level facial feature together, which also allows the PyramidBox to predict faces of all scales in a single shot. Third, we introduce a context-sensitive structure to increase the capacity of prediction network to improve the final accuracy of output. In addition, we use the method of Data-anchor-sampling to augment the training samples across different scales, which increases the diversity of training data for smaller faces. By exploiting the value of context, PyramidBox achieves superior performance among the state-of-the-art over the two common face detection benchmarks, FDDB and WIDER FACE. Our code is available in PaddlePaddle: https: github.com PaddlePaddle models tree develop fluid face_detection.",
"High performance face detection remains a very challenging problem, especially when there exists many tiny faces. This paper presents a novel single-shot face detector, named Selective Refinement Network (SRN), which introduces novel two-step classification and regression operations selectively into an anchor-based face detector to reduce false positives and improve location accuracy simultaneously. In particular, the SRN consists of two modules: the Selective Two-step Classification (STC) module and the Selective Two-step Regression (STR) module. The STC aims to filter out most simple negative anchors from low level detection layers to reduce the search space for the subsequent classifier, while the STR is designed to coarsely adjust the locations and sizes of anchors from high level detection layers to provide better initialization for the subsequent regressor. Moreover, we design a Receptive Field Enhancement (RFE) block to provide more diverse receptive field, which helps to better capture faces in some extreme poses. As a consequence, the proposed SRN detector achieves state-of-the-art performance on all the widely used face detection benchmarks, including AFW, PASCAL face, FDDB, and WIDER FACE datasets. Codes will be released to facilitate further studies on the face detection problem.",
"For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multi-task loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at this https URL"
]
} |
1904.00597 | 2931335216 | Graph matching refers to finding node correspondence between graphs, such that the corresponding node and edge's affinity can be maximized. In addition with its NP-completeness nature, another important challenge is effective modeling of the node-wise and structure-wise affinity across graphs and the resulting objective, to guide the matching procedure effectively finding the true matching against noises. To this end, this paper devises an end-to-end differentiable deep network pipeline to learn the affinity for graph matching. It involves a supervised permutation loss regarding with node correspondence to capture the combinatorial nature for graph matching. Meanwhile deep graph embedding models are adopted to parameterize both intra-graph and cross-graph affinity functions, instead of the traditional shallow and simple parametric forms e.g. a Gaussian kernel. The embedding can also effectively capture the higher-order structure beyond second-order edges. The permutation loss model is agnostic to the number of nodes, and the embedding model is shared among nodes such that the network allows for varying numbers of nodes in graphs for training and inference. Moreover, our network is class-agnostic with some generalization capability across different categories. All these features are welcomed for real-world applications. Experiments show its superiority against state-of-the-art graph matching learning methods. | Recently a number of studies show various techniques for affinity function learning. Based on the extent to which ground truth correspondence information is used for training, methods are either unsupervised @cite_16 , semi-supervised @cite_24 , or supervised @cite_43 @cite_30 @cite_19 . | {
"cite_N": [
"@cite_30",
"@cite_24",
"@cite_43",
"@cite_19",
"@cite_16"
],
"mid": [
"2142726150",
"2126448214",
"2150760714",
"2799132636",
"2152953631"
],
"abstract": [
"Many tasks in computer vision are formulated as graph matching problems. Despite the NP-hard nature of the problem, fast and accurate approximations have led to significant progress in a wide range of applications. Learning graph models from observed data, however, still remains a challenging issue. This paper presents an effective scheme to parameterize a graph model, and learn its structural attributes for visual object matching. For this, we propose a graph representation with histogram-based attributes, and optimize them to increase the matching accuracy. Experimental evaluations on synthetic and real image datasets demonstrate the effectiveness of our approach, and show significant improvement in matching accuracy over graphs with pre-defined structures.",
"Graph and hypergraph matching are important problems in computer vision. They are successfully used in many applications requiring 2D or 3D feature matching, such as 3D reconstruction and object recognition. While graph matching is limited to using pairwise relationships, hypergraph matching permits the use of relationships between sets of features of any order. Consequently, it carries the promise to make matching more robust to changes in scale, deformations and outliers. In this paper we make two contributions. First, we present a first semi-supervised algorithm for learning the parameters that control the hypergraph matching model and demonstrate experimentally that it significantly improves the performance of current state-of-the-art methods. Second, we propose a novel efficient hypergraph matching algorithm, which outperforms the state-of-the-art, and, when used in combination with other higher-order matching algorithms, it consistently improves their performance.",
"As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.",
"The problem of graph matching under node and pairwise constraints is fundamental in areas as diverse as combinatorial optimization, machine learning or computer vision, where representing both the relations between nodes and their neighborhood structure is essential. We present an end-to-end model that makes it possible to learn all parameters of the graph matching process, including the unary and pairwise node neighborhoods, represented as deep feature extraction hierarchies. The challenge is in the formulation of the different matrix computation layers of the model in a way that enables the consistent, efficient propagation of gradients in the complete pipeline from the loss function, through the combinatorial optimization layer solving the matching problem, and the feature extraction hierarchy. Our computer vision experiments and ablation studies on challenging datasets like PASCAL VOC keypoints, Sintel and CUB show that matching models refined end-to-end are superior to counterparts based on feature hierarchies trained for other problems.",
"Graph matching is an essential problem in computer vision that has been successfully applied to 2D and 3D feature matching and object recognition. Despite its importance, little has been published on learning the parameters that control graph matching, even though learning has been shown to be vital for improving the matching rate. In this paper we show how to perform parameter learning in an unsupervised fashion, that is when no correct correspondences between graphs are given during training. Our experiments reveal that unsupervised learning compares favorably to the supervised case, both in terms of efficiency and quality, while avoiding the tedious manual labeling of ground truth correspondences. We verify experimentally that our learning method can improve the performance of several state-of-the art graph matching algorithms. We also show that a similar method can be successfully applied to parameter learning for graphical models and demonstrate its effectiveness empirically."
]
} |
1904.00597 | 2931335216 | Graph matching refers to finding node correspondence between graphs, such that the corresponding node and edge's affinity can be maximized. In addition with its NP-completeness nature, another important challenge is effective modeling of the node-wise and structure-wise affinity across graphs and the resulting objective, to guide the matching procedure effectively finding the true matching against noises. To this end, this paper devises an end-to-end differentiable deep network pipeline to learn the affinity for graph matching. It involves a supervised permutation loss regarding with node correspondence to capture the combinatorial nature for graph matching. Meanwhile deep graph embedding models are adopted to parameterize both intra-graph and cross-graph affinity functions, instead of the traditional shallow and simple parametric forms e.g. a Gaussian kernel. The embedding can also effectively capture the higher-order structure beyond second-order edges. The permutation loss model is agnostic to the number of nodes, and the embedding model is shared among nodes such that the network allows for varying numbers of nodes in graphs for training and inference. Moreover, our network is class-agnostic with some generalization capability across different categories. All these features are welcomed for real-world applications. Experiments show its superiority against state-of-the-art graph matching learning methods. | Previous graph matching affinity learning methods are mostly based on simple and shallow parametric models, which use popular distances (typically a weighted Euclidean distance) in the node and edge feature space plus a similarity kernel function (e.g. a Gaussian kernel) to derive the final affinity score. In particular, a unified (shallow) parametric graph structure learning model between two graphs is devised in a vector form @math @cite_30 . The authors of @cite_30 observe that this simple model can incorporate most previous shallow learning models, including @cite_43 @cite_16 @cite_12 . Therefore, this method is compared in our experiments. | {
"cite_N": [
"@cite_30",
"@cite_43",
"@cite_16",
"@cite_12"
],
"mid": [
"2142726150",
"2150760714",
"2152953631",
"1744214816"
],
"abstract": [
"Many tasks in computer vision are formulated as graph matching problems. Despite the NP-hard nature of the problem, fast and accurate approximations have led to significant progress in a wide range of applications. Learning graph models from observed data, however, still remains a challenging issue. This paper presents an effective scheme to parameterize a graph model, and learn its structural attributes for visual object matching. For this, we propose a graph representation with histogram-based attributes, and optimize them to increase the matching accuracy. Experimental evaluations on synthetic and real image datasets demonstrate the effectiveness of our approach, and show significant improvement in matching accuracy over graphs with pre-defined structures.",
"As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this theme is about designing efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.",
"Graph matching is an essential problem in computer vision that has been successfully applied to 2D and 3D feature matching and object recognition. Despite its importance, little has been published on learning the parameters that control graph matching, even though learning has been shown to be vital for improving the matching rate. In this paper we show how to perform parameter learning in an unsupervised fashion, that is when no correct correspondences between graphs are given during training. Our experiments reveal that unsupervised learning compares favorably to the supervised case, both in terms of efficiency and quality, while avoiding the tedious manual labeling of ground truth correspondences. We verify experimentally that our learning method can improve the performance of several state-of-the art graph matching algorithms. We also show that a similar method can be successfully applied to parameter learning for graphical models and demonstrate its effectiveness empirically.",
"In this paper we present a new approach for establishing correspondences between sparse image features related by an unknown non-rigid mapping and corrupted by clutter and occlusion, such as points extracted from a pair of images containing a human figure in distinct poses. We formulate this matching task as an energy minimization problem by defining a complex objective function of the appearance and the spatial arrangement of the features. Optimization of this energy is an instance of graph matching, which is in general a NP-hard problem. We describe a novel graph matching optimization technique, which we refer to as dual decomposition (DD), and demonstrate on a variety of examples that this method outperforms existing graph matching algorithms. In the majority of our examples DD is able to find the global minimum within a minute. The ability to globally optimize the objective allows us to accurately learn the parameters of our matching model from training examples. We show on several matching tasks that our learned model yields results superior to those of state-of-the-art methods."
]
} |
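The shallow affinity model described in the preceding entry — a weighted Euclidean distance between node or edge features passed through a Gaussian kernel — can be sketched in a few lines. This is a minimal, self-contained illustration with made-up feature dimensions, a hand-set weight vector and bandwidth; it is not the learned model of @cite_30.

```python
import numpy as np

def gaussian_edge_affinity(F1, F2, w, sigma=1.0):
    """Pairwise edge affinities between two graphs.

    F1: (m1, d) edge features of graph 1, F2: (m2, d) edge features of graph 2,
    w:  (d,) non-negative feature weights (the learnable part in shallow models).
    Returns an (m1, m2) matrix K with K[i, j] = exp(-||w * (F1[i] - F2[j])||^2 / sigma^2).
    """
    diff = F1[:, None, :] - F2[None, :, :]          # (m1, m2, d) pairwise differences
    dist2 = np.sum((w * diff) ** 2, axis=-1)        # weighted squared Euclidean distance
    return np.exp(-dist2 / sigma ** 2)              # Gaussian kernel

# Toy example: 4 edges vs. 5 edges, 3-dimensional edge features (all values made up).
rng = np.random.default_rng(0)
F1, F2 = rng.normal(size=(4, 3)), rng.normal(size=(5, 3))
w = np.array([1.0, 0.5, 2.0])                       # hypothetical learned weights
K = gaussian_edge_affinity(F1, F2, w, sigma=1.5)
print(K.shape, float(K.min()), float(K.max()))      # affinities lie in (0, 1]
```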
1904.00597 | 2931335216 | Graph matching refers to finding node correspondence between graphs, such that the corresponding node and edge's affinity can be maximized. In addition with its NP-completeness nature, another important challenge is effective modeling of the node-wise and structure-wise affinity across graphs and the resulting objective, to guide the matching procedure effectively finding the true matching against noises. To this end, this paper devises an end-to-end differentiable deep network pipeline to learn the affinity for graph matching. It involves a supervised permutation loss regarding with node correspondence to capture the combinatorial nature for graph matching. Meanwhile deep graph embedding models are adopted to parameterize both intra-graph and cross-graph affinity functions, instead of the traditional shallow and simple parametric forms e.g. a Gaussian kernel. The embedding can also effectively capture the higher-order structure beyond second-order edges. The permutation loss model is agnostic to the number of nodes, and the embedding model is shared among nodes such that the network allows for varying numbers of nodes in graphs for training and inference. Moreover, our network is class-agnostic with some generalization capability across different categories. All these features are welcomed for real-world applications. Experiments show its superiority against state-of-the-art graph matching learning methods. | There is a seminal work @cite_19 presenting a method for adopting deep neural networks to learn the affinity matrix for graph matching. However, as will be shown later in the paper, their pixel-offset-based loss function does not fit well with the combinatorial nature of matching, which will be addressed in this paper. In addition, node embedding is not considered, although it has been shown to effectively capture the local structure of a node and to go beyond second-order relations for more effective affinity modeling. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2799132636"
],
"abstract": [
"The problem of graph matching under node and pairwise constraints is fundamental in areas as diverse as combinatorial optimization, machine learning or computer vision, where representing both the relations between nodes and their neighborhood structure is essential. We present an end-to-end model that makes it possible to learn all parameters of the graph matching process, including the unary and pairwise node neighborhoods, represented as deep feature extraction hierarchies. The challenge is in the formulation of the different matrix computation layers of the model in a way that enables the consistent, efficient propagation of gradients in the complete pipeline from the loss function, through the combinatorial optimization layer solving the matching problem, and the feature extraction hierarchy. Our computer vision experiments and ablation studies on challenging datasets like PASCAL VOC keypoints, Sintel and CUB show that matching models refined end-to-end are superior to counterparts based on feature hierarchies trained for other problems."
]
} |
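To make the contrast with the pixel-offset loss concrete, the sketch below shows one common way to realize a permutation-style supervision: a predicted score matrix is turned into a (near) doubly-stochastic assignment via Sinkhorn normalization and compared with the ground-truth permutation through a cross-entropy loss. The iteration count and the way scores are produced are illustrative assumptions, not the exact formulation of the cited paper.

```python
import numpy as np

def sinkhorn(scores, n_iters=20, eps=1e-9):
    """Alternating row/column normalization of a positive score matrix."""
    S = np.exp(scores)                                   # ensure positivity
    for _ in range(n_iters):
        S = S / (S.sum(axis=1, keepdims=True) + eps)     # row-normalize
        S = S / (S.sum(axis=0, keepdims=True) + eps)     # column-normalize
    return S

def permutation_cross_entropy(S, P, eps=1e-9):
    """Element-wise binary cross-entropy between soft assignment S and permutation P."""
    return -np.mean(P * np.log(S + eps) + (1 - P) * np.log(1 - S + eps))

rng = np.random.default_rng(0)
n = 5
scores = rng.normal(size=(n, n))             # stand-in for network-predicted affinities
P = np.eye(n)[rng.permutation(n)]            # ground-truth node correspondence
S = sinkhorn(scores)
print("permutation loss:", permutation_cross_entropy(S, P))
```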
1904.00597 | 2931335216 | Graph matching refers to finding node correspondence between graphs, such that the corresponding node and edge's affinity can be maximized. In addition with its NP-completeness nature, another important challenge is effective modeling of the node-wise and structure-wise affinity across graphs and the resulting objective, to guide the matching procedure effectively finding the true matching against noises. To this end, this paper devises an end-to-end differentiable deep network pipeline to learn the affinity for graph matching. It involves a supervised permutation loss regarding with node correspondence to capture the combinatorial nature for graph matching. Meanwhile deep graph embedding models are adopted to parameterize both intra-graph and cross-graph affinity functions, instead of the traditional shallow and simple parametric forms e.g. a Gaussian kernel. The embedding can also effectively capture the higher-order structure beyond second-order edges. The permutation loss model is agnostic to the number of nodes, and the embedding model is shared among nodes such that the network allows for varying numbers of nodes in graphs for training and inference. Moreover, our network is class-agnostic with some generalization capability across different categories. All these features are welcomed for real-world applications. Experiments show its superiority against state-of-the-art graph matching learning methods. | Graph matching is combinatorial in nature. There is an emerging thread of work that uses learning to seek efficient solutions, especially with deep networks. In @cite_41 , the well-known NP-hard problem of coloring very large graphs is addressed using deep reinforcement learning, and the resulting algorithm can learn new state-of-the-art heuristics for graph coloring. The Travelling Salesman Problem (TSP) is studied in @cite_29 , where the authors propose a graph attention network based method that learns a heuristic algorithm employing a neural network policy to find a tour. Deep learning on node sets is also explored in @cite_25 , which seeks permutation-invariant objective functions over a set of nodes. | {
"cite_N": [
"@cite_41",
"@cite_29",
"@cite_25"
],
"mid": [
"2919364174",
"2790770538",
""
],
"abstract": [
"We show that recent innovations in deep reinforcement learning can effectively color very large graphs -- a well-known NP-hard problem with clear commercial applications. Because the Monte Carlo Tree Search with Upper Confidence Bound algorithm used in AlphaGoZero can improve the performance of a given heuristic, our approach allows deep neural networks trained using high performance computing (HPC) technologies to transform computation into improved heuristics with zero prior knowledge. Key to our approach is the introduction of a novel deep neural network architecture (FastColorNet) that has access to the full graph context and requires @math time and space to color a graph with @math vertices, which enables scaling to very large graphs that arise in real applications like parallel computing, compilers, numerical solvers, and design automation, among others. As a result, we are able to learn new state of the art heuristics for graph coloring.",
"We propose a framework for solving combinatorial optimization problems of which the output can be represented as a sequence of input elements. As an alternative to the Pointer Network, we parameterize a policy by a model based entirely on (graph) attention layers, and train it efficiently using REINFORCE with a simple and robust baseline based on a deterministic (greedy) rollout of the best policy found during training. We significantly improve over state-of-the-art results for learning algorithms for the 2D Euclidean TSP, reducing the optimality gap for a single tour construction by more than 75 (to 0.33 ) and 50 (to 2.28 ) for instances with 20 and 50 nodes respectively.",
""
]
} |
1904.00625 | 2929753309 | The performance on deep learning is significantly affected by volume of training data. Models pre-trained from massive dataset such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large dataset are important for the development of deep learning in 3D medical images. However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimension (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3 to 20 . Transferring our Med3D model on state-the-of-art DenseASPP segmentation network, in case of single model, we achieve 94.6 Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge. | @cite_20 have shown that the greater the amount of data, the better the performance of deep learning networks. For natural images, many large-scale datasets have been built over the past decades, such as ImageNet @cite_35 , PASCAL VOC @cite_6 , MS COCO @cite_29 , etc., which provide abundant annotations on millions of images. Pre-trained models based on these large-scale datasets can extract useful general features that are widely used in classification, detection, and segmentation tasks. Studies @cite_11 @cite_26 @cite_41 @cite_17 @cite_33 have repeatedly shown that pre-trained models can accelerate training convergence and increase the accuracy of the target model. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_33",
"@cite_41",
"@cite_29",
"@cite_17",
"@cite_6",
"@cite_20",
"@cite_11"
],
"mid": [
"2117539524",
"2346062110",
"2901394229",
"2117499988",
"1861492603",
"2293499654",
"2031489346",
"2962843773",
"2521537533"
],
"abstract": [
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.",
"Training a deep convolutional neural network (CNN) from scratch is difficult because it requires a large amount of labeled training data and a great deal of expertise to ensure proper convergence. A promising alternative is to fine-tune a CNN that has been pre-trained using, for instance, a large set of labeled natural images. However, the substantial differences between natural and medical images may advise against such knowledge transfer. In this paper, we seek to answer the following central question in the context of medical image analysis: Can the use of pre-trained deep CNNs with sufficient fine-tuning eliminate the need for training a deep CNN from scratch? To address this question, we considered four distinct medical imaging applications in three specialties (radiology, cardiology, and gastroenterology) involving classification, detection, and segmentation from three different imaging modalities, and investigated how the performance of deep CNNs trained from scratch compared with the pre-trained CNNs fine-tuned in a layer-wise manner. Our experiments consistently demonstrated that 1) the use of a pre-trained CNN with adequate fine-tuning outperformed or, in the worst case, performed as well as a CNN trained from scratch; 2) fine-tuned CNNs were more robust to the size of training sets than CNNs trained from scratch; 3) neither shallow tuning nor deep tuning was the optimal choice for a particular application; and 4) our layer-wise fine-tuning scheme could offer a practical way to reach the best performance for the application at hand based on the amount of available data.",
"We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pre-trained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10 of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data---a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of pre-training and fine-tuning' in computer vision.",
"Whereas theoretical work suggests that deep architectures might be more e cient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this di cult learning problem. Answering these questions is important if learning in deep architectures is to be further improved. We attempt to shed some light on these questions through extensive simulations. The experiments confirm and clarify the advantage of unsupervised pre-training. They demonstrate the robustness of the training procedure with respect to the random initialization, the positive e ect of pre-training in terms of optimization and its role as a regularizer. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"In this paper, we examined the effectiveness of deep convolutional neural network (DCNN) for food photo recognition task. Food recognition is a kind of fine-grained visual recognition which is relatively harder problem than conventional image recognition. To tackle this problem, we sought the best combination of DCNN-related techniques such as pre-training with the large-scale ImageNet data, fine-tuning and activation features extracted from the pre-trained DCNN. From the experiments, we concluded the fine-tuned DCNN which was pre-trained with 2000 categories in the ImageNet including 1000 food-related categories was the best method, which achieved 78.77 as the top-1 accuracy for UEC-FOOD100 and 67.57 for UEC-FOOD256, both of which were the best results so far. In addition, we applied the food classifier employing the best combination of the DCNN techniques to Twitter photo data. We have achieved the great improvements on food photo mining in terms of both the number of food photos and accuracy. In addition to its high classification accuracy, we found that DCNN was very suitable for large-scale image data, since it takes only 0.03 seconds to classify one food photo with GPU.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10 × or 100 × ? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between ‘enormous data’ and visual deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically based on volume of training data size. Second, we show that representation learning (or pre-training) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires vision community to not undervalue the data and develop collective efforts in building larger datasets.",
"The ability to automatically learn task specific feature representations has led to a huge success of deep learning methods. When large training data is scarce, such as in medical imaging problems, transfer learning has been very effective. In this paper, we systematically investigate the process of transferring a Convolutional Neural Network, trained on ImageNet images to perform image classification, to kidney detection problem in ultrasound images. We study how the detection performance depends on the extent of transfer. We show that a transferred and tuned CNN can outperform a state-of-the-art feature engineered pipeline and a hybridization of these two techniques achieves 20 higher performance. We also investigate how the evolution of intermediate response images from our network. Finally, we compare these responses to state-of-the-art image processing filters in order to gain greater insight into how transfer learning is able to effectively manage widely varying imaging regimes."
]
} |
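The transfer-learning recipe this entry alludes to — initialize from a pre-trained checkpoint, replace the task head, fine-tune — follows a standard pattern. The sketch below is generic PyTorch with a toy backbone and a hypothetical checkpoint path; it is not the Med3D code or its 3D architectures.

```python
import torch
import torch.nn as nn

# Toy "backbone" standing in for a pre-trained feature extractor.
backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU())

# In practice the weights would come from large-scale pre-training, e.g.:
# backbone.load_state_dict(torch.load("pretrained_backbone.pth"))  # hypothetical path

head = nn.Linear(128, 2)                    # new task-specific head (e.g. binary classification)
model = nn.Sequential(backbone, head)

# Common fine-tuning choice: a smaller learning rate for the pre-trained layers.
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-4},
    {"params": head.parameters(), "lr": 1e-3},
])
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))   # dummy batch
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```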
1904.00625 | 2929753309 | The performance on deep learning is significantly affected by volume of training data. Models pre-trained from massive dataset such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy. Similarly, models based on large dataset are important for the development of deep learning in 3D medical images. However, it is extremely challenging to build a sufficiently large dataset due to difficulty of data acquisition and annotation in 3D medical imaging. We aggregate the dataset from several medical challenges to build 3DSeg-8 dataset with diverse modalities, target organs, and pathologies. To extract general medical three-dimension (3D) features, we design a heterogeneous 3D network called Med3D to co-train multi-domain 3DSeg-8 so as to make a series of pre-trained models. We transfer Med3D pre-trained models to lung segmentation in LIDC dataset, pulmonary nodule classification in LIDC dataset and liver segmentation on LiTS challenge. Experiments show that the Med3D can accelerate the training convergence speed of target 3D medical tasks 2 times compared with model pre-trained on Kinetics dataset, and 10 times compared with training from scratch as well as improve accuracy ranging from 3 to 20 . Transferring our Med3D model on state-the-of-art DenseASPP segmentation network, in case of single model, we achieve 94.6 Dice coefficient which approaches the result of top-ranged algorithms on the LiTS challenge. | Pan and Yang @cite_3 have indicated that the more similar the data distributions of the source and target domains, the better the transfer effect. Based on this observation and the defects discussed above, we believe that a model pre-trained on a 3D medical dataset should be superior to one pre-trained on natural scene videos for 3D medical target tasks. Given the lack of large-scale 3D medical images, models co-trained on a multi-domain 3D medical image dataset may be a solution. Training a network across different domains simultaneously is a challenging task. @cite_38 proposed a data-dependent regularizer based on support vector machines for video concept detection. @cite_46 introduced a mixture-transform model for object classification, and @cite_7 presented a domain adaptation network for video tracking. Since the pixel representation and the range of pixel values of medical images differ completely across domains, the above methods for natural images cannot be directly applied to medical imaging. | {
"cite_N": [
"@cite_38",
"@cite_46",
"@cite_7",
"@cite_3"
],
"mid": [
"2069057437",
"1822439997",
"1857884451",
"2165698076"
],
"abstract": [
"We propose a multiple source domain adaptation method, referred to as Domain Adaptation Machine (DAM), to learn a robust decision function (referred to as target classifier) for label prediction of patterns from the target domain by leveraging a set of pre-computed classifiers (referred to as auxiliary source classifiers) independently learned with the labeled patterns from multiple source domains. We introduce a new data-dependent regularizer based on smoothness assumption into Least-Squares SVM (LS-SVM), which enforces that the target classifier shares similar decision values with the auxiliary classifiers from relevant source domains on the unlabeled patterns of the target domain. In addition, we employ a sparsity regularizer to learn a sparse target classifier. Comprehensive experiments on the challenging TRECVID 2005 corpus demonstrate that DAM outperforms the existing multiple source domain adaptation methods for video concept detection in terms of effectiveness and efficiency.",
"Recent domain adaptation methods successfully learn cross-domain transforms to map points between source and target domains. Yet, these methods are either restricted to a single training domain, or assume that the separation into source domains is known a priori. However, most available training data contains multiple unknown domains. In this paper, we present both a novel domain transform mixture model which outperforms a single transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains. Our discovery method is based on a novel hierarchical clustering technique that uses available object category information to constrain the set of feasible domain separations. To illustrate the effectiveness of our approach we present experiments on two commonly available image datasets with and without known domain labels: in both cases our method outperforms baseline techniques which use no domain adaptation or domain adaptation methods that presume a single underlying domain shift.",
"We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research."
]
} |
1904.00551 | 2926593466 | Weakly supervised object detection aims at learning precise object detectors, given image category labels. In recent prevailing works, this problem is generally formulated as a multiple instance learning module guided by an image classification loss. The object bounding box is assumed to be the one contributing most to the classification among all proposals. However, the region contributing most is also likely to be a crucial part or the supporting context of an object. To obtain a more accurate detector, in this work we propose a novel end-to-end weakly supervised detection approach, where a newly introduced generative adversarial segmentation module interacts with the conventional detection module in a collaborative loop. The collaboration mechanism takes full advantages of the complementary interpretations of the weakly supervised localization task, namely detection and segmentation tasks, forming a more comprehensive solution. Consequently, our method obtains more precise object bounding boxes, rather than parts or irrelevant surroundings. Expectedly, the proposed method achieves an accuracy of 51.0 on the PASCAL VOC 2007 dataset, outperforming the state-of-the-arts and demonstrating its superiority for weakly supervised object detection. | MIL @cite_37 is a concept in machine learning illustrating the essence of the inexact supervision problem, in which only coarse-grained labels are available @cite_5 . Formally, given a training image @math , all instances in some specific form constitute a "bag". Object proposals (in the detection task) or image pixels (in the segmentation task) can be different forms of instances. If the image @math is labeled with class @math , then the "bag" of @math is positive with regard to @math , meaning that there is at least one positive instance of class @math in this bag. If @math is not labeled with class @math , the corresponding "bag" is negative with regard to @math and there is no instance of class @math in this image. MIL models aim at predicting the label of an input bag and, more importantly, finding the positive instances in positive bags. | {
"cite_N": [
"@cite_5",
"@cite_37"
],
"mid": [
"2746791238",
"2110119381"
],
"abstract": [
"Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of the data-labeling process. Thus, it is desirable for machine-learning techniques to work with weak supervision. This article reviews some research progress of weakly supervised learning, focusing on three typical types of weak supervision: incomplete supervision, where only a subset of training data is given with labels; inexact supervision, where the training data are given with only coarse-grained labels; and inaccurate supervision, where the given labels are not always ground-truth.",
"The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms."
]
} |
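A common way to operationalize the bag/instance formulation described in the entry above is max-pooling over instance scores: a bag is scored by its highest-scoring instance, which is also the natural candidate for the positive instance. The sketch below is a minimal numpy illustration of that assumption with toy data, not the specific MIL module of the cited paper.

```python
import numpy as np

def bag_prediction(instance_scores, threshold=0.5):
    """instance_scores: (num_instances, num_classes) per-proposal class probabilities.

    Returns the bag-level label per class (MIL assumption: a bag is positive for a
    class iff at least one instance is) and the index of the top instance per class.
    """
    bag_scores = instance_scores.max(axis=0)            # best instance per class
    top_instance = instance_scores.argmax(axis=0)       # candidate positive instance
    return bag_scores >= threshold, bag_scores, top_instance

rng = np.random.default_rng(0)
proposals = rng.uniform(size=(10, 3))                   # 10 proposals, 3 classes (toy data)
labels, scores, picks = bag_prediction(proposals)
print("bag labels:", labels)
print("top proposal per class:", picks)
```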
1904.00152 | 2933168239 | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and remove outliers that lie away from this subspace. It is used together with an encoder and a decoder. The encoder maps the data into the latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a manifold" close to the original data. We illustrate algorithmic choices and performance for artificial data with corrupted manifold structure. We also demonstrate competitive precision and recall for image datasets. | Several recent works have used autoencoders for anomaly detection. Autoencoders tend to maintain the main structure in the data and often produce large reconstruction errors for outliers, which help identify them. The earliest work on anomaly detection by large reconstruction error of an autoencoder is @cite_12 . It applies an iterative and cyclic scheme, where in each iteration, an inlier set is determined and used for updating the parameters of the autoencoder. @cite_46 apply @math normalization for the latent code of the autoencoder and also consider the case of multiple modes for the normal samples. Instead of using the reconstruction error, they apply @math -means clustering for the latent code, and identify outliers as points whose latent representations are far from all the cluster centers. @cite_11 also use an autoencoder with clustered latent code, but they fit a Gaussian Mixture Model using an additional neural network. Restricted Boltzmann Machines (RBMs) are similar to autoencoders. @cite_5 define "energy functions" for RBMs that are similar to the reconstruction losses for autoencoders. They identify anomalous samples according to large energy values. | {
"cite_N": [
"@cite_5",
"@cite_46",
"@cite_12",
"@cite_11"
],
"mid": [
"2398119937",
"2782351397",
"2204904589",
"2786088545"
],
"abstract": [
"In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures. We propose deep structured energy based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure. We develop novel model architectures to integrate EBMs with different types of data such as static data, sequential data, and spatial data, and apply appropriate model architectures to adapt to the data structure. Our training algorithm is built upon the recent development of score matching sm , which connects an EBM with a regularized autoencoder, eliminating the need for complicated sampling method. Statistically sound decision criterion can be derived for anomaly detection purpose from the perspective of the energy landscape of the data distribution. We investigate two decision criteria for performing anomaly detection: the energy score and the reconstruction error. Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all the competing methods.",
"This paper uses network packet capture data to demonstrate how Robust Principal Component Analysis (RPCA) can be used in a new way to detect anomalies which serve as cyber-network attack indicators. The approach requires only a few parameters to be learned using partitioned training data and shows promise of ameliorating the need for an exhaustive set of examples of different types of network attacks. For Lincoln Lab's DARPA intrusion detection data set, the method achieves low false-positive rates while maintaining reasonable true-positive rates on individual packets. In addition, the method correctly detected packet streams in which an attack which was not previously encountered, or trained on, appears.",
"We study the problem of automatically removing outliers from noisy data, with application for removing outlier images from an image collection. We address this problem by utilizing the reconstruction errors of an autoencoder. We observe that when data are reconstructed from low-dimensional representations, the inliers and the outliers can be well separated according to their reconstruction errors. Based on this basic observation, we gradually inject discriminative information in the learning process of an autoencoder to make the inliers and the outliers more separable. Experiments on a variety of image datasets validate our approach.",
"Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14 improvement based on the standard F1 score."
]
} |
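The reconstruction-error criterion shared by the works in the entry above can be demonstrated with a very small autoencoder. The sketch below trains a linear autoencoder on synthetic 2D data lying near a one-dimensional subspace plus a few off-manifold points, then flags the samples with the largest reconstruction error; the sizes, epoch count and contamination rate are arbitrary choices for the illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Inliers near a 1-D subspace of R^2, plus a few scattered outliers.
t = torch.randn(200, 1)
inliers = torch.cat([t, 2 * t], dim=1) + 0.05 * torch.randn(200, 2)
outliers = 4 * torch.rand(10, 2) - 2
X = torch.cat([inliers, outliers])

autoencoder = nn.Sequential(nn.Linear(2, 1), nn.Linear(1, 2))   # linear AE with a 1-D code
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-2)

for _ in range(500):
    optimizer.zero_grad()
    loss = ((autoencoder(X) - X) ** 2).mean()   # plain reconstruction loss
    loss.backward()
    optimizer.step()

errors = ((autoencoder(X) - X) ** 2).sum(dim=1)
flagged = errors.topk(10).indices               # samples with the largest reconstruction error
print(sorted(flagged.tolist()))                 # indices 200..209 (the outliers) should dominate
```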
1904.00152 | 2933168239 | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and remove outliers that lie away from this subspace. It is used together with an encoder and a decoder. The encoder maps the data into the latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a manifold" close to the original data. We illustrate algorithmic choices and performance for artificial data with corrupted manifold structure. We also demonstrate competitive precision and recall for image datasets. | The above works are designed for datasets with a small fraction of outliers. However, when this fraction increases, outliers are often not distinguished by high reconstruction errors or low similarity scores. In order to identify them, additional assumptions on the structure of the normal data need to be incorporated. For example, Zhou and Paffenroth @cite_18 decompose the input data into two parts: low-rank and sparse (or column-sparse). The low-rank part is fed into an autoencoder and the sparse part is imposed as a penalty term with the @math -norm (or @math -norm for column-sparsity). | {
"cite_N": [
"@cite_18"
],
"mid": [
"2743138268"
],
"abstract": [
"Deep autoencoders, and other deep neural networks, have demonstrated their effectiveness in discovering non-linear features across many problem domains. However, in many real-world problems, large outliers and pervasive noise are commonplace, and one may not have access to clean training data as required by standard deep denoising autoencoders. Herein, we demonstrate novel extensions to deep autoencoders which not only maintain a deep autoencoders' ability to discover high quality, non-linear features but can also eliminate outliers and noise without access to any clean training data. Our model is inspired by Robust Principal Component Analysis, and we split the input data X into two parts, @math , where @math can be effectively reconstructed by a deep autoencoder and @math contains the outliers and noise in the original data X. Since such splitting increases the robustness of standard deep autoencoders, we name our model a \"Robust Deep Autoencoder (RDA)\". Further, we present generalizations of our results to grouped sparsity norms which allow one to distinguish random anomalies from other types of structured corruptions, such as a collection of features being corrupted across many instances or a collection of instances having more corruptions than their fellows. Such \"Group Robust Deep Autoencoders (GRDA)\" give rise to novel anomaly detection approaches whose superior performance we demonstrate on a selection of benchmark problems."
]
} |
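The low-rank/sparse split used in the entry above can be imitated with a simple alternating scheme: reconstruct the clean part with a low-dimensional model, then update the sparse part by soft-thresholding the residual (the proximal step for an l1 penalty). To keep the sketch dependency-free, a rank-k SVD reconstruction stands in for the deep autoencoder of the cited work; the penalty weight and rank are arbitrary.

```python
import numpy as np

def soft_threshold(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_split(X, rank=1, lam=0.3, n_iters=20):
    """Alternately fit a rank-`rank` reconstruction of L = X - S and update S by
    soft-thresholding (the proximal operator of lam * ||S||_1)."""
    S = np.zeros_like(X)
    for _ in range(n_iters):
        L = X - S
        U, s, Vt = np.linalg.svd(L, full_matrices=False)
        L_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # stand-in for the autoencoder
        S = soft_threshold(X - L_hat, lam)                # residual absorbed by the sparse part
    return L_hat, S

rng = np.random.default_rng(0)
low_rank = np.outer(rng.normal(size=50), rng.normal(size=20))
sparse = (rng.uniform(size=(50, 20)) < 0.05) * rng.normal(scale=5.0, size=(50, 20))
L_hat, S = robust_split(low_rank + sparse)
print("recovered nonzeros:", int((np.abs(S) > 0).sum()), "true nonzeros:", int((sparse != 0).sum()))
```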
1904.00152 | 2933168239 | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and remove outliers that lie away from this subspace. It is used together with an encoder and a decoder. The encoder maps the data into the latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a manifold" close to the original data. We illustrate algorithmic choices and performance for artificial data with corrupted manifold structure. We also demonstrate competitive precision and recall for image datasets. | @cite_19 explore GAN-based methods for anomaly detection that simultaneously learn an encoder, a generator and a discriminator during training. Their procedure reduces the test time since it eliminates the need for recovering a latent representation. A combination of reconstruction and discrimination losses is used to determine whether a sample is anomalous. A similar idea is used in @cite_38 , where the score is the "fake-class probability". | {
"cite_N": [
"@cite_19",
"@cite_38"
],
"mid": [
"2787947370",
"2785731816"
],
"abstract": [
"Generative adversarial networks (GANs) are able to model the complex highdimensional distributions of real-world data, which suggests they could be effective for anomaly detection. However, few works have explored the use of GANs for the anomaly detection task. We leverage recently developed GAN models for anomaly detection, and achieve state-of-the-art performance on image and network intrusion datasets, while being several hundred-fold faster at test time than the only published GAN-based method.",
"The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection."
]
} |
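The combined score mentioned in the entry above — a reconstruction term plus a discriminator term — is typically a weighted sum. The sketch below wires this up with stand-in encoder/generator/discriminator callables so that only the scoring logic is shown; the weighting and the discriminator surrogate are illustrative assumptions, not the exact criterion of the cited works.

```python
import numpy as np

def anomaly_score(x, encoder, generator, discriminator, lam=0.1):
    """Weighted sum of a reconstruction error and a discriminator-based term."""
    x_hat = generator(encoder(x))
    recon = np.abs(x - x_hat).sum()                 # reconstruction (L1) term
    disc = -np.log(discriminator(x) + 1e-9)         # low 'real' probability -> high score
    return (1 - lam) * recon + lam * disc

# Stand-in networks: a fixed 1-D latent projection and a logistic "discriminator".
w = np.array([0.6, 0.8])
encoder = lambda x: x @ w                           # R^2 -> R
generator = lambda z: z * w                         # R -> R^2 (decode along the same direction)
discriminator = lambda x: 1.0 / (1.0 + np.exp(np.abs(x - generator(encoder(x))).sum() - 1.0))

print("inlier :", anomaly_score(np.array([0.6, 0.8]), encoder, generator, discriminator))
print("outlier:", anomaly_score(np.array([3.0, -2.0]), encoder, generator, discriminator))
```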
1904.00152 | 2933168239 | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and remove outliers that lie away from this subspace. It is used together with an encoder and a decoder. The encoder maps the data into the latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a manifold" close to the original data. We illustrate algorithmic choices and performance for artificial data with corrupted manifold structure. We also demonstrate competitive precision and recall for image datasets. | Another idea which directly affects this work is an addition of a linear self-expressive layer to an autoencoder @cite_21 . Such a layer is helpful for extracting useful latent features to tackle unsupervised subspace clustering problems. By imposing the self-expressiveness, the autoencoder is robust to an increasing number of clusters. Although self-expressiveness also improves robustness to noise and outliers, @cite_21 aims at clustering and thus its goal is different than ours. Furthermore, their self-expressive energy does not explicitly consider robustness, while ours does. @cite_26 consider a somewhat parallel idea of imposing a loss function to increase the robustness of representation. However, their goal is to increase the margin between classes and their method only applies to a supervised setting in anomaly detection, where the normal data is multi-modal. | {
"cite_N": [
"@cite_21",
"@cite_26"
],
"mid": [
"2963365397",
"2963240808"
],
"abstract": [
"We present a novel deep neural network architecture for unsupervised subspace clustering. This architecture is built upon deep auto-encoders, which non-linearly map the input data into a latent space. Our key idea is to introduce a novel self-expressive layer between the encoder and the decoder to mimic the \"self-expressiveness\" property that has proven effective in traditional subspace clustering. Being differentiable, our new self-expressive layer provides a simple but effective way to learn pairwise affinities between all data points through a standard back-propagation procedure. Being nonlinear, our neural-network based method is able to cluster data points having complex (often nonlinear) structures. We further propose pre-training and fine-tuning strategies that let us effectively learn the parameters of our subspace clustering networks. Our experiments show that the proposed method significantly outperforms the state-of-the-art unsupervised subspace clustering methods.",
"Deep neural networks trained using a softmax layer at the top and the cross-entropy loss are ubiquitous tools for image classification. Yet, this does not naturally enforce intra-class similarity nor inter-class margin of the learned deep representations. To simultaneously achieve these two goals, different solutions have been proposed in the literature, such as the pairwise or triplet losses. However, these carry the extra task of selecting pairs or triplets, and the extra computational burden of computing and learning for many combinations of them. In this paper, we propose a plug-and-play loss term for deep networks that explicitly reduces intra-class variance and enforces inter-class margin simultaneously, in a simple and elegant geometric manner. For each class, the deep features are collapsed into a learned linear subspace, or union of them, and inter-class subspaces are pushed to be as orthogonal as possible. Our proposed Orthogonal Low-rank Embedding (OLA‰) does not require carefully crafting pairs or triplets of samples for training, and works standalone as a classification loss, being the first reported deep metric learning framework of its kind. Because of the improved margin between features of different classes, the resulting deep networks generalize better, are more discriminative, and more robust. We demonstrate improved classification performance in general object recognition, plugging the proposed loss term into existing off-the-shelf architectures. In particular, we show the advantage of the proposed loss in the small data model scenario, and we significantly advance the state-of-the-art on the Stanford STL-10 benchmark."
]
} |
1904.00152 | 2933168239 | We propose a neural network for unsupervised anomaly detection with a novel robust subspace recovery layer (RSR layer). This layer seeks to extract the underlying subspace from a latent representation of the given data and remove outliers that lie away from this subspace. It is used together with an encoder and a decoder. The encoder maps the data into the latent space, from which the RSR layer extracts the subspace. The decoder then smoothly maps back the underlying subspace to a "manifold" close to the original data. We illustrate algorithmic choices and performance for artificial data with corrupted manifold structure. We also demonstrate competitive precision and recall for image datasets. | There are many kernel and manifold-learning methods for anomaly detection. Some of them were reviewed in @cite_25 . However, they are not as competitive as the methods reviewed here and are not directly relevant to our work. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2122646361"
],
"abstract": [
"Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with."
]
} |
1904.00325 | 2934708523 | Image representation is a fundamental task in computer vision. However, most of the existing approaches for image representation ignore the relations between images and consider each input image independently. Intuitively, relations between images can help to understand the images and maintain model consistency over related images. In this paper, we consider modeling the image-level relations to generate more informative image representations, and propose ImageGCN, an end-to-end graph convolutional network framework for multi-relational image modeling. We also apply ImageGCN to chest X-ray (CXR) images where rich relational information is available for disease identification. Unlike previous image representation models, ImageGCN learns the representation of an image using both its original pixel features and the features of related images. Besides learning informative representations for images, ImageGCN can also be used for object detection in a weakly supervised manner. The experimental results on the ChestX-ray14 dataset demonstrate that ImageGCN can outperform respective baselines in both disease identification and localization tasks and can achieve comparable and often better results than the state-of-the-art methods. | Previous research on relational models in computer vision has mainly focused on pixel-level relations @cite_45 @cite_30 , object-level relations @cite_16 @cite_40 @cite_27 @cite_64 @cite_42 and label-level relations @cite_11 @cite_50 . Image-level similarity relations were also studied in the literature @cite_54 @cite_50 . However, few studies have modeled the natural image-level relations for image representation. | {
"cite_N": [
"@cite_30",
"@cite_64",
"@cite_54",
"@cite_42",
"@cite_27",
"@cite_40",
"@cite_45",
"@cite_50",
"@cite_16",
"@cite_11"
],
"mid": [
"",
"2788537604",
"2770468159",
"2890531016",
"2607855566",
"2581887665",
"2963630186",
"2057508887",
"2050964073",
"2963052338"
],
"abstract": [
"",
"",
"We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on 'relational' tasks.",
"It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1 to 128.7 on COCO testing set.",
"Relationships among objects play a crucial role in image understanding. Despite the great success of deep learning techniques in recognizing individual objects, reasoning about the relationships among objects remains a challenging task. Previous methods often treat this as a classification problem, considering each type of relationship (e.g. ride) or each distinct visual phrase (e.g. person-ride-horse) as a category. Such approaches are faced with significant difficulties caused by the high diversity of visual appearance for each kind of relationships or the large number of distinct visual phrases. We propose an integrated framework to tackle this problem. At the heart of this framework is the Deep Relational Network, a novel formulation designed specifically for exploiting the statistical dependencies between objects and their relationships. On two large data sets, the proposed method achieves substantial improvement over state-of-the-art.",
"One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.",
"Spectral embedding provides a framework for solving perceptual organization problems, including image segmentation and figure ground organization. From an affinity matrix describing pairwise relationships between pixels, it clusters pixels into regions, and, using a complex-valued extension, orders pixels according to layer. We train a convolutional neural network (CNN) to directly predict the pair-wise relationships that define this affinity matrix. Spectral embedding then resolves these predictions into a globally-consistent segmentation and figure ground organization of the scene. Experiments demonstrate significant benefit to this direct coupling compared to prior works which use explicit intermediate stages, such as edge detection, on the pathway from image to affinities. Our results suggest spectral embedding as a powerful alternative to the conditional random field (CRF)-based globalization schemes typically coupled to deep neural networks.",
"Image annotation is usually formulated as a multi-label semi-supervised learning problem. Traditional graph-based methods only utilize the data (images) graph induced from image similarities, while ignore the label (semantic terms) graph induced from label correlations of a multi-label image data set. In this paper, we propose a novel Bi-relational Graph (BG) model that comprises both the data graph and the label graph as subgraphs, and connect them by an additional bipartite graph induced from label assignments. By considering each class and its labeled images as a semantic group, we perform random walk on the BG to produce group-to-vertex relevance, including class-to-image and class-to-class relevances. The former can be used to predict labels for unannotated images, while the latter are new class relationships, called as Causal Relationships (CR), which are asymmetric. CR is learned from input data and has better semantic meaning to enhance the label prediction for unannotated images. We apply the proposed approaches to automatic image annotation and semantic image retrieval tasks on four benchmark multi-label image data sets. The superior performance of our approaches compared to state-of-the-art multi-label classification methods demonstrate their effectiveness.",
"Psychologists have proposed that many human-object interaction activities form unique classes of scenes. Recognizing these scenes is important for many social functions. To enable a computer to do this is however a challenging task. Take people-playing-musical-instrument (PPMI) as an example; to distinguish a person playing violin from a person just holding a violin requires subtle distinction of characteristic image features and feature arrangements that differentiate these two scenes. Most of the existing image representation methods are either too coarse (e.g. BoW) or too sparse (e.g. constellation models) for performing this task. In this paper, we propose a new image feature representation called “grouplet”. The grouplet captures the structured information of an image by encoding a number of discriminative visual features and their spatial configurations. Using a dataset of 7 different PPMI activities, we show that grouplets are more effective in classifying and detecting human-object interactions than other state-of-the-art methods. In particular, our method can make a robust distinction between humans playing the instruments and humans co-occurring with the instruments without playing.",
"In this paper, we propose a novel deep learning architecture for multi-label zero-shot learning (ML-ZSL), which is able to predict multiple unseen class labels for each input instance. Inspired by the way humans utilize semantic knowledge between objects of interests, we propose a framework that incorporates knowledge graphs for describing the relationships between multiple labels. Our model learns an information propagation mechanism from the semantic label space, which can be applied to model the interdependencies between seen and unseen class labels. With such investigation of structured knowledge graphs for visual reasoning, we show that our model can be applied for solving multi-label classification and ML-ZSL tasks. Compared to state-of-the-art approaches, comparable or improved performances can be achieved by our method."
]
} |
1904.00129 | 2927016023 | This work presents computational methods for transferring body movements from one person to another with videos collected in the wild. Specifically, we train a personalized model on a single video from the Internet which can generate videos of this target person driven by the motions of other people. Our model is built on two generative networks: a human (foreground) synthesis net which generates photo-realistic imagery of the target person in a novel pose, and a fusion net which combines the generated foreground with the scene (background), adding shadows or reflections as needed to enhance realism. We validate the efficacy of our proposed models over baselines with qualitative and quantitative evaluations as well as a subjective test. | Pixel to pixel translation: Recent developments in generative models have started to make high quality photo-realistic image synthesis possible. Pixel to pixel translation frameworks are conditioned on input images or video frames and learn a mapping from the input domain to the same or different output domain. Image-to-image translation @cite_24 was one of the first papers to present a general framework for handling a variety of translation tasks, such as edge to pixel or grayscale to color translation, using a conditional Generative Adversarial Network (GAN) @cite_8 to learn mappings from paired data. CycleGAN @cite_7 further introduced a novel cycle consistency loss to learn domain translations without requiring paired image data. In follow-on work, Recycle-GAN @cite_19 added spatial and temporal constraints to enable video to video translation. Cascaded refinement frameworks @cite_10 @cite_9 were designed to synthesize images from semantic layouts, achieving visually appealing generation. To further improve results, pix2pixHD @cite_17 proposed a multi-scale conditional GAN to synthesize high-resolution images from semantic labels, while spatio-temporal adversarial constraints were added to generate temporally consistent results for video to video synthesis @cite_6 . | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_10",
"@cite_17"
],
"mid": [
"2962793481",
"2099471712",
"2963539305",
"2963841322",
"2963073614",
"2963917969",
"2963522749",
"2963800363"
],
"abstract": [
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"We present a semi-parametric approach to photographic image synthesis from semantic layouts. The approach combines the complementary strengths of parametric and nonparametric techniques. The nonparametric component is a memory bank of image segments constructed from a training set of images. Given a novel semantic layout at test time, the memory bank is used to retrieve photographic references that are provided as source material to a deep network. The synthesis is performed by a deep network that draws on the provided photographic material. Experiments on multiple semantic segmentation datasets show that the presented approach yields considerably more realistic images than recent purely parametric techniques.",
"We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image synthesis problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a novel approach for the video-to-video synthesis problem under adversarial learning framework. Through the introduction of new generator and discriminator architectures, coupled with a spatial-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, not possible before our work. Finally, we apply our approach to future video prediction, outperforming several state-of-the-art competing systems. (Note: using Adobe Reader is highly recommended to view the paper.)",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"We introduce a data-driven approach for unsupervised video retargeting that translates content from one domain to another while preserving the style native to a domain, i.e., if contents of John Oliver’s speech were to be transferred to Stephen Colbert, then the generated content speech should be in Stephen Colbert’s style. Our approach combines both spatial and temporal information along with adversarial losses for content translation and style preservation. In this work, we first study the advantages of using spatiotemporal constraints over spatial constraints for effective retargeting. We then demonstrate the proposed approach for the problems where information in both space and time matters such as face-to-face translation, flower-to-flower, wind and cloud synthesis, sunrise and sunset.",
"We present an approach to synthesizing photographic images conditioned on semantic layouts. Given a semantic label map, our approach produces an image with photographic appearance that conforms to the input layout. The approach thus functions as a rendering engine that takes a two-dimensional semantic specification of the scene and produces a corresponding photographic image. Unlike recent and contemporaneous work, our approach does not rely on adversarial training. We show that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective. The presented approach scales seamlessly to high resolutions; we demonstrate this by synthesizing photographic images at 2-megapixel resolution, the full resolution of our training data. Extensive perceptual experiments on datasets of outdoor and indoor scenes demonstrate that images synthesized by the presented approach are considerably more realistic than alternative approaches.",
"We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 A— 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing."
]
} |
1904.00170 | 2953187236 | In most recent years, zero-shot recognition (ZSR) has gained increasing attention in machine learning and image processing fields. It aims at recognizing unseen class instances with knowledge transferred from seen classes. This is typically achieved by exploiting a pre-defined semantic feature space (FS), i.e., semantic attributes or word vectors, as a bridge to transfer knowledge between seen and unseen classes. However, due to the absence of unseen classes during training, the conventional ZSR easily suffers from domain shift and hubness problems. In this paper, we propose a novel ZSR learning framework that can handle these two issues well by adaptively adjusting semantic FS. To the best of our knowledge, our work is the first to consider the adaptive adjustment of semantic FS in ZSR. Moreover, our solution can be formulated to a more efficient framework that significantly boosts the training. Extensive experiments show the remarkable performance improvement of our model compared with other existing methods. | Zero-shot recognition (ZSR) imitates human ability in recognizing new unseen classes. It is achieved by exploiting labeled seen class instances and certain knowledge that is shared between seen and unseen classes @cite_5 @cite_17 . This knowledge, i.e., attributes, exists in a high dimensional vector space called semantic feature space (FS). The attributes are meaningful high-level information about instances such as their shapes, colors, components, textures, etc. Semantic features describe a class or an instance, in contrast to the typical classification, which names an instance. Intuitively, the similar classes have similar patterns in the semantic FS. These particular patterns are called prototypes. In ZSR, the common practice is first to map an unseen class instance from its original FS, i.e., visual FS, to semantic FS by a mapping function trained on seen classes. Then with such semantic features, we search its most closely related prototype whose corresponding class is set to this instance. | {
"cite_N": [
"@cite_5",
"@cite_17"
],
"mid": [
"2128532956",
"2963220594"
],
"abstract": [
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.",
"Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach for conventional zero-shot, generalized zero-shot, and few-shot learning problems. Our approach is based on a novel class adapting principal directions’ (CAPDs) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. Then, it learns how to combine these directions to obtain the principal direction for each unseen class such that the CAPD of the test image is aligned with the semantic embedding of the true class and opposite to the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic process for the selection of the most useful seen classes for each unseen class to achieve robustness in zero-shot learning. Our method can update the unseen CAPD taking the advantages of few unseen images to work in a few-shot learning scenario. Furthermore, our method can generalize the seen CAPDs by estimating seen–unseen diversity that significantly improves the performance of generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few one-shot learning problems."
]
} |
1904.00170 | 2953187236 | In most recent years, zero-shot recognition (ZSR) has gained increasing attention in machine learning and image processing fields. It aims at recognizing unseen class instances with knowledge transferred from seen classes. This is typically achieved by exploiting a pre-defined semantic feature space (FS), i.e., semantic attributes or word vectors, as a bridge to transfer knowledge between seen and unseen classes. However, due to the absence of unseen classes during training, the conventional ZSR easily suffers from domain shift and hubness problems. In this paper, we propose a novel ZSR learning framework that can handle these two issues well by adaptively adjusting semantic FS. To the best of our knowledge, our work is the first to consider the adaptive adjustment of semantic FS in ZSR. Moreover, our solution can be formulated to a more efficient framework that significantly boosts the training. Extensive experiments show the remarkable performance improvement of our model compared with other existing methods. | However, as one of the key building blocks in ZSR, the mapping function is trained solely on seen classes. Although the knowledge is shared by both seen and unseen classes, the training and testing classes are intuitively different. Due to the absence of unseen classes during training, ZSR easily suffers from the domain shift problem @cite_8 which refers to the phenomenon that when mapping unseen class instances from their visual to semantic FS, the obtained results may shift away from the real ones (prototypes). Moreover, during searching step, a small number of prototypes may easily become the most related prototypes to most testing unseen class instances. This challenge is the so-called hubness problem @cite_21 . | {
"cite_N": [
"@cite_21",
"@cite_8"
],
"mid": [
"1492420801",
"2141350700"
],
"abstract": [
"This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks."
]
} |
1904.00170 | 2953187236 | In most recent years, zero-shot recognition (ZSR) has gained increasing attention in machine learning and image processing fields. It aims at recognizing unseen class instances with knowledge transferred from seen classes. This is typically achieved by exploiting a pre-defined semantic feature space (FS), i.e., semantic attributes or word vectors, as a bridge to transfer knowledge between seen and unseen classes. However, due to the absence of unseen classes during training, the conventional ZSR easily suffers from domain shift and hubness problems. In this paper, we propose a novel ZSR learning framework that can handle these two issues well by adaptively adjusting semantic FS. To the best of our knowledge, our work is the first to consider the adaptive adjustment of semantic FS in ZSR. Moreover, our solution can be formulated to a more efficient framework that significantly boosts the training. Extensive experiments show the remarkable performance improvement of our model compared with other existing methods. | To deal with these issues, several transductive learning based methods @cite_8 assume that the unseen class instances (unlabelled) are available at once during training. DeViSE @cite_2 trains a linear mapping between visual and semantic FS by an effective ranking loss formulation. ESZSL @cite_12 utilizes the square loss to learn the bilinear compatibility and adds regularization to the objective with respect to Frobenius norm. SSE @cite_20 uses the mixture of seen class parts as the intermediate FS. AMP @cite_16 embeds the visual features into the attribute space. SynC @math @cite_7 and CLN+KRR @cite_0 jointly embed several kinds of textual features and visual features to ground attributes. MFMR @cite_23 leverages the sophisticated technique of matrix tri-factorization with manifold regularizers to enhance the mapping between visual and semantic FS. With the popularity of generative adversarial networks (GANs), GANZrl @cite_15 applies GANs to synthesize instances with specified semantics to cover a higher diversity of seen classes. Instead, GAZSL @cite_9 leverages GANs to imagine unseen classes from text descriptions. Despite the efforts made, the domain shift and hubness problems are still open issues. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_12",
"@cite_20"
],
"mid": [
"2289084343",
"2141350700",
"2799215068",
"2610617295",
"",
"2123024445",
"2789002497",
"2962830213",
"652269744",
"2964086552"
],
"abstract": [
"Given semantic descriptions of object classes, zeroshot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.",
"Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"Robust object recognition systems usually rely on powerful feature extraction mechanisms from a large number of real images. However, in many realistic applications, collecting sufficient images for ever-growing new classes is unattainable. In this paper, we propose a new Zero-shot learning (ZSL) framework that can synthesise visual features for unseen classes without acquiring real images. Using the proposed Unseen Visual Data Synthesis (UVDS) algorithm, semantic attributes are effectively utilised as an intermediate clue to synthesise unseen visual features at the training stage. Hereafter, ZSL recognition is converted into the conventional supervised problem, i.e. the synthesised visual features can be straightforwardly fed to typical classifiers such as SVM. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data. Extensive experimental results suggest that our proposed approach significantly improve the state-of-the-art results.",
"",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"",
"This paper addresses the task of zero-shot image classification. The key contribution of the proposed approach is to control the semantic embedding of images – one of the main ingredients of zero-shot learning – by formulating it as a metric learning problem. The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. This results in a novel expression of zero-shot learning not requiring the notion of class in the training phase: only pairs of image attributes, augmented with a consistency indicator, are given as ground truth. At test time, the learned model can predict the consistency of a test image with a given set of attributes, allowing flexible ways to produce recognition inferences. Despite its simplicity, the proposed approach gives state-of-the-art results on four challenging datasets used for zero-shot recognition evaluation.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .",
"In this paper we consider a version of the zero-shot learning problem where seen class source and target domain data are provided. The goal during test-time is to accurately predict the class label of an unseen target domain instance based on revealed source domain side information (e.g. attributes) for unseen classes. Our method is based on viewing each source or target data as a mixture of seen class proportions and we postulate that the mixture patterns have to be similar if the two instances belong to the same unseen class. This perspective leads us to learning source target embedding functions that map an arbitrary source target domain data into a same semantic space where similarity can be readily measured. We develop a max-margin framework to learn these similarity functions and jointly optimize parameters by means of cross validation. Our test results are compelling, leading to significant improvement in terms of accuracy on most benchmark datasets for zero-shot recognition."
]
} |
1903.12650 | 2926655273 | There has been a strong demand for algorithms that can execute machine learning as fast as possible and the speed of deep learning has accelerated by 30 times only in the past two years. Distributed deep learning using the large mini-batch is a key technology to address the demand and is a great challenge as it is difficult to achieve high scalability on large clusters without compromising accuracy. In this paper, we introduce optimization methods which we applied to this challenge. We achieved the training time of 74.7 seconds using 2,048 GPUs on ABCI cluster applying these methods. The training throughput is over 1.73 million images/sec and the top-1 validation accuracy is 75.08%. | Generally, the mini-batch size should be large for distributed deep learning on large clusters. @cite_6 proposed the warm-up technique to keep the validation accuracy with a mini-batch size of 8,192. Google @cite_10 and Sony @cite_2 used a variable mini-batch size which becomes larger during training and achieved highly parallel processing. | {
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_2"
],
"mid": [
"",
"2622263826",
"2920668770"
],
"abstract": [
"",
"Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves 90 scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency.",
"Scaling the distributed deep learning to a massive GPU cluster level is challenging due to the instability of the large mini-batch training and the overhead of the gradient synchronization. We address the instability of the large mini-batch training with batch-size control and label smoothing. We address the overhead of the gradient synchronization with 2D-Torus all-reduce. Specifically, 2D-Torus all-reduce arranges GPUs in a logical 2D grid and performs a series of collective operation in different orientations. These two techniques are implemented with Neural Network Libraries (NNL). We have successfully trained ImageNet ResNet-50 in 122 seconds without significant accuracy loss on ABCI cluster."
]
} |
1903.12650 | 2926655273 | There has been a strong demand for algorithms that can execute machine learning as fast as possible and the speed of deep learning has accelerated by 30 times only in the past two years. Distributed deep learning using the large mini-batch is a key technology to address the demand and is a great challenge as it is difficult to achieve high scalability on large clusters without compromising accuracy. In this paper, we introduce optimization methods which we applied to this challenge. We achieved the training time of 74.7 seconds using 2,048 GPUs on ABCI cluster applying these methods. The training throughput is over 1.73 million images/sec and the top-1 validation accuracy is 75.08%. | The difference between the weight gradient norm and the weight norm of each layer makes the training unstable; hence, LARS of @cite_7 normalizes this difference for each layer, and the DNN can train with a mini-batch size of 32,768 without loss of validation accuracy. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2757910899"
],
"abstract": [
"A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with large batch size often results in the lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome this optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in accuracy."
]
} |
1903.12650 | 2926655273 | There has been a strong demand for algorithms that can execute machine learning as fast as possible and the speed of deep learning has accelerated by 30 times only in the past two years. Distributed deep learning using the large mini-batch is a key technology to address the demand and is a great challenge as it is difficult to achieve high scalability on large clusters without compromising accuracy. In this paper, we introduce optimization methods which we applied to this challenge. We achieved the training time of 74.7 seconds using 2,048 GPUs on ABCI cluster applying these methods. The training throughput is over 1.73 million images/sec and the top-1 validation accuracy is 75.08%. | @cite_0 achieved ResNet-50 training in 15 minutes using 1,024 GPUs. @cite_4 also achieved ResNet-50 training in 6.6 minutes using 2,048 GPUs. @cite_1 achieved 1.05 million images/sec by using 1,024 TPU v3 processors. The training times of ResNet-50 with 32,768 and 65,536 mini-batch sizes are 2.2 and 1.8 minutes. These results are summarized in the table. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_4"
],
"mid": [
"2769856846",
"2901541570",
"2884711234"
],
"abstract": [
"We demonstrate that training ResNet-50 on ImageNet for 90 epochs can be achieved in 15 minutes with 1024 Tesla P100 GPUs. This was made possible by using a large minibatch size of 32k. To maintain accuracy with this large minibatch size, we employed several techniques such as RMSprop warm-up, batch normalization without moving averages, and a slow-start learning rate schedule. This paper also describes the details of the hardware and software of the system used to achieve the above performance.",
"Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3 accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images second and no accuracy drop.",
"Synchronized stochastic gradient descent (SGD) optimizers with data parallelism are widely used in training large-scale deep neural networks. Although using larger mini-batch sizes can improve the system scalability by reducing the communication-to-computation ratio, it may hurt the generalization ability of the models. To this end, we build a highly scalable deep learning training system for dense GPU clusters with three main contributions: (1) We propose a mixed-precision training method that significantly improves the training throughput of a single GPU without losing accuracy. (2) We propose an optimization approach for extremely large mini-batch size (up to 64k) that can train CNN models on the ImageNet dataset without losing accuracy. (3) We propose highly optimized all-reduce algorithms that achieve up to 3x and 11x speedup on AlexNet and ResNet-50 respectively than NCCL-based training on a cluster with 1024 Tesla P40 GPUs. On training ResNet-50 with 90 epochs, the state-of-the-art GPU-based system with 1024 Tesla P100 GPUs spent 15 minutes and achieved 74.9 top-1 test accuracy, and another KNL-based system with 2048 Intel KNLs spent 20 minutes and achieved 75.4 accuracy. Our training system can achieve 75.8 top-1 test accuracy in only 6.6 minutes using 2048 Tesla P40 GPUs. When training AlexNet with 95 epochs, our system can achieve 58.7 top-1 test accuracy within 4 minutes, which also outperforms all other existing systems."
]
} |
1903.12344 | 2934961104 | In this paper we present our scientific discovery that good representation can be learned via continuous attention during the interaction between Unsupervised Learning (UL) and Reinforcement Learning (RL) modules driven by intrinsic motivation. Specifically, we designed intrinsic rewards generated from UL modules for driving the RL agent to focus on objects for a period of time and to learn good representations of objects for a later object recognition task. We evaluate our proposed algorithm in settings both with and without extrinsic rewards. Experiments with end-to-end training in simulated environments with applications to few-shot object recognition demonstrated the effectiveness of the proposed algorithm. | The goal of RL agents is to maximize discounted cumulative reward @cite_14 @cite_18 @cite_27 . Reward functions should be defined generically and lead to good long-term outcomes for an agent. However, most RL algorithms suffer from sparse rewards. Reward shaping @cite_32 provides a way to deal with sparse extrinsic rewards; another way is intrinsic motivation @cite_16 . Intrinsic rewards such as prediction gain @cite_2 , learning progress @cite_0 , compression progress @cite_7 , and variational information maximization @cite_24 @cite_12 have been employed to augment the environment's reward signal, encouraging the agent to discover novel behavior patterns. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_32",
"@cite_0",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"2145339207",
"2034806191",
"1777239053",
"2000514530",
"2963639957",
"",
"2963276097",
"2061868368",
""
],
"abstract": [
"",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"The simple, but general formal theory of fun and intrinsic motivation and creativity (1990-2010) is based on the concept of maximizing intrinsic reward for the active creation or discovery of novel, surprising patterns allowing for improved prediction or data compression. It generalizes the traditional field of active learning, and is related to old, but less formal ideas in aesthetics theory and developmental psychology. It has been argued that the theory explains many essential aspects of intelligence including autonomous development, science, art, music, and humor. This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities. Emphasis is put on the importance of limited computational resources for online prediction and compression. Discrete and continuous time formulations are given. Previous practical, but nonoptimal implementations (1991, 1995, and 1997-2002) are reviewed, as well as several recent variants by others (2005-2010). A simplified typology addresses current confusion concerning the precise nature of intrinsic motivation.",
"",
"Intrinsic motivation, the causal mechanism for spontaneous exploration and curiosity, is a central concept in developmental psychology. It has been argued to be a crucial mechanism for open-ended cognitive development in humans, and as such has gathered a growing interest from developmental roboticists in the recent years. The goal of this paper is threefold. First, it provides a synthesis of the different approaches of intrinsic motivation in psychology. Second, by interpreting these approaches in a computational reinforcement learning framework, we argue that they are not operational and even sometimes inconsistent. Third, we set the ground for a systematic operational study of intrinsic motivation by presenting a formal typology of possible computational approaches. This typology is partly based on existing computational models, but also presents new ways of conceptualizing intrinsic motivation. We argue that this kind of computational typology might be useful for opening new avenues for research both in psychology and developmental robotics.",
"Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as epsilon-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function, and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance compared to heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.",
"",
"We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA'S REVENGE.",
"Even in the absence of external reward, babies and scientists and others explore their world. Using some sort of adaptive predictive world model, they improve their ability to answer questions such as what happens if I do this or that? They lose interest in both the predictable things and those predicted to remain unpredictable despite some effort. One can design curious robots that do the same. The author’s basic idea (1990, 1991) for doing so is a reinforcement learning (RL) controller is rewarded for action sequences that improve the predictor. Here, this idea is revisited in the context of recent results on optimal predictors and optimal RL machines. Several new variants of the basic principle are proposed. Finally, it is pointed out how the fine arts can be formally understood as a consequence of the principle: given some subjective observer, great works of art and music yield observation histories exhibiting more novel, previously unknown compressibility regularity predictability (with respect to th...",
""
]
} |
1903.12530 | 2931744530 | Gaze redirection is the task of changing the gaze to a desired direction for a given monocular eye patch image. Many applications such as videoconferencing, films and games, and generation of training data for gaze estimation require redirecting the gaze, without distorting the appearance of the area surrounding the eye and while producing photo-realistic images. Existing methods lack the ability to generate perceptually plausible images. In this work, we present a novel method to alleviate this problem by leveraging generative adversarial training to synthesize an eye image conditioned on a target gaze direction. Our method ensures perceptual similarity and consistency of the synthesized images with the real images. Furthermore, a gaze estimation loss is used to control the gaze direction accurately. To attain high-quality images, we incorporate perceptual and cycle consistency losses into our architecture. In extensive evaluations we show that the proposed method outperforms state-of-the-art approaches in terms of both image quality and redirection precision. Finally, we show that the generated images can bring a significant improvement to the gaze estimation task if used to augment real training data. | GANs @cite_10 have successfully been applied to many computer vision tasks, such as image super-resolution @cite_25 and image compression @cite_5 , and a myriad of further variants have been proposed in recent years (e.g., @cite_13 @cite_23 @cite_21 @cite_34 ). GAN-based approaches have also been proposed for the task of image-to-image translation, yielding impressive results @cite_1 @cite_2 . However, these methods typically require paired data to train. Zhu et al. proposed CycleGAN, which functions without such a requirement @cite_28 . Several derivatives of CycleGAN exist for various tasks @cite_6 @cite_37 . Our method is based on the GAN model while differing from these works in two aspects. First, we focus on a different task, namely that of gaze redirection. Second, we use a number of special-purpose losses, including a perceptual loss between ground-truth and synthesized images and a gaze direction preservation loss for training, which we show experimentally to significantly impact the model's performance. | {
"cite_N": [
"@cite_37",
"@cite_28",
"@cite_10",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_34",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"2962793481",
"2099471712",
"",
"2125389028",
"2797650215",
"",
"",
"2795709826",
"2605135824",
"2605195953",
"2523714292"
],
"abstract": [
"",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, we propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework. We assume that the image representation can be decomposed into a content code that is domain-invariant, and a style code that captures domain-specific properties. To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain. We analyze the proposed framework and establish several theoretical results. Extensive experiments with comparisons to the state-of-the-art approaches further demonstrates the advantage of the proposed framework. Moreover, our framework allows users to control the style of translation outputs by providing an example style image. Code and pretrained models are available at this https URL",
"",
"",
"We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods. This is made possible through our GAN formulation of learned compression combined with a generator decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator. Additionally, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from a semantic label map extracted from the original image, therefore only requiring the storage of the preserved region and the semantic label map. A user study confirms that for low bitrates, our approach significantly outperforms state-of-the-art methods, saving up to 67 compared to the next-best method BPG.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method."
]
} |
1903.12356 | 2941424089 | Question answering over knowledge bases (KB-QA) has recently become a popular research topic in NLP. One popular way to solve the KB-QA problem is to make use of a pipeline of several NLP modules, including entity discovery and linking (EDL) and relation detection. Recent success on the KB-QA task usually involves complex network structures with sophisticated heuristics. Inspired by a previous work that builds a strong KB-QA baseline, we propose a simple but general neural model composed of fixed-size ordinally forgetting encoding (FOFE) and deep neural networks, called FOFE-net, to solve the KB-QA problem at different stages. For evaluation, we use two popular KB-QA datasets, SimpleQuestions and WebQSP, and a newly created dataset, FreebaseQA. The experimental results show that FOFE-net performs well on the KB-QA subtasks, entity discovery and linking (EDL) and relation detection, in turn pushing the overall KB-QA system to achieve strong results on all datasets. | Our work is closely related to another line of research which tackles the KB-QA problem through multiple steps. P17-1053 propose a KB-QA system built on top of existing entity linkers @cite_1 @cite_15 , where they use an entity re-ranker to further improve the entity linking result and a hierarchical BiLSTM model for relation detection. C18-1277 use a BiLSTM-CRF to detect the entity mention and extract the question pattern. Fact selection is then conducted by computing the cosine similarity between the LSTM encoding of the question pattern and knowledge base facts. | {
"cite_N": [
"@cite_15",
"@cite_1"
],
"mid": [
"2950843240",
"2419173203"
],
"abstract": [
"Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking --- a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.",
"This work focuses on answering single-relation factoid questions over Freebase. Each question can acquire the answer from a single fact of form (subject, predicate, object) in Freebase. This task, simple question answering (SimpleQA), can be addressed via a two-step pipeline: entity linking and fact selection. In fact selection, we match the subject entity in a fact candidate with the entity mention in the question by a character-level convolutional neural network (char-CNN), and match the predicate in that fact with the question by a word-level CNN (word-CNN). This work makes two main contributions. (i) A simple and effective entity linker over Freebase is proposed. Our entity linker outperforms the state-of-the-art entity linker over SimpleQA task. (ii) A novel attentive maxpooling is stacked over word-CNN, so that the predicate representation can be matched with the predicate-focused question representation more effectively. Experiments show that our system sets new state-of-the-art in this task."
]
} |
1903.12356 | 2941424089 | Question answering over knowledge bases (KB-QA) has recently become a popular research topic in NLP. One popular way to solve the KB-QA problem is to make use of a pipeline of several NLP modules, including entity discovery and linking (EDL) and relation detection. Recent success on the KB-QA task usually involves complex network structures with sophisticated heuristics. Inspired by a previous work that builds a strong KB-QA baseline, we propose a simple but general neural model composed of fixed-size ordinally forgetting encoding (FOFE) and deep neural networks, called FOFE-net, to solve the KB-QA problem at different stages. For evaluation, we use two popular KB-QA datasets, SimpleQuestions and WebQSP, and a newly created dataset, FreebaseQA. The experimental results show that FOFE-net performs well on the KB-QA subtasks, entity discovery and linking (EDL) and relation detection, in turn pushing the overall KB-QA system to achieve strong results on all datasets. | The contribution of this paper is two-fold. First, stage-based KB-QA systems usually combine models of various structures at different stages. For example, P17-1053 combines S-MART @cite_15 with a hierarchical residual BiLSTM relation detector to solve the KB-QA problem. D18-1455 use S-MART to produce a fully-connected graph of the question with edges weighted by the similarity between the relation vector and the LSTM representation of the question. In this paper, in contrast, we use only one model structure, namely FOFE-net, to solve the problems at different stages, which makes the whole pipeline more straightforward and easier to implement. Second, we run our KB-QA system on a newly created dataset, FreebaseQA, and provide a detailed analysis and empirical results for it, which may be helpful for the community interested in this dataset. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2950843240"
],
"abstract": [
"Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking --- a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems."
]
} |
1903.12577 | 2938001473 | Deep learning methods capable of handling relational data have proliferated over the last years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these deep learning methods aim at re-representing symbolic relational data in Euclidean spaces. They offer better scalability, but can only numerically approximate relational structures and are less flexible in terms of reasoning tasks supported. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representation is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks. | The most prominent paradigm in merging SRL and DL is (knowledge) graph embeddings @cite_22 @cite_11 . In contrast to our framework, these methods do not retain a full relational data representation but approximate it by vectorisation. Several works @cite_20 @cite_19 impose logical constraints on embeddings but do not retain the relational representation. | {
"cite_N": [
"@cite_19",
"@cite_22",
"@cite_20",
"@cite_11"
],
"mid": [
"2963290083",
"1529533208",
"2738938906",
"2963460103"
],
"abstract": [
"Methods based on representation learning currently hold the state-of-the-art in many natural language processing and knowledge base inference tasks. Yet, a major challenge is how to efficiently incorporate commonsense knowledge into such models. A recent approach regularizes relation and entity representations by propositionalization of first-order logic rules. However, propositionalization does not scale beyond domains with only few entities and rules. In this paper we present a highly efficient method for incorporating implication rules into distributed representations for automated knowledge base construction. We map entity-tuple embeddings into an approximately Boolean space and encourage a partial ordering over relation embeddings based on implication rules mined from WordNet. Surprisingly, we find that the strong restriction of the entity-tuple embedding space does not hurt the expressiveness of the model and even acts as a regularizer that improves generalization. By incorporating few commonsense rules, we achieve an increase of 2 percentage points mean average precision over a matrix factorization baseline, while observing a negligible increase in runtime.",
"Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two fundamentally different kinds of statistical relational models, both of which can scale to massive data sets. The first is based on latent feature models such as tensor factorization and multiway neural networks. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. To this end, we also discuss Google's knowledge vault project as an example of such combination.",
"In adversarial training, a set of models learn together by pursuing competing goals, usually defined on single data instances. However, in relational learning and other non-i.i.d domains, goals can also be defined over sets of instances. For example, a link predictor for the is-a relation needs to be consistent with the transitivity property: if is-a(x_1, x_2) and is-a(x_2, x_3) hold, is-a(x_1, x_3) needs to hold as well. Here we use such assumptions for deriving an inconsistency loss, measuring the degree to which the model violates the assumptions on an adversarially-generated set of examples. The training objective is defined as a minimax problem, where an adversary finds the most offending adversarial examples by maximising the inconsistency loss, and the model is trained by jointly minimising a supervised loss and the inconsistency loss on the adversarial examples. This yields the first method that can use function-free Horn clauses (as in Datalog) to regularise any neural link predictor, with complexity independent of the domain size. We show that for several link prediction models, the optimisation problem faced by the adversary has efficient closed-form solutions. Experiments on link prediction benchmarks indicate that given suitable prior knowledge, our method can significantly improve neural link predictors on all relevant metrics.",
""
]
} |
1903.12577 | 2938001473 | Deep learning methods capable of handling relational data have proliferated over the last years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these deep learning methods aim at re-representing symbolic relational data in Euclidean spaces. They offer better scalability, but can only numerically approximate relational structures and are less flexible in terms of reasoning tasks supported. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representation is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks. | We draw inspiration from program induction and synthesis @cite_14 , in particular, unsupervised methods for program induction @cite_7 @cite_5 . However, these methods do not create new latent concepts. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_7"
],
"mid": [
"2194321275",
"",
"2187800623"
],
"abstract": [
"People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.",
"",
"We introduce an unsupervised learning algorithm that combines probabilistic modeling with solver-based techniques for program synthesis. We apply our techniques to both a visual learning domain and a language learning problem, showing that our algorithm can learn many visual concepts from only a few examples and that it can recover some English inflectional morphology. Taken together, these results give both a new approach to unsupervised learning of symbolic compositional structures, and a technique for applying program synthesis tools to noisy data."
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | The task of fake review detection has been studied since 2007, with the analysis of review spamming @cite_34 . In this work, the authors analyzed the case of Amazon, concluding that manually labeling fake reviews may be challenging, as fake reviewers could carefully craft their reviews in order to make them appear more reliable to other users. Consequently, they proposed the use of duplicates or near-duplicates as spam in order to develop a model that detects fake reviews @cite_34 . Research on distributional footprints has also been carried out, showing a connection between distribution anomalies and deceptive reviews from Amazon products and TripAdvisor hotels @cite_40 . | {
"cite_N": [
"@cite_40",
"@cite_34"
],
"mid": [
"2202307757",
"2144415203"
],
"abstract": [
"This paper postulates that there are natural distributions of opinions in product reviews. In particular, we hypothesize that for a given domain, there is a set of representative distributions of review rating scores. A deceptive business entity that hires people to write fake reviews will necessarily distort its distribution of review scores, leaving distributional footprints behind. In order to validate this hypothesis, we introduce strategies to create dataset with pseudo-gold standard that is labeled automatically based on different types of distributional footprints. A range of experiments confirm the hypothesized connection between the distributional anomaly and deceptive reviews. This study also provides novel quantitative insights into the characteristics of natural distributions of opinions in the TripAdvisor hotel review and the Amazon product review domains.",
"It is now a common practice for e-commerce Web sites to enable their customers to write reviews of products that they have purchased. Such reviews provide valuable sources of information on these products. They are used by potential customers to find opinions of existing users before deciding to purchase a product. They are also used by product manufacturers to identify problems of their products and to find competitive intelligence information about their competitors. Unfortunately, this importance of reviews also gives good incentive for spam, which contains false positive or malicious negative opinions. In this paper, we make an attempt to study review spam and spam detection. To the best of our knowledge, there is still no reported study on this problem."
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | Fake review detection is a specific application of the general problem of deception detection, where both verbal and nonverbal clues can be used @cite_42 . Fake review detection research has mainly exploited textual and behavioral features, while other approaches have taken into account social or temporal aspects. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2401672198"
],
"abstract": [
"A growing field in computer applications is the use of algorithms to spot the lie. The most promising area within this field is the analysis of the language of the liar since speakers effectively control only the meaning they wish to convey, but not the linguistic style of the communication. With the advent of computational means to analyze language, we now have the ability to recognize differences in the way speakers phrase their lies as opposed to their truths. The main goal of this book is to cover the advances of the last 10 years in automatically discriminating truths from lies. To give the reader a grounding in deception studies, it describes a range of behaviors (physiological, gestural as well as verbal) that have been proposed as indicators of deception. An overview of the primary psychological and cognitive theories that have been offered as explanations of deceptive behaviors gives context for the description of specific behaviors. The book also addresses the differences between data collected in a laboratory and real-world data with respect to the emotional and cognitive state of the liar. It discusses sources of real-world data and problematic issues in its collection and identifies the primary areas in which applied studies based on real-world data are critical, including police, security, border crossing, customs, and asylum interviews; congressional hearings; financial reporting; legal depositions; human resource evaluation; predatory communications that include Internet scams, identity theft, fraud, and false product reviews. Having established the background, the book concentrates on computational analyses of deceptive verbal behavior, which have enabled the field of deception studies to move from individual cues to overall differences in behavior. The book concludes with a set of open questions that the computational work has generated."
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | Textual features have been proposed in several papers. @cite_20 employed psycholinguistic features based on LIWC @cite_19 combined with standard word and POS n-gram features. @cite_21 extend that work by also including style- and POS-based features, such as deep syntax and POS sequence patterns. However, the detection of fake reviews based only on textual features is challenging. Other articles propose additional textual features such as semantic similarity and emotion @cite_24 , a wide variety of lexical and syntactic features @cite_48 , and deeper details such as understandability, level of details, writing style and cognition indicators @cite_36 . | {
"cite_N": [
"@cite_36",
"@cite_48",
"@cite_21",
"@cite_24",
"@cite_19",
"@cite_20"
],
"mid": [
"2085893569",
"2089902509",
"893486657",
"2547680778",
"2140910804",
"2949957935"
],
"abstract": [
"Before making a purchase, users are increasingly inclined to browse online reviews that are posted to share post-purchase experiences of products and services. However, not all reviews are necessarily authentic. Some entries could be fake yet written to appear authentic. Conceivably, authentic and fake reviews are not easy to differentiate. Hence, this paper uses supervised learning algorithms to analyze the extent to which authentic and fake reviews could be distinguished based on four linguistic clues, namely, understandability, level of details, writing style, and cognition indicators. The model performance was compared with two baselines. The results were generally promising.",
"The services and products of E-Commerce portals in this digital age are heavily reviewed by the users. These reviews provide useful insights on the quality usage of these products. Due to such importance of reviews, they can be faked to give false opinions about products and subsequently mislead the users. In this paper we are proposing new set of lexical and syntactic features set and applying supervised algorithms for performing classification on fake reviews dataset (gold standard). We focus on the writing style, that include type of punctuation mark, Part-of- Speech (POS) etc., that are helpful for detection of reviews spam. The final results give promising accuracy 91.51 for detecting fake reviews.",
"Online reviews have become a valuable resource for decision making. However, its usefulness brings forth a curse ‒ deceptive opinion spam. In recent years, fake review detection has attracted significant attention. However, most review sites still do not publicly filter fake reviews. Yelp is an exception which has been filtering reviews over the past few years. However, Yelp’s algorithm is trade secret. In this work, we attempt to find out what Yelp might be doing by analyzing its filtered reviews. The results will be useful to other review hosting sites in their filtering effort. There are two main approaches to filtering: supervised and unsupervised learning. In terms of features used, there are also roughly two types: linguistic features and behavioral features. In this work, we will take a supervised approach as we can make use of Yelp’s filtered reviews for training. Existing approaches based on supervised learning are all based on pseudo fake reviews rather than fake reviews filtered by a commercial Web site. Recently, supervised learning using linguistic n-gram features has been shown to perform extremely well (attaining around 90 accuracy) in detecting crowdsourced fake reviews generated using Amazon Mechanical Turk (AMT). We put these existing research methods to the test and evaluate performance on the real-life Yelp data. To our surprise, the behavioral features perform very well, but the linguistic features are not as effective. To investigate, a novel information theoretic analysis is proposed to uncover the precise psycholinguistic difference between AMT reviews and Yelp reviews (crowdsourced vs. commercial fake reviews). We find something quite interesting. This analysis and experimental results allow us to postulate that Yelp’s filtering is reasonable and its filtering algorithm seems to be correlated with abnormal spamming behaviors.",
"As people are spending more time to shop and view reviews on line, some reviewer write fake reviews to earn credit and to promote (demote) the sales of product and stores. Detecting fake reviews and spammers becomes more important when the spamming behavior is becoming damaging. This paper proposes three types of new features which include review density, semantic and emotion and gives the model and algorithm to construct each feature. Experiments show that the proposed model, algorithm and features are efficient in fake review detection task than traditional method based on content, reviewer info and behavior.",
"We are in the midst of a technological revolution whereby, for the first time, researchers can link daily word use to a broad array of real-world behaviors. This article reviews several computerized text analysis methods and describes how Linguistic Inquiry and Word Count (LIWC) was created and validated. LIWC is a transparent text analysis program that counts words in psychologically meaningful categories. Empirical results using LIWC demonstrate its ability to detect meaning in a wide variety of experimental settings, including to show attentional focus, emotionality, social relationships, thinking styles, and individual differences.",
"Consumers increasingly rate, review and research products online. Consequently, websites containing consumer reviews are becoming targets of opinion spam. While recent work has focused primarily on manually identifiable instances of opinion spam, in this work we study deceptive opinion spam---fictitious opinions that have been deliberately written to sound authentic. Integrating work from psychology and computational linguistics, we develop and compare three approaches to detecting deceptive opinion spam, and ultimately develop a classifier that is nearly 90 accurate on our gold-standard opinion spam dataset. Based on feature analysis of our learned models, we additionally make several theoretical contributions, including revealing a relationship between deceptive opinions and imaginative writing."
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | Behavioral features refer to nonverbal characteristics of review activity, such as the number of reviews or the time and device where the review was posted. They have been used to improve classification models, with encouraging results. @cite_50 introduced behavioral features on Amazon reviews, distinguishing among review features (e.g. number of feedbacks, position of the review, textual features, rating features, etc.), product features (e.g. price, sales rank) and reviewer features (e.g. average rating, ratio of the reviews written by the reviewer that were first reviews, etc.). In another work, @cite_38 explore the effect of both textual and behavioral features in the restaurant and hotel domain, showing that non-textual features are more relevant for the task of fake review detection. Also, regarding the restaurant domain, some interesting findings were described by @cite_47 . Restaurants are more likely to commit review fraud when they have a lower reputation, for example when they have few reviews or bad ratings. | {
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_50"
],
"mid": [
"2554367394",
"1804563568",
"2047756776"
],
"abstract": [
"AbstractThe value and credibility of online consumer reviews are compromised by significantly increasing yet difficult-to-identify fake reviews. Extant models for automated online fake review detection rely heavily on verbal behaviors of reviewers while largely ignoring their nonverbal behaviors. This research identifies a variety of nonverbal behavioral features of online reviewers and examines their relative importance for the detection of fake reviews in comparison to that of verbal behavioral features. The results of an empirical evaluation using real-world online reviews reveal that incorporating nonverbal features of reviewers can significantly improve the performance of online fake review detection models. Moreover, compared with verbal features, nonverbal features of reviewers are shown to be more important for fake review detection. Furthermore, model pruning based on a sensitivity analysis improves the parsimony of the developed fake review detection model without sacrificing its performance.",
"Consumer reviews are now part of everyday decision-making. Yet, the credibility of these reviews is fundamentally undermined when businesses commit review fraud, creating fake reviews for themselves or their competitors. We investigate the economic incentives to commit review fraud on the popular review platform Yelp, using two complementary approaches and datasets. We begin by analyzing restaurant reviews that are identified by Yelp's filtering algorithm as suspicious, or fake ? and treat these as a proxy for review fraud (an assumption we provide evidence for). We present four main findings. First, roughly 16 of restaurant reviews on Yelp are filtered. These reviews tend to be more extreme (favorable or unfavorable) than other reviews, and the prevalence of suspicious reviews has grown significantly over time. Second, a restaurant is more likely to commit review fraud when its reputation is weak, i.e., when it has few reviews, or it has recently received bad reviews. Third, chain restaurants ? which benefit less from Yelp ? are also less likely to commit review fraud. Fourth, when restaurants face increased competition, they become more likely to receive unfavorable fake reviews. Using a separate dataset, we analyze businesses that were caught soliciting fake reviews through a sting conducted by Yelp. These data support our main results, and shed further light on the economic incentives behind a business's decision to leave fake reviews.",
"Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them"
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | Apart from using textual and behavioral features, other methodologies have been followed for the fake review detection task. @cite_41 proposed a review graph with the aim of capturing relationships between reviewers, reviews and stores reviewed by the reviewers. Using this graph, an iterative model identifies suspicious reviewers. Also following a graph-based approach, @cite_53 analyzed network effects in two steps: user and review scoring for fraud detection, and grouping for visualization. | {
"cite_N": [
"@cite_41",
"@cite_53"
],
"mid": [
"2112213600",
"2282288858"
],
"abstract": [
"Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.",
"User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However,review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on syntheticand real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database."
]
} |
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during the last years, becoming crucial to determine business success in a wide array of sectors, ranging from restaurants and hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economic interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework; and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task, and the AdaBoost classifier has proven to be the best one by statistical means according to the Friedman test. | Another methodological approach focuses on temporal aspects and concerns the burstiness of reviews and their impact on businesses. Bursts of reviews can be due either to the sudden popularity of products or to spam attacks @cite_25 , which were also analyzed in @cite_37 along with other behavioral and textual features. A deeper time series approach was taken by @cite_49 , while @cite_24 propose other types of features, such as review density in temporal windows, along with semantic and emotion features. Spatial and temporal features were used on a Chinese review site by @cite_32 . | {
"cite_N": [
"@cite_37",
"@cite_32",
"@cite_24",
"@cite_49",
"@cite_25"
],
"mid": [
"2745030004",
"2192783609",
"2547680778",
"2306432270",
"2189187207"
],
"abstract": [
"In the enterprise marketing process, on-line data plays more and more important role. As a type of data, spam (or fake) reviews of the products, however, have been seriously affecting the reliability of both decision making and data analysis of the enterprise. To detect spam reviews, the paper presents a set of opinion spam detection's identification indicators based on behavior features of the spammer. Two algorithms are then proposed to recognize similar reviews and relevant reviews from all reviews. Compared to the traditional algorithm, our review identification algorithm achieves shorter execution time. More importantly, the proposed algorithm for recognizing relevant reviews can be used to analyze the relevancy between the review content and the given review topic by using the automatic word segmentation technique. The Experimental results show that the number of fake reviews by our algorithms is higher than that of the traditional algorithm. Moreover, we found that about 46 of the mobile phone reviews on the Amazon website were irrelevant to the product's topic, and 54.7 of the reviews were similar to other reviews.",
"Although opinion spam (or fake review) detection has attracted significant research attention in recent years, the problem is far from solved. One key reason is that there is no large-scale ground truth labeled dataset available for model building. Some review hosting sites such as Yelp.com and Dianping.com have built fake review filtering systems to ensure the quality of their reviews, but their algorithms are trade secrets. Working with Dianping, we present the first large-scale analysis of restaurant reviews filtered by Dianping's fake review filtering system. Along with the analysis, we also propose some novel temporal and spatial features for supervised opinion spam detection. Our results show that these features significantly outperform existing state-of-art features.",
"As people are spending more time to shop and view reviews on line, some reviewer write fake reviews to earn credit and to promote (demote) the sales of product and stores. Detecting fake reviews and spammers becomes more important when the spamming behavior is becoming damaging. This paper proposes three types of new features which include review density, semantic and emotion and gives the model and algorithm to construct each feature. Experiments show that the proposed model, algorithm and features are efficient in fake review detection task than traditional method based on content, reviewer info and behavior.",
"Proposing a novel model to detect spam reviews efficiently.Demonstrating the integral role of burst patterns in detection of spam reviews.Comparing the approach with two common methods to show how significant it is. Today's e-commerce is highly depended on increasingly growing online customers' reviews posted in opinion sharing websites. This fact, unfortunately, has tempted spammers to target opinion sharing websites in order to promote and demote products. To date, different types of opinion spam detection methods have been proposed in order to provide reliable resources for customers, manufacturers and researchers. However, supervised approaches suffer from imbalance data due to scarcity of spam reviews in datasets, rating deviation based filtering systems are easily cheated by smart spammers, and content based methods are very expensive and majority of them have not been tested on real data hitherto.The aim of this paper is to propose a robust review spam detection system wherein the rating deviation, content based factors and activeness of reviewers are employed efficiently. To overcome the aforementioned drawbacks, all these factors are synthetically investigated in suspicious time intervals captured from time series of reviews by a pattern recognition technique. The proposed method could be a great asset in online spam filtering systems and could be used in data mining and knowledge discovery tasks as a standalone system to purify product review datasets. These systems can reap benefit from our method in terms of time efficiency and high accuracy. Empirical analyses on real dataset show that the proposed approach is able to successfully detect spam reviews. Comparison with two of the current common methods, indicates that our method is able to achieve higher detection accuracy (F-Score: 0.86) while removing the need for having specific fields of Meta data and reducing heavy computation required for investigation purposes.",
"Online product reviews have become an important source of user opinions. Due to profit or fame, imposters have been writing deceptive or fake reviews to promote and or to demote some target products or services. Such imposters are called review spammers. In the past few years, several approaches have been proposed to deal with the problem. In this work, we take a different approach, which exploits the burstiness nature of reviews to identify review spammers. Bursts of reviews can be either due to sudden popularity of products or spam attacks. Reviewers and reviews appearing in a burst are often related in the sense that spammers tend to work with other spammers and genuine reviewers tend to appear together with other genuine reviewers. This paves the way for us to build a network of reviewers appearing in different bursts. We then model reviewers and their cooccurrence in bursts as a Markov Random Field (MRF), and employ the Loopy Belief Propagation (LBP) method to infer whether a reviewer is a spammer or not in the graph. We also propose several features and employ feature induced message passing in the LBP framework for network inference. We further propose a novel evaluation method to evaluate the detected spammers automatically using supervised classification of their reviews. Additionally, we employ domain experts to perform a human evaluation of the identified spammers and non-spammers. Both the classification result and human evaluation result show that the proposed method outperforms strong baselines, which demonstrate the effectiveness of the method."
]
} |
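The related-work paragraph above repeatedly refers to temporal features such as review bursts and review density in temporal windows. A minimal sketch of such a feature, assuming a pandas DataFrame with invented column names, a 7-day window, and an arbitrary burst threshold (none of which come from the cited papers), could look like this:

```python
# Minimal sketch of the kind of temporal feature the cited works build on:
# review density per product inside a trailing time window, with a simple
# burst flag. Column names, window length, and threshold are assumptions.
import pandas as pd

reviews = pd.DataFrame({
    "product_id": ["p1"] * 6 + ["p2"] * 3,
    "timestamp": pd.to_datetime([
        "2019-01-01", "2019-01-02", "2019-01-02", "2019-01-03",
        "2019-03-15", "2019-03-16",
        "2019-01-10", "2019-02-20", "2019-03-30",
    ]),
})

WINDOW = "7D"          # length of the temporal window (assumption)
BURST_THRESHOLD = 3    # reviews per window considered "bursty" (assumption)

def review_density(group: pd.DataFrame) -> pd.Series:
    """Number of reviews falling in a trailing window ending at each review."""
    return (
        group.set_index("timestamp")
             .sort_index()
             .assign(n=1)["n"]
             .rolling(WINDOW).sum()
    )

for pid, group in reviews.groupby("product_id"):
    density = review_density(group)
    bursty = density[density >= BURST_THRESHOLD]
    print(pid, "max density:", int(density.max()), "burst points:", len(bursty))
```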
1903.12452 | 2921404976 | The impact of online reviews on businesses has grown significantly during last years, being crucial to determine business success in a wide array of sectors, ranging from restaurants, hotels to e-commerce. Unfortunately, some users use unethical means to improve their online reputation by writing fake reviews of their businesses or competitors. Previous research has addressed fake review detection in a number of domains, such as product or business reviews in restaurants and hotels. However, in spite of its economical interest, the domain of consumer electronics businesses has not yet been thoroughly studied. This article proposes a feature framework for detecting fake reviews that has been evaluated in the consumer electronics domain. The contributions are fourfold: (i) Construction of a dataset for classifying fake reviews in the consumer electronics domain in four different cities based on scraping techniques; (ii) definition of a feature framework for fake review detection; (iii) development of a fake review classification method based on the proposed framework and (iv) evaluation and analysis of the results for each of the cities under study. We have reached an 82% F-Score on the classification task and the Ada Boost classifier has been proven to be the best one by statistical means according to the Friedman test. | Apart from supervised learning, other approaches have been followed, since collecting data for experiments is a hard task. In @cite_30, the authors propose a prediction model based on semi-supervised learning and a set of textual and behavioural features. Additionally, @cite_1 propose a semi-supervised technique called PU-learning. | {
"cite_N": [
"@cite_30",
"@cite_1"
],
"mid": [
"1775665607",
"1979504415"
],
"abstract": [
"In the past few years, sentiment analysis and opinion mining becomes a popular and important task. These studies all assume that their opinion resources are real and trustful. However, they may encounter the faked opinion or opinion spam problem. In this paper, we study this issue in the context of our product review mining system. On product review site, people may write faked reviews, called review spam, to promote their products, or defame their competitors' products. It is important to identify and filter out the review spam. Previous work only focuses on some heuristic rules, such as helpfulness voting, or rating deviation, which limits the performance of this task. In this paper, we exploit machine learning methods to identify review spam. Toward the end, we manually build a spam collection from our crawled reviews. We first analyze the effect of various features in spam identification. We also observe that the review spammer consistently writes spam. This provides us another view to identify review spam: we can identify if the author of the review is spammer. Based on this observation, we provide a twoview semi-supervised method, co-training, to exploit the large amount of unlabeled data. The experiment results show that our proposed method is effective. Our designed machine learning methods achieve significant improvements in comparison to the heuristic baselines.",
"Detection of negative deceptive opinion spam.Improved PU-learning approach.Compares the performance of the proposed approach and the original PU-learning method.The role of opinions' polarity in the detection of deception.Reports experimental results on a set of negative deceptive opinions. Nowadays a large number of opinion reviews are posted on the Web. Such reviews are a very important source of information for customers and companies. The former rely more than ever on online reviews to make their purchase decisions, and the latter to respond promptly to their clients' expectations. Unfortunately, due to the business that is behind, there is an increasing number of deceptive opinions, that is, fictitious opinions that have been deliberately written to sound authentic, in order to deceive the consumers promoting a low quality product (positive deceptive opinions) or criticizing a potentially good quality one (negative deceptive opinions). In this paper we focus on the detection of both types of deceptive opinions, positive and negative. Due to the scarcity of examples of deceptive opinions, we propose to approach the problem of the detection of deceptive opinions employing PU-learning. PU-learning is a semi-supervised technique for building a binary classifier on the basis of positive (i.e., deceptive opinions) and unlabeled examples only. Concretely, we propose a novel method that with respect to its original version is much more conservative at the moment of selecting the negative examples (i.e., not deceptive opinions) from the unlabeled ones. The obtained results show that the proposed PU-learning method consistently outperformed the original PU-learning approach. In particular, results show an average improvement of 8.2 and 1.6 over the original approach in the detection of positive and negative deceptive opinions respectively."
]
} |
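PU-learning, as mentioned for @cite_1, builds a binary classifier from positive (deceptive) and unlabeled examples only. The following is a generic two-step sketch of that idea rather than the refined selection strategy of the cited paper; the toy features, the 25% reliable-negative cutoff, and the logistic-regression base learner are all assumptions.

```python
# Generic two-step PU-learning sketch: (1) treat unlabeled examples as negative
# and train a first classifier, (2) keep the unlabeled examples scored as most
# confidently negative as "reliable negatives", (3) retrain on positives vs.
# reliable negatives. Features, cutoff, and base model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=1.0, size=(50, 5))      # known deceptive reviews (toy features)
X_unl = rng.normal(loc=0.0, size=(200, 5))     # unlabeled reviews

# Step 1: positives vs. all unlabeled treated as negatives.
X1 = np.vstack([X_pos, X_unl])
y1 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
clf1 = LogisticRegression(max_iter=1000).fit(X1, y1)

# Step 2: reliable negatives = unlabeled examples with the lowest predicted
# probability of being positive (bottom 25% here, an arbitrary choice).
scores = clf1.predict_proba(X_unl)[:, 1]
cutoff = np.quantile(scores, 0.25)
reliable_neg = X_unl[scores <= cutoff]

# Step 3: retrain on positives vs. reliable negatives only.
X2 = np.vstack([X_pos, reliable_neg])
y2 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(reliable_neg))])
clf2 = LogisticRegression(max_iter=1000).fit(X2, y2)

print("reliable negatives kept:", len(reliable_neg))
print("example predictions:", clf2.predict(X_unl[:5]))
```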
1903.12363 | 2935408319 | Extracting key information from documents, such as receipts or invoices, and preserving the interested texts to structured data is crucial in the document-intensive streamline processes of office automation in areas that includes but not limited to accounting, financial, and taxation areas. To avoid designing expert rules for each specific type of document, some published works attempt to tackle the problem by learning a model to explore the semantic context in text sequences based on the Named Entity Recognition (NER) method in the NLP field. In this paper, we propose to harness the effective information from both semantic meaning and spatial distribution of texts in documents. Specifically, our proposed model, Convolutional Universal Text Information Extractor (CUTIE), applies convolutional neural networks on gridded texts where texts are embedded as features with semantical connotations. We further explore the effect of employing different structures of convolutional neural network and propose a fast and portable structure. We demonstrate the effectiveness of the proposed method on a dataset with up to @math labelled receipts, without any pre-training or post-processing, achieving state of the art performance that is much higher than BERT but with only @math parameters and without requiring the @math M word dataset for pre-training. Experimental results also demonstrate that CUTIE is able to achieve state of the art performance with a much smaller amount of training data. | Several rule-based invoice analysis systems were proposed in @cite_2 @cite_9 @cite_4. Intellix by DocuWare requires a template annotated with relevant fields @cite_2. For that reason, a collection of templates has to be constructed. SmartFix employs specifically designed configuration rules for each template @cite_9. The system of @cite_4 uses a dataset of fixed key information positions for each template. It is evident that these rule-based methods rely heavily on pre-defined template rules to extract information from specific invoice layouts. | {
"cite_N": [
"@cite_9",
"@cite_4",
"@cite_2"
],
"mid": [
"1791544012",
"1963728304",
"2078777599"
],
"abstract": [
"Although the internet offers a wide-spread platform for information interchange, day-to-day work in large companies still means the processing of tens of thousands of printed documents every day. This paper presents the system smartFIX which is a document analysis and understanding system developed by the DFKI spin-off INSIDERS. It permits the processing of documents ranging from fixed format forms to unstructured letters of any format. Apart from the architecture, the main components and system characteristics, we also show some results when applying smartFIX to medical bills and prescriptions.",
"Archiving official written documents such as invoices, reminders and account statements in business and private area gets more and more important. Creating appropriate index entries for document archives like sender's name, creation date or document number is a tedious manual work. We present a novel approach to handle automatic indexing of documents based on generic positional extraction of index terms. For this purpose we apply the knowledge of document templates stored in a common full text search index to find index positions that were successfully extracted in the past.",
"Automatic information extraction from scanned business documents is especially valuable in the application domain of document archiving. But current systems for automated document processing still require a lot of configuration work that can only be done by experienced users or administrators. We present an approach for information extraction which purely builds on end-user provided training examples and intentionally omits efficient known extraction techniques like rule based extraction that require intense training and or information extraction expertise. Our evaluation on a large corpus of business documents shows competitive results of above 85 F1-measure on 10 commonly used fields like document type, sender, receiver and date. The system is deployed and used inside the commercial document management system DocuWare."
]
} |
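The template-rule approach criticized in the paragraph above can be made concrete with a toy example: one dictionary of regular expressions per known invoice layout, applied to OCR text. The field names and patterns below are invented for illustration and are not taken from Intellix, SmartFix, or the system of @cite_4.

```python
# Toy illustration of template-rule key information extraction: each known
# invoice layout gets its own dictionary of regular expressions. Patterns and
# field names are invented; the cited systems use far richer, layout-aware rules.
import re

TEMPLATE_RULES = {
    "vendor_a": {
        "invoice_number": re.compile(r"Invoice No\.?\s*[:#]?\s*(\S+)"),
        "total": re.compile(r"Total Due\s*:?\s*\$?([\d.,]+)"),
    },
    "vendor_b": {
        "invoice_number": re.compile(r"Rechnungsnummer\s*:?\s*(\S+)"),
        "total": re.compile(r"Gesamtbetrag\s*:?\s*([\d.,]+)\s*EUR"),
    },
}

def extract(template_id: str, ocr_text: str) -> dict:
    """Apply the rules of one known template to raw OCR text."""
    result = {}
    for field, pattern in TEMPLATE_RULES[template_id].items():
        match = pattern.search(ocr_text)
        result[field] = match.group(1) if match else None
    return result

sample = "ACME Corp\nInvoice No: 2021-0042\n...\nTotal Due: $1,234.50"
print(extract("vendor_a", sample))
# -> {'invoice_number': '2021-0042', 'total': '1,234.50'}
# The obvious weakness, as noted above: every new layout needs new rules.
```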
1903.12363 | 2935408319 | Extracting key information from documents, such as receipts or invoices, and preserving the interested texts to structured data is crucial in the document-intensive streamline processes of office automation in areas that includes but not limited to accounting, financial, and taxation areas. To avoid designing expert rules for each specific type of document, some published works attempt to tackle the problem by learning a model to explore the semantic context in text sequences based on the Named Entity Recognition (NER) method in the NLP field. In this paper, we propose to harness the effective information from both semantic meaning and spatial distribution of texts in documents. Specifically, our proposed model, Convolutional Universal Text Information Extractor (CUTIE), applies convolutional neural networks on gridded texts where texts are embedded as features with semantical connotations. We further explore the effect of employing different structures of convolutional neural network and propose a fast and portable structure. We demonstrate the effectiveness of the proposed method on a dataset with up to @math labelled receipts, without any pre-training or post-processing, achieving state of the art performance that is much higher than BERT but with only @math parameters and without requiring the @math M word dataset for pre-training. Experimental results also demonstrate that CUTIE is able to achieve state of the art performance with a much smaller amount of training data. | CloudScan is a work that attempts to extract key information with learning-based models @cite_3. First, N-gram features are formed by combining the results of expert-designed rules calculated on the text of each document line. Then, the features are fed to train an RNN-based or a logistic-regression-based classifier for key information extraction. Certain post-processing steps are added to further enhance extraction results. However, the line-based feature extraction method cannot achieve its best performance if document texts are not perfectly aligned in lines. Moreover, the RNN-based classifier, a bi-directional LSTM model in CloudScan, has limited ability to learn relationships among distant words. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2748159032"
],
"abstract": [
"We present CloudScan; an invoice analysis system that requires zero configuration or upfront annotation. In contrast to previous work, CloudScan does not rely on templates of invoice layout, instead it learns a single global model of invoices that naturally generalizes to unseen invoice layouts. The model is trained using data automatically extracted from end-user provided feedback. This automatic training data extraction removes the requirement for users to annotate the data precisely. We describe a recurrent neural network model that can capture long range context and compare it to a baseline logistic regression model corresponding to the current CloudScan production system. We train and evaluate the system on 8 important fields using a dataset of 326,471 invoices. The recurrent neural network and baseline model achieve 0.891 and 0.887 average F1 scores respectively on seen invoice layouts. For the harder task of unseen invoice layouts, the recurrent neural network model outperforms the baseline with 0.840 average F1 compared to 0.788."
]
} |
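A heavily simplified stand-in for the line-based feature idea described above is sketched below: a few hand-crafted features are computed per document line and a plain logistic-regression classifier assigns a field label. The features, labels, and training lines are invented; the actual system builds N-gram features and also uses a recurrent (bi-directional LSTM) classifier.

```python
# Much simplified stand-in for line-based field classification: hand-crafted
# per-line features plus logistic regression. All features, labels, and
# training lines are invented for illustration only.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

def line_features(line: str) -> list:
    digits = sum(c.isdigit() for c in line)
    return [
        len(line),
        digits / max(len(line), 1),                                       # fraction of digits
        1.0 if re.search(r"\d{2}[./-]\d{2}[./-]\d{2,4}", line) else 0.0,  # date-like token
        1.0 if "$" in line or "total" in line.lower() else 0.0,           # amount cue
    ]

train_lines = [
    ("Invoice 2021-0042", "invoice_number"),
    ("No. 998877", "invoice_number"),
    ("Date: 12/03/2021", "date"),
    ("Issued 01.02.2020", "date"),
    ("Total $1,234.50", "amount"),
    ("Amount due: $99.00", "amount"),
]
X = np.array([line_features(text) for text, _ in train_lines])
y = [label for _, label in train_lines]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(np.array([line_features("Total $42.00")])))  # expected: ['amount']
```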
1903.12549 | 2930586974 | Time series forecasting is one of the challenging problems for humankind. Traditional forecasting methods using mean regression models have severe shortcomings in reflecting real-world fluctuations. While new probabilistic methods rush to rescue, they fight with technical difficulties like quantile crossing or selecting a prior distribution. To meld the different strengths of these fields while avoiding their weaknesses as well as to push the boundary of the state-of-the-art, we introduce ForGAN - one step ahead probabilistic forecasting with generative adversarial networks. ForGAN utilizes the power of the conditional generative adversarial network to learn the data generating distribution and compute probabilistic forecasts from it. We argue how to evaluate ForGAN in opposition to regression methods. To investigate probabilistic forecasting of ForGAN, we create a new dataset and demonstrate our method's abilities on it. This dataset will be made publicly available for comparison. Furthermore, we test ForGAN on two publicly available datasets, namely Mackey-Glass dataset and Internet traffic dataset (A5M) where the impressive performance of ForGAN demonstrates its high capability in forecasting future values. | Most research regarding applying GANs to sequential data is concerned with discrete problems, e.g. the text generation task. Since the discrete space of words cannot be differentiated in mathematics, modifying a GAN to work with discrete data is a challenging task. Many papers have been published to address this problem and they have reported remarkable results @cite_26 @cite_55 @cite_59 @cite_66. However, we are interested in the (quasi) continuous regime. Therefore, these techniques are not directly applicable here. | {
"cite_N": [
"@cite_55",
"@cite_26",
"@cite_66",
"@cite_59"
],
"mid": [
"2951520714",
"2622385665",
"",
"2964268978"
],
"abstract": [
"In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems, a generative model to produce response sequences, and a discriminator---analagous to the human evaluator in the Turing test--- to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.",
"Generative Adversarial Networks (GANs) have shown great promise recently in image generation. Training GANs for language generation has proven to be more difficult, because of the non-differentiable nature of generating text with recurrent neural networks. Consequently, past work has either resorted to pre-training with maximum-likelihood or used convolutional networks for generation. In this work, we show that recurrent neural networks can be trained to generate text with GANs from scratch using curriculum learning, by slowly teaching the model to generate sequences of increasing and variable length. We empirically show that our approach vastly improves the quality of generated sequences compared to a convolutional baseline.",
"",
"As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines."
]
} |
1903.12549 | 2930586974 | Time series forecasting is one of the challenging problems for humankind. Traditional forecasting methods using mean regression models have severe shortcomings in reflecting real-world fluctuations. While new probabilistic methods rush to rescue, they fight with technical difficulties like quantile crossing or selecting a prior distribution. To meld the different strengths of these fields while avoiding their weaknesses as well as to push the boundary of the state-of-the-art, we introduce ForGAN - one step ahead probabilistic forecasting with generative adversarial networks. ForGAN utilizes the power of the conditional generative adversarial network to learn the data generating distribution and compute probabilistic forecasts from it. We argue how to evaluate ForGAN in opposition to regression methods. To investigate probabilistic forecasting of ForGAN, we create a new dataset and demonstrate our method's abilities on it. This dataset will be made publicly available for comparison. Furthermore, we test ForGAN on two publicly available datasets, namely Mackey-Glass dataset and Internet traffic dataset (A5M) where the impressive performance of ForGAN demonstrates its high capability in forecasting future values. | In the continuous regime, we find GANs being utilized to generate auditory data. C-RNN-GAN @cite_6 works on music waveforms as continuous sequential data to generate polyphonic music. This GAN uses Bidirectional LSTM in the structure of the generator and discriminator. Moreover, there are many other studies on auditory data which work on audio spectrograms and consider them as 2D images. For instance, @cite_65 as well as Michelsanti @cite_96 employ GANs on audio spectrograms for speech enhancement. @cite_12 propose a GAN for separating the singing voice from background music. @cite_27 propose a GAN for synthesizing raw-waveform audio and @cite_45 employ GANs for the synthesis of impersonated voices. However, contrary to our work, these studies are, first, not concerned with forecasting and, second, their results can be assessed intuitively. The latter point is important since, analogous to the image domain, music can be judged by listening to it. | {
"cite_N": [
"@cite_96",
"@cite_65",
"@cite_6",
"@cite_27",
"@cite_45",
"@cite_12"
],
"mid": [
"2746457594",
"2768416973",
"2559110679",
"2786254735",
"2788830260",
"2766737753"
],
"abstract": [
"Improving speech system performance in noisy environments remains a challenging task, and speech enhancement (SE) is one of the effective techniques to solve the problem. Motivated by the promising results of generative adversarial networks (GANs) in a variety of image processing tasks, we explore the potential of conditional GANs (cGANs) for SE, and in particular, we make use of the image processing framework proposed by [1] to learn a mapping from the spectrogram of noisy speech to an enhanced counterpart. The SE cGAN consists of two networks, trained in an adversarial manner: a generator that tries to enhance the input noisy spectrogram, and a discriminator that tries to distinguish between enhanced spectrograms provided by the generator and clean ones from the database using the noisy spectrogram as a condition. We evaluate the performance of the cGAN method in terms of perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and equal error rate (EER) of speaker verification (an example application). Experimental results show that the cGAN method overall outperforms the classical short-time spectral amplitude minimum mean square error (STSA-MMSE) SE algorithm, and is comparable to a deep neural network-based SE approach (DNN-SE).",
"We investigate the effectiveness of generative adversarial networks (GANs) for speech enhancement, in the context of improving noise robustness of automatic speech recognition (ASR) systems. Prior work demonstrates that GANs can effectively suppress additive noise in raw waveform speech signals, improving perceptual quality metrics; however this technique was not justified in the context of ASR. In this work, we conduct a detailed study to measure the effectiveness of GANs in enhancing speech contaminated by both additive and reverberant noise. Motivated by recent advances in image processing, we propose operating GANs on log-Mel filterbank spectra instead of waveforms, which requires less computation and is more robust to reverberant noise. While GAN enhancement improves the performance of a clean-trained ASR system on noisy speech, it falls short of the performance achieved by conventional multi-style training (MTR). By appending the GAN-enhanced features to the noisy inputs and retraining, we achieve a 7 WER improvement relative to the MTR system.",
"Generative adversarial networks have been proposed as a way of efficiently training deep generative neural networks. We propose a generative adversarial model that works on continuous sequential data, and apply it by training it on a collection of classical music. We conclude that it generates music that sounds better and better as the model is trained, report statistics on generated music, and let the reader judge the quality by downloading the generated songs.",
"While Generative Adversarial Networks (GANs) have seen wide success at the problem of synthesizing realistic images, they have seen little application to the problem of unsupervised audio generation. Unlike for images, a barrier to success is that the best discriminative representations for audio tend to be non-invertible, and thus cannot be used to synthesize listenable outputs. In this paper, we introduce WaveGAN, a first attempt at applying GANs to raw audio synthesis in an unsupervised setting. Our experiments on speech demonstrate that WaveGAN can produce intelligible words from a small vocabulary of human speech, as well as synthesize audio from other domains such as bird vocalizations, drums, and piano. Qualitatively, we find that human judges prefer the generated examples from WaveGAN over those from a method which naively apply GANs on image-like audio feature representations.",
"Voice impersonation is not the same as voice transformation, although the latter is an essential element of it. In voice impersonation, the resultant voice must convincingly convey the impression of having been naturally produced by the target speaker, mimicking not only the pitch and other perceivable signal qualities, but also the style of the target speaker. In this paper, we propose a novel neural network based speech quality- and style- mimicry framework for the synthesis of impersonated voices. The framework is built upon a fast and accurate generative adversarial network model. Given spectrographic representations of source and target speakers' voices, the model learns to mimic the target speaker's voice quality and style, regardless of the linguistic content of either's voice, generating a synthetic spectrogram from which the time domain signal is reconstructed using the Griffin-Lim method. In effect, this model reframes the well-known problem of style-transfer for images as the problem of style-transfer for speech signals, while intrinsically addressing the problem of durational variability of speech sounds. Experiments demonstrate that the model can generate extremely convincing samples of impersonated speech. It is even able to impersonate voices across different genders effectively. Results are qualitatively evaluated using standard procedures for evaluating synthesized voices.",
"Separating two sources from an audio mixture is an important task with many applications. It is a challenging problem since only one signal channel is available for analysis. In this paper, we propose a novel framework for singing voice separation using the generative adversarial network (GAN) with a time-frequency masking function. The mixture spectra is considered to be a distribution and is mapped to the clean spectra which is also considered a distribtution. The approximation of distributions between mixture spectra and clean spectra is performed during the adversarial training process. In contrast with current deep learning approaches for source separation, the parameters of the proposed framework are first initialized in a supervised setting and then optimized by the training procedure of GAN in an unsupervised setting. Experimental results on three datasets (MIR-1K, iKala and DSD100) show that performance can be improved by the proposed framework consisting of conventional networks."
]
} |
1903.12549 | 2930586974 | Time series forecasting is one of the challenging problems for humankind. Traditional forecasting methods using mean regression models have severe shortcomings in reflecting real-world fluctuations. While new probabilistic methods rush to rescue, they fight with technical difficulties like quantile crossing or selecting a prior distribution. To meld the different strengths of these fields while avoiding their weaknesses as well as to push the boundary of the state-of-the-art, we introduce ForGAN - one step ahead probabilistic forecasting with generative adversarial networks. ForGAN utilizes the power of the conditional generative adversarial network to learn the data generating distribution and compute probabilistic forecasts from it. We argue how to evaluate ForGAN in opposition to regression methods. To investigate probabilistic forecasting of ForGAN, we create a new dataset and demonstrate our method's abilities on it. This dataset will be made publicly available for comparison. Furthermore, we test ForGAN on two publicly available datasets, namely Mackey-Glass dataset and Internet traffic dataset (A5M) where the impressive performance of ForGAN demonstrates its high capability in forecasting future values. | Since there is no consensus on a process for evaluating GANs, application of GANs beyond text and auditory data is a very challenging task. We found a few attempts at applying GANs beyond these data types. Hyland and Esteban @cite_100 propose RGAN and RCGAN to produce realistic real-valued multi-dimensional medical time series. Both of these GANs employ LSTM in their generator and discriminator, while RCGAN uses a Conditional GAN instead of a Vanilla GAN to incorporate a condition in the process of data generation. They also describe novel evaluation methods for GANs, where they generate a synthetic labeled training dataset and train a model using this set. Then, they test this model using real data. They repeat the same process using a real training set and a synthetic labeled test set. GAN-AD @cite_1 is proposed to model time-series for anomaly detection in Cyber-Physical Systems (CPSs). This GAN also uses LSTM in both generator and discriminator. @cite_43 propose a conditional GAN for generating synthetic time-series in smart-grids. Unlike previous work, this GAN employs CNN to construct generator and discriminator. | {
"cite_N": [
"@cite_100",
"@cite_1",
"@cite_43"
],
"mid": [
"2622068151",
"2891273344",
"2906805076"
],
"abstract": [
"Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN.",
"Today's Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor's and actuator's time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account of potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks with high detection rate and low false positive rate as compared to existing methods.",
"The availability of fine grained time series data is a pre-requisite for research in smart-grids. While data for transmission systems is relatively easily obtainable, issues related to data collection, security and privacy hinder the widespread public availability accessibility of such datasets at the distribution system level. This has prevented the larger research community from effectively applying sophisticated machine learning algorithms to significantly improve the distribution-level accuracy of predictions and increase the efficiency of grid operations. Synthetic dataset generation has proven to be a promising solution for addressing data availability issues in various domains such as computer vision, natural language processing and medicine. However, its exploration in the smart grid context remains unsatisfactory. Previous works have tried to generate synthetic datasets by modeling the underlying system dynamics: an approach which is difficult, time consuming, error prone and often times infeasible in many problems. In this work, we propose a novel data-driven approach to synthetic dataset generation by utilizing deep generative adversarial networks (GAN) to learn the conditional probability distribution of essential features in the real dataset and generate samples based on the learned distribution. To evaluate our synthetically generated dataset, we measure the maximum mean discrepancy (MMD) between real and synthetic datasets as probability distributions, and show that their sampling distance converges. To further validate our synthetic dataset, we perform common smart grid tasks such as k-means clustering and short-term prediction on both datasets. Experimental results show the efficacy of our synthetic dataset approach: the real and synthetic datasets are indistinguishable by solely examining the output of these tasks."
]
} |
1903.12549 | 2930586974 | Time series forecasting is one of the challenging problems for humankind. Traditional forecasting methods using mean regression models have severe shortcomings in reflecting real-world fluctuations. While new probabilistic methods rush to rescue, they fight with technical difficulties like quantile crossing or selecting a prior distribution. To meld the different strengths of these fields while avoiding their weaknesses as well as to push the boundary of the state-of-the-art, we introduce ForGAN - one step ahead probabilistic forecasting with generative adversarial networks. ForGAN utilizes the power of the conditional generative adversarial network to learn the data generating distribution and compute probabilistic forecasts from it. We argue how to evaluate ForGAN in opposition to regression methods. To investigate probabilistic forecasting of ForGAN, we create a new dataset and demonstrate our method's abilities on it. This dataset will be made publicly available for comparison. Furthermore, we test ForGAN on two publicly available datasets, namely Mackey-Glass dataset and Internet traffic dataset (A5M) where the impressive performance of ForGAN demonstrates its high capability in forecasting future values. | To the best of our knowledge, this is the first time that a GAN is employed for the forecasting task. Our work is analogous to RCGAN @cite_100; however, we pursue a different goal. As a result, we need to take a different approach to training and evaluating the performance of ForGAN. | {
"cite_N": [
"@cite_100"
],
"mid": [
"2622068151"
],
"abstract": [
"Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from ‘serialised’ MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN."
]
} |
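One natural reading of the conditional-GAN forecasting setup described in the entries above is a generator that receives a window of past values (the condition) plus Gaussian noise and emits a candidate next value, while a discriminator judges (window, next value) pairs. The PyTorch sketch below follows that reading; the GRU condition encoder, layer sizes, and the single toy training step are assumptions and not the published ForGAN architecture.

```python
# Minimal sketch of a conditional GAN for one-step-ahead forecasting.
# Architecture details are assumptions; random data stands in for a real series.
import torch
import torch.nn as nn

WINDOW, NOISE_DIM, HIDDEN = 24, 8, 16

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Sequential(nn.Linear(HIDDEN + NOISE_DIM, HIDDEN),
                                  nn.ReLU(), nn.Linear(HIDDEN, 1))

    def forward(self, window, noise):
        _, h = self.encoder(window.unsqueeze(-1))          # h: (1, B, HIDDEN)
        return self.head(torch.cat([h.squeeze(0), noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Sequential(nn.Linear(HIDDEN + 1, HIDDEN),
                                  nn.ReLU(), nn.Linear(HIDDEN, 1))

    def forward(self, window, value):
        _, h = self.encoder(window.unsqueeze(-1))
        return self.head(torch.cat([h.squeeze(0), value], dim=1))   # logits

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

window = torch.randn(32, WINDOW)      # toy batch of condition windows
real_next = torch.randn(32, 1)        # toy "true" next values

# Discriminator step: real pairs -> 1, generated pairs -> 0.
fake_next = G(window, torch.randn(32, NOISE_DIM)).detach()
loss_d = (bce(D(window, real_next), torch.ones(32, 1)) +
          bce(D(window, fake_next), torch.zeros(32, 1)))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make generated pairs look real.
gen_next = G(window, torch.randn(32, NOISE_DIM))
loss_g = bce(D(window, gen_next), torch.ones(32, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

At inference time, sampling many noise vectors for a fixed condition window yields an empirical predictive distribution for the next value, which matches the probabilistic-forecasting motivation given above.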
1903.12302 | 2928338990 | Mobile robots, performing long-term manipulation activities in human environments, have to perceive a wide variety of objects possessing very different visual characteristics and need to reliably keep track of these throughout the execution of a task. In order to be efficient, robot perception capabilities need to go beyond what is currently perceivable and should be able to answer queries about both current and past scenes. In this paper we investigate a perception system for long-term robot manipulation that keeps track of the changing environment and builds a representation of the perceived world. Specifically we introduce an amortized component that spreads perception tasks throughout the execution cycle. The resulting query driven perception system asynchronously integrates results from logged images into a symbolic and numeric (what we call sub-symbolic) representation that forms the perceptual belief state of the robot. | The most comprehensive work on object belief states was done by Blodow @cite_2 , laying the ground work for what is necessary to create and manage such representations. @cite_11 present a Markov Logic Network based framework dynamically resolving the entities of objects. The idea of entity resolution is further developed by @cite_17 , where a pervasive calm perception component for the framework @cite_0 is built. | {
"cite_N": [
"@cite_0",
"@cite_11",
"@cite_17",
"@cite_2"
],
"mid": [
"1499297823",
"2133815132",
"2186752256",
""
],
"abstract": [
"We present RoboSherlock, an open source software framework for implementing perception systems for robots performing human-scale everyday manipulation tasks. In RoboSherlock, perception and interpretation of realistic scenes is formulated as an unstructured information management (UIM) problem. The application of the UIM principle supports the implementation of perception systems that can answer task-relevant queries about objects in a scene, boost object recognition performance by combining the strengths of multiple perception algorithms, support knowledge-enabled reasoning about objects and enable automatic and knowledge-driven generation of processing pipelines. We demonstrate the potential of the proposed framework by three feasibility studies of systems for real-world scene perception that have been built on top of RoboSherlock.",
"Knowing precisely where objects are located enables a robot to perform its tasks both more efficiently and more reliably. To acquire the respective knowledge and to effectively use it as a resource, a robot has to go through the world with “open eyes”. Specifically, it has to become environment-aware by keeping track of where objects of interest are located and explicitly represent their geometrical properties. In this paper, we propose to equip robots with a perception system that passively monitors the environment using a 3D data acquisition system, identifying objects that might become the subject of future manipulation tasks. Our system encompasses a 3D semantic mapping and reconstruction pipeline and a storage and data merging unit for perceived information that provides on-demand modeling and comparison capabilities. Based on probabilistic logical models, we address the important perceptual subtask of object identity resolution, i.e. inferring which observations refer to which entities in the real world (perceptual anchoring). Our system can be used as a bootstrapping system for the generation of object-centric knowledge and can, in this way, be used as a mid-level perception system that enables activity recognition, scene recognition and high-level planning.",
"A major bottleneck in the realization of autonomous robotic agents performing complex manipulation tasks are the requirements that these tasks impose onto perception mechanisms. There is a strong need to scale robot perception capabilities along two dimensions: First, the variations of appearances and perceptual properties that real-world objects exhibit. Second, the variety of perceptual tasks, like categorizing and localizing, decomposing objects into their functional parts, perceiving the affordances they provide. This paper, addresses this need by organizing perception into a two-stage process. First, a pervasive and 'calm' perceptual component runs continually and interprets the incoming image stream to form a general purpose hybrid (symbolic sub-symbolic) belief state. This is used by the second component, the task-directed perception subsystem, to perform the respective perception tasks in a more informed way. We describe and discuss the first component and explain how it can manage realistic belief states, form a memory of past perceptual experiences, and compute valuable perceptual attributes without delaying plan execution. It does so by exploiting that perception is not a one-shot task but rather a secondary task that is pervasively and calmly performed throughout the lifetime of the robot. We show system operating on a leading-edge manipulation platform.",
""
]
} |
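The notion of an object belief state discussed in the surrounding entries can be illustrated with a very small sketch: each new detection either updates an existing object hypothesis (matched by class and proximity, a crude stand-in for entity resolution) or spawns a new one, so that queries about past and present scenes can be answered. The fields, the 0.15 m association radius, and the matching rule are assumptions for illustration only.

```python
# Toy object belief state: detections are associated with existing hypotheses
# by class and proximity, otherwise new hypotheses are created. Thresholds and
# fields are illustrative assumptions, not any cited system's design.
import math
import time

MATCH_RADIUS = 0.15  # metres; association threshold (assumption)

class BeliefState:
    def __init__(self):
        self.objects = {}          # id -> {"cls", "pos", "history"}
        self._next_id = 0

    def update(self, detections):
        """detections: list of (class_name, (x, y, z)) from one camera frame."""
        stamp = time.time()
        for cls, pos in detections:
            match = None
            for oid, obj in self.objects.items():
                if obj["cls"] == cls and math.dist(obj["pos"], pos) < MATCH_RADIUS:
                    match = oid
                    break
            if match is None:                       # unseen entity -> new hypothesis
                match = self._next_id
                self._next_id += 1
                self.objects[match] = {"cls": cls, "pos": pos, "history": []}
            self.objects[match]["pos"] = pos
            self.objects[match]["history"].append((stamp, pos))

    def query(self, cls):
        """Symbolic query: all known instances of a class, past and present."""
        return {oid: o for oid, o in self.objects.items() if o["cls"] == cls}

belief = BeliefState()
belief.update([("mug", (0.50, 0.20, 0.75)), ("bowl", (0.10, 0.40, 0.75))])
belief.update([("mug", (0.52, 0.21, 0.75))])   # same mug, slightly moved
print(belief.query("mug"))                      # one mug hypothesis, two observations
```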
1903.12302 | 2928338990 | Mobile robots, performing long-term manipulation activities in human environments, have to perceive a wide variety of objects possessing very different visual characteristics and need to reliably keep track of these throughout the execution of a task. In order to be efficient, robot perception capabilities need to go beyond what is currently perceivable and should be able to answer queries about both current and past scenes. In this paper we investigate a perception system for long-term robot manipulation that keeps track of the changing environment and builds a representation of the perceived world. Specifically we introduce an amortized component that spreads perception tasks throughout the execution cycle. The resulting query driven perception system asynchronously integrates results from logged images into a symbolic and numeric (what we call sub-symbolic) representation that forms the perceptual belief state of the robot. | Another interesting approach is taken by @cite_7 . They argue for the importance of a correct belief management and highlight connections to biological systems. @cite_16 and @cite_1 present SPARK, a framework for belief management similar to the work presented here, but with a focus more on spatial reasoning and the knowledge component. Parts of SPARK consist of managing an object belief state but since emphasis is put on spatial reasoning the scenes and objects used in the experiments are idealized. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_7"
],
"mid": [
"2020456813",
"2057401894",
"2055002866"
],
"abstract": [
"In daily human interactions, spatial reasoning occupies an important place. In this paper we present a situation assessment reasoner that generates relevant symbolic information from the geometry of the environment with respect to relations between objects and human capabilities. The role of SPARK (SPAtial Reasoning and Knowledge) component is to permanently maintain a state of the world in order to provide a basis for the robot to plan, to act, to react and to interact. More precisely, we describe here the way the system manages the hypotheses to be able to handle such knowledge in a flexible manner. Equipped with such capabilities, a robot that will interact with humans should be able to extract, compute or infer these relations and capabilities in order to communicate and interact efficiently in a natural way. To illustrate our work, we will explain how the robot is able to manage and update agents beliefs and pass Sally-Anne test. This work is part of a broader effort to develop a complete decisional framework for human-robot interactive task achievement.",
"We have designed and implemented new spatio-temporal reasoning skills for a cognitive robot, which explicitly reasons about human beliefs on object positions. It enables the robot to build symbolic models reflecting each agent's perspective on the world. Using these models, the robot has a better understanding of what humans say and do, and is able to reason on what human should know to achieve a given goal. These new capabilities are also demonstrated experimentally.",
"Perceptual and control systems are tasked with the challenge of accurately and efficiently estimating the dynamic states of objects in the environment. To properly account for uncertainty, it is necessary to maintain a dynamical belief state representation rather than a single state vector. In this review, canonical algorithms for computing and updating belief states in robotic applications are delineated, and connections to biological systems are highlighted. A navigation example is used to illustrate the importance of properly accounting for correlations between belief state components, and to motivate the need for further investigations in psychophysics and neurobiology."
]
} |
1903.12302 | 2928338990 | Mobile robots, performing long-term manipulation activities in human environments, have to perceive a wide variety of objects possessing very different visual characteristics and need to reliably keep track of these throughout the execution of a task. In order to be efficient, robot perception capabilities need to go beyond what is currently perceivable and should be able to answer queries about both current and past scenes. In this paper we investigate a perception system for long-term robot manipulation that keeps track of the changing environment and builds a representation of the perceived world. Specifically we introduce an amortized component that spreads perception tasks throughout the execution cycle. The resulting query driven perception system asynchronously integrates results from logged images into a symbolic and numeric (what we call sub-symbolic) representation that forms the perceptual belief state of the robot. | There is an increasing body of literature on semantic mapping and SLAM approaches in robotics, large parts of which are closely related to our work. A thorough survey of these is presented by @cite_19 . In this survey authors identify several open problems, like that of semantic mapping being much more than a categorization problem, leading to a need for high-level, rich representations. Most of the approaches for semantic mapping are concerned with capturing the world around the robot as accurate as possible with the agent being an observer rather than an actor in the environment. We see object belief states as something complementary to these semantic object maps. Perceptual belief states for manipulation tasks do not have to be one hundred percent accurate since interaction with the world can validate or contradict these beliefs. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2461937780"
],
"abstract": [
"Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?"
]
} |
1903.12349 | 2941974930 | The development and design of visualization solutions that are truly usable is essential for ensuring both their adoption and effectiveness. User-centered design principles, which focus on involving users throughout the entire development process, are well suited for visualization and have been shown to be effective in numerous information visualization endeavors. In this paper, we report a two-year-long collaboration with combustion scientists that, by applying these design principles, generated multiple results including an in situ visualization technique and a post hoc probability distribution function (PDF) exploration tool. Furthermore, we examine the importance of user-centered design principles and describe lessons learned over the design process in an effort to aid others who also seek to work with scientists for developing effective and usable scientific visualization solutions. | Making sure that the proposed visualization techniques and systems are useful has been a focus of recent visualization work. To achieve this goal, many researchers have incorporated extensive evaluations. @cite_14 and @cite_62 performed comprehensive reviews of the visualization literature and came up with a categorization of evaluations. As indicated by their research, researchers in scientific visualization have mostly been using algorithmic performance and case studies as the evaluation methods. Even though both are very important techniques, they are generally not enough to demonstrate the usability of a visualization system. With an increasing focus on evaluations, user experience has become an important aspect. In this context, @cite_58 and @cite_17 suggested that visualization designs should meet both usability and user experience goals. | {
"cite_N": [
"@cite_14",
"@cite_58",
"@cite_17",
"@cite_62"
],
"mid": [
"2058203255",
"1526426957",
"2529391596",
"1992743299"
],
"abstract": [
"We take a new, scenario-based look at evaluation in information visualization. Our seven scenarios, evaluating visual data analysis and reasoning, evaluating user performance, evaluating user experience, evaluating environments and work practices, evaluating communication through visualization, evaluating visualization algorithms, and evaluating collaborative data analysis were derived through an extensive literature review of over 800 visualization publications. These scenarios distinguish different study goals and types of research questions and are illustrated through example studies. Through this broad survey and the distillation of these scenarios, we make two contributions. One, we encapsulate the current practices in the information visualization research community and, two, we provide a different approach to reaching decisions about what might be the most effective evaluation of a given information visualization. Scenarios can be used to choose appropriate research questions and goals and the provided examples can be consulted for guidance on how to design one's own study.",
"A revision of the #1 text in the Human Computer Interaction field, Interaction Design, the third edition is an ideal resource for learning the interdisciplinary skills needed for interaction design, human-computer interaction, information design, web design and ubiquitous computing.The authorsare acknowledged leaders and educators in their field, with a strong global reputation. They bring depth of scope to the subject in this new edition, encompassing the latest technologies and devices including social networking, Web 2.0 and mobile devices. The third edition also adds, develops and updates cases, examples and questions to bring the book in line with the latest in Human Computer Interaction.Interaction Design offers a cross-disciplinary, practical and process-oriented approach to Human Computer Interaction, showing not just what principles ought to apply to Interaction Design, but crucially how they can be applied. The book focuses on how to design interactive products that enhance and extend the way people communicate, interact and work. Motivating examples are included to illustrate both technical, but also social and ethical issues, making the book approachable and adaptable for both Computer Science and non-Computer Science users. Interviews with key HCI luminaries are included and provide an insight into current and future trends.The book has an accompanying website www.id-book.com which has been updated to include resources to match the new edition.",
"Traditionally, studies of data visualization techniques and systems have evaluated visualizations with respect to usability goals such as effectiveness and efficiency. These studies assess performance-related metrics such as time and correctness of participants completing analytic tasks. Alternatively, several studies in InfoVis recently have evaluated visualizations by investigating user experience goals such as memorability, engagement, enjoyment and fun. These studies employ somewhat different evaluation methodologies to assess these other goals. The growing number of these studies, their alternative methodologies, and disagreements concerning their importance have motivated us to more carefully examine them. In this article, we review this growing collection of visualization evaluations that examine user experience goals and we discuss multiple issues regarding the studies including questions about their motivation and utility. Our aim is to provide a resource for future work that plans to evaluate visualizations using these goals.",
"We present an assessment of the state and historic development of evaluation practices as reported in papers published at the IEEE Visualization conference. Our goal is to reflect on a meta-level about evaluation in our community through a systematic understanding of the characteristics and goals of presented evaluations. For this purpose we conducted a systematic review of ten years of evaluations in the published papers using and extending a coding scheme previously established by [2012]. The results of our review include an overview of the most common evaluation goals in the community, how they evolved over time, and how they contrast or align to those of the IEEE Information Visualization conference. In particular, we found that evaluations specific to assessing resulting images and algorithm performance are the most prevalent (with consistently 80-90 of all papers since 1997). However, especially over the last six years there is a steady increase in evaluation methods that include participants, either by evaluating their performances and subjective feedback or by evaluating their work practices and their improved analysis and reasoning capabilities using visual tools. Up to 2010, this trend in the IEEE Visualization conference was much more pronounced than in the IEEE Information Visualization conference which only showed an increasing percentage of evaluation through user performance and experience testing. Since 2011, however, also papers in IEEE Information Visualization show such an increase of evaluations of work practices and analysis as well as reasoning using visual tools. Further, we found that generally the studies reporting requirements analyses and domain-specific work practices are too informally reported which hinders cross-comparison and lowers external validity."
]
} |
1903.12349 | 2941974930 | The development and design of visualization solutions that are truly usable is essential for ensuring both their adoption and effectiveness. User-centered design principles, which focus on involving users throughout the entire development process, are well suited for visualization and have been shown to be effective in numerous information visualization endeavors. In this paper, we report a two year long collaboration with combustion scientists that, by applying these design principles, generated multiple results including an in situ visualization technique and a post hoc probability distribution function (PDF) exploration tool. Furthermore, we examine the importance of user-centered design principles and describe lessons learned over the design process in an effort to aid others who also seek to work with scientists for developing effective and usable scientific visualization solutions. | The term user-centered design was first used by Donald Norman @cite_54 @cite_73 , who described a design process that primarily focuses on specific needs of the user rather than less important factors, such as aesthetics. The topics of usability engineering and usability testing were heavily influenced by user-centered design as presented in multiple books @cite_40 @cite_55 @cite_69 . Later in 2010, an ISO standard @cite_4 was established on human-centered design processes for interactive systems, which identifies key activities as: understanding context of use, determining requirements, producing designs, and performing evaluations. | {
"cite_N": [
"@cite_69",
"@cite_4",
"@cite_55",
"@cite_54",
"@cite_40",
"@cite_73"
],
"mid": [
"1558663422",
"",
"1981610984",
"2021878536",
"",
"1528027857"
],
"abstract": [
"From the Publisher: In A Practical Guide to Usability Testing, the authors begin by defining usability, advocating and explaining the methods of usability engineering and reviewing many techniques for assessing and assuring usability throughout the development process. They then take you through all the steps in planning and conducting a usability test, analyzing data, and using the results to improve both products and processes. Written in plain English and filled with examples from many types of products and tests, A Practical Guide to Usability Testing discusses the full range of testing options from quick studies with a few subjects to more formal tests with carefully designed controls. The authors discuss the place of usability laboratories in testing as well as the skills you need to conduct a test. Included are forms that you can use or modify to conduct a usability test and layouts of existing labs that will help you build your own. The authors, a human factors psychologist and a linguist, have extensive experience conducting research on usability, doing usability testing, helping companies set up usability labs and programs, and teaching usability engineering and testing.",
"",
"A supremely usable nuts-and-bolts guide for beginners A daily tool of the trade for specialists Handbook of Usability Testing gives you practical, step-by-step guidelines in plain English. Written by Jeffrey Rubin, it arms beginners with the full complement of proven testing tools and techniques. From software, GUIs, and technical documentation, to medical instruments, VCRs, and exercise bikes, no matter what your product, you'll learn to design and administer extremely reliable tests to ensure that people find it easy and desirable to use. * Requires no engineering or human factors training* A rigorous, step-by-step approach - with an eye to common gaffes and pitfalls - saves you months of trial and error* Liberally peppered with real-life examples and case histories taken from a wide range of industries* Packed with extremely usable templates, models, tables, test plans, and other indispensable tools of the trade",
"Contents: S.W. Draper, D.A. Norman, C. Lewis, Introduction. Part I:User Centered System Design. K. Hooper, Architectural Design: An Analogy. L.J. Bannon, Issues in Design: Some Notes. D.A. Norman, Cognitive Engineering. Part II:The Interface Experience. B.K. Laurel, Interface as Mimesis. E.L. Hutchins, J.D. Hollan, D.A. NormanDirect Manipulation Interfaces. A.A. diSessa, Notes on the Future of Programming: Breaking the Utility Barrier. Part III:Users' Understandings. M.S. Riley, User Understanding. C. Lewis, Understanding What's Happening in System Interactions. D. Owen, Naive Theories of Computation. A.A. diSessa, Models of Computation. W. Mark, Knowledge-Based Interface Design. Part IV:User Activities. A. Cypher, The Structure of Users' Activities. Y. Miyata, D.A. Norman, Psychological Issues in Support of Multiple Activities. R. Reichman, Communication Paradigms for a Window System. Part V:Toward a Pragmatics of Human-Machine Communication. W. Buxton, There's More to Interaction Than Meets the Eye: Some Issues in Manual Input. S.W. Draper, Display Managers as the Basis for User-Machine Communication. Part VI:Information Flow. D. Owen, Answers First, Then Questions. C.E. O'Malley, Helping Users Help Themselves. L.J. Bannon, Helping Users Help Each Other. C. Lewis, D.A. Norman, Designing for Error. L.J. Bannon, Computer-Mediated Communication. Part VII:The Context of Computing. J.S. Brown, From Cognitive to Social Ergonomics and Beyond.",
"",
"Revealing how smart design is the new competitive frontier, this innovative book is a powerful primer on how--and why--some products satisfy customers while others only frustrate them."
]
} |
1903.12349 | 2941974930 | The development and design of visualization solutions that are truly usable is essential for ensuring both their adoption and effectiveness. User-centered design principles, which focus on involving users throughout the entire development process, are well suited for visualization and have been shown to be effective in numerous information visualization endeavors. In this paper, we report a two year long collaboration with combustion scientists that, by applying these design principles, generated multiple results including an in situ visualization technique and a post hoc probability distribution function (PDF) exploration tool. Furthermore, we examine the importance of user-centered design principles and describe lessons learned over the design process in an effort to aid others who also seek to work with scientists for developing effective and usable scientific visualization solutions. | The information visualization community has been utilizing user-centered design processes and developing guidelines for design studies. @cite_37 presented a methodological framework for conducting design studies. The guidelines and pitfalls they present are based on the authors' combined experiences and an extensive literature review. Lloyd and Dykes @cite_52 provided invaluable recommendations on how to incorporate user-centered design in developing geo-visualizations by presenting the experiences they gained from multiple cases in a long-term study. Similar works have also been presented to share the experiences of adopting a user-centered design process @cite_1 @cite_0 @cite_56 . On a finer granularity, the work in @cite_18 specifically aimed to analyze how to better establish requirements in real-world visualization projects. These works have done an effective job of promoting user-centered design, as more and more works in information visualization @cite_2 @cite_19 @cite_24 @cite_8 @cite_33 have adopted the idea. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_1",
"@cite_52",
"@cite_56",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_2"
],
"mid": [
"",
"2735758797",
"",
"2144296370",
"2052034548",
"2161114148",
"2134823802",
"",
"2152223035",
"2494421730",
""
],
"abstract": [
"",
"",
"",
"We enhance a user-centered design process with techniques that deliberately promote creativity to identify opportunities for the visualization of data generated by a major energy supplier. Visualization prototypes developed in this way prove effective in a situation whereby data sets are largely unknown and requirements open - enabling successful exploration of possibilities for visualization in Smart Home data analysis. The process gives rise to novel designs and design metaphors including data sculpting. It suggests: that the deliberate use of creativity techniques with data stakeholders is likely to contribute to successful, novel and effective solutions; that being explicit about creativity may contribute to designers developing creative solutions; that using creativity techniques early in the design process may result in a creative approach persisting throughout the process. The work constitutes the first systematic visualization design for a data rich source that will be increasingly important to energy suppliers and consumers as Smart Meter technology is widely deployed. It is novel in explicitly employing creativity techniques at the requirements stage of visualization design and development, paving the way for further use and study of creativity methods in visualization design.",
"Involving analysts in visualisation design has obvious benefits, but the knowledge-gap between domain experts ('analysts') and visualisation designers ('designers') often makes the degree of their involvement fall short of that aspired. By promoting a culture of mutual learning, understanding and contribution between both analysts and designers from the outset, participants can be raised to a level at which all can usefully contribute to both requirement definition and design. We describe the process we use to do this for tightly-scoped and short design exercises -- with meetings workshops, iterative bursts of design prototyping over relatively short periods of time, and workplace-based evaluation -- illustrating this with examples of our own experience from recent work with bird ecologists.",
"Working with three domain specialists we investigate human-centered approaches to geovisualization following an ISO13407 taxonomy covering context of use, requirements and early stages of design. Our case study, undertaken over three years, draws attention to repeating trends: that generic approaches fail to elicit adequate requirements for geovis application design; that the use of real data is key to understanding needs and possibilities; that trust and knowledge must be built and developed with collaborators. These processes take time but modified human-centred approaches can be effective. A scenario developed through contextual inquiry but supplemented with domain data and graphics is useful to geovis designers. Wireframe, paper and digital prototypes enable successful communication between specialist and geovis domains when incorporating real and interesting data, prompting exploratory behaviour and eliciting previously unconsidered requirements. Paper prototypes are particularly successful at eliciting suggestions, especially for novel visualization. Enabling specialists to explore their data freely with a digital prototype is as effective as using a structured task protocol and is easier to administer. Autoethnography has potential for framing the design process. We conclude that a common understanding of context of use, domain data and visualization possibilities are essential to successful geovis design and develop as this progresses. HC approaches can make a significant contribution here. However, modified approaches, applied with flexibility, are most promising. We advise early, collaborative engagement with data - through simple, transient visual artefacts supported by data sketches and existing designs - before moving to successively more sophisticated data wireframes and data prototypes.",
"Domain Analysis for Data Visualization (DADV) is a technique to use when investigating a domain where data visualizations are going to be designed and added to existing software systems. DADV was used to design the data visualization in VisEIO-LCA, which is a framework to visualize environmental data about products. Most of the visualizations are designed using the following stages: formatting data in tables, selecting visual structures, and rendering the data on the screen. Although many visualization authors perform implicit domain analysis, in this paper domain analysis is added explicitly to the process of designing visualizations with the goal of producing move usable software tools. Environmental Life-Cycle Assessment (LCA) is used as a test bed for this technique.",
"",
"An adaptive bit synchronizer is operable to extract digital data and its associated clock from a transmitted digital signal, and includes a tunable matched filter set for modifying the input signal to correct for deviations in offset and gain, which filter set includes data, transition and derivative matched filters. A sampling device samples the output of the data matched filter for making bit decision and for estimating the reliability thereof. A clock-producing device is connected with the matched filter set for producing at least two clocks, use being made of an optimum phase detector for estimating the time error between the proper clock edge and the actual clock edges. A loop filter circuit smooths the estimates of the proper clock time to generate clock signals, and a device responsive to the average square error of the clock signals varies the loop parameters of the loop filter means to minimize average square phase error.",
"",
""
]
} |
1903.12476 | 2931083803 | Recent progress on salient object detection mainly aims at exploiting how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs). Many state-of-the-art methods impose deep supervision to perform side-output predictions that are linearly aggregated for final saliency prediction. In this paper, we theoretically and experimentally demonstrate that linear aggregation of side-output predictions is suboptimal, and it only makes limited use of the side-output information obtained by deep supervision. To solve this problem, we propose Deeply-supervised Nonlinear Aggregation (DNA) for better leveraging the complementary information of various side-outputs. Compared with existing methods, it i) aggregates side-output features rather than predictions, and ii) adopts nonlinear instead of linear transformations. Experiments demonstrate that DNA can successfully break through the bottleneck of current linear approaches. Specifically, the proposed saliency detector, a modified U-Net architecture with DNA, performs favorably against state-of-the-art methods on various datasets and evaluation metrics without bells and whistles. Code and data will be released upon paper acceptance. | Salient object detection is a very active research field due to its wide range of applications and challenging scenarios. Early methods extract hand-crafted low-level features and apply machine learning models to classify these features @cite_26 @cite_46 @cite_24 . Some heuristic saliency priors are utilized to ensure the accuracy, such as color contrast @cite_14 @cite_33 , center prior @cite_16 @cite_36 and background prior @cite_18 @cite_17 . With vast successes achieved by deep CNNs in computer vision, CNN-based methods have been introduced to improve saliency detection @cite_67 @cite_31 @cite_37 @cite_6 @cite_43 . @cite_1 @cite_4 @cite_50 @cite_47 @cite_25 appeared in the early era of deep learning based saliency. These approaches view each image patch as a basic processing unit to perform saliency detection. More recently, has dominated this field. We continue our discussion by briefly categorizing multi-scale deep learning into four classes: , , , and . | {
"cite_N": [
"@cite_47",
"@cite_36",
"@cite_43",
"@cite_18",
"@cite_67",
"@cite_4",
"@cite_46",
"@cite_17",
"@cite_37",
"@cite_26",
"@cite_6",
"@cite_50",
"@cite_16",
"@cite_25",
"@cite_14",
"@cite_33",
"@cite_1",
"@cite_24",
"@cite_31"
],
"mid": [
"2953227099",
"2161185676",
"2147347517",
"2039313011",
"2777511827",
"1947031653",
"2472480899",
"2047670868",
"",
"",
"",
"1894057436",
"2162681317",
"2270657321",
"2100470808",
"2037954058",
"1942214758",
"2754188632",
"2795251047"
],
"abstract": [
"Recent advances in saliency detection have utilized deep learning to obtain high level features to detect salient regions in a scene. These advances have demonstrated superior results over previous works that utilize hand-crafted low level features for saliency detection. In this paper, we demonstrate that hand-crafted features can provide complementary information to enhance performance of saliency detection that utilizes only high level features. Our method utilizes both high level and low level features for saliency detection under a unified deep learning framework. The high level features are extracted using the VGG-net, and the low level features are compared with other parts of an image to form a low level distance map. The low level distance map is then encoded using a convolutional neural network(CNN) with multiple 1X1 convolutional and ReLU layers. We concatenate the encoded low level distance map and the high level features, and connect them to a fully connected neural network classifier to evaluate the saliency of a query region. Our experiments show that our method can further improve the performance of state-of-the-art deep learning-based saliency detection methods.",
"Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses the supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions lie in two-fold. One is that we show our approach, which integrates the regional contrast, regional property and regional background ness descriptors together to form the master saliency map, is able to produce superior saliency maps to existing algorithms most of which combine saliency maps heuristically computed from different types of features. The other is that we introduce a new regional feature vector, background ness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. The performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-arts.",
"A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches.",
"Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground cues or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevances to the given seeds or queries. We represent the image as a close-loop graph with super pixels as nodes. These nodes are ranked based on the similarity to background and foreground queries, based on affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate the proposed method performs well when against the state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.",
"In light of the powerful learning capability of deep neural networks (DNNs), deep (convolutional) models have been built in recent years to address the task of salient object detection. Although training such deep saliency models can significantly improve the detection performance, it requires large-scale manual supervision in the form of pixel-level human annotation, which is highly labor-intensive and time-consuming. To address this problem, this paper makes the earliest effort to train a deep salient object detector without using any human annotation. The key insight is “supervision by fusion”, i.e., generating useful supervisory signals from the fusion process of weak but fast unsupervised saliency models. Based on this insight, we combine an intra-image fusion stream and a inter-image fusion stream in the proposed framework to generate the learning curriculum and pseudo ground-truth for supervising the training of the deep salient object detector. Comprehensive experiments on four benchmark datasets demonstrate that our method can approach the same network trained with full supervision (within 2-5 performance gap) and, more encouragingly, even outperform a number of fully supervised state-of-the-art approaches.",
"This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global features to describe a set of object candidate regions. Another deep neural network (DNN-G) is trained to predict the saliency score of each object region based on the global features. The final saliency map is generated by a weighted sum of salient object regions. Our method presents two interesting insights. First, local features learned by a supervised scheme can effectively capture local contrast, texture and shape information for saliency detection. Second, the complex relationship between different global saliency cues can be captured by deep networks and exploited principally rather than heuristically. Quantitative and qualitative experiments on several benchmark data sets demonstrate that our algorithm performs favorably against the state-of-the-art methods.",
"In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting an efficient and high quality distance transform algorithm. We further introduce a boundary dissimilarity measure to compliment the shortage of distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves the leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.",
"Recent progresses in salient object detection have exploited the boundary prior, or background information, to assist other saliency cues such as contrast, achieving state-of-the-art results. However, their usage of boundary prior is very simple, fragile, and the integration with other cues is mostly heuristic. In this work, we present new methods to address these issues. First, we propose a robust background measure, called boundary connectivity. It characterizes the spatial layout of image regions with respect to image boundaries and is much more robust. It has an intuitive geometrical interpretation and presents unique benefits that are absent in previous saliency measures. Second, we propose a principled optimization framework to integrate multiple low level cues, including our background measure, to obtain clean and uniform saliency maps. Our formulation is intuitive, efficient and achieves state-of-the-art results on several benchmark datasets.",
"",
"",
"",
"Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. We then propose a refinement method to enhance the spatial coherence of our saliency results. Finally, aggregating multiple saliency maps computed for different levels of image segmentation can further boost the performance, yielding saliency maps better than those generated from a single segmentation. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks, improving the F-Measure by 5.0 and 13.2 respectively on the MSRA-B dataset and our new dataset (HKU-IS), and lowering the mean absolute error by 5.7 and 35.1 respectively on these two datasets.",
"The problem of salient region detection is formulated as the well-studied facility location problem from operations research. High-level priors are combined with low-level features to detect salient regions. Salient region detection is achieved by maximizing a sub modular objective function, which maximizes the total similarities (i.e., total profits) between the hypothesized salient region centers (i.e., facility locations) and their region elements (i.e., clients), and penalizes the number of potential salient regions (i.e., the number of open facilities). The similarities are efficiently computed by finding a closed-form harmonic solution on the constructed graph for an input image. The saliency of a selected region is modeled in terms of appearance and spatial location. By exploiting the sub modularity properties of the objective function, a highly efficient greedy-based optimization algorithm can be employed. This algorithm is guaranteed to be at least a (e - 1) e 0.632-approximation to the optimum. Experimental results demonstrate that our approach outperforms several recently proposed saliency detection approaches.",
"Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been intensively proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we provide a novel deep image saliency computing (DISC) framework for fine-grained image saliency computing. In particular, we model the image saliency from both the coarse-and fine-level observations, and utilize the deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. In particular, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as the input, roughly identifying saliency regions in the global context. Furthermore, we integrate superpixel-based local context information in the first CNN to refine the coarse-level saliency map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce fine-grained and accurate saliency map while preserving object details. For a testing image, the two CNNs collaboratively conduct the saliency computing in one shot. Our DISC framework is capable of uniformly highlighting the objects of interest from complex background while preserving well object details. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and it also generalizes well across data sets without additional training. The executable version of DISC is available online: http: vision.sysu.edu.cn projects DISC .",
"Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than other existing techniques. Our method exploits features of color and luminance, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods with a frequency domain analysis, ground truth, and a salient object segmentation application. Our method outperforms the five algorithms both on the ground-truth evaluation and on the segmentation task by achieving both higher precision and better recall.",
"Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.",
"Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.",
"Finding what is and what is not a salient object can be helpful in developing better features and models in salient object detection (SOD). In this paper, we investigate the images that are selected and discarded in constructing a new SOD dataset and find that many similar candidates, complex shape and low objectness are three main attributes of many non-salient objects. Moreover, objects may have diversified attributes that make them salient. As a result, we propose a novel salient object detector by ensembling linear exemplar regressors. We first select reliable foreground and background seeds using the boundary prior and then adopt locally linear embedding (LLE) to conduct manifold-preserving foregroundness propagation. In this manner, a foregroundness map can be generated to roughly pop-out salient objects and suppress non-salient ones with many similar candidates. Moreover, we extract the shape, foregroundness and attention descriptors to characterize the extracted object proposals, and a linear exemplar regressor is trained to encode how to detect salient proposals in a specific image. Finally, various linear exemplar regressors are ensembled to form a single detector that adapts to various scenarios. Extensive experimental results on 5 dataset and the new SOD dataset show that our approach outperforms 9 state-of-art methods.",
"The success of current deep saliency detection methods heavily depends on the availability of large-scale supervision in the form of per-pixel labeling. Such supervision, while labor-intensive and not always possible, tends to hinder the generalization ability of the learned models. By contrast, traditional handcrafted features based unsupervised saliency detection methods, even though have been surpassed by the deep supervised methods, are generally dataset-independent and could be applied in the wild. This raises a natural question that \"Is it possible to learn saliency maps without using labeled data while improving the generalization ability?\". To this end, we present a novel perspective to unsupervised saliency detection through learning from multiple noisy labeling generated by \"weak\" and \"noisy\" unsupervised handcrafted saliency methods. Our end-to-end deep learning framework for unsupervised saliency detection consists of a latent saliency prediction module and a noise modeling module that work collaboratively and are optimized jointly. Explicit noise modeling enables us to deal with noisy saliency maps in a probabilistic way. Extensive experimental results on various benchmarking datasets show that our model not only outperforms all the unsupervised saliency methods with a large margin but also achieves comparable performance with the recent state-of-the-art supervised deep saliency methods."
]
} |
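The 1903.12476 rows above and below contrast linear aggregation of side-output predictions with nonlinear aggregation of side-output features. The following PyTorch sketch shows only the shape of these two aggregation heads; the module names, channel counts, and the assumption that all side-output feature maps are already upsampled to a common resolution are illustrative choices, not the paper's actual DNA implementation or its backbone.

```python
import torch
import torch.nn as nn

class LinearPredictionAggregation(nn.Module):
    """Baseline scheme: each side-output produces its own 1-channel prediction,
    and the final saliency map is a learned weighted sum (a linear operation)."""
    def __init__(self, side_channels):
        super().__init__()
        self.score_layers = nn.ModuleList(
            [nn.Conv2d(c, 1, kernel_size=1) for c in side_channels])
        self.fuse = nn.Conv2d(len(side_channels), 1, kernel_size=1)  # linear fusion

    def forward(self, side_feats):
        # side_feats: list of tensors, all assumed resized to the same H x W
        preds = [layer(f) for layer, f in zip(self.score_layers, side_feats)]
        return torch.sigmoid(self.fuse(torch.cat(preds, dim=1)))

class NonlinearFeatureAggregation(nn.Module):
    """DNA-style idea: concatenate side-output *features* and pass them through
    nonlinear (conv + ReLU) layers before a single final prediction."""
    def __init__(self, side_channels, hidden=64):
        super().__init__()
        self.agg = nn.Sequential(
            nn.Conv2d(sum(side_channels), hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1))

    def forward(self, side_feats):
        return torch.sigmoid(self.agg(torch.cat(side_feats, dim=1)))

if __name__ == "__main__":
    feats = [torch.randn(1, c, 56, 56) for c in (64, 128, 256)]
    print(LinearPredictionAggregation([64, 128, 256])(feats).shape)  # (1, 1, 56, 56)
    print(NonlinearFeatureAggregation([64, 128, 256])(feats).shape)  # (1, 1, 56, 56)
```

The point of the contrast is that the first head can only reweight per-side saliency scores, while the second can learn nonlinear interactions between side-output features before any score is produced.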
1903.12476 | 2931083803 | Recent progress on salient object detection mainly aims at exploiting how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs). Many state-of-the-art methods impose deep supervision to perform side-output predictions that are linearly aggregated for final saliency prediction. In this paper, we theoretically and experimentally demonstrate that linear aggregation of side-output predictions is suboptimal, and it only makes limited use of the side-output information obtained by deep supervision. To solve this problem, we propose Deeply-supervised Nonlinear Aggregation (DNA) for better leveraging the complementary information of various side-outputs. Compared with existing methods, it i) aggregates side-output features rather than predictions, and ii) adopts nonlinear instead of linear transformations. Experiments demonstrate that DNA can successfully break through the bottleneck of current linear approaches. Specifically, the proposed saliency detector, a modified U-Net architecture with DNA, performs favorably against state-of-the-art methods on various datasets and evaluation metrics without bells and whistles. Code and data will be released upon paper acceptance. | Hyper feature learning: Hyper feature learning @cite_3 @cite_2 is the most intuitive way to learn multi-scale information, as illustrated in fig:frame_comp (a). Examples of this structure for saliency include @cite_11 @cite_23 @cite_7 @cite_65 @cite_70 @cite_44 @cite_27 . These models concatenate or sum multi-scale deep features from multiple layers of backbone nets @cite_11 @cite_23 or branches of the multi-stream nets @cite_7 @cite_65 @cite_70 . The fused hyper features are then used for final saliency predictions. | {
"cite_N": [
"@cite_7",
"@cite_70",
"@cite_65",
"@cite_3",
"@cite_44",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_11"
],
"mid": [
"",
"2528092473",
"",
"1948751323",
"2906243118",
"",
"2799231793",
"2807333821",
"2949370174"
],
"abstract": [
"",
"Traditional saliency models usually adopt hand-crafted image features and human-designed mechanisms to calculate local or global contrast. In this paper, we propose a novel computational saliency model, i.e., deep spatial contextual long-term recurrent convolutional network (DSCLRCN), to predict where people look in natural scenes. DSCLRCN first automatically learns saliency related local features on each image location in parallel. Then, in contrast with most other deep network based saliency models which infer saliency in local contexts, DSCLRCN can mimic the cortical lateral inhibition mechanisms in human visual system to incorporate global contexts to assess the saliency of each image location by leveraging the deep spatial long short-term memory (DSLSTM) model. Moreover, we also integrate scene context modulation in DSLSTM for saliency inference, leading to a novel deep spatial contextual LSTM (DSCLSTM) model. The whole network can be trained end-to-end and works efficiently when testing. Experimental results on two benchmark datasets show that DSCLRCN can achieve state-of-the-art performance on saliency detection. Furthermore, the proposed DSCLSTM model can significantly boost the saliency detection performance by incorporating both global spatial interconnections and scene context modulation, which may uncover novel inspirations for studies on them in computational saliency models.",
"",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.",
"Typically, a salient object detection (SOD) model faces opposite requirements in processing object interiors and boundaries. The features of interiors should be invariant to strong appearance change so as to pop-out the salient object as a whole, while the features of boundaries should be selective to slight appearance change to distinguish salient objects and background. To address this selectivity-invariance dilemma, we propose a novel boundary-aware network with successive dilation for image-based SOD. In this network, the feature selectivity at boundaries is enhanced by incorporating a boundary localization stream, while the feature invariance at interiors is guaranteed with a complex interior perception stream. Moreover, a transition compensation stream is adopted to amend the probable failures in transitional regions between interiors and boundaries. In particular, an integrated successive dilation module is proposed to enhance the feature invariance at interiors and transitional regions. Extensive experiments on six datasets show that the proposed approach outperforms 16 state-of-the-art methods.",
"",
"The categories and appearance of salient objects vary from image to image, therefore, saliency detection is an image-specific task. Due to lack of large-scale saliency training data, using deep neural networks (DNNs) with pretraining is difficult to precisely capture the image-specific saliency cues. To solve this issue, we formulate a zero-shot learning problem to promote existing saliency detectors. Concretely, a DNN is trained as an embedding function to map pixels and the attributes of the salient background regions of an image into the same metric space, in which an image-specific classifier is learned to classify the pixels. Since the image-specific task is performed by the classifier, the DNN embedding effectively plays the role of a general feature extractor. Compared with transferring the learning to a new recognition task using limited data, this formulation makes the DNN learn more effectively from small data. Extensive experiments on five data sets show that our method significantly improves accuracy of existing methods and compares favorably against state-of-the-art approaches.",
"Abstract We present a novel convolutional neural network (CNN) based pipeline which can effectively fuse multi-level information extracted from different intermediate layers generating hybrid convolutional features (HCF) for edge detection. Different from previous methods, the proposed method fuses multi-level information in a feature-map based manner. The produced hybrid convolutional features can be used to perform high-quality edge detection. The edge detector is also computationally efficient, because it detects edges in an image-to-image way without any post-processing. We evaluate the proposed method on three widely used datasets for edge detection including BSDS500, NYUD and Multicue, and also test the method on Pascal VOC’12 dataset for object contour detection. The results show that HCF achieves an improvement in performance over the state-of-the-art methods on all four datasets. On BSDS500 dataset, the efficient version of the proposed approach achieves ODS F-score of 0.804 with a speed of 22 fps and the high-accuracy version achieves ODS F-score of 0.814 with 11 fps .",
"Salient object detection has recently witnessed substantial progress due to powerful features extracted using deep convolutional neural networks (CNNs). However, existing CNN-based methods operate at the patch level instead of the pixel level. Resulting saliency maps are typically blurry, especially near the boundary of salient objects. Furthermore, image patches are treated as independent samples even when they are overlapping, giving rise to significant redundancy in computation and storage. In this CVPR 2016 paper, we propose an end-to-end deep contrast network to overcome the aforementioned limitations. Our deep network consists of two complementary components, a pixel-level fully convolutional stream and a segment-wise spatial pooling stream. The first stream directly produces a saliency map with pixel-level accuracy from an input image. The second stream extracts segment-wise features very efficiently, and better models saliency discontinuities along object boundaries. Finally, a fully connected CRF model can be optionally incorporated to improve spatial coherence and contour localization in the fused result from these two streams. Experimental results demonstrate that our deep model significantly improves the state of the art."
]
} |
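As a companion to the hyper-feature-learning description in the row above, here is a minimal, hypothetical PyTorch sketch of the concatenate-and-predict pattern: features from several backbone stages are reduced, resized to a common resolution, fused into one hyper feature, and used for a single prediction with no deep supervision. The stage channel counts and the bilinear upsampling choice are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperFeatureHead(nn.Module):
    """Fuse multi-stage backbone features into one 'hyper feature' map."""
    def __init__(self, stage_channels=(64, 128, 256, 512), out_channels=128):
        super().__init__()
        # 1x1 convs bring every stage to a common channel count
        self.reduce = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in stage_channels])
        self.predict = nn.Conv2d(out_channels * len(stage_channels), 1, kernel_size=1)

    def forward(self, stage_feats):
        target_hw = stage_feats[0].shape[-2:]  # use the finest stage's resolution
        resized = [
            F.interpolate(r(f), size=target_hw, mode="bilinear", align_corners=False)
            for r, f in zip(self.reduce, stage_feats)
        ]
        hyper = torch.cat(resized, dim=1)      # the fused hyper feature
        return torch.sigmoid(self.predict(hyper))

if __name__ == "__main__":
    feats = [torch.randn(1, c, s, s)
             for c, s in zip((64, 128, 256, 512), (112, 56, 28, 14))]
    print(HyperFeatureHead()(feats).shape)  # (1, 1, 112, 112)
```

U-Net-style detectors, discussed in the next row, differ in that they fuse these stage features progressively from the top layers down rather than in a single concatenation.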
1903.12476 | 2931083803 | Recent progress on salient object detection mainly aims at exploiting how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs). Many state-of-the-art methods impose deep supervision to perform side-output predictions that are linearly aggregated for final saliency prediction. In this paper, we theoretically and experimentally demonstrate that linear aggregation of side-output predictions is suboptimal, and it only makes limited use of the side-output information obtained by deep supervision. To solve this problem, we propose Deeply-supervised Nonlinear Aggregation (DNA) for better leveraging the complementary information of various side-outputs. Compared with existing methods, it i) aggregates side-output features rather than predictions, and ii) adopts nonlinear instead of linear transformations. Experiments demonstrate that DNA can successfully break through the bottleneck of current linear approaches. Specifically, the proposed saliency detector, a modified U-Net architecture with DNA, performs favorably against state-of-the-art methods on various datasets and evaluation metrics without bells and whistles. Code and data will be released upon paper acceptance. | U-Net style: It is widely accepted that the top layers of deep neural networks contain high-level semantic information, while the bottom layers learn low-level fine details. Therefore, a reasonable revision of hyper feature learning is to progressively fuse deep features from upper layers to lower layers @cite_54 @cite_66 , as shown in fig:frame_comp (b). Many saliency detectors are of this type @cite_20 @cite_28 @cite_35 @cite_63 @cite_56 @cite_42 @cite_0 @cite_45 . Note that hyper feature learning and U-Net do not apply deep supervision, so they . | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_54",
"@cite_42",
"@cite_56",
"@cite_0",
"@cite_45",
"@cite_63",
"@cite_66",
"@cite_20"
],
"mid": [
"2744613561",
"2963906836",
"2952632681",
"2799074129",
"",
"2895251968",
"2472782738",
"2740652190",
"2952232639",
"2519528544"
],
"abstract": [
"Saliency detection aims to highlight the most relevant objects in an image. Methods using conventional models struggle whenever salient objects are pictured on top of a cluttered background while deep neural nets suffer from excess complexity and slow evaluation speeds. In this paper, we propose a simplified convolutional neural network which combines local and global information through a multi-resolution 4×5 grid structure. Instead of enforcing spacial coherence with a CRF or superpixels as is usually the case, we implemented a loss function inspired by the Mumford-Shah functional which penalizes errors on the boundary. We trained our model on the MSRA-B dataset, and tested it on six different saliency benchmark datasets. Results show that our method is on par with the state-of-the-art while reducing computation time by a factor of 18 to 100 times, enabling near real-time, high performance saliency detection.",
"Deep convolutional neural networks (CNNs) have delivered superior performance in many computer vision tasks. In this paper, we propose a novel deep fully convolutional network model for accurate salient object detection. The key contribution of this work is to learn deep uncertain convolutional features (UCF), which encourage the robustness and accuracy of saliency detection. We achieve this via introducing a reformulated dropout (R-dropout) after specific convolutional layers to construct an uncertain ensemble of internal feature units. In addition, we propose an effective hybrid upsampling method to reduce the checkerboard artifacts of deconvolution operators in our decoder network. The proposed methods can also be applied to other deep convolutional networks. Compared with existing saliency detection methods, the proposed UCF model is able to incorporate uncertainties for more accurate object boundary inference. Extensive experiments demonstrate that our proposed saliency model performs favorably against state-ofthe-art approaches. The uncertain feature learning mechanism as well as the upsampling method can significantly improve performance on other pixel-wise vision tasks.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Recent progress on salient object detection is beneficial from Fully Convolutional Neural Network (FCN). The saliency cues contained in multi-level convolutional features are complementary for detecting salient objects. How to integrate multi-level features becomes an open problem in saliency detection. In this paper, we propose a novel bi-directional message passing model to integrate multi-level features for salient object detection. At first, we adopt a Multi-scale Context-aware Feature Extraction Module (MCFEM) for multi-level feature maps to capture rich context information. Then a bi-directional structure is designed to pass messages between multi-level features, and a gate function is exploited to control the message passing rate. We use the features after message passing, which simultaneously encode semantic information and spatial details, to predict saliency maps. Finally, the predicted results are efficiently combined to generate the final saliency map. Quantitative and qualitative experiments on five benchmark datasets demonstrate that our proposed model performs favorably against the state-of-the-art methods under different evaluation metrics.",
"",
"In recent years, deep Convolutional Neural Networks (CNNs) have broken all records in salient object detection. However, training such a deep model requires a large amount of manual annotations. Our goal is to overcome this limitation by automatically converting an existing deep contour detection model into a salient object detection model without using any manual salient object masks. For this purpose, we have created a deep network architecture, namely Contour-to-Saliency Network (C2S-Net), by grafting a new branch onto a well-trained contour detection network. Therefore, our C2S-Net has two branches for performing two different tasks: (1) predicting contours with the original contour branch, and (2) estimating per-pixel saliency score of each image with the newly-added saliency branch. To bridge the gap between these two tasks, we further propose a contour-to-saliency transferring method to automatically generate salient object masks which can be used to train the saliency branch from outputs of the contour branch. Finally, we introduce a novel alternating training pipeline to gradually update the network parameters. In this scheme, the contour branch generates saliency masks for training the saliency branch, while the saliency branch, in turn, feeds back saliency knowledge in the form of saliency-aware contour labels, for fine-tuning the contour branch. The proposed method achieves state-of-the-art performance on five well-known benchmarks, outperforming existing fully supervised methods while also maintaining high efficiency.",
"In this paper we consider the problem of visual saliency modeling, including both human gaze prediction and salient object segmentation. The overarching goal of the paper is to identify high level considerations relevant to deriving more sophisticated visual saliency models. A deep learning model based on fully convolutional networks (FCNs) is presented, which shows very favorable performance across a wide variety of benchmarks relative to existing proposals. We also demonstrate that the manner in which training data is selected, and ground truth treated is critical to resulting model behaviour. Recent efforts have explored the relationship between human gaze and salient objects, and we also examine this point further in the context of FCNs. Close examination of the proposed and alternative models serves as a vehicle for identifying problems important to developing more comprehensive models going forward.",
"Deep learning has been applied to saliency detection in recent years. The superior performance has proved that deep networks can model the semantic properties of salient objects. Yet it is difficult for a deep network to discriminate pixels belonging to similar receptive fields around the object boundaries, thus deep networks may output maps with blurred saliency and inaccurate boundaries. To tackle such an issue, in this work, we propose a deep Level Set network to produce compact and uniform saliency maps. Our method drives the network to learn a Level Set function for salient objects so it can output more accurate boundaries and compact saliency. Besides, to propagate saliency information among pixels and recover full resolution saliency map, we extend a superpixel-based guided filter to be a layer in the network. The proposed network has a simple structure and is trained end-to-end. During testing, the network can produce saliency maps by efficiently feedforwarding testing images at a speed over 12FPS on GPUs. Evaluations on benchmark datasets show that the proposed method achieves state-of-the-art performance.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"Deep networks have been proved to encode high level semantic features and delivered superior performance in saliency detection. In this paper, we go one step further by developing a new saliency model using recurrent fully convolutional networks (RFCNs). Compared with existing deep network based methods, the proposed network is able to incorporate saliency prior knowledge for more accurate inference. In addition, the recurrent architecture enables our method to automatically learn to refine the saliency map by correcting its previous errors. To train such a network with numerous parameters, we propose a pre-training strategy using semantic segmentation data, which simultaneously leverages the strong supervision of segmentation tasks for better training and enables the network to capture generic representations of objects for saliency detection. Through extensive experimental evaluations, we demonstrate that the proposed method compares favorably against state-of-the-art approaches, and that the proposed recurrent deep model as well as the pre-training method can significantly improve performance."
]
} |
1903.12476 | 2931083803 | Recent progress on salient object detection mainly aims at exploiting how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs). Many state-of-the-art methods impose deep supervision to perform side-output predictions that are linearly aggregated for final saliency prediction. In this paper, we theoretically and experimentally demonstrate that linear aggregation of side-output predictions is suboptimal, and it only makes limited use of the side-output information obtained by deep supervision. To solve this problem, we propose Deeply-supervised Nonlinear Aggregation (DNA) for better leveraging the complementary information of various side-outputs. Compared with existing methods, it i) aggregates side-output features rather than predictions, and ii) adopts nonlinear instead of linear transformations. Experiments demonstrate that DNA can successfully break through the bottleneck of current linear approaches. Specifically, the proposed saliency detector, a modified U-Net architecture with DNA, performs favorably against state-of-the-art methods on various datasets and evaluation metrics without bells and whistles. Code and data will be released upon paper acceptance. | U-Net + HED style: These methods combine the advantages of both U-Net and HED. We outline this architecture in fig:frame_comp (d). Specifically, deep supervision is imposed at each convolution stage of the U-Net decoder. Many recent saliency models fall into this category @cite_41 @cite_71 @cite_39 @cite_60 @cite_62 @cite_57 @cite_49 @cite_52 @cite_34 . They differ from each other by applying different fusion strategies. One notable similarity of these models is that the final prediction is produced by a linear aggregation of side-output predictions. Hence the multi-scale learning is achieved in two stages: i) the U-Net aggregates multi-level convolutional features from top layers to bottom layers in an encoder-decoder form; ii) the multi-scale side-output predictions are further linearly aggregated for final prediction. Some models have designed very complex feature fusion strategies for this purpose @cite_39 @cite_62 . | {
"cite_N": [
"@cite_62",
"@cite_60",
"@cite_41",
"@cite_52",
"@cite_39",
"@cite_57",
"@cite_71",
"@cite_49",
"@cite_34"
],
"mid": [
"2744263836",
"2963342032",
"2798791651",
"2914856806",
"",
"2461475918",
"",
"2772161954",
"2910681974"
],
"abstract": [
"Fully convolutional neural networks (FCNs) have shown outstanding performance in many dense labeling problems. One key pillar of these successes is mining relevant information from features in convolutional layers. However, how to better aggregate multi-level convolutional feature maps for salient object detection is underexplored. In this work, we present Amulet, a generic aggregating multi-level convolutional feature framework for salient object detection. Our framework first integrates multi-level feature maps into multiple resolutions, which simultaneously incorporate coarse semantics and fine details. Then it adaptively learns to combine these feature maps at each resolution and predict saliency maps with the combined features. Finally, the predicted results are efficiently fused to generate the final saliency map. In addition, to achieve accurate boundary inference and semantic enhancement, edge-aware feature maps in low-level layers and the predicted results of low resolution features are recursively embedded into the learning framework. By aggregating multi-level convolutional features in this efficient and flexible manner, the proposed saliency model provides accurate salient object labeling. Comprehensive experiments demonstrate that our method performs favorably against state-of-the art approaches in terms of near all compared evaluation metrics.",
"Salient object detection is a problem that has been considered in detail and many solutions proposed. In this paper, we argue that work to date has addressed a problem that is relatively ill-posed. Specifically, there is not universal agreement about what constitutes a salient object when multiple observers are queried. This implies that some objects are more likely to be judged salient than others, and implies a relative rank exists on salient objects. The solution presented in this paper solves this more general problem that considers relative rank, and we propose data and metrics suitable to measuring success in a relative object saliency landscape. A novel deep learning solution is proposed based on a hierarchical representation of relative saliency and stage-wise refinement. We also show that the problem of salient object subitizing can be addressed with the same network, and our approach exceeds performance of any prior work across all metrics considered (both traditional and newly proposed).",
"Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.",
"To detect salient objects accurately, existing methods usually design complex backbone network architectures to learn and fuse powerful features. However, the saliency inference module that performs saliency prediction from the fused features receives much less attention on its architecture design and typically adopts only a few fully convolutional layers. In this paper, we find the limited capacity of the saliency inference module indeed makes a fundamental performance bottleneck, and enhancing its capacity is critical for obtaining better saliency prediction. Correspondingly, we propose a deep yet light-weight saliency inference module that adopts a multi-dilated depth-wise convolution architecture. Such a deep inference module, though with simple architecture, can directly perform reasoning about salient objects from the multi-scale convolutional features fast, and give superior salient object detection performance with less computational cost. To our best knowledge, we are the first to reveal the importance of the inference module for salient object detection, and present a novel architecture design with attractive efficiency and accuracy. Extensive experimental evaluations demonstrate that our simple framework performs favorably compared with the state-of-the-art methods with complex backbone design.",
"",
"Traditional1 salient object detection models often use hand-crafted features to formulate contrast and various prior knowledge, and then combine them artificially. In this work, we propose a novel end-to-end deep hierarchical saliency network (DHSNet) based on convolutional neural networks for detecting salient objects. DHSNet first makes a coarse global prediction by automatically learning various global structured saliency cues, including global contrast, objectness, compactness, and their optimal combination. Then a novel hierarchical recurrent convolutional neural network (HRCNN) is adopted to further hierarchically and progressively refine the details of saliency maps step by step via integrating local context information. The whole architecture works in a global to local and coarse to fine manner. DHSNet is directly trained using whole images and corresponding ground truth saliency masks. When testing, saliency maps can be generated by directly and efficiently feedforwarding testing images through the network, without relying on any other techniques. Evaluations on four benchmark datasets and comparisons with other 11 state-of-the-art algorithms demonstrate that DHSNet not only shows its significant superiority in terms of performance, but also achieves a real-time speed of 23 FPS on modern GPUs.",
"",
"Subitizing (i.e., instant judgement on the number) and detection of salient objects are human inborn abilities. These two tasks influence each other in the human visual system. In this paper, we delve into the complementarity of these two tasks. We propose a multi-task deep neural network with weight prediction for salient object detection, where the parameters of an adaptive weight layer are dynamically determined by an auxiliary subitizing network. The numerical representation of salient objects is therefore embedded into the spatial representation. The proposed joint network can be trained end-to-end using backpropagation. Experiments show the proposed multi-task network outperforms existing multi-task architectures, and the auxiliary subitizing network provides strong guidance to salient object detection by reducing false positives and producing coherent saliency maps. Moreover, the proposed method is an unconstrained method able to handle images with without salient objects. Finally, we show state-of-theart performance on different salient object datasets.",
"Recent Salient Object Detection (SOD) systems are mostly based on Convolutional Neural Networks (CNNs). Specifically, Deeply Supervised Saliency (DSS) system has shown it is very useful to add short connections to the network and supervising on the side output. In this work, we propose a new SOD system which aims at designing a more efficient and effective way to pass back global information. Richer and Deeper Supervision (RDS) is applied to better combine features from each side output without demanding much extra computational space. Meanwhile, the backbone network used for SOD is normally pre-trained on the object classification dataset, ImageNet. But the pre-trained model has been trained on cropped images in order to only focus on distinguishing features within the region of the object. But the ignored background information is also significant in the task of SOD. We try to solve this problem by introducing the training data designed for object detection. A coarse global information is learned based on an entire image with its bounding box before training on the SOD dataset. The large-scale of object images can slightly improve the performance of SOD. Our experiment shows the proposed RDS network achieves the state-of-the-art results on five public SOD datasets."
]
} |
1903.12476 | 2931083803 | Recent progress on salient object detection mainly aims at exploiting how to effectively integrate multi-scale convolutional features in convolutional neural networks (CNNs). Many state-of-the-art methods impose deep supervision to perform side-output predictions that are linearly aggregated for final saliency prediction. In this paper, we theoretically and experimentally demonstrate that linear aggregation of side-output predictions is suboptimal, and it only makes limited use of the side-output information obtained by deep supervision. To solve this problem, we propose Deeply-supervised Nonlinear Aggregation (DNA) for better leveraging the complementary information of various side-outputs. Compared with existing methods, it i) aggregates side-output features rather than predictions, and ii) adopts nonlinear instead of linear transformations. Experiments demonstrate that DNA can successfully break through the bottleneck of current linear approaches. Specifically, the proposed saliency detector, a modified U-Net architecture with DNA, performs favorably against state-of-the-art methods on various datasets and evaluation metrics without bells and whistles. Code and data will be released upon paper acceptance. | A full literature review of salient object detection is beyond the scope of this paper. Please refer to @cite_29 @cite_38 @cite_32 @cite_59 for more comprehensive surveys. We find that the upper bound of traditional linear side-output prediction aggregation is limited to the side-output predictions. Hence we propose DNA to aggregate side-output features in a nonlinear way, so that the aggregated hybrid features can make good use of the complementary multi-scale deep features. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_32",
"@cite_59"
],
"mid": [
"2963503775",
"1772076007",
"2962966828",
""
],
"abstract": [
"How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and this results from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building off the differences in metric properties and behaviors, we make recommendations for metric selections under specific assumptions and for specific applications.",
"We extensively compare, qualitatively and quantitatively, 41 state-of-the-art models (29 salient object detection, 10 fixation prediction, 1 objectness, and 1 baseline) over seven challenging data sets for the purpose of benchmarking salient object detection and segmentation methods. From the results obtained so far, our evaluation shows a consistent rapid progress over the last few years in terms of both accuracy and running time. The top contenders in this benchmark significantly outperform the models identified as the best in the previous benchmark conducted three years ago. We find that the models designed specifically for salient object detection generally work better than models in closely related areas, which in turn provides a precise definition and suggests an appropriate treatment of this problem that distinguishes it from other problems. In particular, we analyze the influences of center bias and scene complexity in model performance, which, along with the hard cases for the state-of-the-art models, provide useful hints toward constructing more challenging large-scale data sets and better saliency models. Finally, we propose probable solutions for tackling several open problems, such as evaluation scores and data set bias, which also suggest future research directions in the rapidly growing field of salient object detection.",
"Visual saliency detection model simulates the human visual system to perceive the scene, and has been widely used in many vision tasks. With the development of acquisition technology, more comprehensive information, such as depth cue, inter-image correspondence, or temporal relationship, is available to extend image saliency detection to RGBD saliency detection, co-saliency detection, or video saliency detection. RGBD saliency detection model focuses on extracting the salient regions from RGBD images by combining the depth information. Co-saliency detection model introduces the inter-image correspondence constraint to discover the common salient object in an image group. The goal of video saliency detection model is to locate the motion-related salient object in video sequences, which considers the motion cue and spatiotemporal constraint jointly. In this paper, we review different types of saliency detection algorithms, summarize the important issues of the existing methods, and discuss the existent problems and future works. Moreover, the evaluation datasets and quantitative measurements are briefly introduced, and the experimental analysis and discission are conducted to provide a holistic overview of different saliency detection methods.",
""
]
} |
1903.12493 | 2932868752 | Due to its fast retrieval and storage efficiency capabilities, hashing has been widely used in nearest neighbor retrieval tasks. By using deep learning-based techniques, hashing can outperform non-learning-based hashing technique in many applications. However, we argue that the current deep learning-based hashing methods ignore some critical problems (e.g., the learned hash codes are not discriminative due to the hashing methods being unable to discover rich semantic information and the training strategy having difficulty optimizing the discrete binary codes). In this paper, we propose a novel image hashing method, termed as asymmetric deep semantic quantization (ADSQ). The ADSQ is implemented using three stream frameworks, which consist of one LabelNet and two ImgNets. The LabelNet leverages the power of three fully-connected layers, which are used to capture rich semantic information between image pairs. For the two ImgNets, they each adopt the same convolutional neural network structure but with different weights (i.e., asymmetric convolutional neural networks). The two ImgNets are used to generate discriminative compact hash codes. Specifically, the function of the LabelNet is to capture rich semantic information that is used to guide the two ImgNets in minimizing the gap between the real-continuous features and the discrete binary codes. Furthermore, the ADSQ can utilize the most critical semantic information to guide the feature learning process and consider the consistency of the common semantic space and Hamming space. The experimental results on three benchmarks (i.e., CIFAR-10, NUS-WIDE, and ImageNet) demonstrate that the proposed ADSQ can outperform current state-of-the-art methods. | By representing images as binary codes and taking advantage of fast query retrieval, the use of hashing techniques in image retrieval has attracted considerable attention. Wang @cite_1 have provided a comprehensive literature survey that covers the most important methods and latest advances in information retrieval. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2411707397"
],
"abstract": [
"Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics."
]
} |
1903.12493 | 2932868752 | Due to its fast retrieval and storage efficiency capabilities, hashing has been widely used in nearest neighbor retrieval tasks. By using deep learning-based techniques, hashing can outperform non-learning-based hashing technique in many applications. However, we argue that the current deep learning-based hashing methods ignore some critical problems (e.g., the learned hash codes are not discriminative due to the hashing methods being unable to discover rich semantic information and the training strategy having difficulty optimizing the discrete binary codes). In this paper, we propose a novel image hashing method, termed as asymmetric deep semantic quantization (ADSQ). The ADSQ is implemented using three stream frameworks, which consist of one LabelNet and two ImgNets. The LabelNet leverages the power of three fully-connected layers, which are used to capture rich semantic information between image pairs. For the two ImgNets, they each adopt the same convolutional neural network structure but with different weights (i.e., asymmetric convolutional neural networks). The two ImgNets are used to generate discriminative compact hash codes. Specifically, the function of the LabelNet is to capture rich semantic information that is used to guide the two ImgNets in minimizing the gap between the real-continuous features and the discrete binary codes. Furthermore, the ADSQ can utilize the most critical semantic information to guide the feature learning process and consider the consistency of the common semantic space and Hamming space. The experimental results on three benchmarks (i.e., CIFAR-10, NUS-WIDE, and ImageNet) demonstrate that the proposed ADSQ can outperform current state-of-the-art methods. | To overcome the limitations of data-independent methods, data-dependent methods attain more compact hash codes by learning from the dataset, achieving better retrieval accuracy. Data-dependent methods can be further categorized into supervised and unsupervised methods. Unsupervised hashing methods learn hash functions that encode data points to binary codes by training on unlabeled data. Typical learning criteria include minimizing the reconstruction error @cite_7 @cite_45 @cite_14 @cite_23 , graph-based hashing @cite_25 @cite_31 , and minimizing the quantization error @cite_27 . Supervised methods utilize semantic labels or relevance information to improve the quality of hash codes. For example, Supervised Hashing with Kernels (KSH) @cite_18 and Supervised Discrete Hashing (SDH) @cite_16 generate binary hash codes by minimizing the Hamming distances across similar pairs of data points. Distortion Minimization Hashing (DMS) @cite_19 , Minimal Loss Hashing (MLH) @cite_13 , and Order Preserving Hashing (OPH) @cite_21 learn hash codes by minimizing a triplet loss based on similar pairs of data points. However, if the feature distribution of a dataset is complex, the performance of these methods will decrease. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_21",
"@cite_19",
"@cite_27",
"@cite_45",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_25"
],
"mid": [
"1992371516",
"2084363474",
"1974647172",
"2089632823",
"2767123230",
"2412709527",
"2124509324",
"2772756366",
"2251864938",
"",
"2221852422",
""
],
"abstract": [
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 .",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"In this paper, we propose a novel method to learn similarity-preserving hash functions for approximate nearest neighbor (NN) search. The key idea is to learn hash functions by maximizing the alignment between the similarity orders computed from the original space and the ones in the hamming space. The problem of mapping the NN points into different hash codes is taken as a classification problem in which the points are categorized into several groups according to the hamming distances to the query. The hash functions are optimized from the classifiers pooled over the training points. Experimental results demonstrate the superiority of our approach over existing state-of-the-art hashing techniques.",
"Application of the hashing method to large-scale image retrieval has drawn much attention because of the high efficiency and favorable accuracy of the method. Its related research generally involves two basic problems: similarity-preserving projection and information-preserving quantization. Most previous works focused on learning projection approaches, while the importance of quantization strategies was ignored. Although several hashing quantization models have been recently proposed to improve retrieval performance by assigning multiple bits to projected directions, these models still suffer from suboptimal results, as the critical information loss that occurs in the quantization procedure is not considered. In this paper, to construct an effective quantization model, we utilize rate-distortion theory in the hashing quantization procedure and minimize the distortion to reduce the information loss. Furthermore, combining principal component analysis with our quantization strategy, we present a quantization-based hashing method named distortion minimization hashing. Extensive experiments involving one synthetic data set and three image data sets demonstrate the superior performance of our proposed methods over several quantization techniques and state-of-the-art hashing methods.",
"Hashing has been proved as an attractive solution to approximate nearest neighbor search, owing to its theoretical guarantee and computational efficiency. Though most of prior hashing algorithms can achieve low memory and computation consumption by pursuing compact hash codes, however, they are still far beyond the capability of learning discriminative hash functions from the data with complex inherent structure among them. To address this issue, in this paper, we propose a structure sensitive hashing based on cluster prototypes, which explicitly exploits both global and local structures. An alternating optimization algorithm, respectively, minimizing the quantization loss and spectral embedding loss, is presented to simultaneously discover the cluster prototypes for each hash function, and optimally assign unique binary codes to them satisfying the affinity alignment between them. For hash codes of a desired length, an adaptive bit assignment is further appended to the product quantization of the subspaces, approximating the Hamming distances and meanwhile balancing the variance among hash functions. Experimental results on four large-scale benchmarks CIFAR-10, NUS-WIDE, SIFT1M, and GIST1M demonstrate that our approach significantly outperforms state-of-the-art hashing methods in terms of semantic and metric neighbor search.",
"This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"Nearest neighbor search is a fundamental problem in various domains, such as computer vision, data mining, and machine learning. With the explosive growth of data on the Internet, many new data structures using spatial partitions and recursive hyperplane decomposition (e.g., k-d trees) are proposed to speed up the nearest neighbor search. However, these data structures are facing big data challenges. To meet these challenges, binary hashing-based approximate nearest neighbor search methods attract substantial attention due to their fast query speed and drastically reduced storage. Since the most notably locality sensitive hashing was proposed, a large number of binary hashing methods have emerged. In this paper, we first illustrate the development of binary hashing research by proposing an overall and clear classification of them. Then we conduct extensive experiments to compare the performance of these methods on five famous and public data sets. Finally, we present our view on this topic.",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.",
"",
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
""
]
} |
1903.12359 | 2926893252 | Conformal surface parameterization is useful in graphics, imaging and visualization, with applications to texture mapping, atlas construction, registration, remeshing and so on. With the increasing capability in scanning and storing data, dense 3D surface meshes are common nowadays. While meshes with higher resolution better resemble smooth surfaces, they pose computational difficulties for the existing parameterization algorithms. In this work, we propose a novel parallelizable algorithm for computing the global conformal parameterization of simply-connected surfaces via partial welding maps. A given simply-connected surface is first partitioned into smaller subdomains. The local conformal parameterizations of all subdomains are then computed in parallel. The boundaries of the parameterized subdomains are subsequently integrated consistently using a novel technique called partial welding, which is developed based on conformal welding theory. Finally, by solving the Laplace equation for each subdomain using the updated boundary conditions, we obtain a global conformal parameterization of the given surface, with bijectivity guaranteed by quasi-conformal theory. By including additional shape constraints, our method can be easily extended to achieve disk conformal parameterization for simply-connected open surfaces and spherical conformal parameterization for genus-0 closed surfaces. Experimental results are presented to demonstrate the effectiveness of our proposed algorithm. When compared to the state-of-the-art conformal parameterization methods, our method achieves a significant improvement in both computational time and accuracy. | Surface parameterization has been widely studied in geometry processing. For an overview of the subject, readers are referred to the surveys @cite_39 @cite_47 @cite_0 . It is well-known that only developable surfaces can be isometrically flattened without any distortions in area and angle. For general surfaces, it is unavoidable to introduce distortions in area or angle (or both) under parameterization. This limitation leads to two major classes of surface parameterization algorithms, namely the area-preserving parameterizations and angle-preserving (conformal) parameterizations. | {
"cite_N": [
"@cite_0",
"@cite_47",
"@cite_39"
],
"mid": [
"2085499357",
"",
"1621614599"
],
"abstract": [
"Mesh parameterization is a powerful geometry processing tool with numerous computer graphics applications, from texture mapping to animation transfer. This course outlines its mathematical foundations, describes recent methods for parameterizing meshes over various domains, discusses emerging tools like global parameterization and inter-surface mapping, and demonstrates a variety of parameterization applications.",
"",
"This paper provides a tutorial and survey of methods for parameterizing surfaces with a view to applications in geometric modelling and computer graphics. We gather various concepts from differential geometry which are relevant to surface mapping and use them to understand the strengths and weaknesses of the many methods for parameterizing piecewise linear surfaces and their relationship to one another."
]
} |
1903.12359 | 2926893252 | Conformal surface parameterization is useful in graphics, imaging and visualization, with applications to texture mapping, atlas construction, registration, remeshing and so on. With the increasing capability in scanning and storing data, dense 3D surface meshes are common nowadays. While meshes with higher resolution better resemble smooth surfaces, they pose computational difficulties for the existing parameterization algorithms. In this work, we propose a novel parallelizable algorithm for computing the global conformal parameterization of simply-connected surfaces via partial welding maps. A given simply-connected surface is first partitioned into smaller subdomains. The local conformal parameterizations of all subdomains are then computed in parallel. The boundaries of the parameterized subdomains are subsequently integrated consistently using a novel technique called partial welding, which is developed based on conformal welding theory. Finally, by solving the Laplace equation for each subdomain using the updated boundary conditions, we obtain a global conformal parameterization of the given surface, with bijectivity guaranteed by quasi-conformal theory. By including additional shape constraints, our method can be easily extended to achieve disk conformal parameterization for simply-connected open surfaces and spherical conformal parameterization for genus-0 closed surfaces. Experimental results are presented to demonstrate the effectiveness of our proposed algorithm. When compared to the state-of-the-art conformal parameterization methods, our method achieves a significant improvement in both computational time and accuracy. | Existing methods for area-preserving parameterizations include the locally authalic map @cite_9 , Lie advection @cite_24 , optimal mass transport (OMT) @cite_25 @cite_48 , density-equalizing map (DEM) @cite_33 @cite_8 and stretch energy minimization (SEM) @cite_44 . While the area elements can be preserved under area-preserving parameterizations, the angular distortion is uncontrolled. Since the angular distortion is related to the local geometry of the surfaces, it is important to minimize the angular distortion in many applications such as remeshing, texture mapping and cartography. In those cases, it is preferable to use conformal parameterization. | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_48",
"@cite_9",
"@cite_24",
"@cite_44",
"@cite_25"
],
"mid": [
"2605879901",
"2904291448",
"",
"1976584051",
"2096817454",
"2963608135",
"2065140986"
],
"abstract": [
"In this paper, we are concerned with the problem of creating flattening maps of simply connected open surfaces in @math . Using a natural principle of density diffusion in physics, we propose an effective algorithm for computing density-equalizing maps with any prescribed density distribution. By varying the initial density distribution, a large variety of flattening maps with different properties can be achieved. For instance, area-preserving parameterizations of simply connected open surfaces can be easily computed. Experimental results are presented to demonstrate the effectiveness of our proposed method. Applications to data visualization and surface remeshing are explored.",
"Author(s): Choi, Gary PT; Chiu, Bernard; Rycroft, Chris H | Abstract: Carotid atherosclerosis is a focal disease at the bifurcations of the carotid artery. To quantitatively monitor the local changes in the vessel-wall-plus-plaque thickness (VWT) and compare the VWT distributions for different patients or for the same patients at different ultrasound scanning sessions, a mapping technique is required to adjust for the geometric variability of different carotid artery models. In this work, we propose a novel method called density-equalizing reference map (DERM) for mapping 3D carotid surfaces to a standardized 2D carotid template, with an emphasis on preserving the local geometry of the carotid surface by minimizing the local area distortion. The initial map was generated by a previously described arc-length scaling (ALS) mapping method, which projects a 3D carotid surface onto a 2D non-convex L-shaped domain. A smooth and area-preserving flattened map was subsequently constructed by deforming the ALS map using the proposed algorithm that combines the density-equalizing map and the reference map techniques. This combination allows, for the first time, one-to-one mapping from a 3D surface to a standardized non-convex planar domain in an area-preserving manner. Evaluations using 20 carotid surface models show that the proposed method reduced the area distortion of the flattening maps by over 80 as compared to the ALS mapping method.",
"",
"Parameterization of discrete surfaces is a fundamental and widely-used operation in graphics, required, for instance, for texture mapping or remeshing. As 3D data becomes more and more detailed, there is an increased need for fast and robust techniques to automatically compute least-distorted parameterizations of large meshes. In this paper, we present new theoretical and practical results on the parameterization of triangulated surface patches. Given a few desirable properties such as rotation and translation invariance, we show that the only admissible parameterizations form a two-dimensional set and each parameterization in this set can be computed using a simple, sparse, linear system. Since these parameterizations minimize the distortion of different intrinsic measures of the original mesh, we call them Intrinsic Parameterizations. In addition to this partial theoretical analysis, we propose robust, efficient and tunable tools to obtain least-distorted parameterizations automatically. In particular, we give details on a novel, fast technique to provide an optimal mapping without fixing the boundary positions, thus providing a unique Natural Intrinsic Parameterization. Other techniques based on this parameterization family, designed to ease the rapid design of parameterizations, are also proposed.",
"Parameterization of complex surfaces constitutes a major means of visualizing highly convoluted geometric structures as well as other properties associated with the surface. It also enables users with the ability to navigate, orient, and focus on regions of interest within a global view and overcome the occlusions to inner concavities. In this paper, we propose a novel area-preserving surface parameterization method which is rigorous in theory, moderate in computation, yet easily extendable to surfaces of non-disc and closed-boundary topologies. Starting from the distortion induced by an initial parameterization, an area restoring diffeomorphic flow is constructed as a Lie advection of differential 2-forms along the manifold, which yields equality of the area elements between the domain and the original surface at its final state. Existence and uniqueness of result are assured through an analytical derivation. Based upon a triangulated surface representation, we also present an efficient algorithm in line with discrete differential modeling. As an exemplar application, the utilization of this method for the effective visualization of brain cortical imaging modalities is presented. Compared with conformal methods, our method can reveal more subtle surface patterns in a quantitative manner. It, therefore, provides a competitive alternative to the existing parameterization techniques for better surface-based analysis in various scenarios.",
"Surface parameterizations have been widely applied to computer graphics and digital geometry processing. In this paper, we propose a novel stretch energy minimization (SEM) algorithm for the computation of equiareal parameterizations of simply connected open surfaces with very small area distortions and highly improved computational efficiencies. In addition, the existence of nontrivial limit points of the SEM algorithm is guaranteed under some mild assumptions of the mesh quality. Numerical experiments indicate that the accuracy, effectiveness, and robustness of the proposed SEM algorithm outperform the other state-of-the-art algorithms. Applications of the SEM on surface remeshing, registration and morphing for simply connected open surfaces are demonstrated thereafter. Thanks to the SEM algorithm, the computation for these applications can be carried out efficiently and reliably.",
"We present a novel area-preservation mapping flattening method using the optimal mass transport technique, based on the Monge-Brenier theory. Our optimal transport map approach is rigorous and solid in theory, efficient and parallel in computation, yet general for various applications. By comparison with the conventional Monge-Kantorovich approach, our method reduces the number of variables from O(n2) to O(n), and converts the optimal mass transport problem to a convex optimization problem, which can now be efficiently carried out by Newton's method. Furthermore, our framework includes the area weighting strategy that enables users to completely control and adjust the size of areas everywhere in an accurate and quantitative way. Our method significantly reduces the complexity of the problem, and improves the efficiency, flexibility and scalability during visualization. Our framework, by combining conformal mapping and optimal mass transport mapping, serves as a powerful tool for a broad range of applications in visualization and graphics, especially for medical imaging. We provide a variety of experimental results to demonstrate the efficiency, robustness and efficacy of our novel framework."
]
} |
1903.12368 | 2941372268 | We propose a real-time DNN-based technique to segment hand and object of interacting motions from depth inputs. Our model is called DenseAttentionSeg, which contains a dense attention mechanism to fuse information in different scales and improves the results quality with skip-connections. Besides, we introduce a contour loss in model training, which helps to generate accurate hand and object boundaries. Finally, we propose and release our InterSegHands dataset, a fine-scale hand segmentation dataset containing about 52k depth maps of hand-object interactions. Our experiments evaluate the effectiveness of our techniques and datasets, and indicate that our method outperforms the current state-of-the-art deep segmentation methods on interaction segmentation. | As for RGB-based methods, Bambach segmented hands egocentrically in many activities via a combination of traditional and neural methods @cite_10 . Urooj explored egocentric hand segmentation in social activities @cite_19 . There are also some hand-related works (including hand tracking and gesture recognition) based on depth cameras @cite_17 @cite_9 . Some methods used random decision forests (RDF) @cite_18 @cite_22 . Kang adopted a simple segmentation method by using a black wristband in their hand tracking work @cite_8 . They also studied the problem of hand segmentation recently @cite_0 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_19",
"@cite_10",
"@cite_17"
],
"mid": [
"2075156252",
"",
"2272277396",
"2964081342",
"",
"2791833757",
"2204609240",
"2963508807"
],
"abstract": [
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
"",
"Real-time hand articulations tracking is important for many applications such as interacting with virtual augmented reality devices. However, most of existing algorithms highly rely on expensive and high power-consuming GPUs to achieve real-time processing. Consequently, these systems are inappropriate for mobile and wearable devices. In this paper, we propose an efficient hand tracking system which does not require high performance GPUs.",
"Sign language recognition is important for natural and convenient communication between deaf community and hearing majority. We take the highly efficient initial step of automatic fingerspelling recognition system using convolutional neural networks (CNNs) from depth maps. In this work, we consider relatively larger number of classes compared with the previous literature. We train CNNs for the classification of 31 alphabets and numbers using a subset of collected depth data from multiple subjects. While using different learning configurations, such as hyper-parameter selection with and without validation, we achieve 99.99 accuracy for observed signers and 83.58 to 85.49 accuracy for new signers. The result shows that accuracy improves as we include more data from different subjects during training. The processing time is 3 ms for the prediction of a single image. To the best of our knowledge, the system achieves the highest accuracy and speed. The trained model and dataset is available on our repository1.",
"",
"A large number of works in egocentric vision have concentrated on action and object recognition. Detection and segmentation of hands in first-person videos, however, has less been explored. For many applications in this domain, it is necessary to accurately segment not only hands of the camera wearer but also the hands of others with whom he is interacting. Here, we take an in-depth look at the hand segmentation problem. In the quest for robust hand segmentation methods, we evaluated the performance of the state of the art semantic segmentation methods, off the shelf and fine-tuned, on existing datasets. We fine-tune RefineNet, a leading semantic segmentation method, for hand segmentation and find that it does much better than the best contenders. Existing hand segmentation datasets are collected in the laboratory settings. To overcome this limitation, we contribute by collecting two new datasets: a) EgoYouTubeHands including egocentric videos containing hands in the wild, and b) HandOverFace to analyze the performance of our models in presence of similar appearance occlusions. We further explore whether conditional random fields can help refine generated hand segmentations. To demonstrate the benefit of accurate hand maps, we train a CNN for hand-based activity recognition and achieve higher accuracy when a CNN was trained using hand maps produced by the fine-tuned RefineNet. Finally, we annotate a subset of the EgoHands dataset for fine-grained action recognition and show that an accuracy of 58.6 can be achieved by just looking at a single hand pose which is much better than the chance level (12.5 ).",
"Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.",
"Most of the existing deep learning-based methods for 3D hand and human pose estimation from a single depth map are based on a common framework that takes a 2D depth map and directly regresses the 3D coordinates of keypoints, such as hand or human body joints, via 2D convolutional neural networks (CNNs). The first weakness of this approach is the presence of perspective distortion in the 2D depth map. While the depth map is intrinsically 3D data, many previous methods treat depth maps as 2D images that can distort the shape of the actual object through projection from 3D to 2D space. This compels the network to perform perspective distortion-invariant estimation. The second weakness of the conventional approach is that directly regressing 3D coordinates from a 2D image is a highly nonlinear mapping, which causes difficulty in the learning procedure. To overcome these weaknesses, we firstly cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint. We design our model as a 3D CNN that provides accurate estimates while running in real-time. Our system outperforms previous methods in almost all publicly available 3D hand and human pose estimation datasets and placed first in the HANDS 2017 frame-based 3D hand pose estimation challenge. The code is available in1."
]
} |
1903.12368 | 2941372268 | We propose a real-time DNN-based technique to segment hand and object of interacting motions from depth inputs. Our model is called DenseAttentionSeg, which contains a dense attention mechanism to fuse information in different scales and improves the results quality with skip-connections. Besides, we introduce a contour loss in model training, which helps to generate accurate hand and object boundaries. Finally, we propose and release our InterSegHands dataset, a fine-scale hand segmentation dataset containing about 52k depth maps of hand-object interactions. Our experiments evaluate the effectiveness of our techniques and datasets, and indicate that our method outperforms the current state-of-the-art deep segmentation methods on interaction segmentation. | Bambach introduced EgoHands for hand tasks including segmentation based on RGB images @cite_10 , including 4.8k frames in total. Zimmermann and Brox recently proposed a larger dataset for color images, which has about 44k samples @cite_7 . But it is synthetic and difficult to apply in the real world. As for the depth based dataset, Tompson released NYU Hand Pose Dataset, which contains about 6k depth frames but lacks object interaction @cite_18 . Kang collected a total number of 27k depth images for segmentation, but most of their dataset samples do not show the fine structure of hands and lost the object information to some degree @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_10",
"@cite_7"
],
"mid": [
"",
"2075156252",
"2204609240",
"2611797529"
],
"abstract": [
"",
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
"Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.",
"Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images. In this paper, we present an approach that estimates 3D hand pose from regular RGB images. This task has far more ambiguities due to the missing depth information. To this end, we propose a deep network that learns a network-implicit 3D articulation prior. Together with detected keypoints in the images, this network yields good estimates of the 3D pose. We introduce a large scale 3D hand pose dataset based on synthetic hand models for training the involved networks. Experiments on a variety of test sets, including one on sign language recognition, demonstrate the feasibility of 3D hand pose estimation on single color images."
]
} |
1903.12368 | 2941372268 | We propose a real-time DNN-based technique to segment hand and object of interacting motions from depth inputs. Our model is called DenseAttentionSeg, which contains a dense attention mechanism to fuse information in different scales and improves the results quality with skip-connections. Besides, we introduce a contour loss in model training, which helps to generate accurate hand and object boundaries. Finally, we propose and release our InterSegHands dataset, a fine-scale hand segmentation dataset containing about 52k depth maps of hand-object interactions. Our experiments evaluate the effectiveness of our techniques and datasets, and indicate that our method outperforms the current state-of-the-art deep segmentation methods on interaction segmentation. | Long first proposed fully convolutional neural networks @cite_15 . Ronneberger designed an encoder-decoder architecture that was widely used in the following years @cite_20 . Other methods include PSPNet @cite_13 , RefineNet @cite_12 , Large Kernel Matters @cite_3 , Deeplab @cite_2 , and so on. Though existing network-based models work well, they were built for general segmentation tasks and may not all act the same for depth-based hand-object interaction segmentation task where the classes are limited and the hand and object are closely related. | {
"cite_N": [
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"2598666589",
"",
"2952632681",
"2952232639",
"2952596663",
""
],
"abstract": [
"One of recent trends [31, 32, 14] in network architecture design is stacking small filters (e.g., 1x1 or 3x3) in the entire network because the stacked small filters is more efficient than a large kernel, given the same computational complexity. However, in the field of semantic segmentation, where we need to perform dense per-pixel prediction, we find that the large kernel (and effective receptive field) plays an important role when we have to perform the classification and localization tasks simultaneously. Following our design principle, we propose a Global Convolutional Network to address both the classification and localization issues for the semantic segmentation. We also suggest a residual-based boundary refinement to further refine the object boundaries. Our approach achieves state-of-art performance on two public benchmarks and significantly outperforms previous results, 82.2 (vs 80.2 ) on PASCAL VOC 2012 dataset and 76.9 (vs 71.8 ) on Cityscapes dataset.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
""
]
} |
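The related-work paragraphs of this record (hand datasets and general segmentation networks) rest on the encoder-decoder segmentation pattern with skip connections shared by FCN, U-Net and their successors. The sketch below is only a minimal, illustrative PyTorch encoder-decoder; the layer widths, the single skip connection and the three-class output (background / hand / object) are assumptions for the example, not the DenseAttentionSeg architecture or any cited model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoderDecoder(nn.Module):
    """Minimal encoder-decoder with one skip connection (illustrative only)."""
    def __init__(self, in_ch=1, num_classes=3):  # assumed: depth input, 3 classes
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16 + 16, num_classes, 1)  # fuse skip + decoded features

    def forward(self, x):
        s1 = self.enc1(x)                        # full-resolution features
        s2 = self.enc2(s1)                       # downsampled features
        up = F.interpolate(self.dec1(s2), size=s1.shape[-2:],
                           mode="bilinear", align_corners=False)  # back to input size
        return self.head(torch.cat([up, s1], dim=1))  # per-pixel class logits

logits = TinyEncoderDecoder()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64])
```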
1903.12050 | 2924238292 | We consider a variant of the planted clique problem where we are allowed unbounded computational time but can only investigate a small part of the graph by adaptive edge queries. We determine (up to logarithmic factors) the number of queries necessary both for detecting the presence of a planted clique and for finding the planted clique. Specifically, let @math be a random graph on @math vertices with a planted clique of size @math . We show that no algorithm that makes at most @math adaptive queries to the adjacency matrix of @math is likely to find the planted clique. On the other hand, when @math there exists a simple algorithm (with unbounded computational power) that finds the planted clique with high probability by making @math adaptive queries. For detection, the additive @math term is not necessary: the number of queries needed to detect the presence of a planted clique is @math (up to logarithmic factors). | This paper is a natural follow-up to the recent work of Feige, Gamarnik, Neeman, Rácz, and Tetali @cite_9 , where the authors consider the problem of finding cliques in an Erdős–Rényi random graph under the same adaptive edge query model. While the largest clique in a random graph with edge density @math has size approximately @math , the current best algorithm that makes at most @math adaptive edge queries finds a clique of size approximately @math ---closing the gap between these two bounds is an open problem. Feige et al. @cite_9 show an impossibility result if the adaptivity of the algorithm is limited: any algorithm that makes @math edge queries ( @math ) in @math rounds finds cliques of size at most @math where @math . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2891041771"
],
"abstract": [
"Consider algorithms with unbounded computation time that probe the entries of the adjacency matrix of an @math vertex graph, and need to output a clique. We show that if the input graph is drawn at random from @math (and hence is likely to have a clique of size roughly @math ), then for every @math and constant @math , there is an @math (that may depend on @math and @math ) such that no algorithm that makes @math probes in @math rounds is likely (over the choice of the random graph) to output a clique of size larger than @math ."
]
} |
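As a toy illustration of the adaptive edge-query model discussed in the record above (not the detection or recovery algorithm analysed in the paper), the sketch below builds a query oracle for G(n, 1/2) with a planted clique and compares the observed edge density of a random probe set against that of part of the planted set. The parameters n, k and the probe-set size are arbitrary assumptions.

```python
import itertools
import random

def planted_graph(n, k, seed=0):
    """Adjacency oracle for G(n, 1/2) with a planted clique on k random vertices."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = {}
    def query(u, v):                      # each call counts as one adaptive edge query
        key = (min(u, v), max(u, v))
        if key not in edges:
            edges[key] = True if (u in clique and v in clique) else rng.random() < 0.5
        return edges[key]
    return query, clique

def subset_density(query, vertices):
    """Edge density among `vertices`, using |vertices| choose 2 queries."""
    pairs = list(itertools.combinations(vertices, 2))
    return sum(query(u, v) for u, v in pairs) / len(pairs)

query, clique = planted_graph(n=500, k=60)
probe = random.Random(1).sample(range(500), 40)        # random probe set
print("random subset density:", subset_density(query, probe))                 # ~0.5
# Peeking at the planted set (which an algorithm cannot do) only to show the contrast:
print("planted clique density:", subset_density(query, sorted(clique)[:40]))  # 1.0
```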
1903.12050 | 2924238292 | We consider a variant of the planted clique problem where we are allowed unbounded computational time but can only investigate a small part of the graph by adaptive edge queries. We determine (up to logarithmic factors) the number of queries necessary both for detecting the presence of a planted clique and for finding the planted clique. Specifically, let @math be a random graph on @math vertices with a planted clique of size @math . We show that no algorithm that makes at most @math adaptive queries to the adjacency matrix of @math is likely to find the planted clique. On the other hand, when @math there exists a simple algorithm (with unbounded computational power) that finds the planted clique with high probability by making @math adaptive queries. For detection, the additive @math term is not necessary: the number of queries needed to detect the presence of a planted clique is @math (up to logarithmic factors). | Several recent works consider finding structure in a random graph under such an adaptive edge query model. Ferber, Krivelevich, Sudakov, and Vieira studied finding a Hamilton cycle @cite_5 and finding long paths @cite_10 , while Conlon, Fox, Grinshpun, and He @cite_12 studied finding a copy of a fixed target graph (such as a constant size clique). All of these works focus on sparse random graphs. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2963302945",
"2963995085",
"2811269531"
],
"abstract": [
"We introduce a new setting of algorithmic problems in random graphs, studying the minimum number of queries one needs to ask about the adjacency between pairs of vertices of G(n,p) in order to typically find a subgraph possessing a given target property. We show that if p≥lnn+lnlnn+ω(1)n, then one can find a Hamilton cycle with high probability after exposing (1+o(1))n edges. Our result is tight in both p and the number of exposed edges. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 2016",
"We discuss a new algorithmic type of problem in random graphs studying the minimum number of queries one has to ask about adjacency between pairs of vertices of a random graph G∼G(n,p) in order to find a subgraph which possesses some target property with high probability. In this paper we focus on finding long paths in G∼G(n,p) when p=1+en for some fixed constant e>0 . This random graph is known to have typically linearly long paths. To have l edges with high probability in G∼G(n,p) one clearly needs to query at least Ω(lp) pairs of vertices. Can we find a path of length l economically, i.e., by querying roughly that many pairs? We argue that this is not possible and one needs to query significantly more pairs. We prove that any randomised algorithm which finds a path of length l=Ω(log(1e)e) with at least constant probability in G∼G(n,p) with p=1+en must query at least Ω(lpelog(1e)) pairs of vertices. This is tight up to the log(1e) factor. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 2016",
"The @math -online Ramsey game is a combinatorial game between two players, Builder and Painter. Starting from an infinite set of isolated vertices, Builder draws an edge at each step and Painter immediately paints the edge red or blue. Builder's goal is to force Painter to create either a red @math or a blue @math using as few edges as possible. The online Ramsey number @math is the minimum number of edges Builder needs to guarantee a win in the @math -online Ramsey game. By analyzing the special case where Painter plays randomly, we obtain an exponential improvement [ r (n,n) 2^ (2- 2 )n + O(1) ] for the lower bound on the diagonal online Ramsey number, as well as a corresponding improvement [ r (m,n) n^ (2- 2 )m + O(1) ] for the off-diagonal case, where @math is fixed and @math . Using a different randomized Painter strategy and the Lopsided Lov 'asz Local Lemma, we prove that @math , determining this function up to a polylogarithmic factor. We also improve the upper bound in the off-diagonal case for @math . In connection with the online Ramsey game with random Painter, we study the problem of finding a copy of a target graph @math in a sufficiently large unknown Erd o s--R ' e nyi random graph @math using as few queries as possible, where each query reveals whether or not a particular pair of vertices are adjacent. We call this problem the Subgraph Query Problem. We determine the order of the number of queries needed for complete graphs up to five vertices and prove general bounds for this problem."
]
} |
1903.12065 | 2949139033 | We give an improved algorithm for drawing a random sample from a large data stream when the input elements are distributed across multiple sites which communicate via a central coordinator. At any point in time the set of elements held by the coordinator represent a uniform random sample from the set of all the elements observed so far. When compared with prior work, our algorithms asymptotically improve the total number of messages sent in the system as well as the computation required of the coordinator. We also present a matching lower bound, showing that our protocol sends the optimal number of messages up to a constant factor with large probability. As a byproduct, we obtain an improved algorithm for finding the heavy hitters across multiple distributed sites. | Stream sampling under sliding windows over distributed streams has been considered in @cite_8 . Their algorithm for sliding windows is already optimal upto lower-order additive terms (see Theorems 4.1 and 4.2 in @cite_8 ). Hence our improved results for the non-sliding window case do not translate into an improvement for the case of sliding windows. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2001183701"
],
"abstract": [
"A fundamental problem in data management is to draw a sample of a large data set, for approximate query answering, selectivity estimation, and query planning. With large, streaming data sets, this problem becomes particularly difficult when the data is shared across multiple distributed sites. The challenge is to ensure that a sample is drawn uniformly across the union of the data while minimizing the communication needed to run the protocol and track parameters of the evolving data. At the same time, it is also necessary to make the protocol lightweight, by keeping the space and time costs low for each participant. In this paper, we present communication-efficient protocols for sampling (both with and without replacement) from k distributed streams. These apply to the case when we want a sample from the full streams, and to the sliding window cases of only the W most recent items, or arrivals within the last w time units. We show that our protocols are optimal, not just in terms of the communication used, but also that they use minimal or near minimal (up to logarithmic factors) time to process each new item, and space to operate."
]
} |
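For background on the single-stream case that the distributed protocols in this record build on, classic reservoir sampling (Algorithm R) maintains a uniform random sample of everything seen so far using memory proportional to the sample size. The sketch below is that textbook baseline only, not the coordinator-and-sites protocol of the paper; the stream and sample size are arbitrary.

```python
import random

def reservoir_sample(stream, s, rng=random.Random(0)):
    """Maintain a uniform random sample of size s over a stream (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):      # i counts items seen so far (0-based)
        if i < s:
            sample.append(item)            # fill the reservoir first
        else:
            j = rng.randrange(i + 1)       # keep new item with probability s/(i+1)
            if j < s:
                sample[j] = item
    return sample

print(reservoir_sample(range(10_000), s=5))   # each item kept with prob. 5/10000
```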
1903.12065 | 2949139033 | We give an improved algorithm for drawing a random sample from a large data stream when the input elements are distributed across multiple sites which communicate via a central coordinator. At any point in time the set of elements held by the coordinator represent a uniform random sample from the set of all the elements observed so far. When compared with prior work, our algorithms asymptotically improve the total number of messages sent in the system as well as the computation required of the coordinator. We also present a matching lower bound, showing that our protocol sends the optimal number of messages up to a constant factor with large probability. As a byproduct, we obtain an improved algorithm for finding the heavy hitters across multiple distributed sites. | A related model of distributed streams was considered in @cite_3 @cite_15 . In this model, the coordinator was not required to continuously maintain an estimate of the required aggregate, but when the query was posed to the coordinator, the sites would be contacted and the query result would be constructed. In their model, the coordinator could be said to be reactive'', whereas in the model considered in this paper, the coordinator is pro-active''. | {
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"1990465412",
"2112400233"
],
"abstract": [
"This paper presents algorithms for estimating aggregate functions over a \"sliding window\" of the N most recent data items in one or more streams. Our results include: For a single stream, we present the first e-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first e for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. In contrast, we show that an deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires O(N) space. We present the first randomized (e,s)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (e,s)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. < olOur results are obtained using a novel family of synopsis data structures.",
"Massive data sets often arise as physically distributed, parallel data streams. We present algorithms for estimating simple functions on the union of such data streams, while using only logarithmic space per stream. Each processor observes only its own stream, and communicates with the other processors only after observing its entire stream. This models the set-up in current network monitoring products. Our algorithms employ a novel coordinated sampling technique to extract a sample of the union; this sample can be used to estimate aggregate functions on the union. The technique can also be used to estimate aggregate functions over the distinct “labels” in one or more data streams, e.g., to determine the zeroth frequency moment (i.e., the number of distinct labels) in one or more data streams. Our space and time bounds are the best known for these problems, and our logarithmic space bounds for coordinated sampling contrast with polynomial lower bounds for independent sampling. We relate our distributed streams model to previously studied non-distributed (i.e., merged) streams models, presenting tight bounds on the gap between the distributed and merged models for deterministic algorithms."
]
} |
1903.12033 | 2923370592 | To properly validate wireless networking solutions we depend on experimentation. Simulation very often produces less accurate results due to the use of models that are simplifications of the real phenomena they try to model. Networking experimentation may offer limited repeatability and reproducibility. Being influenced by external random phenomena such as noise, interference, and multipath, real experiments are hardly repeatable. In addition, they are difficult to reproduce due to testbed operational constraints and availability. Without repeatability and reproducibility, the validation of the networking solution under evaluation is questionable. In this paper, we show how the Trace-based Simulation (TS) approach can be used to accurately repeat and reproduce real experiments and, consequently, introduce a paradigm shift when it comes to the evaluation of wireless networking solutions. We present an extensive evaluation of the TS approach using the Fed4FIRE+ w-iLab.2 testbed. The results show that it is possible to repeat and reproduce real experiments using ns-3 trace-based simulations with more accuracy than in pure simulation, with average accuracy gains above 50%. | For simulation, two main approaches can be found: 1) , such as the one proposed in @cite_2 , where the authors capture traffic of real networks and try to reproduce the same experimental conditions in simulation down to per-packet resolution; 2) , such as the one presented in @cite_3 , where the authors try to abstract all low level variables and reproduce the traffic delays and performance bottlenecks experienced in the real network at the Application layer. These approaches do not allow one to keep improving the solution under evaluation. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2518482829",
"2133107013"
],
"abstract": [
"Ns-3 is a widely used as a the network simulator of choice by researchers. It contains many well tested and high quality models of network protocols. However, the application layer models of ns-3 are very simplistic, and do not capture all aspects of real life applications. As a result, there is often a huge gap between the results of real experiments and the corresponding simulations. This problem is particularly exacerbated for wireless simulations, where many networking phenomena like wireless channel contention crucially depend on the application traffic characteristics. One way to bridge the gap between experiments and simulations is to incorporate knowledge from network traces into simulations. To this end, our work builds a trace-based application layer simulator in ns-3. Given a network trace collected from a user, our TraceReplay application layer model automatically generates traffic that is faithful to the real application in the ns-3 simulator. TraceReplay infers and replays only application layer delays like user think times, lets the simulator control the lower layer phenomena. TraceReplay extracts application layer characteristics from a single trace, and replays this information across many users in simulation, by using suitable randomization. Our model is also generic enough to replay any application layer protocol. Validation of our simulation model shows that simulation results obtained using TraceReplay are significantly different from those using other models, and are closer to experimental observations.",
"This paper proposes a new approach for making simulations realistic. This approach is based on the principle of \"trace driven simulation\", i.e. using the results of the actual traffic traces analysis in order to reproduce the same experimental conditions in simulation. The main principle of the approach proposed in this paper deals with making simulation traffic sources replay under certain conditions - the actual traffic traces grabbed on actual networks. This paper describes the implementation of this approach in the NS simulator, and evaluates it by comparing the characteristics of the traces obtained with our replay approach with original data traces. The parameters that are considered for making the comparison are the usual traffic parameters as throughput, packet rate, etc., but also everything that is related to traffic dynamics, i.e. the second order statistical moments as autocorrelation of traffic or long range dependence."
]
} |
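Both simulation approaches contrasted in this record ultimately replay quantities recorded in a trace inside a simulator. The snippet below is a deliberately simplified, purely illustrative packet-level replay loop; the trace format of (send_time, bytes) tuples and the fixed-rate link model are assumptions for the example and do not correspond to the mechanisms of the cited tools or of ns-3.

```python
# Toy packet-level trace replay: re-inject recorded packets at their original
# timestamps and apply a simple link model to estimate delivery times.
trace = [(0.000, 1500), (0.012, 1500), (0.020, 200)]  # (send_time_s, bytes) -- assumed format

LINK_RATE_BPS = 6_000_000   # hypothetical 6 Mb/s link
PROP_DELAY_S = 0.002        # hypothetical propagation delay

busy_until = 0.0
for send_time, size in trace:
    start = max(send_time, busy_until)       # wait if the link is still busy
    tx_time = size * 8 / LINK_RATE_BPS       # serialization delay
    busy_until = start + tx_time
    print(f"pkt sent {send_time:.3f}s delivered {busy_until + PROP_DELAY_S:.3f}s")
```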
1903.12033 | 2923370592 | To properly validate wireless networking solutions we depend on experimentation. Simulation very often produces less accurate results due to the use of models that are simplifications of the real phenomena they try to model. Networking experimentation may offer limited repeatability and reproducibility. Being influenced by external random phenomena such as noise, interference, and multipath, real experiments are hardly repeatable. In addition, they are difficult to reproduce due to testbed operational constraints and availability. Without repeatability and reproducibility, the validation of the networking solution under evaluation is questionable. In this paper, we show how the Trace-based Simulation (TS) approach can be used to accurately repeat and reproduce real experiments and, consequently, introduce a paradigm shift when it comes to the evaluation of wireless networking solutions. We present an extensive evaluation of the TS approach using the Fed4FIRE+ w-iLab.2 testbed. The results show that it is possible to repeat and reproduce real experiments using ns-3 trace-based simulations with more accuracy than in pure simulation, with average accuracy gains above 50%. | To the best of our knowledge, the TS approach originally proposed in @cite_8 @cite_12 remains the only one that allows replaying the conditions of the scenario both in simulation and emulation mode. | {
"cite_N": [
"@cite_12",
"@cite_8"
],
"mid": [
"2804572498",
"2612366895"
],
"abstract": [
"In wireless networking R&D we typically depend on experimentation to further evaluate a solution, as simulation is inherently a simplification of the real-world. However, experimentation is limited in aspects where simulation excels, such as repeatability and reproducibility. Real wireless experiments are hardly repeatable. Given the same input they can produce very different output results, since wireless communications are influenced by external random phenomena such as noise, interference, and multipath. Real experiments are also difficult to reproduce due to testbed operational constraints and availability. We have previously proposed the Trace-based Simulation (TS) approach, which uses the TraceBasedPropagationLossModel to successfully reproduce past experiments. Yet, in its current version, the TraceBasedPropagationLossModel only supports point-to-point scenarios. In this paper, we introduce a new version of the model that supports Multiple Access wireless scenarios. To validate the new version of the model, the network throughput was measured in a laboratory testbed. The experimental results were then compared to the network throughput achieved using the ns-3 trace-based simulation and a pure ns-3 simulation, confirming the TS approach is valid for multiple access scenarios too.",
"A common problem in mobile networking research and development is the cost related to deploying and running real-world mobile testbeds. Due to cost and operational constraints, these testbeds usually run for short time periods but generate very unique and relevant results that are hard to reproduce. We propose the use of ns-3 as a solution to successfully reproduce real-world mobile testbed experiments. This is accomplished by feeding ns-3 with real testbed traces including node positions and radio link quality only. In order to validate our approach, the network throughput between a fixed Base Station and a Unmanned Aerial Vehicle (UAV) was measured in a real-world testbed. The experimental results were compared to the network throughput achieved using the ns-3 trace-based simulation and a plain ns-3 simulation. The obtained results show the high accuracy of the trace-based simulation, thus validating our approach."
]
} |
1903.12174 | 2923575270 | Sliding-window object detectors that generate bounding-box object predictions over a dense, regular grid have advanced rapidly and proven popular. In contrast, modern instance segmentation approaches are dominated by methods that first detect object bounding boxes, and then crop and segment these regions, as popularized by Mask R-CNN. In this work, we investigate the paradigm of dense sliding-window instance segmentation, which is surprisingly under-explored. Our core observation is that this task is fundamentally different than other dense prediction tasks such as semantic segmentation or bounding-box object detection, as the output at every spatial location is itself a geometric structure with its own spatial dimensions. To formalize this, we treat dense instance segmentation as a prediction task over 4D tensors and present a general framework called TensorMask that explicitly captures this geometry and enables novel operators on 4D tensors. We demonstrate that the tensor view leads to large gains over baselines that ignore this structure, and leads to results comparable to Mask R-CNN. These promising results suggest that TensorMask can serve as a foundation for novel advances in dense mask prediction and a more complete understanding of the task. Code will be made available. | The modern instance segmentation task was introduced by Hariharan et al. @cite_31 (before being popularized by COCO @cite_9 ). In their work, the method proposed for this task involved first generating object proposals @cite_41 @cite_23 , then classifying these proposals @cite_31 . In earlier work, this methodology was used for other tasks. For example, Selective Search @cite_41 and the original R-CNN @cite_13 classified mask proposals to obtain box detections and semantic segmentation results; these methods could easily be applied to instance segmentation. These early methods relied on bottom-up mask proposals computed by pre-deep-learning era methods @cite_41 @cite_23 ; our work is more closely related to dense sliding-window methods for mask object proposals as pioneered by DeepMask @cite_32 . We discuss this connection shortly. | {
"cite_N": [
"@cite_41",
"@cite_9",
"@cite_32",
"@cite_23",
"@cite_31",
"@cite_13"
],
"mid": [
"2129305389",
"2952122856",
"809122546",
"",
"1507506748",
"2102605133"
],
"abstract": [
"For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated can not be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7 of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method which we use to substantially improve the state-of-the-art by up to 8.5 for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
"",
"We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
} |
1903.12174 | 2923575270 | Sliding-window object detectors that generate bounding-box object predictions over a dense, regular grid have advanced rapidly and proven popular. In contrast, modern instance segmentation approaches are dominated by methods that first detect object bounding boxes, and then crop and segment these regions, as popularized by Mask R-CNN. In this work, we investigate the paradigm of dense sliding-window instance segmentation, which is surprisingly under-explored. Our core observation is that this task is fundamentally different than other dense prediction tasks such as semantic segmentation or bounding-box object detection, as the output at every spatial location is itself a geometric structure with its own spatial dimensions. To formalize this, we treat dense instance segmentation as a prediction task over 4D tensors and present a general framework called TensorMask that explicitly captures this geometry and enables novel operators on 4D tensors. We demonstrate that the tensor view leads to large gains over baselines that ignore this structure, and leads to results comparable to Mask R-CNN. These promising results suggest that TensorMask can serve as a foundation for novel advances in dense mask prediction and a more complete understanding of the task. Code will be made available. | The now dominant paradigm for instance segmentation involves first detecting objects with a box and then segmenting each object using the box as a guide @cite_1 @cite_15 @cite_36 @cite_37 . Perhaps the most successful instantiation of the methodology is Mask R-CNN @cite_16 , which extended the Faster R-CNN @cite_8 detector with a simple mask predictor. Approaches that build on Mask R-CNN @cite_7 @cite_34 @cite_44 have dominated leaderboards of recent challenges @cite_9 @cite_2 @cite_12 . Unlike in bounding-box detection, where sliding-window @cite_35 @cite_38 @cite_18 and region-based @cite_3 @cite_24 methods have both thrived, in the area of instance segmentation, research on dense sliding-window methods has been missing. Our work aims to close this gap. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_38",
"@cite_18",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_34",
"@cite_3",
"@cite_44",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_12"
],
"mid": [
"2193145675",
"",
"",
"",
"2963857746",
"2613718673",
"",
"2952122856",
"2216125271",
"",
"",
"",
"",
"",
"",
"",
""
],
"abstract": [
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"",
"",
"",
"The way that information propagates in neural networks is of great importance. In this paper, we propose Path Aggregation Network (PANet) aiming at boosting information flow in proposal-based instance segmentation framework. Specifically, we enhance the entire feature hierarchy with accurate localization signals in lower layers by bottom-up path augmentation, which shortens the information path between lower layers and topmost feature. We present adaptive feature pooling, which links feature grid and all feature levels to make useful information in each level propagate directly to following proposal subnetworks. A complementary branch capturing different views for each proposal is created to further improve mask prediction. These improvements are simple to implement, with subtle extra computational overhead. Yet they are useful and make our PANet reach the 1st place in the COCO 2017 Challenge Instance Segmentation task and the 2nd place in Object Detection task without large-batch training. PANet is also state-of-the-art on MVD and Cityscapes.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multitask Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.",
"",
"",
"",
"",
"",
"",
"",
""
]
} |
1903.12174 | 2923575270 | Sliding-window object detectors that generate bounding-box object predictions over a dense, regular grid have advanced rapidly and proven popular. In contrast, modern instance segmentation approaches are dominated by methods that first detect object bounding boxes, and then crop and segment these regions, as popularized by Mask R-CNN. In this work, we investigate the paradigm of dense sliding-window instance segmentation, which is surprisingly under-explored. Our core observation is that this task is fundamentally different than other dense prediction tasks such as semantic segmentation or bounding-box object detection, as the output at every spatial location is itself a geometric structure with its own spatial dimensions. To formalize this, we treat dense instance segmentation as a prediction task over 4D tensors and present a general framework called TensorMask that explicitly captures this geometry and enables novel operators on 4D tensors. We demonstrate that the tensor view leads to large gains over baselines that ignore this structure, and leads to results comparable to Mask R-CNN. These promising results suggest that TensorMask can serve as a foundation for novel advances in dense mask prediction and a more complete understanding of the task. Code will be made available. | To the best of our knowledge, dense sliding-window instance segmentation has not previously been explored. The proposed framework is the first such approach. The closest methods are for the related task of class-agnostic mask generation, specifically models such as DeepMask @cite_32 @cite_6 and InstanceFCN @cite_21 which apply convolutional neural networks to generate mask proposals in a dense sliding-window manner. Like these approaches, TensorMask is a dense sliding-window model, but it spans a more expressive design space. DeepMask and InstanceFCN can be expressed naturally as class-agnostic models, but TensorMask enables novel architectures that perform better. Also, unlike these class-agnostic methods, TensorMask performs multi-class classification in parallel to mask prediction, and thus can be applied to the task of instance segmentation. | {
"cite_N": [
"@cite_21",
"@cite_32",
"@cite_6"
],
"mid": [
"2317851288",
"809122546",
""
],
"abstract": [
"Fully convolutional networks (FCNs) have been proven very successful for semantic segmentation, but the FCN outputs are unaware of object instances. In this paper, we develop FCNs that are capable of proposing instance-level segment candidates. In contrast to the previous FCN that generates one score map, our FCN is designed to compute a small set of instance-sensitive score maps, each of which is the outcome of a pixel-wise classifier of a relative position to instances. On top of these instance-sensitive score maps, a simple assembling module is able to output instance candidate at each position. In contrast to the recent DeepMask method for segmenting instances, our method does not have any high-dimensional layer related to the mask resolution, but instead exploits image local coherence for estimating instances. We present competitive results of instance segment proposal on both PASCAL VOC and MS COCO.",
"Recent object detection systems rely on two critical steps: (1) a set of object proposals is predicted as efficiently as possible, and (2) this set of candidate proposals is then passed to an object classifier. Such approaches have been shown they can be fast, while achieving the state of the art in detection performance. In this paper, we propose a new way to generate object proposals, introducing an approach based on a discriminative convolutional network. Our model is trained jointly with two objectives: given an image patch, the first part of the system outputs a class-agnostic segmentation mask, while the second part of the system outputs the likelihood of the patch being centered on a full object. At test time, the model is efficiently applied on the whole test image and generates a set of segmentation masks, each of them being assigned with a corresponding object likelihood score. We show that our model yields significant improvements over state-of-the-art object proposal algorithms. In particular, compared to previous approaches, our model obtains substantially higher object recall using fewer proposals. We also show that our model is able to generalize to unseen categories it has not seen during training. Unlike all previous approaches for generating object masks, we do not rely on edges, superpixels, or any other form of low-level segmentation.",
""
]
} |
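The TensorMask records above describe dense instance segmentation as prediction over a 4D tensor, where two axes index pixels inside a small mask window and two axes index the sliding-window position. The NumPy snippet below only illustrates that tensor view by densely extracting toy V×U ground-truth windows over a small label map; the shapes, window size and target definition are assumptions for illustration, not the operators introduced in the paper.

```python
import numpy as np

H, W = 8, 8            # spatial grid of sliding-window positions
V, U = 3, 3            # mask window size at each position (assumed for illustration)
instance_map = np.zeros((H, W), dtype=np.int32)
instance_map[2:6, 3:7] = 1                      # one toy object instance

# Dense mask "targets": a 4D tensor (V, U, H, W); entry [v, u, y, x] says whether
# the pixel at offset (v, u) inside the window centred at (y, x) belongs to the
# instance whose label sits at (y, x).
targets = np.zeros((V, U, H, W), dtype=np.float32)
padded = np.pad(instance_map, V // 2)           # pad so every window stays in range
for y in range(H):
    for x in range(W):
        window = padded[y:y + V, x:x + U]       # the V×U window centred at (y, x)
        targets[:, :, y, x] = (window == instance_map[y, x]) & (window > 0)

print(targets.shape)    # (3, 3, 8, 8): one small mask per sliding-window position
```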
1903.12296 | 2934332334 | The state-of-the-art approaches in Generative Adversarial Networks (GANs) are able to learn a mapping function from one image domain to another with unpaired image data. However, these methods often produce artifacts and are only able to convert low-level information, but fail to transfer the high-level semantic parts of images. The reason is mainly that generators do not have the ability to detect the most discriminative semantic part of images, which thus makes the generated images low-quality. To handle the limitation, in this paper we propose a novel Attention-Guided Generative Adversarial Network (AGGAN), which can detect the most discriminative semantic object and minimize changes of unwanted parts for semantic manipulation problems without using extra data and models. The attention-guided generators in AGGAN are able to produce attention masks via a built-in attention mechanism, and then fuse the input image with the attention mask to obtain a target image of high quality. Moreover, we propose a novel attention-guided discriminator which only considers attended regions. The proposed AGGAN is trained in an end-to-end fashion with an adversarial loss, cycle-consistency loss, pixel loss and attention loss. Both qualitative and quantitative results demonstrate that our approach is effective in generating sharper and more accurate images than existing models. | GANs @cite_15 are powerful generative models, which have achieved impressive results on different computer vision tasks, e.g., image generation @cite_17 @cite_11 @cite_31 , image editing @cite_12 @cite_18 and image inpainting @cite_29 @cite_44 . In order to generate meaningful images that meet user requirements, Conditional GAN (CGAN) @cite_1 was proposed, where the conditioned information is employed to guide the image generation process. The conditioned information can be discrete labels @cite_5 , text @cite_25 @cite_27 , object keypoints @cite_24 , human skeleton @cite_4 and reference images @cite_39 . CGANs using a reference image as conditional information have tackled a lot of problems, e.g., text-to-image translation @cite_25 , image-to-image translation @cite_39 and video-to-video translation @cite_26 . | {
"cite_N": [
"@cite_31",
"@cite_18",
"@cite_4",
"@cite_26",
"@cite_29",
"@cite_1",
"@cite_17",
"@cite_39",
"@cite_44",
"@cite_24",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2964337551",
"2607170299",
"2963361924",
"",
"2611104282",
"2125389028",
"2893749619",
"2963073614",
"2738588019",
"2530372461",
"2405756170",
"2552611751",
"2099471712",
"2963143316",
"2579578355",
"2598591334"
],
"abstract": [
"Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoderdecoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-theart results on large pose face recognition.",
"Traditional face editing methods often require a number of sophisticated and task specific algorithms to be applied one after the other — a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, and an alpha matte. We show that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangling latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.",
"Hand gesture-to-gesture translation in the wild is a challenging task since hand gestures can have arbitrary poses, sizes, locations and self-occlusions. Therefore, this task requires a high-level understanding of the mapping between the input source gesture and the output target gesture. To tackle this problem, we propose a novel hand Gesture Generative Adversarial Network (GestureGAN). GestureGAN consists of a single generator G and a discriminator D, which takes as input a conditional hand image and a target hand skeleton image. GestureGAN utilizes the hand skeleton information explicitly, and learns the gesture-to-gesture mapping through two novel losses, the color loss and the cycle-consistency loss. The proposed color loss handles the issue of \"channel pollution\" while back-propagating the gradients. In addition, we present the Frechet ResNet Distance (FRD) to evaluate the quality of generated images. Extensive experiments on two widely used benchmark datasets demonstrate that the proposed GestureGAN achieves state-of-the-art performance on the unconstrained hand gesture-to-gesture translation task. Meanwhile, the generated images are in high-quality and are photo-realistic, allowing them to be used as data augmentation to improve the performance of a hand gesture classifier. Our model and code are available at https: github.com Ha0Tang GestureGAN.",
"",
"In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.",
"We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"Generative Adversarial Networks (GANs) have recently demonstrated to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows to determine specific representations of the generated images. In this work, we evaluate encoders to inverse the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables to re-generate real images with deterministic complex modifications.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Abstract: Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset.",
"Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas.",
"We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occulsion disocculsion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods."
]
} |
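To make the conditioning idea in the paragraph above concrete, here is a minimal, hedged sketch of a label-conditioned generator in the spirit of CGAN @cite_1: the condition is embedded and concatenated with the noise before generation. The module names, layer sizes and the MNIST-like output dimension are illustrative assumptions, not taken from any cited implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of label-conditioned generation (CGAN idea): the condition is
# embedded and concatenated with the noise vector before it enters the generator.
# All names and sizes here are illustrative, not from any cited implementation.
class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, embed_dim=16, img_pixels=28 * 28):
        super().__init__()
        self.label_embed = nn.Embedding(num_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, img_pixels),
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, noise, labels):
        cond = self.label_embed(labels)               # (B, embed_dim)
        return self.net(torch.cat([noise, cond], 1))  # condition guides generation

z = torch.randn(4, 100)
y = torch.randint(0, 10, (4,))
fake = ConditionalGenerator()(z, y)                   # (4, 784)
```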
1903.12296 | 2934332334 | The state-of-the-art approaches in Generative Adversarial Networks (GANs) are able to learn a mapping function from one image domain to another with unpaired image data. However, these methods often produce artifacts and can only convert low-level information, failing to transfer the high-level semantic parts of images. The main reason is that the generators lack the ability to detect the most discriminative semantic parts of images, which results in low-quality generated images. To address this limitation, in this paper we propose a novel Attention-Guided Generative Adversarial Network (AGGAN), which can detect the most discriminative semantic object and minimize changes to unwanted parts for semantic manipulation problems without using extra data or models. The attention-guided generators in AGGAN produce attention masks via a built-in attention mechanism, and then fuse the input image with the attention mask to obtain a high-quality target image. Moreover, we propose a novel attention-guided discriminator which only considers attended regions. The proposed AGGAN is trained in an end-to-end fashion with an adversarial loss, cycle-consistency loss, pixel loss and attention loss. Both qualitative and quantitative results demonstrate that our approach is effective at generating sharper and more accurate images than existing models. | In order to fix the aforementioned limitations, Liang et al. propose ContrastGAN @cite_9 , which uses the object mask annotations from each dataset as extra input data. This method has to assume that an object's shape does not change after the semantic changes are applied. Another approach is to train a separate segmentation or attention model and fit it to the system. For instance, Mejjati et al. @cite_28 propose attention mechanisms that are jointly trained with the generators and discriminators. Chen et al. propose AttentionGAN @cite_42 , which uses an extra attention network to generate attention maps, so that major attention can be paid to objects of interest. Kastaniotis et al. @cite_30 present ATAGAN, which uses a teacher network to produce attention maps. Zhang et al. @cite_22 propose the Self-Attention Generative Adversarial Networks (SAGAN) for the image generation task. Qian et al. @cite_8 employ a recurrent network to first generate visual attention and then transform a raindrop-degraded image into a clean one. Tang et al. @cite_47 propose a novel Multi-Channel Attention Selection GAN for the challenging cross-view image translation task. Sun et al. @cite_40 generate a facial mask using an FCN @cite_33 for face attribute manipulation. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_42",
"@cite_40",
"@cite_47"
],
"mid": [
"2788014765",
"",
"1903029394",
"2963800716",
"2807033398",
"2739540493",
"2793149861",
"2798539888",
"2953096069"
],
"abstract": [
"In this work, we present a novel approach for training Generative Adversarial Networks (GANs). Using the attention maps produced by a Teacher- Network we are able to improve the quality of the generated images as well as perform weakly object localization on the generated images. To this end, we generate images of HEp-2 cells captured with Indirect Imunofluoresence (IIF) and study the ability of our network to perform a weakly localization of the cell. Firstly, we demonstrate that whilst GANs can learn the mapping between the input domain and the target distribution efficiently, the discriminator network is not able to detect the regions of interest. Secondly, we present a novel attention transfer mechanism which allows us to enforce the discriminator to put emphasis on the regions of interest via transfer learning. Thirdly, we show that this leads to more realistic images, as the discriminator learns to put emphasis on the area of interest. Fourthly, the proposed method allows one to generate both images as well as attention maps which can be useful for data annotation e.g in object detection.",
"",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.",
"Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, and thus transforming a raindrop degraded image into a clean one. The problem is intractable, since first the regions occluded by raindrops are not given. Second, the information about the background scene of the occluded regions is completely lost for most part. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During the training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network will pay more attention to the raindrop regions and the surrounding structures, and the discriminative network will be able to assess the local consistency of the restored regions. This injection of visual attention to both generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms the state of the art methods quantitatively and qualitatively.",
"Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms which are jointly adversarially trained with the generators and discriminators. We empirically demonstrate that our approach is able to attend to relevant regions in the image without requiring any additional supervision, and that by doing so it achieves more realistic mappings compared to recent approaches.",
"Despite the promising results on paired unpaired image-to-image translation achieved by Generative Adversarial Networks (GANs), prior works often only transfer the low-level information (e.g. color or texture changes), but fail to manipulate high-level semantic meanings (e.g., geometric structure or content) of different object regions. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, aiming at modifying the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow ( )sheep, motor ( )bicycle, cat ( )dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective which is able to perform all types of semantic translations with one category-conditional generator. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Extensive qualitative and quantitative experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs.",
"This paper studies the object transfiguration problem in wild images. The generative network in classical GANs for object transfiguration often undertakes a dual responsibility: to detect the objects of interests and to convert the object from source domain to target domain. In contrast, we decompose the generative network into two separat networks, each of which is only dedicated to one particular sub-task. The attention network predicts spatial attention maps of images, and the transformation network focuses on translating objects. Attention maps produced by attention network are encouraged to be sparse, so that major attention can be paid to objects of interests. No matter before or after object transfiguration, attention maps should remain constant. In addition, learning attention network can receive more instructions, given the available segmentation annotations of images. Experimental results demonstrate the necessity of investigating attention in object transfiguration, and that the proposed algorithm can learn accurate attention to improve quality of generated images.",
"The task of face attribute manipulation has found increasing applications, but still remains challeng- ing with the requirement of editing the attributes of a face image while preserving its unique details. In this paper, we choose to combine the Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) for photorealistic image genera- tion. We propose an effective method to modify a modest amount of pixels in the feature maps of an encoder, changing the attribute strength contin- uously without hindering global information. Our training objectives of VAE and GAN are reinforced by the supervision of face recognition loss and cy- cle consistency loss for faithful preservation of face details. Moreover, we generate facial masks to en- force background consistency, which allows our training to focus on manipulating the foreground face rather than background. Experimental results demonstrate our method, called Mask-Adversarial AutoEncoder (M-AAE), can generate high-quality images with changing attributes and outperforms prior methods in detail preservation.",
""
]
} |
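Several of the attention-guided methods surveyed above share one simple fusion pattern: the generator's output is kept only where an attention mask fires, and the input is copied elsewhere. The sketch below illustrates that pattern only; it is not the exact AGGAN formulation, and the mask here is random rather than predicted by a built-in attention mechanism.

```python
import torch

def fuse_with_attention(input_img, translated, attention_mask):
    """Sketch of the attention-mask fusion used by several attention-guided
    generators: attended regions take the translated content, the rest is
    copied from the input so the background stays untouched. The losses and
    mask computation of the cited methods are not reproduced here."""
    # attention_mask: (B, 1, H, W) with values in [0, 1]
    return attention_mask * translated + (1.0 - attention_mask) * input_img

x = torch.rand(2, 3, 64, 64)                   # input image
g = torch.rand(2, 3, 64, 64)                   # raw generator output
m = torch.sigmoid(torch.randn(2, 1, 64, 64))   # stand-in for a predicted attention mask
y = fuse_with_attention(x, g, m)               # (2, 3, 64, 64)
```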
1903.12049 | 2924166999 | Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single frame detectors, but using explicit optical flow usually does not. | Convolutional neural networks (CNNs) have caused a revolution in the field of object detection. AlexNet @cite_0 was the first network to beat classical methods on ImageNet @cite_1 by using clever strategies such as dropout @cite_23 , batch normalization @cite_24 and rectified linear units. VGG @cite_3 was also extremely influential, as it established good practices through its simplicity and elegance. For deeper networks, ResNets @cite_4 use skip connections to build a network out of residual blocks, which help propagate the gradient. This allows ResNets to go as deep as 150 layers. For faster networks, MobileNets @cite_10 use the idea of depthwise separable convolutions in order to speed up the computation process and save memory. The result is a network suitable for vision applications that can run on mobile devices. | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_23",
"@cite_10"
],
"mid": [
"",
"2108598243",
"2417429787",
"2163605009",
"1836465849",
"2095705004",
"2612445135"
],
"abstract": [
"",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Since Krizhevsky won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 competition with the brilliant deep convolutional neural networks(D-CNNs), researchers have designed lots of D-CNNs. However, almost all the existing very deep convolutional neural networks are trained on the giant ImageNet datasets. Small datasets like CIFAR-10 has rarely taken advantage of the power of depth since deep models are easy to overfit. In this paper, we proposed a modified VGG-16 network and used this model to fit CIFAR-10. By adding stronger regularizer and using Batch Normalization, we achieved 8.45 error rate on CIFAR-10 without severe overfitting. Our results show that the very deep CNN can be used to fit small datasets with simple and proper modifications and don't need to re-design specific small networks. We believe that if a model is strong enough to fit a large dataset, it can also fit a small one.",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82 top-5 test error, exceeding the accuracy of human raters.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization."
]
} |
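As a rough illustration of the depthwise separable convolutions mentioned for MobileNets @cite_10, the block below factorizes a standard convolution into a per-channel 3x3 convolution followed by a 1x1 pointwise convolution. The BatchNorm/ReLU placement and channel sizes are assumptions for this sketch, not the exact MobileNet configuration.

```python
import torch
import torch.nn as nn

# Sketch of a depthwise separable convolution block: a per-channel (depthwise)
# 3x3 convolution followed by a 1x1 pointwise convolution that mixes channels.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1,
                                   groups=in_ch, bias=False)      # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # mixes channels
        self.bn1, self.bn2 = nn.BatchNorm2d(in_ch), nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

y = DepthwiseSeparableConv(32, 64, stride=2)(torch.randn(1, 32, 56, 56))  # (1, 64, 28, 28)
```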
1903.12049 | 2924166999 | Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single frame detectors, but using explicit optical flow usually does not. | Contrary to object detection on single images, this task has seen less research. Some of the most notable recent works on this topic are presented here. Recently, Liu & Zhu @cite_25 used a long short-term memory (LSTM) network to propagate and refine feature maps between frames, allowing them to detect objects a lot faster while keeping a precision similar to a single-frame detector. Before them, the authors of @cite_11 used optical flow information to propagate feature maps to certain frames in order to save computation time. These two articles focus on reusing computation between frames in order to save time, based on the premise that there is a lot of similarity and continuity between frames. In comparison, our work focuses on improving the precision and recall of the detection by combining information. By using deformable convolutions instead of optical flow training, the authors of @cite_26 trained a model to compute an offset between frames, and are thus able to sample features from close preceding and following frames to help detect objects in the current frame. Their model is particularly good in cases of occlusion or blurriness in the video. | {
"cite_N": [
"@cite_26",
"@cite_25",
"@cite_11"
],
"mid": [
"2964086649",
"2963212638",
"2552900565"
],
"abstract": [
"We propose a Spatiotemporal Sampling Network (STSN) that uses deformable convolutions across time for object detection in videos. Our STSN performs object detection in a video frame by learning to spatially sample features from the adjacent frames. This naturally renders the approach robust to occlusion or motion blur in individual frames. Our framework does not require additional supervision, as it optimizes sampling locations directly with respect to object detection performance. Our STSN outperforms the state-of-the-art on the ImageNet VID dataset and compared to prior video object detection methods it uses a simpler design, and does not require optical flow data for training.",
"This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short term memory (LSTM) layers to create an inter-weaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the Imagenet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.",
"Deep convolutional neutral networks have achieved great success on image recognition tasks. Yet, it is non-trivial to transfer the state-of-the-art image recognition networks to videos as per-frame evaluation is too slow and unaffordable. We present deep feature flow, a fast and accurate framework for video recognition. It runs the expensive convolutional sub-network only on sparse key frames and propagates their deep feature maps to other frames via a flow field. It achieves significant speedup as flow computation is relatively fast. The end-to-end training of the whole architecture significantly boosts the recognition accuracy. Deep feature flow is flexible and general. It is validated on two recent large scale video datasets. It makes a large step towards practical video recognition. Code would be released."
]
} |
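The flow-based feature propagation described above (features computed on a key frame and moved to other frames via a flow field) can be sketched with a simple bilinear warp. The flow units and the normalization convention below are assumptions; the cited method's actual propagation details are not reproduced.

```python
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """Rough sketch of flow-guided feature propagation: key-frame features are
    resampled at locations displaced by the flow (in pixels). Normalization and
    flow conventions are assumptions, not taken from any cited implementation."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = grid + flow                                      # displaced sampling locations
    # normalize to [-1, 1] for grid_sample, which expects (B, H, W, 2) as (x, y)
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)     # (B, H, W, 2)
    return F.grid_sample(feat, norm_grid, align_corners=True)

feat = torch.randn(1, 256, 32, 32)   # key-frame feature map
flow = torch.zeros(1, 2, 32, 32)     # zero flow -> identity warp
assert torch.allclose(warp_features(feat, flow), feat, atol=1e-5)
```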
1903.12049 | 2924166999 | Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single frame detectors, but using explicit optical flow usually does not. | One of the most notable works in optical flow learning is without a doubt FlowNet @cite_8 . This work presented the first end-to-end network that learns to generate optical flow from a pair of images. The authors proposed two models, FlowNetSimple and FlowNetCorr, that are both trained on a digitally constructed dataset built from 3D models of chairs. The dataset was created by moving these chairs over different backgrounds. The models take as input a pair of consecutive images. FlowNetSimple works by simply concatenating the images and letting the network learn how to combine them. This is the inspiration for one of our models. FlowNetCorr, on the other hand, works by computing a correlation map between the high-level representations of the two images. A second version, FlowNet 2.0 @cite_22 , proposed several improvements over FlowNet, most notably stacking several slightly different architectures one after the other to refine the flow. | {
"cite_N": [
"@cite_22",
"@cite_8"
],
"mid": [
"2560474170",
"764651262"
],
"abstract": [
"The FlowNet demonstrated that optical flow estimation can be cast as a learning problem. However, the state of the art with regard to the quality of the flow has still been defined by traditional methods. Particularly on small displacements and real-world data, FlowNet cannot compete with variational methods. In this paper, we advance the concept of end-to-end learning of optical flow and make it work really well. The large improvements in quality and speed are caused by three major contributions: first, we focus on the training data and show that the schedule of presenting data during training is very important. Second, we develop a stacked architecture that includes warping of the second image with intermediate optical flow. Third, we elaborate on small displacements by introducing a subnetwork specializing on small motions. FlowNet 2.0 is only marginally slower than the original FlowNet but decreases the estimation error by more than 50 . It performs on par with state-of-the-art methods, while running at interactive frame rates. Moreover, we present faster variants that allow optical flow computation at up to 140fps with accuracy matching the original FlowNet.",
"Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps."
]
} |
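The FlowNetSimple idea of feeding a concatenated image pair to the network, which inspired RetinaNet-Double, can be sketched as a stem whose first convolution accepts six channels. This is only an illustration under assumed layer sizes, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the two-frame input idea: two consecutive RGB frames are
# concatenated along the channel axis and the first convolution accepts 6 channels.
# The kernel size, stride and output width are placeholder assumptions.
class TwoFrameStem(nn.Module):
    def __init__(self, out_ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(6, out_ch, kernel_size=7, stride=2, padding=3)

    def forward(self, frame_prev, frame_cur):
        x = torch.cat([frame_prev, frame_cur], dim=1)  # (B, 6, H, W)
        return self.conv1(x)

prev_f = torch.randn(2, 3, 224, 224)
cur_f = torch.randn(2, 3, 224, 224)
features = TwoFrameStem()(prev_f, cur_f)               # (2, 64, 112, 112)
```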
1903.12117 | 2922913792 | Typical multi-task learning (MTL) methods rely on architectural adjustments and a large trainable parameter set to jointly optimize over several tasks. However, when the number of tasks increases so do the complexity of the architectural adjustments and resource requirements. In this paper, we introduce a method which applies a conditional feature-wise transformation over the convolutional activations that enables a model to successfully perform a large number of tasks. To distinguish from regular MTL, we introduce Many Task Learning (MaTL) as a special case of MTL where more than 20 tasks are performed by a single model. Our method dubbed Task Routing (TR) is encapsulated in a layer we call the Task Routing Layer (TRL), which applied in an MaTL scenario successfully fits hundreds of classification tasks in one model. We evaluate our method on 5 datasets against strong baselines and state-of-the-art approaches. | Asymmetric MTL relies on using knowledge from solving auxiliary tasks in order to improve the performance on one main target task. This formulation bears a resemblance to transfer learning @cite_46 . One key distinction between them is that in asymmetric MTL the auxiliary tasks are learned simultaneously with the main task, while in transfer learning they are learned independently @cite_37 . In our work we focus on symmetric MTL. | {
"cite_N": [
"@cite_46",
"@cite_37"
],
"mid": [
"2165698076",
"2742079690"
],
"abstract": [
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL."
]
} |
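As a toy illustration of the simultaneous-versus-independent distinction drawn above, the snippet below optimizes a main and an auxiliary loss together through a shared encoder, which is the asymmetric MTL setting; a transfer-learning setup would instead train the encoder on the auxiliary task first and reuse it. The architecture and the 0.3 auxiliary weight are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Toy sketch: the auxiliary loss is optimized together with the main loss through a
# shared encoder (asymmetric MTL), rather than in a separate pre-training stage.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
main_head = nn.Linear(128, 10)
aux_head = nn.Linear(128, 5)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 28, 28)
y_main = torch.randint(0, 10, (8,))
y_aux = torch.randint(0, 5, (8,))

z = encoder(x)
loss = criterion(main_head(z), y_main) + 0.3 * criterion(aux_head(z), y_aux)
loss.backward()  # both tasks update the shared encoder simultaneously
```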
1903.12117 | 2922913792 | Typical multi-task learning (MTL) methods rely on architectural adjustments and a large trainable parameter set to jointly optimize over several tasks. However, when the number of tasks increases so do the complexity of the architectural adjustments and resource requirements. In this paper, we introduce a method which applies a conditional feature-wise transformation over the convolutional activations that enables a model to successfully perform a large number of tasks. To distinguish from regular MTL, we introduce Many Task Learning (MaTL) as a special case of MTL where more than 20 tasks are performed by a single model. Our method dubbed Task Routing (TR) is encapsulated in a layer we call the Task Routing Layer (TRL), which applied in an MaTL scenario successfully fits hundreds of classification tasks in one model. We evaluate our method on 5 datasets against strong baselines and state-of-the-art approaches. | Symmetric MTL, unlike asymmetric MTL, aims to improve the performance of all tasks simultaneously. It leverages the fact that some tasks are correlated (co-dependent): by learning their estimators jointly under a unified representation, the transferability of expertise between tasks is exploited to the maximal benefit of all @cite_35 . The authors of @cite_2 introduced such a symmetric approach named Multi-task Relationship Learning (MTRL), which regularizes the parallel learning of multiple tasks and models their relationships in a non-parametric manner as a task covariance matrix. Many other symmetric approaches @cite_47 @cite_22 @cite_41 @cite_6 @cite_27 @cite_19 have been developed in recent years. Permutations of different regularization strategies @cite_31 , multi-level sharing @cite_15 , cross-layer parameter combinations @cite_47 or meshes of all options @cite_11 have been extensively tested; however, they are vulnerable to noisy and outlier tasks, which, when introduced, dramatically deteriorate performance. This occurs due to the low-grade feature robustness and the initial assumption that all tasks positively influence each other's learning process @cite_37 . In our work we address the feature robustness issue by randomizing the sharing structure from the start of the training process and forcing tasks to use alternate routes for their data flow through the model. | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_37",
"@cite_22",
"@cite_15",
"@cite_41",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_47",
"@cite_11"
],
"mid": [
"2131479143",
"2078224158",
"2742079690",
"",
"2798512429",
"",
"",
"",
"",
"2097451239",
"",
"2900964459"
],
"abstract": [
"Consider the problem of learning logistic-regression models for multiple classification tasks, where the training data set for each task is not drawn from the same statistical distribution. In such a multi-task learning (MTL) scenario, it is necessary to identify groups of similar tasks that should be learned jointly. Relying on a Dirichlet process (DP) based statistical model to learn the extent of similarity between classification tasks, we develop computationally efficient algorithms for two different forms of the MTL problem. First, we consider a symmetric multi-task learning (SMTL) situation in which classifiers for multiple tasks are learned jointly using a variational Bayesian (VB) algorithm. Second, we consider an asymmetric multi-task learning (AMTL) formulation in which the posterior density function from the SMTL model parameters (from previous tasks) is used as a prior for a new task: this approach has the significant advantage of not requiring storage and use of all previous data from prior tasks. The AMTL formulation is solved with a simple Markov Chain Monte Carlo (MCMC) construction. Experimental results on two real life MTL problems indicate that the proposed algorithms: (a) automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions; and (b) are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.",
"We propose an heterogeneous multi-task learning framework for human pose estimation from monocular image with deep convolutional neural network. In particular, we simultaneously learn a pose-joint regressor and a sliding-window body-part detector in a deep network architecture. We show that including the body-part detection task helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-art results on several data sets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts.",
"Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.",
"",
"Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We proposes a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2 3 (compared to training independently) while keeping the performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure including a solver that users can employ to devise efficient supervision policies for their use cases.",
"",
"",
"",
"",
"Multitask learning is a learning paradigm that seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this article, we propose a regularization approach to learning the relationships between tasks in multitask learning. This approach can be viewed as a novel generalization of the regularized formulation for single-task learning. Besides modeling positive task correlation, our approach—multitask relationship learning (MTRL)—can also describe negative task correlation and identify outlier tasks based on the same underlying principle. By utilizing a matrix-variate normal distribution as a prior on the model parameters of all tasks, our MTRL method has a jointly convex objective function. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multitask learning setting and then generalize it to the asymmetric setting as well. We also discuss some variants of the regularization approach to demonstrate the use of other matrix-variate priors for learning task relationships. Moreover, to gain more insight into our model, we also study the relationships between MTRL and some existing multitask learning methods. Experiments conducted on a toy problem as well as several benchmark datasets demonstrate the effectiveness of MTRL as well as its high interpretability revealed by the task covariance matrix.",
"",
"Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)--(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15 average error reductions over common approaches to MTL."
]
} |
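Schematically, the task-relationship regularization of MTRL @cite_2 couples the per-task weight vectors W = [w_1, ..., w_T] through a learned task covariance matrix Ω, roughly as below; the exact constraints, bias terms and trade-off weights of the cited formulation are omitted here.

```latex
\min_{W,\; \Omega \succeq 0,\; \operatorname{tr}(\Omega)=1}\;
\sum_{t=1}^{T} \mathcal{L}_t(w_t)
\;+\; \frac{\lambda_1}{2}\,\lVert W \rVert_F^2
\;+\; \frac{\lambda_2}{2}\,\operatorname{tr}\!\left(W\, \Omega^{-1} W^{\top}\right)
```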
1903.12117 | 2922913792 | Typical multi-task learning (MTL) methods rely on architectural adjustments and a large trainable parameter set to jointly optimize over several tasks. However, when the number of tasks increases so do the complexity of the architectural adjustments and resource requirements. In this paper, we introduce a method which applies a conditional feature-wise transformation over the convolutional activations that enables a model to successfully perform a large number of tasks. To distinguish from regular MTL, we introduce Many Task Learning (MaTL) as a special case of MTL where more than 20 tasks are performed by a single model. Our method dubbed Task Routing (TR) is encapsulated in a layer we call the Task Routing Layer (TRL), which applied in an MaTL scenario successfully fits hundreds of classification tasks in one model. We evaluate our method on 5 datasets against strong baselines and state-of-the-art approaches. | Convolutional neural fabrics @cite_8 are related to our work in terms of architecture search. The authors define a 3D trellis that connects response maps from different layers and creates a smaller, thinner specialized architecture. On the other hand, TR works in the MTL realm and allows us to draw from an exponentially large pool of subnetworks. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2964081403"
],
"abstract": [
"Despite the success of CNNs, selecting the optimal architecture for a given task remains an open problem. Instead of aiming to select a single optimal architecture, we propose a \" fabric \" that embeds an exponentially large number of architectures. The fabric consists of a 3D trellis that connects response maps at different layers, scales, and channels with a sparse homogeneous local connectivity pattern. The only hyper-parameters of a fabric are the number of channels and layers. While individual architectures can be recovered as paths, the fabric can in addition ensemble all embedded architectures together, sharing their weights where their paths overlap. Parameters can be learned using standard methods based on back-propagation, at a cost that scales linearly in the fabric size. We present benchmark results competitive with the state of the art for image classification on MNIST and CIFAR10, and for semantic segmentation on the Part Labels dataset."
]
} |
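A minimal sketch of the per-task routing idea follows, assuming fixed binary channel masks applied to convolutional activations so that each task uses its own subnetwork of a shared model; how the actual Task Routing Layer creates and applies its masks may differ, and the sharing ratio below is an arbitrary assumption.

```python
import torch
import torch.nn as nn

# Minimal sketch of per-task routing of convolutional activations: each task owns a
# fixed binary channel mask, so different tasks flow through different subnetworks
# of the same model. The mask sampling scheme and keep_ratio are illustrative only.
class RoutingMask(nn.Module):
    def __init__(self, num_tasks, channels, keep_ratio=0.5):
        super().__init__()
        masks = (torch.rand(num_tasks, channels) < keep_ratio).float()
        self.register_buffer("masks", masks)  # fixed, not trained

    def forward(self, x, task_id):
        # x: (B, C, H, W); zero out the channels this task does not use
        return x * self.masks[task_id].view(1, -1, 1, 1)

conv = nn.Conv2d(3, 32, 3, padding=1)
route = RoutingMask(num_tasks=100, channels=32)
x = torch.randn(4, 3, 64, 64)
out_task7 = route(conv(x), task_id=7)   # activations routed for task 7
```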