1705.09792
2618946976
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
The phase component is not only important from a biological point of view but also from a signal processing perspective. It has been shown that the phase information in speech signals affects their intelligibility. Also, @cite_6 show that the amount of information present in the phase of an image is sufficient to recover the majority of the information encoded in its magnitude. In fact, phase provides a detailed description of objects, as it encodes shapes, edges, and orientations.
{ "cite_N": [ "@cite_6" ], "mid": [ "2122787031" ], "abstract": [ "In the Fourier representation of signals, spectral magnitude and phase tend to play different roles and in some situations many of the important features of a signal are preserved if only the phase is retained. Furthermore, under a variety of conditions, such as when a signal is of finite length, phase information alone is sufficient to completely reconstruct a signal to within a scale factor. In this paper, we review and discuss these observations and results in a number of different contexts and applications. Specifically, the intelligibility of phase-only reconstruction for images, speech, and crystallographic structures are illustrated. Several approaches to justifying the relative importance of phase through statistical arguments are presented, along with a number of informal arguments suggesting reasons for the importance of phase. Specific conditions under which a sequence can be exactly reconstructed from phase are reviewed, both for one-dimensional and multi-dimensional sequences, and algorithms for both approximate and exact reconstruction of signals from phase information are presented. A number of applications of the observations and results in this paper are suggested." ] }
Recently, @cite_1 leveraged the Fourier spectral representation for convolutional neural networks, providing a technique for parameterizing convolution kernel weights in the spectral domain and performing pooling on the spectral representation of the signal. However, the authors avoid performing complex-valued convolutions, instead building from real-valued kernels in the spatial domain. To ensure that a complex parametrization in the spectral domain maps onto real-valued kernels, the authors impose a conjugate-symmetry constraint on the spectral-domain weights, such that applying the inverse Fourier transform to them yields only real-valued kernels.
{ "cite_N": [ "@cite_1" ], "mid": [ "2951845198" ], "abstract": [ "Discrete Fourier transforms provide a significant speedup in the computation of convolutions in deep learning. In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs). We employ spectral representations to introduce a number of innovations to CNN design. First, we propose spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain. This approach preserves considerably more information per parameter than other pooling strategies and enables flexibility in the choice of pooling output dimensionality. This representation also enables a new form of stochastic regularization by randomized modification of resolution. We show that these methods achieve competitive results on classification and approximation tasks, without using any dropout or max-pooling. Finally, we demonstrate the effectiveness of complex-coefficient spectral parameterization of convolutional filters. While this leaves the underlying model unchanged, it results in a representation that greatly facilitates optimization. We observe on a variety of popular CNN configurations that this leads to significantly faster convergence during training." ] }
As has been pointed out previously, the use of complex-valued neural networks was investigated long before the earliest deep learning breakthroughs. Recently, @cite_14 @cite_8 @cite_5 @cite_0 @cite_7 have tried to bring more attention to the usefulness of deep complex neural networks by providing theoretical and mathematical motivation for using complex-valued deep networks. However, to the best of our knowledge, most recent work on complex-valued networks has been applied to toy tasks, with a few exceptions: complex representations have been used in vision tasks, and @cite_7 have performed a real-world speech task consisting of predicting the log-magnitude of future frames. In natural language processing, complex-valued embeddings have been used. Much remains to be done to develop proper tools and a general framework for training deep neural networks with complex-valued parameters.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_0", "@cite_5" ], "mid": [ "1526708997", "", "2950192406", "2950414499", "2175402905" ], "abstract": [ "Deep learning has recently led to great successes in tasks such as image recognition (e.g , 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to cortical circuits. The challenge is to identify which neuronal mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter, we demonstrate the potential of the approach in several simple experiments. Thus, neuronal synchrony could be a flexible mechanism that fulfills multiple functional roles in deep networks.", "", "A complex-valued convolutional network (convnet) implements the repeated application of the following composition of three operations, recursively applying the composition to an input vector of nonnegative real numbers: (1) convolution with complex-valued vectors followed by (2) taking the absolute value of every entry of the resulting vectors followed by (3) local averaging. For processing real-valued random vectors, complex-valued convnets can be viewed as \"data-driven multiscale windowed power spectra,\" \"data-driven multiscale windowed absolute spectra,\" \"data-driven multiwavelet absolute values,\" or (in their most general configuration) \"data-driven nonlinear multiwavelet packets.\" Indeed, complex-valued convnets can calculate multiscale windowed spectra when the convnet filters are windowed complex-valued exponentials. Standard real-valued convnets, using rectified linear units (ReLUs), sigmoidal (for example, logistic or tanh) nonlinearities, max. pooling, etc., do not obviously exhibit the same exact correspondence with data-driven wavelets (whereas for complex-valued convnets, the correspondence is much more than just a vague analogy). Courtesy of the exact correspondence, the remarkably rich and rigorous body of mathematical analysis for wavelets applies directly to (complex-valued) convnets.", "We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters. The system has an associative memory based on complex-valued vectors and is closely related to Holographic Reduced Representations and Long Short-Term Memory networks. Holographic Reduced Representations have limited capacity: as they store more information, each retrieval becomes noisier due to interference. Our system in contrast creates redundant copies of stored information, which enables retrieval with reduced noise. Experiments demonstrate faster learning on multiple memorization tasks.", "Recurrent neural networks (RNNs) are notoriously difficult to train. 
When the eigenvalues of the hidden to hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well studied issue of vanishing and exploding gradients, especially when trying to learn long-term dependencies. To circumvent this problem, we propose a new architecture that learns a unitary weight matrix, with eigenvalues of absolute value exactly 1. The challenge we address is that of parametrizing unitary matrices in a way that does not require expensive computations (such as eigendecomposition) after each weight update. We construct an expressive unitary weight matrix by composing several structured matrices that act as building blocks with parameters to be learned. Optimization with this parameterization becomes feasible only when considering hidden states in the complex domain. We demonstrate the potential of this architecture by achieving state of the art results in several hard tasks involving very long-term dependencies." ] }
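The "complex convolution" building block referred to in the abstract is commonly realized with two real-valued convolutions, following the complex product (W_r + iW_i)(x_r + ix_i). A minimal PyTorch sketch, assuming PyTorch is available (module and tensor names are ours):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real convolutions:
    (W_r + i W_i) * (x_r + i x_i)
      = (W_r*x_r - W_i*x_i) + i (W_r*x_i + W_i*x_r)."""

    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_r, x_i):
        real = self.conv_r(x_r) - self.conv_i(x_i)
        imag = self.conv_r(x_i) + self.conv_i(x_r)
        return real, imag

# Real and imaginary parts travel as separate tensors.
layer = ComplexConv2d(3, 8, kernel_size=3, padding=1)
y_r, y_i = layer(torch.randn(1, 3, 32, 32), torch.randn(1, 3, 32, 32))
```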
1705.09914
2619538244
Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the model's depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts ("degridding"), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.
The re-emergence of convolutional networks as the primary feature extractor in computer vision has streamlined and unified this area @cite_17 @cite_3 @cite_29 @cite_24 @cite_18 . @cite_17 proposes a convolutional network for document recognition, in which convolutional layers are followed by pooling and subsampling until the spatial resolution is squeezed to 1. @cite_3 designs a much deeper network for natural image classification. The network can be divided into several levels, each of which includes several convolutional and non-linear activation layers and is followed by downsampling. Although that network has more layers and capacity, it shares the same downsampling strategy as @cite_17 . Larger and deeper networks @cite_29 @cite_24 @cite_18 have been proposed under this progressive downsampling structure. The design focus is usually on how to wire the convolutional layers to obtain more model capacity and efficient gradient propagation. Typically, the images are downsampled by a factor of 32 between the input layer and the output of the final convolutional layer. In this paper, we revisit this design choice and propose to retain higher spatial resolution all the way through to the output of the final convolutional layer, so that considerably higher-resolution activation maps are pooled for the final prediction.
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_3", "@cite_24", "@cite_17" ], "mid": [ "2949650786", "2950179405", "", "1686810756", "2147800946" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. 
This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification." ] }
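The factor-of-32 downsampling quoted above is simply the product of the stride-2 stages in a typical backbone; a tiny illustrative check (the five stride-2 stages and the 224x224 input size follow the usual convention, not any single cited network):

```python
# Five stride-2 reductions between the input and the final conv layer.
strides = [2, 2, 2, 2, 2]
factor = 1
for s in strides:
    factor *= s
print(factor)         # 32
print(224 // factor)  # 7: a 224x224 image ends up as a 7x7 feature map
```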
To aggregate the signal from a high-resolution convolutional layer, we use global average pooling @cite_12 @cite_4 @cite_18 . We show that backpropagation can effectively handle global average pooling over much bigger feature maps than previously thought, yielding high accuracy in the final predictor as well as informative high-resolution activations.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_12" ], "mid": [ "2949650786", "2295107390", "" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.", "" ] }
Object segmentation and localization also benefit from the rapid progress of network design for image classification: previous works @cite_20 @cite_28 @cite_22 found that features learned by classifying large-scale image datasets can be transferred to those tasks, achieving impressive performance after fine-tuning. We cannot list all the works using this scheme, because essentially all recent work related to image recognition takes advantage of it. However, these new tasks usually require pixel-wise features, whereas networks for image classification provide only image-level features or low-resolution feature maps. @cite_20 resizes image patches containing objects to the input size of the classification network for object detection. @cite_28 proposes an hourglass-shaped network that learns upsampling for semantic image segmentation. Yu and Koltun @cite_22 transform a trained classification network to produce higher-resolution feature maps. We develop an image classification model that produces informative high-resolution activation maps directly, with no sacrifice in classification accuracy and no need for post-hoc fine-tuning.
{ "cite_N": [ "@cite_28", "@cite_22", "@cite_20" ], "mid": [ "2952632681", "2286929393", "" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "" ] }
1705.09516
2617927774
Biomedical events describe complex interactions between various biomedical entities. Event trigger is a word or a phrase which typically signifies the occurrence of an event. Event trigger identification is an important first step in all event extraction methods. However many of the current approaches either rely on complex hand-crafted features or consider features only within a window. In this paper we propose a method that takes the advantage of recurrent neural network (RNN) to extract higher level features present across the sentence. Thus hidden state representation of RNN along with word and entity type embedding as features avoid relying on the complex hand-crafted features generated using various NLP toolkits. Our experiments have shown to achieve state-of-art F1-score on Multi Level Event Extraction (MLEE) corpus. We have also performed category-wise analysis of the result and discussed the importance of various features in trigger identification task.
One prior model feeds dependency-based word embeddings within a window around each word into a feed-forward neural network (FFNN) @cite_1 to perform the final classification. Another model is based on a convolutional neural network (CNN), where word and entity-mention features of words within a window around each word are fed to a CNN for the final classification. Although both methods achieve good performance, they fail to capture features outside the window.
{ "cite_N": [ "@cite_1" ], "mid": [ "2158899491" ], "abstract": [ "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements." ] }
1705.09597
2618168474
Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods are well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.
A number of authors have attempted to apply deep learning methods to the segmentation of vasculature. However, the vast majority of effort in this area has been applied to segmentation of vasculature in optical imaging of the retinal fundus @cite_25 @cite_3 @cite_2 . Prentašić et al. have applied deep learning methods to coherence microscopy images of micro-vasculature @cite_25 , but they consider only 2D images in which the vasculature already has an enhanced appearance. Teikari et al. have applied CNNs to the task of segmentation in 3D fluorescence microscopy images of micro-vasculature similar to ours @cite_17 . However, they consider contrast-enhanced vasculature, and concern themselves only with the segmentation of the vasculature rather than with extracting a medial representation. 3D CNNs have been applied successfully to brain lesion segmentation by Kamnitsas et al. @cite_0 with their DeepMedic architecture. 3D CNNs have also been explored in the V-Net architecture of Milletari et al. @cite_1 for prostate segmentation in MRI, and in the 3D extension of U-Net @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_3", "@cite_0", "@cite_2", "@cite_25", "@cite_17" ], "mid": [ "2464708700", "2962914239", "2288283259", "2301358467", "2206167351", "2327793514", "" ], "abstract": [ "This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.", "Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.", "The purpose of retinal image registration is to establish the coherent correspondences between the multi-model retinal image for applying into the ophthalmological surgery. Vessel landmarks detection in retinal image is the vital step in the retinal image registration. In this paper, a novel approach is proposed, firstly, a deep learning technology is used to vessel segmentation to generate the probability map of the retinal image, which is more reliable for optimizing the feature detection in retinal image. Secondly, we detect the landmarks using the multi-scale Hessian response on the probability map of the retinal image. Compared to the traditional methods, the results show that our method enable a majority of the bifurcation points, crossover points and curvature extreme points to be detected out simultaneously. Moreover, the impact of image noise and pathology can be reduced significantly.", "This work is supported by the EPSRC First Grant scheme (grant ref no. EP N023668 1) and partially funded under the 7th Framework Programme by the European Commission (TBIcare: http: www.tbicare.eu ; CENTER-TBI: https: www.center-tbi.eu ). 
This work was further supported by a Medical Research Council (UK) Program Grant (Acute brain injury: heterogeneity of mechanisms, therapeutic targets and outcome effects [G9439390 ID 65883]), the UK National Institute of Health Research Biomedical Research Centre at Cambridge and Technology Platform funding provided by the UK Department of Health. KK is supported by the Imperial College London PhD Scholarship Programme. VFJN is supported by a Health Foundation Academy of Medical Sciences Clinician Scientist Fellowship. DKM is supported by an NIHR Senior Investigator Award. We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan X GPUs for our research.", "This paper presents a new supervised method for vessel segmentation in retinal images. This method remolds the task of segmentation as a problem of cross-modality data transformation from retinal image to vessel map. A wide and deep neural network with strong induction ability is proposed to model the transformation, and an efficient training strategy is presented. Instead of a single label of the center pixel, the network can output the label map of all pixels for a given image patch. Our approach outperforms reported state-of-the-art methods in terms of sensitivity, specificity and accuracy. The result of cross-training evaluation indicates its robustness to the training set. The approach needs no artificially designed feature and no preprocessing step, reducing the impact of subjective factors. The proposed method has the potential for application in image diagnosis of ophthalmologic diseases, and it may provide a new, general, high-performance computing framework for image segmentation.", "The condition of the vascular network of human eye is an important diagnostic factor in ophthalmology. Its segmentation in fundus imaging is a nontrivial task due to variable size of vessels, relatively low contrast, and potential presence of pathologies like microaneurysms and hemorrhages. Many algorithms, both unsupervised and supervised, have been proposed for this purpose in the past. We propose a supervised segmentation technique that uses a deep neural network trained on a large (up to 400 @math 000) sample of examples preprocessed with global contrast normalization, zero-phase whitening, and augmented using geometric transformations and gamma corrections. Several variants of the method are considered, including structured prediction, where a network classifies multiple pixels simultaneously. When applied to standard benchmarks of fundus imaging, the DRIVE, STARE, and CHASE databases, the networks significantly outperform the previous algorithms on the area under ROC curve measure (up to @math ) and accuracy of classification (up to @math ). The method is also resistant to the phenomenon of central vessel reflex, sensitive in detection of fine vessels ( @math ), and fares well on pathological cases.", "" ] }
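For readers unfamiliar with the ConvLSTM units mentioned in the abstract, here is a minimal cell and a slice-by-slice sweep along the anisotropic z axis; this is our own simplified PyTorch sketch, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM gates computed by convolutions, so the hidden state
    keeps its 2D spatial layout."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c

# Sweep a (depth, H, W) microscopy stack one z-slice at a time.
B, D, H, W = 1, 16, 64, 64
cell = ConvLSTMCell(in_ch=1, hid_ch=8)
h = torch.zeros(B, 8, H, W)
c = torch.zeros(B, 8, H, W)
stack = torch.randn(B, D, 1, H, W)
for z in range(D):
    h, c = cell(stack[:, z], (h, c))  # h carries context across slices
```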
1705.09543
2617384283
Smart environments interconnect indoor building environments, indoor wireless sensor and actuator networks, smartphones, and humans together to provide smart infrastructure management and intelligent user experiences. To enable the "smart" operations, a complete set of hardware and software components is required. In this work, we present Smart Syndesi, a system for creating indoor location-aware smart building environments using wireless sensor and actuator networks (WSANs). Smart Syndesi includes an indoor tracking system, a WSAN for indoor environmental monitoring and activation automation, and a gateway interconnecting the WSAN and tracking system with mobile users. The indoor positioning system tracks the real-time location of occupants with high accuracy, which works as a basis for indoor location-based sensor actuation automation. To show how the multiple software/hardware components are integrated, we implemented the system prototype and performed intensive experiments in indoor office environments to automate the indoor location-driven environmental sensor monitoring and activation process. The tracked indoor location of a user's smartphone triggers the retrieval of environmental measurements and activates the actuators automatically (i.e. turn on/off lights, switch on/off fans) based on the location and correlated environmental sensor information.
The first problem in smart environment applications is to detect the number and identity of residents or occupants in home or office building environments. Such knowledge can be very useful to various home/office automation applications, such as energy management, lighting control, and security. @cite_14 proposed to estimate the number of people in an indoor environment using information available in standard HVAC (heating, ventilating, and air conditioning) systems. Their approach avoids the installation of extra hard sensors in the environment and offers a new solution to this problem. @cite_27 offers a different solution to this problem by using the Google Glass, which combines data from both visual and inertial sensors. @cite_13 investigated the issue of building occupancy estimation using a wireless @math sensor network.
{ "cite_N": [ "@cite_27", "@cite_14", "@cite_13" ], "mid": [ "1670326883", "1680148840", "2577997467" ], "abstract": [ "This paper proposes an ego-motion tracking method that utilizes visual-inertial sensors for wearable blind navigation. The unique challenge of wearable motion tracking is to cope with arbitrary body motion and complex environmental dynamics. We introduce a visual sanity check to select accurate visual estimations by comparing visually estimated rotation with measured rotation by a gyroscope. The movement trajectory is recovered through adaptive fusion of visual estimations and inertial measurements, where the visual estimation outputs motion transformation between consecutive image captures, and inertial sensors measure translational acceleration and angular velocities. The frame rates of visual and inertial sensors are different, and vary with respect to time owning to visual sanity checks. We hence employ a multirate extended Kalman filter (EKF) to fuse visual and inertial estimations. The proposed method was tested in different indoor environments, and the results show its effectiveness and accuracy in ego-motion tracking.", "We address the problem of estimating the number of people in a room using information available in standard HVAC systems. We propose an estimation scheme based on two phases. In the first phase, we assume the availability of pilot data and identify a model for the dynamic relations occurring between occupancy levels, @math concentration and room temperature. In the second phase, we make use of the identified model to formulate the occupancy estimation task as a deconvolution problem. In particular, we aim at obtaining an estimated occupancy pattern by trading off between adherence to the current measurements and regularity of the pattern. To achieve this goal, we employ a special instance of the so-called fused lasso estimator, which promotes piecewise constant estimates by including an @math norm-dependent term in the associated cost function. We extend the proposed estimator to include different sources of information, such as actuation of the ventilation system and door opening closing events. We also provide conditions under which the occupancy estimator provides correct estimates within a guaranteed probability. We test the estimator running experiments on a real testbed, in order to compare it with other occupancy estimation techniques and assess the value of having additional information sources.", "" ] }
To build an infrastructure platform that controls the electrical appliances in smart environments, wireless sensor and actuator networks are the dominant technology @cite_9 @cite_5 @cite_1 @cite_12 @cite_16 . WSNs, rather than WiFi-based networks, have been popularly employed for remote control and monitoring applications, mainly because of their low cost and reduced power consumption @cite_19 @cite_4 @cite_23 @cite_26 . The deployment of such a system is not easy due to the existence of different communication standards. @cite_17 proposed a wireless sensor network using 6LoWPAN to connect sensors and actuators in a home automation application. The authors of @cite_2 presented Syndesi, a framework for creating personalized smart environments using WSANs. It combines WSANs with different communication technologies, such as Near Field Communication (NFC), Bluetooth, ZigBee, and 6LoWPAN, along with an electrical interface to control office appliances.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_9", "@cite_1", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "", "", "", "", "", "2119549134", "2014692821", "", "2322413790", "2151687604", "1984150331" ], "abstract": [ "", "", "", "", "", "This paper presents the design, implementation, and characterization of a hardware platform applicable to wireless structural health monitoring (WSHM). The primary design goal is to devise a system capable of persistent operation for the duration of the life cycle of a target structure. It should be deployable during the construction phase and reconfigurable thereafter, suitable for continuous long-term monitoring. In addition to selecting the most energy efficient useful components to ensure the lowest possible power consumption, it is necessary to consider sources of energy other than, or complementary to, batteries. Thus, the platform incorporates multisource energy harvesting, electrochemical fuel cell (FC), energy storage, recharging capability, and intelligent operation through real-time energy information exchange with the primary controller. It is shown that, with appropriate integration, the device will have sufficient energy to operate perpetually in a distributed WSHM application. This conclusion is demonstrated through experimental results, simulations, and empirical measurements that demonstrate the high-efficiency energy conversion of the harvesters (up to 86 ) and low-power characteristics of the platform (less than 1 mW in sleep mode). It is shown that energy autonomy is comfortably achievable for duty cycles up to 0.75 , meeting the demands of the application, and up to 1.5 , invoking the FC.", "Smart environments are places where different kind of embedded devices are interconnected in order to provide their occupants intelligent services improving their comfort and convenience. These smart environments are seen to be important for the future urban ecosystems in terms of user friendliness, quality of life, energy efficiency and sustainability. Lately such environments have become economically and technologically feasible due to the advancements in embedded and distributed technologies. Most of the novel infrastructures adopt smart technologies, while old infrastructures need a transition towards smart environments. Even though different technologies and devices are available, there is a need for an appropriate methodology and a system architecture for a smooth and profitable transition towards smart environments. In this paper we present a framework for creating personalized smart environments using wireless sensor networks. This framework, among other services that it provides, is able to identify people and take personalized actions such as control electrical devices based on their preferences and needs. We present, as a proof of concept, a real world deployment where two scenarios are implemented in two office premises.", "", "Abstract In this paper, current technologies which can be used in the wireless home networking area are reviewed and compared. Then a solution of smart home system based on ZigBee technology is put forward and discussed. The hardware design of ZigBee wireless sensor network based on CC2430 chip and embedded home gateway based on S3C2440 is also given in detail. The software workflow of coordinator and router are discussed and user-defined frame format is further discribed. 
Experimental results show that the solution of smart home system mentioned above can be served as practical application reliably.", "For 'mixed-criticality' systems that have both critical and non-critical functions, the greatest leverage on dependability may be at the design level. By designing so that each critical requirement has a small trusted base, the cost of the analysis required for a dependability case might be dramatically reduced. An implication of this approach is that conventional object-oriented design may be a liability, because it leads to 'entanglement', and an approach based on separating services may be preferable.", "Wireless sensor and actuator networks (WS&ANs) are a new technology based on networks of small radio-enabled embedded devices that are being deployed in areas such as environmental monitoring, vehicle tracking, building management, body monitoring and other applications. Power sources for network nodes are often limited, which imposes restrictions on hardware resources and their use by the underlying embedded software. We propose a new wireless sensor network architecture that is especially designed for the task of home automation. Our system relies on a low power WS&AN that employs energy harvesting techniques to maximize node lifetime and an embedded residential gateway that offers user interaction and secure connectivity to the outside world. The advantages of our system are its scalability, low power, self sufficiency and versatility." ] }
1705.09585
2619301517
Individuals on social media may reveal themselves to be in various states of crisis (e.g. suicide, self-harm, abuse, or eating disorders). Detecting crisis from social media text automatically and accurately can have profound consequences. However, detecting a general state of crisis without explaining why has limited applications. An explanation in this context is a coherent, concise subset of the text that rationalizes the crisis detection. We explore several methods to detect and explain crisis using a combination of neural and non-neural techniques. We evaluate these techniques on a unique data set obtained from Koko, an anonymous emotional support network available through various messaging applications. We annotate a small subset of the samples labeled with crisis with corresponding explanations. Our best technique significantly outperforms the baseline for detection and explanation.
In the past few years, neural attention mechanisms over words @cite_0 have led to improvements in performance and interpretability on a range of tasks, such as translation (Bahdanau et al.) and natural language inference. These models induce a soft alignment between two sequences, with the primary aim of using it to remove an information bottleneck, but this alignment can also be used quite effectively to visualize which inputs drive model behavior.
{ "cite_N": [ "@cite_0" ], "mid": [ "2133564696" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ] }
1705.09369
2618565609
Deep Convolutional Neural Networks (CNN) have shown great success in supervised classification tasks such as character classification or dating. Deep learning methods typically need a lot of annotated training data, which is not available in many scenarios. In these cases, traditional methods are often better than or equivalent to deep learning methods. In this paper, we propose a simple, yet effective, way to learn CNN activation features in an unsupervised manner. Therefore, we train a deep residual network using surrogate classes. The surrogate classes are created by clustering the training dataset, where each cluster index represents one surrogate class. The activations from the penultimate CNN layer serve as features for subsequent classification tasks. We evaluate the feature representations on two publicly available datasets. The focus lies on the ICDAR17 competition dataset on historical document writer identification (Historical-WI). We show that the activation features we trained without supervision are superior to descriptors of state-of-the-art writer identification methods. Additionally, we achieve comparable results in the case of handwriting classification using the ICFHR16 competition dataset on historical Latin script types (CLaMM16).
Clustering has also been used to create unsupervised attributes for historical document dating in the work of He et al. @cite_21 . However, they use handcrafted features in conjunction with SVMs, whereas we learn the features in an unsupervised manner using a deep CNN.
{ "cite_N": [ "@cite_21" ], "mid": [ "2444313793" ], "abstract": [ "The date of historical documents is an important metadata for scholars using them, as they need to know the historical context of the documents. This paper presents a novel attribute representation for medieval documents to automatically estimate the date information, which are the years they had been written. Non-semantic attributes are discovered in the low-level feature space using an unsupervised attribute learning method. A negative data set is involved in the attribute learning to make sure that our system rejects the documents which are not from the Middle Ages nor from the same archives. Experimental results on the basis of the Medieval Paleographic Scale (MPS) data set demonstrate that the proposed method achieves the state-of-the-art result." ] }
The most closely related work comes from Dosovitskiy et al. @cite_35 , where surrogate classes are created by a variety of image transformations such as rotation or scale. Using these classes to train a CNN, they generate features that are invariant to many transformations and are advantageous in comparison to handcrafted features. They also suggest clustering the images in advance, applying their transformations to each cluster image, and then using the cluster indices as surrogate classes. A similar procedure is applied by Huang et al. @cite_27 to discover shared attributes and visual representations. In comparison to the datasets used in the evaluations of Dosovitskiy et al. and Huang et al., we have many more training samples available since we consider small handwriting patches. Thus, an exhaustive augmentation of the dataset is not necessary; instead, one cluster directly represents a surrogate class. Another interesting approach for deep unsupervised feature learning is the work of Paulin et al. @cite_40 , where Convolutional Kernel Networks (CKN) are employed. CKNs are similar to CNNs but are trained layer-wise to approximate a particular non-linear kernel.
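For contrast with the clustering-based surrogate classes used in this record, the following is a minimal sketch of transformation-based surrogate classes in the style of Dosovitskiy et al.; the specific transformations, patch sizes, and sample counts are illustrative assumptions, not the exact augmentations of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch):
    # Toy stand-in for the rotation / scale / color jitter used in the paper.
    k = int(rng.integers(0, 4))
    return np.rot90(patch, k) * rng.uniform(0.8, 1.2)

# Each randomly sampled seed patch defines one surrogate class; its training
# examples are random transformations of that single seed.
seeds = [rng.normal(size=(32, 32)) for _ in range(100)]  # 100 surrogate classes
dataset = [(random_transform(seed), label)
           for label, seed in enumerate(seeds)
           for _ in range(16)]                           # 16 samples per class
# A CNN trained to classify `dataset` learns features invariant to the
# transformations applied above.
```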
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_40" ], "mid": [ "2144796873", "2519998487", "2287558205" ], "abstract": [ "Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled ‘seed’ image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.", "Attributes offer useful mid-level features to interpret visual data. While most attribute learning methods are supervised by costly human-generated labels, we introduce a simple yet powerful unsupervised approach to learn and predict visual attributes directly from data. Given a large unlabeled image collection as input, we train deep Convolutional Neural Networks (CNNs) to output a set of discriminative, binary attributes often with semantic meanings. Specifically, we first train a CNN coupled with unsupervised discriminative clustering, and then use the cluster membership as a soft supervision to discover shared attributes from the clusters while maximizing their separability. The learned attributes are shown to be capable of encoding rich imagery properties from both natural images and contour patches. The visual representations learned in this way are also transferrable to other tasks such as object detection. We show other convincing results on the related tasks of image retrieval and classification, and contour detection.", "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. 
To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval." ] }
1705.09587
2617708032
We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300 x 300 achieved 78.5 mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512 x 512 sized input achieved 80.8 mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.
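A hedged illustration of the weight-sharing property this abstract highlights: one classifier convolution applied to every pyramid level. The channel count, number of default boxes, and level sizes below are assumptions for the sketch; sharing requires all levels to expose the same feature depth, which is one way to read the changes to the "structure close to the classifier network" described above.

```python
import torch
import torch.nn as nn

num_classes, num_boxes, channels = 21, 6, 256
# One head shared across all scales: class scores plus 4 box offsets per box.
shared_head = nn.Conv2d(channels, num_boxes * (num_classes + 4),
                        kernel_size=3, padding=1)

# Feature maps of decreasing resolution, all unified to `channels` depth.
pyramid = [torch.randn(1, channels, s, s) for s in (38, 19, 10, 5, 3, 1)]
predictions = [shared_head(f) for f in pyramid]  # same weights at every scale
for p in predictions:
    print(p.shape)  # (1, num_boxes * (num_classes + 4), s, s) for each level
```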
A wide variety of deep learning methods have been applied to object detection, and performance continues to improve. In the earlier works pioneered by R-CNN (region-based CNN) @cite_13 , candidate regions were proposed through separate algorithms such as selective search @cite_9 or Edge Boxes @cite_12 , and the classification was performed with deep learning.
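As a pseudocode-style sketch of the two-stage pipeline this paragraph describes, the helpers below (`selective_search`, `crop_and_resize`, `cnn_features`, `svm_score`) are hypothetical stand-ins passed in as arguments, not functions of any particular library.

```python
def rcnn_style_detect(image, selective_search, crop_and_resize,
                      cnn_features, svm_score):
    """Two-stage detection: external proposals, then CNN-based classification."""
    detections = []
    for box in selective_search(image):          # separate proposal algorithm
        patch = crop_and_resize(image, box)      # warp each proposal to a fixed size
        scores = svm_score(cnn_features(patch))  # CNN features + per-class scoring
        detections.append((box, scores))
    return detections
```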
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_12" ], "mid": [ "", "2102605133", "7746136" ], "abstract": [ "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy." ] }
1705.09587
2617708032
We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300 x 300 achieved 78.5 mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512 x 512 sized input achieved 80.8 mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.
Although R-CNN improved accuracy using deep learning, speed remained a problem and end-to-end learning was impossible. The region proposal network (RPN) was first introduced in Faster R-CNN, which significantly improved the speed of the object detector and enabled end-to-end learning @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2613718673" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn." ] }
1705.09587
2617708032
We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300 x 300 achieved 78.5 mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512 x 512 sized input achieved 80.8 mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.
On the other hand, SSD creates bounding box candidates at a given position and scale and obtains their actual bounding boxes and scores for each class @cite_15 . Recently, to improve the accuracy of SSD, especially for small objects, DSSD (deconvolutional SSD), which uses large-scale context in the feature pyramid, was proposed @cite_10 . DSSD applied a deconvolution module to the feature pyramid and used ResNet instead of VGGNet. DSSD succeeded in raising accuracy at the expense of speed.
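A minimal sketch of the SSD-style default boxes this paragraph refers to: at each feature-map cell, boxes of a fixed scale and several aspect ratios are laid out, and the network regresses offsets and class scores relative to them. The scale, aspect ratios, and grid size are illustrative; the actual SSD additionally adds an extra box per location, omitted here.

```python
import numpy as np

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size  # cell center
            for ar in aspect_ratios:
                w, h = scale * np.sqrt(ar), scale / np.sqrt(ar)
                boxes.append((cx, cy, w, h))
    return np.array(boxes)  # shape: (fmap_size**2 * len(aspect_ratios), 4)

print(default_boxes(fmap_size=19, scale=0.2).shape)  # (1083, 4)
```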
{ "cite_N": [ "@cite_15", "@cite_10" ], "mid": [ "2193145675", "2579985080" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset." ] }
1705.09587
2617708032
We propose an object detection method that improves the accuracy of the conventional SSD (Single Shot Multibox Detector), which is one of the top object detection algorithms in both aspects of accuracy and speed. The performance of a deep network is known to be improved as the number of feature maps increases. However, it is difficult to improve the performance by simply raising the number of feature maps. In this paper, we propose and analyze how to use feature maps effectively to improve the performance of the conventional SSD. The enhanced performance was obtained by changing the structure close to the classifier network, rather than growing layers close to the input data, e.g., by replacing VGGNet with ResNet. The proposed network is suitable for sharing the weights in the classifier networks, by which property, the training can be faster with better generalization power. For the Pascal VOC 2007 test set trained with VOC 2007 and VOC 2012 training sets, the proposed network with the input size of 300 x 300 achieved 78.5 mAP (mean average precision) at the speed of 35.0 FPS (frame per second), while the network with a 512 x 512 sized input achieved 80.8 mAP at 16.6 FPS using Nvidia Titan X GPU. The proposed network shows state-of-the-art mAP, which is better than those of the conventional SSD, YOLO, Faster-RCNN and RFCN. Also, it is faster than Faster-RCNN and RFCN.
Besides object detection, the field of segmentation has also advanced considerably through the application of deep neural networks. Many of these methods use pixel-wise classification in combination with upsampling or deconvolution to obtain segmentation results at the same size as the input image @cite_5 @cite_11 . In addition, features are combined in various ways, by concatenation, element-wise addition, or element-wise product with bypass connections, to obtain improved performance. This paper is inspired by the fact that features at different scales yield an improved abstract representation of the original image @cite_0 @cite_7 .
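The fusion patterns named in this paragraph can be illustrated in a few lines; the shapes and channel counts below are assumptions, and the snippet only shows the combination step, not a full segmentation network.

```python
import torch
import torch.nn.functional as F

fine = torch.randn(1, 256, 64, 64)    # early layer: high resolution, weak semantics
coarse = torch.randn(1, 256, 16, 16)  # deep layer: low resolution, strong semantics

# Upsample the coarse map to the fine map's resolution, then combine.
up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                   align_corners=False)
fused_add = fine + up                     # element-wise addition
fused_cat = torch.cat([fine, up], dim=1)  # concatenation along channels
fused_mul = fine * up                     # element-wise product with a bypass path
print(fused_add.shape, fused_cat.shape, fused_mul.shape)
```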
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_7", "@cite_11" ], "mid": [ "1948751323", "2952637581", "2526041965", "2286929393" ], "abstract": [ "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.", "We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.", "We explore architectures for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that (1) stratified sampling allows us to add diversity during batch updates and (2) sampled multi-scale features allow us to explore more nonlinear predictors (multiple fully-connected layers followed by ReLU) that improve overall accuracy. Finally, our objective is to show how a architecture can get performance better than (or comparable to) the architectures designed for a particular task. Interestingly, our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context, surface normal estimation on NYUDv2 dataset, and edge detection on BSDS without contextual post-processing.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. 
In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy." ] }
1705.08974
2619994939
Online social media is a social vehicle in which people share various moments of their lives with their friends, such as playing sports, cooking dinner or just taking a selfie for fun, via visual means, that is, photographs. Our study takes a closer look at the popular visual concepts illustrating various cultural lifestyles from aggregated, de-identified photographs. We perform analysis both at macroscopic and microscopic levels, to gain novel insights about global and local visual trends as well as the dynamics of interpersonal cultural exchange and diffusion among Facebook friends. We processed images by automatically classifying the visual content by a convolutional neural network (CNN). Through various statistical tests, we find that socially tied individuals are more likely to post images showing similar cultural lifestyles. To further identify the main cause of the observed social correlation, we use the Shuffle test and the Preference-based Matched Estimation (PME) test to distinguish the effects of influence and homophily. The results indicate that the visual content of each user's photographs is temporally, although not necessarily causally, correlated with the photographs of their friends, which may suggest the effect of influence. Our paper demonstrates that Facebook photographs exhibit diverse cultural lifestyles and preferences and that the social interaction mediated through the visual channel in social media can be an effective mechanism for cultural diffusion.
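A toy sketch of the Shuffle-test logic mentioned in the abstract: compute a social-correlation statistic on the observed adoption times, then compare it against the same statistic after permuting those times. The graph, times, and statistic below are illustrative simplifications, not the paper's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy friendship graph
adopt_time = np.array([3, 5, 9, 20])              # first post of a concept per user

neighbors = {u: set() for u in range(len(adopt_time))}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def social_correlation(times):
    # Fraction of users with at least one friend who adopted strictly earlier.
    return np.mean([any(times[f] < times[u] for f in neighbors[u])
                    for u in range(len(times))])

observed = social_correlation(adopt_time)
null = np.mean([social_correlation(rng.permutation(adopt_time))
                for _ in range(1000)])
# Under influence, `observed` should exceed the time-shuffled `null`;
# under pure homophily, shuffling the times should change little.
print(observed, null)
```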
Automated visual content analysis by computer vision has been used to analyze web images in various applications, including fashion studies @cite_11 and political analysis @cite_2 . Geo-tagged photographs are particularly useful for understanding local communities and geographic differences in popular photographic style and content @cite_24 , architectural style @cite_7 , natural environment @cite_9 , ecological phenomena @cite_25 , or other socio-economic status @cite_29 @cite_15 @cite_3 @cite_21 . These studies collect images for each geographical region and treat them collectively without distinguishing who posts them (i.e., photos are used solely to study geographical features). In contrast, the key concern in this paper is each individual user's social relations, and we study the role of social ties in cultural diffusion.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_29", "@cite_21", "@cite_3", "@cite_24", "@cite_2", "@cite_15", "@cite_25", "@cite_11" ], "mid": [ "2055132753", "", "1525292367", "", "265421414", "2524508530", "2073775149", "1968147892", "2052202194", "1949942966" ], "abstract": [ "Given a large repository of geotagged imagery, we seek to automatically find visual elements, e. g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering approach able to take into account the weak geographic supervision. We show that geographically representative image elements can be discovered automatically from Google Street View imagery in a discriminative manner. We demonstrate that these elements are visually interpretable and perceptually geo-informative. The discovered visual elements can also support a variety of computational geography tasks, such as mapping architectural correspondences and influences within and across cities, finding representative elements at different geo-spatial scales, and geographically-informed image retrieval.", "", "After hundreds of years of human settlement, each city has formed a distinct identity, distinguishing itself from other cities. In this work, we propose to characterize the identity of a city via an attribute analysis of 2 million geo-tagged images from 21 cities over 3 continents. First, we estimate the scene attributes of these images and use this representation to build a higher-level set of 7 city attributes, tailored to the form and function of cities. Then, we conduct the city identity recognition experiments on the geo-tagged images and identify images with salient city identity on each city attribute. Based on the misclassification rate of the city identity recognition, we analyze the visual similarity among different cities. Finally, we discuss the potential application of computer vision to urban planning.", "", "Human observers make a variety of perceptual inferences about pictures of places based on prior knowledge and experience. In this paper we apply computational vision techniques to the task of predicting the perceptual characteristics of places by leveraging recent work on visual features along with a geo-tagged dataset of images associated with crowd-sourced urban perception judgments for wealth, uniqueness, and safety. We perform extensive evaluations of our models, training and testing on images of the same city as well as training and testing on images of different cities to demonstrate generalizability. In addition, we collect a new densely sampled dataset of streetview images for 4 cities and explore joint models to collectively predict perceptual judgments at city scale. Finally, we show that our predictions correlate well with ground truth statistics of wealth and crime.", "Billions of photos shared online today are created by people with different socio-economic characteristics living in different locations. We introduce a number of methods for quantifying the differences between such \"photo cultures\" and apply them to a large collection of Instagram images shared in five mega-cities around the world. 
First, we extract image content and style features and use them to design a new visualization technique for qualitative analysis of photo cultures. We then use supervised learning to automatically recognize and compare visual activity at different locations and expose surprising connections between geographically distant photo cultures. Finally, we perform a low-level quantitative analysis to understand what makes photo cultures different from each other.", "In this paper we introduce the novel problem of understanding visual persuasion. Modern mass media make extensive use of images to persuade people to make commercial and political decisions. These effects and techniques are widely studied in the social sciences, but behavioral studies do not scale to massive datasets. Computer vision has made great strides in building syntactical representations of images, such as detection and identification of objects. However, the pervasive use of images for communicative purposes has been largely ignored. We extend the significant advances in syntactic analysis in computer vision to the higher-level challenge of understanding the underlying communicative intent implied in images. We begin by identifying nine dimensions of persuasive intent latent in images of politicians, such as \"socially dominant, \" \"energetic, \" and \"trustworthy, \" and propose a hierarchical model that builds on the layer of syntactical attributes, such as \"smile\" and \"waving hand, \" to predict the intents presented in the images. To facilitate progress, we introduce a new dataset of 1, 124 images of politicians labeled with ground-truth intents in the form of rankings. This study demonstrates that a systematic focus on visual persuasion opens up the field of computer vision to a new class of investigations around mediated images, intersecting with media analysis, psychology, and political communication.", "A traveler visiting Rio, Manila or Caracas does not need a report to learn that these cities are unequal; she can see it directly from the taxicab window. This is because in most cities inequality is conspicuous, but also, because cities express different forms of inequality that are evident to casual observers. Cities are highly heterogeneous and often unequal with respect to the income of their residents, but also with respect to the cleanliness of their neighborhoods, the beauty of their architecture, and the liveliness of their streets, among many other evaluative dimensions. Until now, however, our ability to understand the effect of a city's built environment on social and economic outcomes has been limited by the lack of quantitative data on urban perception. Here, we build on the intuition that inequality is partly conspicuous to create quantitative measure of a city's contrasts. Using thousands of geo-tagged images, we measure the perception of safety, class and uniqueness; in the cities of Boston and New York in the United States, and Linz and Salzburg in Austria, finding that the range of perceptions elicited by the images of New York and Boston is larger than the range of perceptions elicited by images from Linz and Salzburg. We interpret this as evidence that the cityscapes of Boston and New York are more contrasting, or unequal, than those of Linz and Salzburg. 
Finally, we validate our measures by exploring the connection between them and homicides, finding a significant correlation between the perceptions of safety and class and the number of homicides in a NYC zip code, after controlling for the effects of income, population, area and age. Our results show that online images can be used to create reproducible quantitative measures of urban perception and characterize the inequality of different cities.", "The popularity of social media websites like Flickr and Twitter has created enormous collections of user-generated content online. Latent in these content collections are observations of the world: each photo is a visual snapshot of what the world looked like at a particular point in time and space, for example, while each tweet is a textual expression of the state of a person and his or her environment. Aggregating these observations across millions of social sharing users could lead to new techniques for large-scale monitoring of the state of the world and how it is changing over time. In this paper we step towards that goal, showing that by analyzing the tags and image features of geo-tagged, time-stamped photos we can measure and quantify the occurrence of ecological phenomena including ground snow cover, snow fall and vegetation density. We compare several techniques for dealing with the large degree of noise in the dataset, and show how machine learning can be used to reduce errors caused by misleading tags and ambiguous visual content. We evaluate the accuracy of these techniques by comparing to ground truth data collected both by surface stations and by Earth-observing satellites. Besides the immediate application to ecology, our study gives insight into how to accurately crowd-source other types of information from large, noisy social sharing datasets.", "In this paper, we analyze the fashion of clothing of a large social website. Our goal is to learn and predict how fashionable a person looks on a photograph and suggest subtle improvements the user could make to improve her his appeal. We propose a Conditional Random Field model that jointly reasons about several fashionability factors such as the type of outfit and garments the user is wearing, the type of the user, the photograph's setting (e.g., the scenery behind the user), and the fashionability score. Importantly, our model is able to give rich feedback back to the user, conveying which garments or even scenery she he should change in order to improve fashionability. We demonstrate that our joint approach significantly outperforms a variety of intelligent baselines. We additionally collected a novel heterogeneous dataset with 144,169 user posts containing diverse image, textual and meta information which can be exploited for our task. We also provide a detailed analysis of the data, showing different outfit trends and fashionability scores across the globe and across a span of 6 years." ] }
1705.08974
2619994939
Online social media is a social vehicle in which people share various moments of their lives with their friends, such as playing sports, cooking dinner or just taking a selfie for fun, via visual means, that is, photographs. Our study takes a closer look at the popular visual concepts illustrating various cultural lifestyles from aggregated, de-identified photographs. We perform analysis both at macroscopic and microscopic levels, to gain novel insights about global and local visual trends as well as the dynamics of interpersonal cultural exchange and diffusion among Facebook friends. We processed images by automatically classifying the visual content by a convolutional neural network (CNN). Through various statistical tests, we find that socially tied individuals are more likely to post images showing similar cultural lifestyles. To further identify the main cause of the observed social correlation, we use the Shuffle test and the Preference-based Matched Estimation (PME) test to distinguish the effects of influence and homophily. The results indicate that the visual content of each user's photographs is temporally, although not necessarily causally, correlated with the photographs of their friends, which may suggest the effect of influence. Our paper demonstrates that Facebook photographs exhibit diverse cultural lifestyles and preferences and that the social interaction mediated through the visual channel in social media can be an effective mechanism for cultural diffusion.
A few studies have also examined how image features can predict image popularity @cite_6 @cite_1 or viewer engagement @cite_35 . These studies focus on image instance-level analysis (e.g., "what makes an image popular?"), whereas our paper investigates whether certain visual concepts propagate between the images of different users.
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_6" ], "mid": [ "2168117362", "2065735564", "2039355801" ], "abstract": [ "Photos are becoming prominent means of communication online. Despite photos' pervasive presence in social media and online world, we know little about how people interact and engage with their content. Understanding how photo content might signify engagement, can impact both science and design, influencing production and distribution. One common type of photo content that is shared on social media, is the photos of people. From studies of offline behavior, we know that human faces are powerful channels of non-verbal communication. In this paper, we study this behavioral phenomena online. We ask how presence of a face, it's age and gender might impact social engagement on the photo. We use a corpus of 1 million Instagram images and organize our study around two social engagement feedback factors, likes and comments. Our results show that photos with faces are 38 more likely to receive likes and 32 more likely to receive comments, even after controlling for social network reach and activity. We find, however, that the number of faces, their age and gender do not have an effect. This work presents the first results on how photos with human faces relate to engagement on large scale image sharing communities. In addition to contributing to the research around online user behavior, our findings offer a new line of future work using visual analysis.", "Little is known on how visual content affects the popularity on social networks, despite images being now ubiquitous on the Web, and currently accounting for a considerable fraction of all content shared. Existing art on image sharing focuses mainly on non-visual attributes. In this work we take a complementary approach, and investigate resharing from a mainly visual perspective. Two sets of visual features are proposed, encoding both aesthetical properties (brightness, contrast, sharpness, etc.), and semantical content (concepts represented by the images). We collected data from a large image-sharing service (Pinterest) and evaluated the predictive power of different features on popularity (number of reshares). We found that visual properties have low predictive power compared that of social cues. However, after factoring-out social influence, visual features show considerable predictive power, especially for images with higher exposure, with over 3:1 accuracy odds when classifying highly exposed images between very popular and unpopular.", "Hundreds of thousands of photographs are uploaded to the internet every minute through various social networking and photo sharing platforms. While some images get millions of views, others are completely ignored. Even from the same users, different photographs receive different number of views. This begs the question: What makes a photograph popular? Can we predict the number of views a photograph will receive even before it is uploaded? These are some of the questions we address in this work. We investigate two key components of an image that affect its popularity, namely the image content and social context. Using a dataset of about 2.3 million images from Flickr, we demonstrate that we can reliably predict the normalized view count of images with a rank correlation of 0.81 using both image content and social cues. 
In this paper, we show the importance of image cues such as color, gradients, deep learning features and the set of objects present, as well as the importance of various social cues such as number of friends or number of photos uploaded that lead to high or low popularity of images." ] }
1705.09353
2619010835
We present a new model, called Predictive State Recurrent Neural Networks (PSRNNs), for filtering and prediction in dynamical systems. PSRNNs draw on insights from both Recurrent Neural Networks (RNNs) and Predictive State Representations (PSRs), and inherit advantages from both types of models. Like many successful RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer functions to combine information from multiple sources, so that one source can act as a gate for another. These bilinear functions arise naturally from the connection to state updates in Bayes filters like PSRs, in which observations can be viewed as gating belief states. We show that PSRNNs can be learned effectively by combining backpropagation through time (BPTT) with an initialization based on a statistically consistent learning algorithm for PSRs called two-stage regression (2SR). We also show that PSRNNs can be factorized using tensor decomposition, reducing model size and suggesting interesting theoretical connections to existing multiplicative architectures such as LSTMs. We applied PSRNNs to 4 datasets, and showed that we outperform several popular alternative approaches to modeling dynamical systems in all cases.
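A minimal numeric sketch of the bilinear state update described in this abstract: the next predictive state is a contraction of a 3-mode parameter tensor with the current state and the observation features, followed by normalization. The dimensions, random parameters, and two-norm normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, d_obs = 20, 10
W = rng.normal(size=(d_state, d_state, d_obs))  # 3-mode transition tensor

def bilinear_update(q, o):
    # The observation `o` gates the state `q` through a bilinear map.
    q_new = np.einsum("ijk,j,k->i", W, q, o)
    return q_new / np.linalg.norm(q_new)        # keep the state normalized

q = rng.normal(size=d_state)
q /= np.linalg.norm(q)
for _ in range(5):                              # filter over a toy observation stream
    q = bilinear_update(q, rng.normal(size=d_obs))
print(q[:3])
```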
It is well known that a principled initialization can greatly increase the effectiveness of local search heuristics. For example, subspace identification has been used to initialize EM for linear dynamical systems, and N4SID @cite_33 has been used to initialize GP-Bayes filters.
{ "cite_N": [ "@cite_33" ], "mid": [ "2130212796" ], "abstract": [ "Abstract Recently a great deal of attention has been given to numerical algorithms for subspace state space system identification (N4SID). In this paper, we derive two new N4SID algorithms to identify mixed deterministic-stochastic systems. Both algorithms determine state sequences through the projection of input and output data. These state sequences are shown to be outputs of non-steady state Kalman filter banks. From these it is easy to determine the state space system matrices. The N4SID algorithms are always convergent (non-iterative) and numerically stable since they only make use of QR and Singular Value Decompositions. Both N4SID algorithms are similar, but the second one trades off accuracy for simplicity. These new algorithms are compared with existing subspace algorithms in theory and in practice." ] }
1705.09353
2619010835
We present a new model, called Predictive State Recurrent Neural Networks (PSRNNs), for filtering and prediction in dynamical systems. PSRNNs draw on insights from both Recurrent Neural Networks (RNNs) and Predictive State Representations (PSRs), and inherit advantages from both types of models. Like many successful RNN architectures, PSRNNs use (potentially deeply composed) bilinear transfer functions to combine information from multiple sources, so that one source can act as a gate for another. These bilinear functions arise naturally from the connection to state updates in Bayes filters like PSRs, in which observations can be viewed as gating belief states. We show that PSRNNs can be learned effectively by combining backpropagation through time (BPTT) with an initialization based on a statistically consistent learning algorithm for PSRs called two-stage regression (2SR). We also show that PSRNNs can be factorized using tensor decomposition, reducing model size and suggesting interesting theoretical connections to existing multiplicative architectures such as LSTMs. We applied PSRNNs to 4 datasets, and showed that we outperform several popular alternative approaches to modeling dynamical systems in all cases.
Existing work similar to ours can be organized into two main categories: 1) methods that use Bayes filters (BFs) to initialize RNNs @cite_1 @cite_15 ; and 2) methods that use BPTT to refine BFs @cite_17 @cite_20 . Researchers have been experimenting with the latter strategy for several decades @cite_20 .
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_20", "@cite_17" ], "mid": [ "1826694107", "2394569251", "2512297048", "2587899849" ], "abstract": [ "Low dimensional representations of words allow accurate NLP models to be trained on limited annotated data. While most representations ignore words' local context, a natural way to induce context-dependent representations is to perform inference in a probabilistic latent-variable sequence model. Given the recent success of continuous vector space word representations, we provide such an inference procedure for continuous states, where words' representations are given by the posterior mean of a linear dynamical system. Here, efficient inference can be performed using Kalman filtering. Our learning algorithm is extremely scalable, operating on simple cooccurrence counts for both parameter initialization using the method of moments and subsequent iterations of EM. In our experiments, we employ our inferred word embeddings as features in standard tagging tasks, obtaining significant accuracy improvements. Finally, the Kalman filter updates can be seen as a linear recurrent neural network. We demonstrate that using the parameters of our model to initialize a non-linear recurrent neural network language model reduces its training time by a day and yields lower perplexity.", "Much recent research highlighted the critical role of unsuper- vised pre-training to improve the performance of neural network models. However, extensions of those architectures to the temporal domain intro- duce additional issues, which often prevent to obtain good performance in a reasonable time. We propose a novel approach to pre-train sequential neural networks in which a simpler, approximate distribution generated by a linear model is first used to drive the weights in a better region of the parameter space. After this smooth distribution has been learned, the net- work is fine-tuned on the more complex real dataset. The benefits of the proposed method are demonstrated on a prediction task using two datasets of polyphonic music, and the general validity of this strategy is shown by applying it to two different recurrent neural network architectures.", "", "Over the past decade there has been considerable interest in spectral algorithms for learning Predictive State Representations (PSRs). Spectral algorithms have appealing theoretical guarantees; however, the resulting models do not always perform well on inference tasks in practice. One reason for this behavior is the mismatch between the intended task (accurate filtering or prediction) and the loss function being optimized by the algorithm (estimation error in model parameters). A natural idea is to improve performance by refining PSRs using an algorithm such as EM. Unfortunately it is not obvious how to apply apply an EM style algorithm in the context of PSRs as the Log Likelihood is not well defined for all PSRs. We show that it is possible to overcome this problem using ideas from Predictive State Inference Machines. We combine spectral algorithms for PSRs as a consistent and efficient initialization with PSIM-style updates to refine the resulting model parameters. By combining these two ideas we develop Inference Gradients, a simple, fast, and robust method for practical learning of PSRs. Inference Gradients performs gradient descent in the PSR parameter space to optimize an inference-based loss function like PSIM. Because Inference Gradients uses a spectral initialization we get the same consistency benefits as PSRs. 
We show that Inference Gradients outperforms both PSRs and PSIMs on real and synthetic data sets." ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
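A hedged sketch of the idea stated in this abstract: a reward given by the similarity between embeddings of the agent's visual state and a cross-domain goal image. The encoder architecture and the cosine-similarity choice are illustrative assumptions, not the paper's exact CDPR model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(                 # stand-in for a deep CNN encoder
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def perceptual_reward(state_img, goal_img):
    with torch.no_grad():
        s, g = embed(state_img), embed(goal_img)
    return F.cosine_similarity(s, g, dim=1)  # higher means visually closer to the goal

state = torch.rand(1, 3, 84, 84)  # the agent's rendered observation
goal = torch.rand(1, 3, 84, 84)   # goal image from a different domain
print(perceptual_reward(state, goal))
```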
Arguably, one of the most common approaches for representing goals for RL problems is to provide hand-specified rewards that are based on internal parameters of an agent's environment. There is a wealth of literature that describes such task-specific rewards, from those that indicate when an agent has reached a desired location @cite_5 , to more complex ones that can be used as feedback for training neural networks @cite_22 . Rewards are largely where we instill domain knowledge, and so are inherently specialized. Even if a reward is commonly used for multiple problems in the same domain, it often remains a function of the configurations of each agent in the environment.
{ "cite_N": [ "@cite_5", "@cite_22" ], "mid": [ "1491843047", "2553303224" ], "abstract": [ "This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.", "Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214." ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
For problems consisting of visual inputs, a more general solution for representing goals is to use target images. With this approach, rewards are based on pixels alone, thus allowing one to abstract away the task specification from the parameters of the agent's configuration. This approach is similar to visual servoing, which uses target images to guide robots to some goal @cite_32 . Such solutions often require extracting known objects from camera images. A more recent RL approach learns the relevant features from the agent's state space @cite_40 . However, this method still requires defining the goal in terms of the agent's environment. Another approach uses motion templates to transform the visually dissimilar motions of humans and a simulated robot into a similar structure, but this correspondence is hand-specified @cite_2 . Our approach aims to learn such correspondences.
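For contrast with a learned similarity, the simplest pixel-space target-image reward implied by this paragraph can be written directly; it presupposes that the goal image comes from the agent's own domain, which is exactly the limitation the cross-domain approach aims to remove. The image sizes and negative-MSE choice are illustrative.

```python
import numpy as np

def pixel_reward(state_img, goal_img):
    # Negative mean squared error in raw pixel space: 0 at an exact match.
    return -np.mean((state_img - goal_img) ** 2)

state = np.random.rand(84, 84, 3)
goal = np.random.rand(84, 84, 3)  # must be rendered in the same domain as `state`
print(pixel_reward(state, goal))
```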
{ "cite_N": [ "@cite_40", "@cite_32", "@cite_2" ], "mid": [ "2210483910", "2167501464", "2513173501" ], "abstract": [ "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.", "This article provides a tutorial introduction to visual servo control of robotic manipulators. Since the topic spans many disciplines our goal is limited to providing a basic conceptual framework. We begin by reviewing the prerequisite topics from robotics and computer vision, including a brief review of coordinate transformations, velocity representation, and a description of the geometric aspects of the image formation process. We then present a taxonomy of visual servo control systems. The two major classes of systems, position-based and image-based systems, are then discussed in detail. Since any visual servo system must be capable of tracking image features in a sequence of images, we also include an overview of feature-based and correlation-based methods for tracking. We conclude the tutorial with a number of observations on the current directions of the research field of visual servo control.", "Reinforcement learning problems are often described through rewards that indicate if an agent has completed some task. This specification can yield desirable behavior, however many problems are difficult to specify in this manner, as one often needs to know the proper configuration for the agent. When humans are learning to solve tasks, we often learn from visual instructions composed of images or videos. Such representations motivate our development of Perceptual Reward Functions, which provide a mechanism for creating visual task descriptions. We show that this approach allows an agent to learn from rewards that are based on raw pixels rather than internal parameters." ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
Many methods utilize human feedback to guide an agent's behavior. For example, some approaches allow humans to provide rewards to agents in real time @cite_37 @cite_26 @cite_17 . Alternatively, policy shaping takes a more hands-off approach and provides "policy advice", as opposed to explicit rewards @cite_30 . Another approach aims to interactively teach an agent goals @cite_18 . Finally, rather than providing direct target images for where an object should be, @cite_40 introduced an approach that allows humans to select the pixel locations to which a robot should move an object.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_18", "@cite_40", "@cite_17" ], "mid": [ "2098441518", "2118756286", "", "2517466267", "2210483910", "" ], "abstract": [ "A long term goal of Interactive Reinforcement Learning is to incorporate nonexpert human feedback to solve complex tasks. Some state-of-the-art methods have approached this problem by mapping human information to rewards and values and iterating over them to compute better control policies. In this paper we argue for an alternate, more effective characterization of human feedback: Policy Shaping. We introduce Advise, a Bayesian approach that attempts to maximize the information gained from human feedback by utilizing it as direct policy labels. We compare Advise to state-of-the-art approaches and show that it can outperform them and is robust to infrequent and inconsistent human feedback.", "We report on our reinforcement learning work on Cobot, a software agent that resides in the well-known online chat community LambdaMOO. Our initial work on Cobot cobotaaai provided him with the ability to collect social statistics and report them to users in a reactive manner. Here we describe our application of reinforcement learning to allow Cobot to proactively take actions in this complex social environment, and adapt his behavior from multiple sources of human reward. After 5 months of training, Cobot received 3171 reward and punishment events from 254 different Lambda -MOO users, and learned nontrivial preferences for a number of users. Cobot modifies his behavior based on his current state in an attempt to maximize reward. Here we describe LambdaMOO and the state and action spaces of Cobot, and report the statistical results of the learning experiment.", "", "Abstract Humans are extremely good at quickly teaching and learning new tasks through situated instructions; tasks such as learning a novel game or household chore. From studying such instructional interactions, we have observed that humans excel at communicating information through multiple modalities, including visual, linguistic, and physical ones. Rosie is a tabletop robot implemented in the Soar architecture that learns new tasks from online interactive language instruction. In the past, the features of each task’s goal were explicitly described by a human instructor through language. In this work, we develop and study additional techniques for learning representations of goals. For game tasks, the agent can be given visual demonstrations of goal states, refined by human instructions. For procedural tasks, the agent uses information derived from task execution to determine which state features must be included in its goal representations. Using both approaches, Rosie learns correct goal representations from a single goal example or task execution across multiple games, puzzles, and procedural tasks. As expected, in most cases, the number of words required to teach the task is reduced when visual goal demonstrations are used. We also identify shortcomings of our approach and outline future research.", "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. 
Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.", "" ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
Inverse RL aims to infer a reward function from expert demonstrations @cite_29 . This approach typically requires multiple demonstrations for a single task. We also aim to learn a reward function, but we use a single sample to represent the goal. A related approach is imitation learning, which aims to learn a policy directly from demonstrations @cite_31 . Both of these techniques often assume that the student (i.e., the agent) and the teacher (the demonstrator) share a common environment. That is, the observed states and actions are shared between the teacher and student. When the observations are dissimilar, a correspondence problem must be solved @cite_33 . These correspondences are often hand-specified and may require equipment such as sensors on the teacher's body. Our own approach aims to learn such correspondences between different representations.
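As a sketch of the linear-reward idea underlying the cited inverse-RL work: the reward is assumed to be a linear combination of known features, R(s) = w · φ(s), and the weight vector is pushed toward features the expert visits more often than the current policy. The simple gradient-style update below is illustrative and is not the exact projection algorithm of @cite_29 ; all names are hypothetical.

import numpy as np

def linear_reward(w, phi_s):
    # Reward assumed linear in a known feature map phi.
    return float(np.dot(w, phi_s))

def update_reward_weights(w, expert_feat_expect, learner_feat_expect, step=0.1):
    # Move w toward the gap between expert and learner feature expectations
    # (means of phi(s) over expert / current-policy trajectories), then
    # renormalize so the reward scale stays bounded.
    w = w + step * (expert_feat_expect - learner_feat_expect)
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w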
{ "cite_N": [ "@cite_29", "@cite_31", "@cite_33" ], "mid": [ "1999874108", "2166302491", "1986014385" ], "abstract": [ "We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.", "Summary This review investigates two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It will be postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. Imitation learning focuses on three important issues: efficient motor learning, the connection between action and perception, and modular motor control in form of movement primitives. It will be reviewed how research on representations of, and functional connections between action and perception have contributed to our understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution has provided a hypothetical neural basis of imitation. Computational approaches to imitation learning will also be described, initially from the perspective of traditional AI and robotics, but also from the perspective of neural network models and statistical learning research. Parallels and differences between biological and computational approaches to imitation will be highlighted and an overview of current projects that actually employ imitation learning for humanoid robots will be given.", "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research." ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
Recent works have proposed solving the correspondence problem automatically. For example, one approach used deep learning to train an agent that had a first-person perspective through third-person samples @cite_12 . Another somewhat related RL approach is transfer learning, which aims to transfer learned behaviors from one domain to another @cite_34 , for example by initializing the parameters of policies in unsolved tasks @cite_35 , or by transferring skills across untrained robots @cite_19 . Our own approach does not assume we have examples of desired behaviors.
{ "cite_N": [ "@cite_19", "@cite_35", "@cite_34", "@cite_12" ], "mid": [ "2526379199", "1848094219", "2097381042", "" ], "abstract": [ "Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into \"task-specific\" and \"robot-specific\" modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel neural network architecture, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.", "The success of applying policy gradient reinforcement learning (RL) to difficult control tasks hinges crucially on the ability to determine a sensible initialization for the policy. Transfer learning methods tackle this problem by reusing knowledge gleaned from solving other related tasks. In the case of multiple task domains, these algorithms require an inter-task mapping to facilitate knowledge transfer across domains. However, there are currently no general methods to learn an inter-task mapping without requiring either background knowledge that is not typically present in RL settings, or an expensive analysis of an exponential number of inter-task mappings in the size of the state and action spaces. This paper introduces an autonomous framework that uses unsupervised manifold alignment to learn intertask mappings and effectively transfer samples between different task domains. Empirical results on diverse dynamical systems, including an application to quadrotor control, demonstrate its effectiveness for cross-domain transfer in the context of policy gradient RL.", "The reinforcement learning paradigm is a popular way to address problems that have only limited environmental feedback, rather than correctly labeled examples, as is common in other machine learning contexts. While significant progress has been made to improve learning in a single task, the idea of transfer learning has only recently been applied to reinforcement learning tasks. The core idea of transfer is that experience gained in learning to perform one task can help improve learning performance in a related, but different, task. In this article we present a framework that classifies transfer learning methods in terms of their capabilities and goals, and then use it to survey the existing literature, as well as to suggest future directions for transfer learning work.", "" ] }
1705.09045
2619583400
In reinforcement learning, we often define goals by specifying rewards within desirable states. One problem with this approach is that we typically need to redefine the rewards each time the goal changes, which often requires some understanding of the solution in the agent's environment. When humans are learning to complete tasks, we regularly utilize alternative sources that guide our understanding of the problem. Such task representations allow one to specify goals on their own terms, thus providing specifications that can be appropriately interpreted across various environments. This motivates our own work, in which we represent goals in environments that are different from the agent's. We introduce Cross-Domain Perceptual Reward (CDPR) functions, learned rewards that represent the visual similarity between an agent's state and a cross-domain goal image. We report results for learning the CDPRs with a deep neural network and using them to solve two tasks with deep reinforcement learning.
Finally, there has been a breadth of recent work that aims to specify goals in different modalities. Such approaches have a similar motivation to our own: they aim to provide goal specifications in more natural settings than the agent's. For example, we have seen sketches of maps used to represent desired trajectories in navigational tasks @cite_1 @cite_4 , correspondences learned between words and robotic actions @cite_10 , rewards based on the touch of a handshake @cite_23 , and value functions learned from facial expressions @cite_38 . Recently, an approach learned correspondences between written instructions and visual states in Atari games @cite_43 . This approach is more similar to our own, as it learns correspondences between two different domain representations. It is clear that there is much momentum in this area of research. Still, we have not yet seen an approach that learns a visual correspondence for cross-domain goal representations.
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_1", "@cite_43", "@cite_23", "@cite_10" ], "mid": [ "2419349576", "", "2522340145", "2609374097", "2570651606", "2167594606" ], "abstract": [ "An important application of interactive machine learning is extending or amplifying the cognitive and physical capabilities of a human. To accomplish this, machines need to learn about their human users' intentions and adapt to their preferences. In most current research, a user has conveyed preferences to a machine using explicit corrective or instructive feedback; explicit feedback imposes a cognitive load on the user and is expensive in terms of human effort. The primary objective of the current work is to demonstrate that a learning agent can reduce the amount of explicit feedback required for adapting to the user's preferences pertaining to a task by learning to perceive a value of its behavior from the human user, particularly from the user's facial expressions---we call this face valuing. We empirically evaluate face valuing on a grip selection task. Our preliminary results suggest that an agent can quickly adapt to a user's changing preferences with minimal explicit feedback by learning a value function that maps facial features extracted from a camera image to expected future reward. We believe that an agent learning to perceive a value from the body language of its human user is complementary to existing interactive machine learning approaches and will help in creating successful human-machine interactive applications.", "", "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows to better generalize. To address the second issue, we propose AI2-THOR framework, which provides an environment with high-quality 3D scenes and physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL", "We introduce the first deep reinforcement learning agent that learns to beat Atari games with the aid of natural language instructions. The agent uses a multimodal embedding between environment observations and natural language to self-monitor progress through a list of English instructions, granting itself reward for completing instructions in addition to increasing the game score. 
Our agent significantly outperforms Deep Q-Networks (DQNs), Asynchronous Advantage Actor-Critic (A3C) agents, and the best agents posted to OpenAI Gym on what is often considered the hardest Atari 2600 environment: Montezuma's Revenge.", "For robots to coexist with humans in a social world like ours, it is crucial that they possess human-like social interaction skills. Programming a robot to possess such skills is a challenging task. In this paper, we propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial and error method. This paper aims to develop a robot that gathers data during its interaction with a human, and learns human interaction behavior from the high dimensional sensory information using end-to-end reinforcement learning. This paper demonstrates that the robot was able to learn basic interaction skills successfully, after 14 days of interacting with people.", "In this paper, a model based on Artificial Neural Networks (ANNs) extends the symbol grounding mechanism to abstract words for cognitive robots. The aim of this work is to obtain a semantic representation of abstract concepts through the grounding in sensorimotor experiences for a humanoid robotic platform. Simulation experiments have been developed on a software environment for the iCub robot. Words that express general actions with a sensorimotor component are first taught to the simulated robot. During the training stage the robot first learns to perform a set of basic action primitives through the mechanism of direct grounding. Subsequently, the grounding of action primitives, acquired via direct sensorimotor experience, is transferred to higher-order words via linguistic descriptions. The idea is that by combining words grounded in sensorimotor experience the simulated robot can acquire more abstract concepts. The experiments aim to teach the robot the meaning of abstract words by making it experience sensorimotor actions. The iCub humanoid robot will be used for testing experiments on a real robotic architecture." ] }
1705.08947
2619368999
We introduce a technique for augmenting neural text-to-speech (TTS) with low-dimensional trainable speaker embeddings to generate different voices from a single model. As a starting point, we show improvements over the two state-of-the-art approaches for single-speaker neural TTS: Deep Voice 1 and Tacotron. We introduce Deep Voice 2, which is based on a similar pipeline with Deep Voice 1, but constructed with higher performance building blocks and demonstrates a significant audio quality improvement over Deep Voice 1. We improve Tacotron by introducing a post-processing neural vocoder, and demonstrate a significant audio quality improvement. We then demonstrate our technique for multi-speaker speech synthesis for both Deep Voice 2 and Tacotron on two multi-speaker TTS datasets. We show that a single neural TTS system can learn hundreds of unique voices from less than half an hour of data per speaker, while achieving high audio quality synthesis and preserving the speaker identities almost perfectly.
Our work is not the first to attempt a multi-speaker TTS system. For instance, in traditional HMM-based TTS synthesis (e.g., yamagishi2009robust), an average voice model is trained using multiple speakers' data and is then adapted to different speakers. DNN-based systems (e.g., yang2016training) have also been used to build average voice models, with i-vectors representing speakers as additional inputs and separate output layers for each target speaker. Similarly, @cite_1 uses a shared hidden representation among different speakers with speaker-dependent output layers predicting vocoder parameters (e.g., line spectral pairs, aperiodicity parameters, etc.). For further context, @cite_0 empirically studies DNN-based multi-speaker modeling. More recently, speaker adaptation has also been tackled with generative adversarial networks (GANs).
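A minimal PyTorch sketch of the shared-trunk architecture described above: hidden layers shared across speakers, with one speaker-dependent output layer per target speaker predicting vocoder parameters. Layer sizes, names, and the two-layer trunk are placeholders, not the cited systems' exact configurations.

import torch
import torch.nn as nn

class SharedTrunkTTS(nn.Module):
    # Hidden layers shared among speakers; per-speaker output heads.
    def __init__(self, in_dim=300, hidden=256, out_dim=80, n_speakers=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output layer per target speaker (speaker-dependent nodes).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, out_dim) for _ in range(n_speakers)]
        )

    def forward(self, linguistic_feats, speaker_id):
        h = self.trunk(linguistic_feats)
        return self.heads[speaker_id](h)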
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2605320104", "1492383498" ], "abstract": [ "A major advantage of statistical parametric speech synthesis (SPSS) over unit-selection speech synthesis is its adaptability and controllability in changing speaker characteristics and speaking style. Recently, several studies using deep neural networks (DNNs) as acoustic models for SPSS have shown promising results. However, the adaptability of DNNs in SPSS has not been systematically studied. In this paper, we conduct an experimental analysis of speaker adaptation for DNN-based speech synthesis at different levels. In particular, we augment a low-dimensional speaker-specific vector with linguistic features as input to represent speaker identity, perform model adaptation to scale the hidden activation weights, and perform a feature space transformation at the output layer to modify generated acoustic features. We systematically analyse the performance of each individual adaptation technique and that of their combinations. Experimental results confirm the adaptability of the DNN, and listening tests demonstrate that the DNN can achieve significantly better adaptation performance than the hidden Markov model (HMM) baseline in terms of naturalness and speaker similarity.", "In DNN-based TTS synthesis, DNNs hidden layers can be viewed as deep transformation for linguistic features and the output layers as representation of acoustic space to regress the transformed linguistic features to acoustic parameters. The deep-layered architectures of DNN can not only represent highly-complex transformation compactly, but also take advantage of huge amount of training data. In this paper, we propose an approach to model multiple speakers TTS with a general DNN, where the same hidden layers are shared among different speakers while the output layers are composed of speaker-dependent nodes explaining the target of each speaker. The experimental results show that our approach can significantly improve the quality of synthesized speech objectively and subjectively, comparing with speech synthesized from the individual, speaker-dependent DNN-based TTS. We further transfer the hidden layers for a new speaker with limited training data and the resultant synthesized speech of the new speaker can also achieve a good quality in term of naturalness and speaker similarity." ] }
1705.08997
2617832762
Typical reinforcement learning (RL) agents learn to complete tasks specified by reward functions tailored to their domain. As such, the policies they learn do not generalize even to similar domains. To address this issue, we develop a framework through which a deep RL agent learns to generalize policies from smaller, simpler domains to more complex ones using a recurrent attention mechanism. The task is presented to the agent as an image and an instruction specifying the goal. This meta-controller guides the agent towards its goal by designing a sequence of smaller subtasks on the part of the state space within the attention, effectively decomposing it. As a baseline, we consider a setup without attention as well. Our experiments show that the meta-controller learns to create subgoals within the attention.
Finally, @cite_5 describe how goal-specific function approximators may be constructed for deep RL agents. Such a function can be constructed for our base learner by independently learning subgoals on a @math x @math image. We do not provide results for this setting in this paper, but it can be integrated in future work.
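The cited UVFA abstract describes factoring V(s, g) into separate state and goal embedding vectors; a minimal PyTorch sketch of one such factored form (a dot product of the two embeddings) follows. Dimensions and the two-layer embedders are placeholders.

import torch
import torch.nn as nn

class UVFA(nn.Module):
    # V(s, g) = phi(s) . psi(g): state and goal are embedded separately and
    # combined by a dot product, so the approximator generalizes over goals.
    def __init__(self, state_dim, goal_dim, embed_dim=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(state_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))
        self.psi = nn.Sequential(nn.Linear(goal_dim, embed_dim), nn.ReLU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, s, g):
        return (self.phi(s) * self.psi(g)).sum(dim=-1)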
{ "cite_N": [ "@cite_5" ], "mid": [ "567721252" ], "abstract": [ "Value functions are a core component of reinforcement learning systems. The main idea is to to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals." ] }
1705.09359
2955170540
Process mining is a research field focused on the analysis of event data with the aim of extracting insights related to dynamic behavior. Applying process mining techniques on data from smart home environments has the potential to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions. Finding the right event labels to enable the application of process mining techniques is however far from trivial, as simply using the triggering sensor as the label for sensor events results in uninformative models that allow for too much behavior (overgeneralizing). Refinements of sensor level event labels suggested by domain experts have been shown to enable discovery of more precise and insightful process models. However, there exists no automated approach to generate refinements of event labels in the context of process mining. In this paper we propose a framework for the automated generation of label refinements based on the time attribute of events, allowing us to distinguish behaviourally different instances of the same event type based on their time attribute. We show on a case study with real life smart home event data that using automatically generated refined labels in process discovery, we can find more specific, and therefore more insightful, process models. We observe that one label refinement could have an effect on the usefulness of other label refinements when used together. Therefore, we explore four strategies to generate useful combinations of multiple label refinements and evaluate those on three real life smart home event logs.
Another area of related work is data-aware process mining, where the aim is to discover rules over the data attributes of events that determine the choices made at decision points in the process. De Leoni and van der Aalst @cite_44 proposed a method that discovers data guards for decision points in the process based on alignments and decision tree learning. This approach relies on the discovery of a behaviorally well-fitting process model from the original event log. When only overgeneralizing process models (i.e., models allowing for too much behavior) can be discovered from an event log, the correct decision points might not be present in the discovered process model at all, so this approach cannot recover the data dependencies that are present in the event log. Our label refinements instead use data attributes prior to process discovery, enabling the discovery of more behaviorally constrained process models by bringing parts of the event attribute space into the event label.
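As one illustration of a time-based label refinement, the sketch below splits a single sensor label into variants by clustering the time-of-day of its events with a 1-D Gaussian mixture (model order chosen by BIC). The function and naming scheme are hypothetical, and the sketch ignores the circularity of clock time; it is not the exact framework proposed in the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

def refine_label_by_time(event_times_hours, base_label, max_components=3):
    # Fit 1-D Gaussian mixtures of increasing order to time-of-day values
    # and keep the best by BIC; relabel events by their cluster, e.g.
    # 'door_0' for night-time uses and 'door_1' for daytime uses.
    x = np.asarray(event_times_hours, dtype=float).reshape(-1, 1)
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        bic = gm.bic(x)
        if bic < best_bic:
            best, best_bic = gm, bic
    return ["%s_%d" % (base_label, c) for c in best.predict(x)]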
{ "cite_N": [ "@cite_44" ], "mid": [ "2134443897" ], "abstract": [ "Process discovery, i.e., learning process models from event logs, has attracted the attention of researchers and practitioners. Today, there exists a wide variety of process mining techniques that are able to discover the control-flow of a process based on event data. These techniques are able to identify decision points, but do not analyze data flow to find rules explaining why individual cases take a particular path. Fortunately, recent advances in conformance checking can be used to align an event log with data and a process model with decision points. These alignments can be used to generate a well-defined classification problem per decision point. This way data flow and guards can be discovered and added to the process model." ] }
1705.09271
2617006937
Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: @math packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms.
Exponential backoff has been studied under Poisson-distributed traffic (see @cite_33 @cite_35 @cite_62 @cite_68 ). Stability guarantees are known @cite_72 @cite_42 @cite_5 , as are results under saturated conditions @cite_9 .
{ "cite_N": [ "@cite_35", "@cite_62", "@cite_33", "@cite_9", "@cite_42", "@cite_72", "@cite_5", "@cite_68" ], "mid": [ "2146468366", "2017653216", "", "2162598825", "1966904792", "2102391321", "2011032476", "2079641160" ], "abstract": [ "We study contention resolution in a multiple-access channel such as the Ethernet channel. In the model that we consider, n users generate messages for the channel according to a probability distribution. Raghavan and Upfal have given a protocol in which the expected delay (time to get serviced) of every message is O(log n ) when messages are generated according to a Bernoulli distribution with generation rate up to about 1 10. Our main results are the following protocols: (a) one in which the expected average message delay is O(1) when messages are generated according to a Bernoulli distribution with a generation rate smaller than 1 e , and (b) one in which the expected delay of any message is O(1) for an analogous model in which users are synchronized (i.e., they agree about the time), there are potentially an infinite number of users, and messages are generated according to a Poisson distribution with generation rate up to 1 e . (Each message constitutes a new user.) To achieve (a), we first show how to simulate (b) using n synchronized users, and then show how to build the synchronization into the protocol.", "In the paper, we analyze the stochastic behavior of backoff protocols for multiple access channels such as the Ethernet. In particular, we prove that binary exponential backoff is unstable if the arrival rate of new messages at each station is λ N for any number of stations N exceeding 1 and any overall arrival rate λ exceeding .567 + 1 4 N - 2. More importantly, we also prove that any superlinear polynomial backoff protocol (e.g., quadratic backoff) is stable for any set of arrival rates that sum to less than one, and any number of stations. The results significantly extend the previous work in the area, and provide the first examples of acknowledgement based protocols known to be stable for a nonnegligible overall arrival rate distributed over an arbitrarily large number of stations. The results also disprove a popular assumption that exponential backoff is the best choice among acknowledgement based protocols for systems with large overall arrival rates. Finally, we prove that any linear or sublinear backoff protocol is unstable if the arrival rate at each station is λ N for any fixed λ and sufficiently large N .", "", "The IEEE has standardized the 802.11 protocol for wireless local area networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). The DCF is a carrier sense multiple access with collision avoidance (CSMA CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, in the assumption of finite number of terminals and ideal channel conditions. The proposed analysis applies to both the packet transmission schemes employed by DCF, namely, the basic access and the RTS CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS CTS mechanism. 
By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.", "Goodman, Greenberg, Madras and March gave a lower bound of n -Ω(log n ) for the maximum arrival rate for which the n -user binary exponential backoff protocol is stable. Thus, they showed that the protocol is stable as long as the arrival rate is at most n -Ω(log n ) . We improve the lower bound, showing that the protocol is stable for arrival rates up to O(n -(0.75+δ) ) , for any δ>0 .", "Goodman, Greenberg, Madras and March gave a lower bound of n-Ω(log n) for the maximum arrival rate for which the n-user binary exponential backoff protocol is stable. Thus, they showed that the protocol is stable as long as the arrival rate is at most n-Ω(log n). We improve the lower bound, showing that the protocol is stable for arrival rates up to O(n-.9).", "Binary exponential backoff is a randomized protocol for regulating transmissions on a multiple-access broadcast channel. Ethernet, a local-area network, is built upon this protocol. The fundamental theoretical issue is stability: Does the backlog of packets awaiting transmission remain bounded in time, provided the rates of new packet arrivals are small enough? It is assumed n ≥ 2 stations share the channel, each having an infinite buffer where packets accumulate while the station attempts to transmit the first from the buffer. Here, it is established that binary exponential backoff is stable if the sum of the arrival rates is sufficiently small. Detailed results are obtained on which rates lead to stability when n = 2 stations share the channel. In passing, several other results are derived bearing on the efficiency of the conflict resolution process. Simulation results are reported that, in particular, indicate alternative retransmission protocols can significantly improve performance.", "We study contention resolution protocols under a stochastic model of continuous request generation from a set of contenders. The performance of such a protocol is characterized by two parameters: the maximum arrival rate for which the protocol is stable and the expected delay of a request from arrival to service. Known solutions are either unstable for any constant injection rate, or have polynomial (in the number of contenders) expected delay. Our main contribution is a protocol that is stable for a constant injection rate, while achieving logarithmic expected delay. We extend our results to the case of multiple servers, with each request being targeted for a specific server. This is related to the optically connected parallel computer (or OCPC ) model. Finally, we prove a lower bound showing that long delays are inevitable in a class of protocols including backoff-style protocols, if the arrival rate is large enough (but still smaller than 1)." ] }
1705.09271
2617006937
Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: @math packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms.
Under continuous traffic, windowed backoff schemes are examined in @cite_67 with a focus on the tradeoff between throughput and fairness. The authors concentrate on polynomial backoff and demonstrate, via analysis and NS2 (the predecessor of NS3) simulations, that quadratic backoff is a good candidate with respect to both metrics.
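The throughput/fairness tradeoff above hinges on how fast the contention window grows; the difference between binary exponential and quadratic growth is easy to see directly. In the sketch below, the initial window is an illustrative constant, not a value from the cited study.

def backoff_windows(n_attempts, scheme="binary", w0=16):
    # Window size after the i-th collision: w0 * 2**i for BEB,
    # w0 * (i + 1)**2 for polynomial backoff with power 2 (quadratic).
    if scheme == "binary":
        return [w0 * 2 ** i for i in range(n_attempts)]
    return [w0 * (i + 1) ** 2 for i in range(n_attempts)]

print(backoff_windows(6, "binary"))     # [16, 32, 64, 128, 256, 512]
print(backoff_windows(6, "quadratic"))  # [16, 64, 144, 256, 400, 576]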
{ "cite_N": [ "@cite_67" ], "mid": [ "2028983157" ], "abstract": [ "Binary Exponential Backoff (BEB) is a key component of the IEEE 802.11 DCF protocol. It has been shown that BEB can achieve the theoretical limit of throughput as long as the initial backoff window size is properly selected. It, however, suffers from significant delay degradation when the network becomes saturated. It is thus of special interest for us to further design backoff schemes for IEEE 802.11 DCF networks that can achieve comparable throughput as BEB, but provide better delay performance. This paper presents a systematic study on the effect of backoff schemes on throughput and delay performance of saturated IEEE 802.11 DCF networks. In particular, a backoff scheme is defined as a sequence of backoff window sizes @math . The analysis shows that a saturated IEEE 802.11 DCF network has a single steady-state operating point as long as @math is a monotonic increasing sequence. The maximum throughput is found to be independent of @math , yet the growth rate of @math determines a fundamental tradeoff between throughput and delay performance. For illustration, Polynomial Backoff is proposed, and the effect of polynomial power @math on the network performance is characterized. It is demonstrated that Polynomial Backoff with a larger @math is more robust against the fluctuation of the network size, but in the meanwhile suffers from a larger second moment of access delay. Quadratic Backoff (QB), i.e., Polynomial Backoff with @math , stands out to be a favorable option as it strikes a good balance between throughput and delay performance. The comparative study between QB and BEB confirms that QB well preserves the robust nature of BEB and achieves much better queueing performance than BEB." ] }
1705.09271
2617006937
Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: @math packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms.
Work in @cite_46 addresses the saturated throughput of exponential backoff, where each station always has a packet ready to be transmitted; roughly, this is the maximum throughput achievable under stable packet arrival rates. Custom simulations are used to confirm the findings.
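For intuition, a toy slotted model of saturation can be simulated in a few lines: every station always has a packet, a slot with exactly one transmission succeeds, and a collision doubles the windows of the stations involved. This is only a rough Monte Carlo caricature of such custom simulations, with many simplifications (unit-length slots, no capture effect, no retry limit).

import random

def saturated_beb_throughput(n_stations=10, slots=100000, w0=16, w_max=1024):
    # Fraction of slots carrying exactly one transmission under BEB
    # with all stations saturated.
    win = [w0] * n_stations
    cnt = [random.randrange(w) for w in win]
    successes = 0
    for _ in range(slots):
        tx = [i for i in range(n_stations) if cnt[i] == 0]
        if len(tx) == 1:
            successes += 1
            win[tx[0]] = w0                        # reset after success
        elif len(tx) > 1:
            for i in tx:
                win[i] = min(2 * win[i], w_max)    # BEB doubling
        for i in range(n_stations):
            cnt[i] = random.randrange(win[i]) if i in tx else cnt[i] - 1
    return successes / slots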
{ "cite_N": [ "@cite_46" ], "mid": [ "2134033057" ], "abstract": [ "New analytical results are given for the performance of the exponential backoff (EB) algorithm. Most available studies on EB focus on the stability of the algorithm and little attention has been paid to the performance analysis of EB. In this paper, we analyze EB and obtain saturation throughput and medium access delay of a packet for a given number of nodes N. The analysis considers the general case of EB with backoff factor r; binary exponential backoff (BEB) algorithm is the special case with r = 2. We also derive the analytical performance of EB with maximum retry limit M(EB-M), a practical version of EB. The accuracy of the analysis is checked against simulation results." ] }
1705.09271
2617006937
Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: @math packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms.
Regarding the time required for a single successful transmission, @cite_32 demonstrates a lower bound of @math . In other communication models, different bounds are known @cite_55 @cite_50 .
{ "cite_N": [ "@cite_55", "@cite_32", "@cite_50" ], "mid": [ "2493870798", "2033487809", "2490780450" ], "abstract": [ "In this paper, we consider the classical contention resolution problem in which an unknown subset of n possible nodes are activated and connected to a shared channel. The problem is solved in the first round that an active node transmits alone (thus breaking symmetry). Contention resolution has been an active research topic for over four decades. Accordingly, tight upper and lower bounds are known for most major model assumptions. There remains, however, an important case that is unresolved: contention resolution with multiple channels and collision detection. (Tight bounds are known for contention resolution with multiple channels, and contention resolution with collision detection, but not for the combination of both assumptions.) Recent work proved the first non-trivial lower bound for randomized solutions to this problem in this setting. The optimality of this lower bound was left an open question. In this paper, we answer this open question by describing and analyzing new contention resolution algorithms that match, or come within a log factor of matching, this bound for all relevant parameters. By doing so, we help advance our understanding of an important longstanding problem. Of equal importance, our solutions introduce a novel new technique in which we leverage a distributed structure we call coalescing cohorts to simulate a well-known parallel search strategy from the structured PRAM CREW model in our unstructured distributed model. We conjecture that this approach is relevant to many problems in the increasingly important setting of distributed computation using multiple shared channels.", "We propose two selection protols that run on multiple access channels in log-logarithmic expected time, and establish a complementary lower bound showing that the first protocols falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity. It is difficult to second-guess the fast-changing electronics industry, but our mathematical analysis could be relevant outside the traditional interests of communications protocols to semaphore-like problems.", "In this paper, we study upper and lower bounds for contention resolution on a single hop fading channel; i.e., a channel where receive behavior is determined by a signal to interference and noise ratio (SINR) equation. The best known previous solution solves the problem in this setting in O(log2 n log log n) rounds, with high probability in the system size n. We describe and analyze an algorithm that solves the problem in O(log n + log R) rounds, where R is the ratio between the longest and shortest link, and is a value upper bounded by a polynomial in n for most feasible deployments. We complement this result with an Ω(log n) lower bound that proves the bound tight for reasonable R. We note that in the classical radio network model (which does not include signal fading), high probability contention resolution requires Ω(log 2 n) rounds. Our algorithm, therefore, affirms the conjecture that the spectrum reuse enabled by fading should allow distributed algorithms to achieve a significant improvement on this log2 n speed limit. 
In addition, we argue that the new techniques required to prove our upper and lower bounds are of general use for analyzing other distributed algorithms in this increasingly well-studied fading channel setting." ] }
1705.09271
2617006937
Randomized binary exponential backoff (BEB) is a popular algorithm for coordinating access to a shared channel. With an operational history exceeding four decades, BEB is currently an important component of several wireless standards. Despite this track record, prior theoretical results indicate that under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms are possible. To date, the degree to which these findings manifest in practice has not been resolved. To address this issue, we examine one of the strongest cases against BEB: @math packets that simultaneously begin contending for the wireless channel. Using Network Simulator 3, we compare against more recent algorithms that are inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we discover that these newer algorithms significantly underperform. Through further investigation, we identify as the culprit a flawed but common abstraction regarding the cost of collisions. Our experimental results are complemented by analytical arguments that the number of collisions -- and not solely makespan -- is an important metric to optimize. We believe that these findings have implications for the design of contention-resolution algorithms.
A class of tree-based algorithms for contention resolution is proposed in @cite_26 . Work in @cite_71 addresses the case of heterogeneous packet sizes. The case where packets can arrive dynamically is examined in @cite_69 @cite_17 @cite_18 .
{ "cite_N": [ "@cite_69", "@cite_26", "@cite_18", "@cite_71", "@cite_17" ], "mid": [ "1984375161", "2116736083", "1984355228", "1881971906", "" ], "abstract": [ "We present an improved algorithm to wake up a multi-hop ad-hoc radio network. The goal is to have all the nodes activated, when some of them may wake up spontaneously at arbitrary times and the remaining nodes need to be awoken by the already active ones. The best previously known wake-up algorithm was given by Chrobak, Gasieniec and Kowalski [11], and operated in time O(n5 3 log n), where n is the number of nodes. We give an algorithm with the running time O(n3 2 log n). This also yields better algorithms for other synchronization-type primitives, like leader election and local-clocks synchronization, each with a time performance that differs from that of wake-up by an extra factor of O(log n) only, and improves the best previously known method for the problem by a factor of n1 6. A wake-up algorithm is a schedule of transmissions for each node. It can be represented as a collection of binary sequences. Useful properties of such collections have been abstracted to define a (radio) synchronizer. It has been known that good radio synchronizers exist and previous algorithms [17,11] relied on this. We show how to construct such synchronizers in polynomial time, from suitable constructible expanders. As an application, we obtain a wake-up protocol for a multiple-access channel that activates the network in time O(k2 polylog n), where k is the number of stations that wake up spontaneously, and which can be found in time polynomial in n. We extend the notion of synchronizers to universal synchronizers. We show that there exist universal synchronizers with parameters that guarantee time O(n3 2 log n) of wake-up.", "The multiaccessing of a broadcast communication channel by independent sources is considered. Previous accessing techniques suffer from long message delays, low throughput, and or congestion instabilities. A new class of high-speed, high-throughput, stable, multiaccessing algorithms is presented. Contentions resolving tree algorithms are introduced, and they are analyzed for specific probabilistic source models. It is shown that these algorithms are stable (in that all moments of delay exist) and are optimal in a certain sense. Furthermore, they have a maximum throughput of 0.430 packets slut and have good delay properties. It is also shown that, under heavy traffic, the optimally controlled tree algorithm adaptively changes to the conventional time-division multiple access protocol.", "We study the problem of waking up a collection of @math processors connected by a multihop ad hoc ratio network with unknown topology, no access to a global clock, and no collision detection mechanism available. Each node in the network either wakes up spontaneously or gets activated by receiving a wake-up signal from another node. All active nodes transmit the wake-up signals according to a given protocol @math . The running time of @math is the number of steps counted from the first spontaneous wake-up until all nodes become activated. We provide two protocols for this problem. The first one is a deterministic protocol with running time @math . Our protocol is based on a novel concept of a shift-tolerant selector to which we refer as a (radio) synchronizer. The second protocol is randomized, and its expected running time is @math , where @math is the diameter of the network. 
Subsequently we show how to employ our wake-up protocols to solve two other communication primitives: leader election and clock synchronization.", "We study the problem of contention resolution for different-sized jobs on a simple channel. When a job makes a run attempt, it learns only whether the attempt succeeded or failed. We first analyze binary exponential backoff, and show that it achieves a makespan of V2Θ(√logn) with high probability, where V is the total work of all n contending jobs. This bound is significantly larger than when jobs are constant sized. A variant of exponential backoff, however, achieves makespan O(V logV) with high probability. Finally, we introduce a new protocol, size-hashed backoff, specifically designed for jobs of multiple sizes that achieves makespan O(V log3logV). The error probability of the first two bounds is polynomially small in n and the latter is polynomially small in logV.", "" ] }
1705.08922
2619096655
Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable for hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights into how to maintain accuracy while having a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves about 2x the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.
Among all types of sparsity, fine-grained sparsity (vanilla sparsity) and filter-wise sparsity (very coarse-grained sparsity) are the two extreme cases that have been studied @cite_5 @cite_12 . Fine-grained sparsity, in which individual weights are deleted, was first proposed in 1989 by @cite_16 and has been proven to work well on a wide range of popular CNN and RNN models @cite_5 @cite_25 @cite_22 @cite_14 . There are also channel reduction and filter reduction, which shrink the dimensions of input/output features as well as layers. Channel reduction can be viewed as very coarse-grained sparsity that removes 3-dimensional sub-tensors from convolutional layers. Such coarse-grained sparsity is beneficial for acceleration due to its regularity @cite_17 @cite_7 , but it usually causes a notable loss of accuracy compared with fine-grained sparsity, as indicated by @cite_12 .
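The granularity trade-off described above can be made concrete in a few lines of code. The NumPy sketch below (the tensor shapes and the magnitude/L1 pruning criteria are illustrative assumptions, not the exact procedure of any cited work) contrasts fine-grained pruning, which zeroes individual weights, with filter-wise pruning, which removes whole 3-dimensional sub-tensors:

```python
import numpy as np

# Hypothetical conv weight tensor: (out_filters, in_channels, kh, kw).
W = np.random.randn(64, 32, 3, 3)
sparsity = 0.5  # fraction of weights to remove

# Fine-grained (vanilla) sparsity: zero individual weights by magnitude.
thresh = np.quantile(np.abs(W), sparsity)
W_fine = np.where(np.abs(W) > thresh, W, 0.0)

# Filter-wise (very coarse-grained) sparsity: remove whole filters,
# i.e. 3-D sub-tensors of shape (in_channels, kh, kw), by L1 norm.
filter_norms = np.abs(W).reshape(W.shape[0], -1).sum(axis=1)
keep = filter_norms > np.quantile(filter_norms, sparsity)
W_coarse = W * keep[:, None, None, None]

# Same nominal sparsity ratio, very different index overhead: the
# fine-grained mask needs one index per surviving weight, while the
# filter-wise mask is described by one bit per filter.
print((W_fine == 0).mean(), (W_coarse == 0).mean())
```

The one-bit-per-filter mask is the "index saving effect" referred to in the abstract above: at the same sparsity ratio, the coarse-grained pattern is far cheaper to describe.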
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_5", "@cite_16", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "", "2153827974", "2513419314", "2963674932", "2114766824", "", "2515385951", "566555209" ], "abstract": [ "", "Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic that significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that rules extracted from networks trained with this pruning heuristic are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state, triple-parity grammar. Further simulations indicate that this pruning method can have generalization performance superior to that obtained by training with weight decay. >", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. 
Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "", "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speedup convolutional layers in ConvNets. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain damage process prunes the convolutional kernel tensor in a group-wise fashion. After such pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to speedup. We investigate different ways to add group-wise prunning to the learning process, and show that severalfold speedups of convolutional layers can be attained using group-sparsity regularizers. Our approach can adjust the shapes of the receptive fields in the convolutional layers, and even prune excessive feature maps from ConvNets, all in data-driven way." ] }
1705.08922
2619096655
Sparsity helps reduce the computational complexity of deep neural networks by skipping zeros. Taking advantage of sparsity is listed as a high priority in next generation DNN accelerators such as TPU. The structure of sparsity, i.e., the granularity of pruning, affects the efficiency of hardware accelerator design as well as the prediction accuracy. Coarse-grained pruning creates regular sparsity patterns, making it more amenable for hardware acceleration but more challenging to maintain the same accuracy. In this paper we quantitatively measure the trade-off between sparsity regularity and prediction accuracy, providing insights into how to maintain accuracy while having a more structured sparsity pattern. Our experimental results show that coarse-grained pruning can achieve a sparsity ratio similar to unstructured pruning without loss of accuracy. Moreover, due to the index saving effect, coarse-grained pruning is able to obtain a better compression ratio than fine-grained sparsity at the same accuracy threshold. Based on the recent sparse convolutional neural network accelerator (SCNN), our experiments further demonstrate that coarse-grained sparsity saves about 2x the memory references compared to fine-grained sparsity. Since memory reference is more than two orders of magnitude more expensive than arithmetic operations, the regularity of sparse structure leads to more efficient hardware design.
For very coarse-grained sparsity such as filter-wise and channel-wise sparsity, acceleration is simple to achieve on general-purpose processors because pruning is equivalent to obtaining a smaller dense model @cite_7 . For fine-grained sparsity, custom accelerators @cite_19 @cite_27 have been proposed to exploit the reduction in computation.
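As a rough illustration of why coarse-grained sparsity maps onto general-purpose hardware so easily, the sketch below (the layer shapes and the L1-norm selection rule are assumptions for illustration) removes filters from one convolutional layer and the matching input channels of the next, leaving two smaller dense tensors that any standard dense GEMM/convolution library can execute without sparse indexing:

```python
import numpy as np

# Two hypothetical consecutive conv layers.
W1 = np.random.randn(64, 32, 3, 3)   # layer 1: 64 output filters
W2 = np.random.randn(128, 64, 3, 3)  # layer 2 consumes those 64 channels

# Drop the 32 layer-1 filters with the smallest L1 norms.
norms = np.abs(W1).reshape(64, -1).sum(axis=1)
keep = np.sort(np.argsort(norms)[32:])  # indices of the 32 kept filters

W1_small = W1[keep]        # (32, 32, 3, 3): fewer output filters
W2_small = W2[:, keep]     # (128, 32, 3, 3): fewer input channels

# Both pruned layers are ordinary dense tensors, so standard dense
# kernels run them with no sparse bookkeeping at all.
print(W1_small.shape, W2_small.shape)
```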
{ "cite_N": [ "@cite_19", "@cite_27", "@cite_7" ], "mid": [ "2285660444", "", "2513419314" ], "abstract": [ "State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.", "", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNNs evaluation. Experimental results show that SSL achieves on average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual Network (ResNet) to 18 layers while improve the accuracy from 91.25 to 92.60 , which is still slightly higher than that of original ResNet with 32 layers. For AlexNet, structure regularization by SSL also reduces the error by around 1 . Open source code is in this https URL" ] }
1705.08562
2619789364
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.
Hashing is a widely used approach for practical nearest neighbor retrieval @cite_11 , thanks to the efficiency of evaluating Hamming distances using bitwise operations, as well as the low memory and storage footprint. It has been theoretically demonstrated @cite_26 that data-dependent hashing methods outperform data-independent ones such as Locality Sensitive Hashing @cite_42 . We tackle the supervised hashing problem, also known as affinity-based hashing @cite_12 @cite_41 @cite_0 , where supervision is given in the form of pairwise affinities. Regarding optimization, the discrete nature of hashing usually results in NP-hard problems. Our solution uses continuous relaxations, which is in line with relaxation-based methods, @cite_41 @cite_0 @cite_1 , but differs from alternating methods that preserve the discrete constraints @cite_12 @cite_9 @cite_6 and two-step methods @cite_2 @cite_37 @cite_27 .
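The efficiency of Hamming ranking mentioned at the start of this paragraph comes from the fact that, once codes are packed into machine words, each distance is an XOR followed by a population count. A minimal Python sketch follows (the 64-bit code length and the packing scheme are assumptions for illustration; `int.bit_count` requires Python 3.10+):

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 10000, 64                      # database size, assumed code length
codes = rng.integers(0, 2, (n, b), dtype=np.uint8)
query = rng.integers(0, 2, b, dtype=np.uint8)

def pack(bits):
    """Pack a 0/1 vector into a single Python integer."""
    return int("".join(map(str, bits)), 2)

db_words = [pack(row) for row in codes]
q_word = pack(query)

# Hamming distance = popcount(xor); int.bit_count needs Python >= 3.10.
dists = [(q_word ^ w).bit_count() for w in db_words]

# Hamming ranking: order the database by distance to the query.
ranking = np.argsort(dists)
print(dists[ranking[0]], dists[ranking[-1]])
```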
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_41", "@cite_9", "@cite_42", "@cite_1", "@cite_6", "@cite_0", "@cite_27", "@cite_2", "@cite_12", "@cite_11" ], "mid": [ "2296324964", "2953209208", "2164338181", "2568529502", "1502916507", "2789854678", "2113307832", "1992371516", "2066276674", "2153273131", "", "1870428314" ], "abstract": [ "In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary code which serve as the label of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks.", "We show an optimal data-dependent hashing scheme for the approximate near neighbor problem. For an @math -point data set in a @math -dimensional space our data structure achieves query time @math and space @math , where @math for the Euclidean space and approximation @math . For the Hamming space, we obtain an exponent of @math . Our result completes the direction set forth in [AINR14] who gave a proof-of-concept that data-dependent hashing can outperform classical Locality Sensitive Hashing (LSH). In contrast to [AINR14], the new bound is not only optimal, but in fact improves over the best (optimal) LSH data structures [IM98,AI06] for all approximation factors @math . From the technical perspective, we proceed by decomposing an arbitrary dataset into several subsets that are, in a certain sense, pseudo-random.", "Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. 
Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.", "Hashing methods aim to learn a set of hash functions which map the original features to compact binary codes with similarity preserving in the Hamming space. Hashing has proven a valuable tool for large-scale information retrieval. We propose a column generation based binary code learning framework for data-dependent hash function learning. Given a set of triplets that encode the pairwise similarity comparison information, our column generation based method learns hash functions that preserve the relative comparison relations within the large-margin learning framework. Our method iteratively learns the best hash functions during the column generation procedure. Existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest--multivariate performance measures such as the AUC and NDCG. Our column generation based method can be further generalized from the triplet loss to a general structured learning based framework that allows one to directly optimize multivariate performance measures. For optimizing general ranking measures, the resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. We use a combination of column generation and cutting-plane techniques to solve the optimization problem. To speed-up the training we further explore stage-wise training and propose to optimize a simplified NDCG loss for efficient inference. We demonstrate the generality of our method by applying it to ranking prediction and image retrieval, and show that it outperforms several state-of-the-art hashing methods.", "The nearestor near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should su ce for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points Supported by NAVY N00014-96-1-1221 grant and NSF Grant IIS-9811904. Supported by Stanford Graduate Fellowship and NSF NYI Award CCR-9357849. 
Supported by ARO MURI Grant DAAH04-96-1-0007, NSF Grant IIS-9811904, and NSF Young Investigator Award CCR9357849, with matching funds from IBM, Mitsubishi, Schlumberger Foundation, Shell Foundation, and Xerox Corporation. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and or special permission from the Endowment. Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, 1999. from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives signi cant improvement in running time over other methods for searching in highdimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50).", "Binary vector embeddings enable fast nearest neighbor retrieval in large databases of high-dimensional objects, and play an important role in many practical applications, such as image and video retrieval. We study the problem of learning binary vector embeddings under a supervised setting, also known as hashing. We propose a novel supervised hashing method based on optimizing an information-theoretic quantity, mutual information. We show that optimizing mutual information can reduce ambiguity in the induced neighborhood structure in learned Hamming space, which is essential in obtaining high retrieval performance. To this end, we optimize mutual information in deep neural networks with minibatch stochastic gradient descent, with a formulation that maximally and efficiently utilizes available supervision. Experiments on four image retrieval benchmarks, including ImageNet, confirm the effectiveness of our method in learning high-quality binary embeddings for nearest neighbor retrieval.", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. 
In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 .", "One widely-used solution to expedite similarity search of multimedia data is to construct hash functions to map the data into a Hamming space where linear search is known to be fast and often sublinear solutions perform well. In this paper, we propose a Boosting based formulation for supervised learning of the hash functions that is based on Error Correcting Codes. This approach allows us to apply established theoretical results for Boosting in our analysis of our hashing solution. Specifically, we show that the training accuracy in Boosting can be considered as a lower bound on the (empirical) Mean Average Precision (mAP) score. In experiments with three image retrieval benchmarks, the proposed formulation yields significant improvement in mAP over state-of-the-art supervised hashing methods, while using fewer bits in the hash codes.", "Supervised hashing aims to map the original features to compact binary codes that are able to preserve label based similarity in the Hamming space. Non-linear hash functions have demonstrated their advantage over linear ones due to their powerful generalization capability. In the literature, kernel functions are typically used to achieve non-linearity in hashing, which achieve encouraging retrieval perfor- mance at the price of slow evaluation and training time. Here we propose to use boosted decision trees for achieving non-linearity in hashing, which are fast to train and evalu- ate, hence more suitable for hashing with high dimensional data. In our approach, we first propose sub-modular for- mulations for the hashing binary code inference problem and an efficient GraphCut based block search method for solving large-scale inference. Then we learn hash func- tions by training boosted decision trees to fit the binary codes. Experiments demonstrate that our proposed method significantly outperforms most state-of-the-art methods in retrieval precision and training time. Especially for high- dimensional data, our method is orders of magnitude faster than many methods in terms of training time.", "", "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. 
We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space." ] }
1705.08562
2619789364
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.
Supervised hashing can be cast as a distance metric learning problem @cite_6 , which itself can be formulated as learning to rank @cite_5 @cite_40 . Optimizing ranking metrics such as AP and NDCG has received much attention in the learning to rank literature. For instance, surrogates of AP and NDCG can be optimized in the structural SVM framework @cite_35 @cite_14 , and bound optimization algorithms exist for NDCG @cite_31 . Alternatively, there are gradient-based methods based on smoothing or approximating these metrics @cite_16 @cite_25 @cite_45 @cite_18 . Recently, @cite_43 tackles few-shot classification by optimizing AP using the direct loss minimization framework @cite_19 . These methods did not consider applications in hashing.
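For concreteness, the two metrics discussed throughout, AP and NDCG, can be computed for a ranked list of binary relevance labels as in the following NumPy sketch (binary relevance and the common 1/log2(rank+1) discount are assumed choices; other NDCG variants exist):

```python
import numpy as np

def average_precision(rel):
    """AP of a ranked list of binary relevance labels."""
    rel = np.asarray(rel, dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((prec_at_k * rel).sum() / rel.sum())

def ndcg(rel):
    """NDCG with the common 1 / log2(rank + 1) discount."""
    rel = np.asarray(rel, dtype=float)
    discount = 1.0 / np.log2(np.arange(2, len(rel) + 2))
    ideal = (np.sort(rel)[::-1] * discount).sum()
    return float((rel * discount).sum() / ideal) if ideal > 0 else 0.0

print(average_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 ~= 0.833
print(ndcg([1, 0, 1, 0]))               # ~= 0.920
```

Both quantities depend on the model scores only through the sorted order, which is exactly why the gradient-based methods cited above need smoothed or relaxed surrogates.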
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_6", "@cite_43", "@cite_40", "@cite_45", "@cite_19", "@cite_5", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "2127176025", "2128877075", "", "2113307832", "2734621483", "", "2059001985", "2414504425", "2158139921", "2155211665", "2122789217", "2051928158" ], "abstract": [ "Machine learning is commonly used to improve ranked retrieval systems. Due to computational difficulties, few learning techniques have been developed to directly optimize for mean average precision (MAP), despite its widespread use in evaluating such systems. Existing approaches optimizing MAP either do not find a globally optimal solution, or are computationally expensive. In contrast, we present a general SVM learning algorithm that efficiently finds a globally optimal solution to a straightforward relaxation of MAP. We evaluate our approach using the TREC 9 and TREC 10 Web Track corpora (WT10g), comparing against SVMs optimized for accuracy and ROCArea. In most cases we show our method to produce statistically significant improvements in MAP scores.", "The quality measures used in information retrieval are particularly difficult to optimize directly, since they depend on the model scores only through the sorted order of the documents returned for a given query. Thus, the derivatives of the cost with respect to the model parameters are either zero, or are undefined. In this paper, we propose a class of simple, flexible algorithms, called LambdaRank, which avoids these difficulties by working with implicit cost functions. We describe LambdaRank using neural network models, although the idea applies to any differentiable function class. We give necessary and sufficient conditions for the resulting implicit cost function to be convex, and we show that the general method has a simple mechanical interpretation. We demonstrate significantly improved accuracy, over a state-of-the-art ranking algorithm, on several datasets. We also show that LambdaRank provides a method for significantly speeding up the training phase of that ranking algorithm. Although this paper is directed towards ranking, the proposed method can be extended to any non-smooth and multivariate cost functions.", "", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "Few-shot learning refers to understanding new concepts from only a few examples. We propose an information retrieval-inspired approach for this problem that is motivated by the increased importance of maximally leveraging all the available information in this low-data regime. We define a training objective that aims to extract as much information as possible from each training batch by effectively optimizing over all relative orderings of the batch points simultaneously. 
In particular, we view each batch point as a query' that ranks the remaining ones based on its predicted relevance to them and we define a model within the framework of structured prediction to optimize mean Average Precision over these rankings. Our method achieves impressive results on the standard few-shot classification benchmarks while is also capable of few-shot retrieval.", "", "We address the problem of learning large complex ranking functions. Most IR applications use evaluation metrics that depend only upon the ranks of documents. However, most ranking functions generate document scores, which are sorted to produce a ranking. Hence IR metrics are innately non-smooth with respect to the scores, due to the sort. Unfortunately, many machine learning algorithms require the gradient of a training objective in order to perform the optimization of the model parameters,and because IR metrics are non-smooth,we need to find a smooth proxy objective that can be used for training. We present a new family of training objectives that are derived from the rank distributions of documents, induced by smoothed scores. We call this approach SoftRank. We focus on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG and we compare it with three other training objectives in the recent literature. We present two main results. First, SoftRank yields a very good way of optimizing NDCG. Second, we show that it is possible to achieve state of the art test set NDCG results by optimizing a soft NDCG objective on the training set with a different discount function", "Supervised training of deep neural nets typically relies on minimizing cross-entropy. However, in many domains, we are interested in performing well on metrics specific to the application. In this paper we propose a direct loss minimization approach to train deep neural networks, which provably minimizes the application-specific loss function. This is often non-trivial, since these functions are neither smooth nor decomposable and thus are not amenable to optimization with standard gradient-based methods. We demonstrate the effectiveness of our approach in the context of maximizing average precision for ranking problems. Towards this goal, we develop a novel dynamic programming algorithm that can efficiently compute the weight updates. Our approach proves superior to a variety of baselines in the context of action classification and object detection, especially in the presence of label noise.", "We study metric learning as a problem of information retrieval. We present a general metric learning algorithm, based on the structural SVM framework, to learn a metric such that rankings of data induced by distance from a query can be optimized against various ranking measures, such as AUC, Precision-at-k, MRR, MAP or NDCG. We demonstrate experimental results on standard classification data sets, and a large-scale online dating recommendation problem.", "Learning to rank is a relatively new field of study, aiming to learn a ranking function from a set of training data with relevancy labels. The ranking algorithms are often evaluated using information retrieval measures, such as Normalized Discounted Cumulative Gain (NDCG) [1] and Mean Average Precision (MAP) [2]. Until recently, most learning to rank algorithms were not using a loss function related to the above mentioned evaluation measures. 
The main difficulty in direct optimization of these measures is that they depend on the ranks of documents, not the numerical values output by the ranking function. We propose a probabilistic framework that addresses this challenge by optimizing the expectation of NDCG over all the possible permutations of documents. A relaxation strategy is used to approximate the average of NDCG over the space of permutation, and a bound optimization approach is proposed to make the computation efficient. Extensive experiments show that the proposed algorithm outperforms state-of-the-art ranking algorithms on several benchmark data sets.", "One of promising directions in research on learning to rank concerns the problem of appropriate choice of the objective function to maximize by means of machine learning algorithms. We describe a novel technique of smoothing an arbitrary ranking metric and demonstrate how to utilize it to maximize the retrieval quality in terms of the @math metric. The idea behind our listwise ranking model called TieRank is artificial probabilistic tying of predicted relevance scores at each iteration of learning process, which defines a distribution on the set of all permutations of retrieved documents. Such distribution provides a desired smoothed version of the target retrieval quality metric. This smooth function is possible to maximize using a gradient descent method. Experiments on LETOR collections show that TieRank outperforms most of the existing learning to rank algorithms.", "Most ranking algorithms are based on the optimization of some loss functions, such as the pairwise loss. However, these loss functions are often different from the criteria that are adopted to measure the quality of the web page ranking results. To overcome this problem, we propose an algorithm which aims at directly optimizing popular measures such as the Normalized Discounted Cumulative Gain and the Average Precision. The basic idea is to minimize a smooth approximation of these measures with gradient descent. Crucial to this kind of approach is the choice of the smoothing factor. We provide various theoretical analysis on that choice and propose an annealing algorithm to iteratively minimize a less and less smoothed approximation of the measure of interest. Results on the Letor benchmark datasets show that the proposed algorithm achieves state-of-the-art performances." ] }
1705.08562
2619789364
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.
In the learning to hash literature, different strategies have been proposed to handle the difficulties in optimizing listwise ranking metrics. For example, @cite_44 decomposes listwise supervision into local triplets, @cite_9 @cite_33 use structural SVMs to optimize surrogate losses, @cite_29 maximizes precision at the top, and @cite_30 @cite_20 optimize NDCG surrogates. In other recent methods using deep neural networks, the learning objectives are not designed to match ranking evaluation metrics, @cite_32 @cite_46 @cite_1 @cite_37 . In contrast, we directly optimize listwise ranking metrics using deep neural networks.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_33", "@cite_9", "@cite_29", "@cite_1", "@cite_32", "@cite_44", "@cite_46", "@cite_20" ], "mid": [ "2403689645", "2296324964", "", "2568529502", "2197560310", "2789854678", "2207125444", "2126210882", "2565993688", "2949235290" ], "abstract": [ "Hashing method becomes popular for large scale similarity search due to its storage and computational efficiency. Many machine learning techniques, ranging from unsupervised to supervised, have been proposed to design compact hashing codes. Most of the existing hashing methods generate binary codes to efficiently find similar data examples to a query. However, the ranking accuracy among the retrieved data examples is not modeled. But in many real world applications, ranking measure is important for evaluating the quality of hashing codes. In this paper, we propose a novel Ranking Preserving Hashing (RPH) approach that directly optimizes a popular ranking measure, Normalized Discounted Cumulative Gain (NDCG), to obtain effective hashing codes with high ranking accuracy. The main difficulty in the direct optimization of NDCG measure is that it depends on the ranking order of data examples, which forms a non-convex nonsmooth optimization problem. We address this challenge by optimizing the expectation of NDCG measure calculated based on a linear hashing function. A gradient descent method is designed to achieve the goal. An extensive set of experiments on two large scale datasets demonstrate the superior ranking performance of the proposed approach over several state-of-the-art hashing methods.", "In this paper, we aim to learn a mapping (or embedding) from images to a compact binary space in which Hamming distances correspond to a ranking measure for the image retrieval task. We make use of a triplet loss because this has been shown to be most effective for ranking problems. However, training in previous works can be prohibitively expensive due to the fact that optimization is directly performed on the triplet space, where the number of possible triplets for training is cubic in the number of training examples. To address this issue, we propose to formulate high-order binary codes learning as a multi-label classification problem by explicitly separating learning into two interleaved stages. To solve the first stage, we design a large-scale high-order binary codes inference algorithm to reduce the high-order objective to a standard binary quadratic problem such that graph cuts can be used to efficiently infer the binary code which serve as the label of each training datum. In the second stage we propose to map the original image to compact binary codes via carefully designed deep convolutional neural networks (CNNs) and the hashing function fitting can be solved by training binary CNN classifiers. An incremental interleaved optimization strategy is proffered to ensure that these two steps are interactive with each other during training for better accuracy. We conduct experiments on several benchmark datasets, which demonstrate both improved training time (by as much as two orders of magnitude) as well as producing state-of-the-art hashing for various retrieval tasks.", "", "Hashing methods aim to learn a set of hash functions which map the original features to compact binary codes with similarity preserving in the Hamming space. Hashing has proven a valuable tool for large-scale information retrieval. 
We propose a column generation based binary code learning framework for data-dependent hash function learning. Given a set of triplets that encode the pairwise similarity comparison information, our column generation based method learns hash functions that preserve the relative comparison relations within the large-margin learning framework. Our method iteratively learns the best hash functions during the column generation procedure. Existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest--multivariate performance measures such as the AUC and NDCG. Our column generation based method can be further generalized from the triplet loss to a general structured learning based framework that allows one to directly optimize multivariate performance measures. For optimizing general ranking measures, the resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. We use a combination of column generation and cutting-plane techniques to solve the optimization problem. To speed-up the training we further explore stage-wise training and propose to optimize a simplified NDCG loss for efficient inference. We demonstrate the generality of our method by applying it to ranking prediction and image retrieval, and show that it outperforms several state-of-the-art hashing methods.", "In recent years, binary coding techniques are becoming increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance the coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods pursued the coding functions such that the precision at the top of a ranking list according to Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train the disciplined coding functions, by which the mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve such coding functions, we relax the original discrete optimization objective with a continuous surrogate, and derive a stochastic gradient descent to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over the state-of-the-arts.", "Binary vector embeddings enable fast nearest neighbor retrieval in large databases of high-dimensional objects, and play an important role in many practical applications, such as image and video retrieval. We study the problem of learning binary vector embeddings under a supervised setting, also known as hashing. 
We propose a novel supervised hashing method based on optimizing an information-theoretic quantity, mutual information. We show that optimizing mutual information can reduce ambiguity in the induced neighborhood structure in learned Hamming space, which is essential in obtaining high retrieval performance. To this end, we optimize mutual information in deep neural networks with minibatch stochastic gradient descent, with a formulation that maximally and efficiently utilizes available supervision. Experiments on four image retrieval benchmarks, including ImageNet, confirm the effectiveness of our method in learning high-quality binary embeddings for nearest neighbor retrieval.", "Recent years have witnessed wide application of hashing for large-scale image retrieval. However, most existing hashing methods are based on handcrafted features which might not be optimally compatible with the hashing procedure. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash-code learning with deep neural networks, which have shown better performance than traditional hashing methods with hand-crafted features. Most of these deep hashing methods are supervised whose supervised information is given with triplet labels. For another common application scenario with pairwise labels, there have not existed methods for simultaneous feature learning and hash-code learning. In this paper, we propose a novel deep hashing method, called deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning and hashcode learning for applications with pairwise labels. Experiments on real datasets show that our DPSH method can outperform other methods to achieve the state-of-the-art performance in image retrieval applications.", "Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pair wise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage list wise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via list wise supervision can provide superior search accuracy without incurring heavy computational overhead.", "Hashing is one of the most popular and powerful approximate nearest neighbor search techniques for large-scale image retrieval. 
Most traditional hashing methods first represent images as off-the-shelf visual features and then produce hashing codes in a separate stage. However, off-the-shelf visual features may not be optimally compatible with the hash code learning procedure, which may result in sub-optimal hash codes. Recently, deep hashing methods have been proposed to simultaneously learn image features and hash codes using deep neural networks and have shown superior performance over traditional hashing methods. Most deep hashing methods are given supervised information in the form of pairwise labels or triplet labels. The current state-of-the-art deep hashing method DPSH [1], which is based on pairwise labels, performs image feature learning and hash code learning simultaneously by maximizing the likelihood of pairwise similarities. Inspired by DPSH [1], we propose a triplet label based deep hashing method which aims to maximize the likelihood of the given triplet labels. Experimental results show that our method outperforms all the baselines on CIFAR-10 and NUS-WIDE datasets, including the state-of-the-art method DPSH [1] and all the previous triplet label based deep hashing methods.", "With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multilevel semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets." ] }
1705.08562
2619789364
Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.
Key to our formulation is the observation that the integer-valued Hamming distance results in rankings with ties. However, this fact is not widely taken into consideration in previous work. Ties can be sidestepped by using weighted Hamming distance @cite_34 @cite_9 , but at the cost of reduced efficiency. Fortunately, tie-aware versions of common ranking metrics have been found in the information retrieval literature @cite_17 . Inspired by such results, we propose to optimize tie-aware ranking metrics on Hamming distances. Our gradient-based optimization uses a recent differentiable histogram binning technique @cite_1 @cite_13 @cite_21 .
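A small sketch makes the tie issue tangible: with integer-valued Hamming distances, plain AP depends on the arbitrary order of tied items, whereas the tie-aware value is the expectation over all orderings within each tied group. The closed-form tie-aware expressions from the IR literature compute this expectation exactly; the Monte-Carlo estimate below is only an illustrative stand-in (the distances, labels, and sample count are made up):

```python
import numpy as np

def average_precision(rel):
    rel = np.asarray(rel, dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((prec * rel).sum() / rel.sum())

def tie_aware_ap(dists, rel, n_samples=2000, seed=0):
    """Estimate expected AP over random orderings of tied items."""
    rng = np.random.default_rng(seed)
    dists, rel = np.asarray(dists), np.asarray(rel)
    total = 0.0
    for _ in range(n_samples):
        noise = rng.random(len(dists))      # random tie-breaker
        order = np.lexsort((noise, dists))  # primary key: distance
        total += average_precision(rel[order])
    return total / n_samples

# Integer Hamming distances produce many ties; a toy example:
dists = [1, 1, 1, 2, 2]
rel   = [1, 0, 0, 1, 0]
print(tie_aware_ap(dists, rel))  # between best- and worst-case tie orders
```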
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_34", "@cite_13", "@cite_17" ], "mid": [ "2568529502", "2547446130", "2789854678", "2170314267", "", "" ], "abstract": [ "Hashing methods aim to learn a set of hash functions which map the original features to compact binary codes with similarity preserving in the Hamming space. Hashing has proven a valuable tool for large-scale information retrieval. We propose a column generation based binary code learning framework for data-dependent hash function learning. Given a set of triplets that encode the pairwise similarity comparison information, our column generation based method learns hash functions that preserve the relative comparison relations within the large-margin learning framework. Our method iteratively learns the best hash functions during the column generation procedure. Existing hashing methods optimize over simple objectives such as the reconstruction error or graph Laplacian related loss functions, instead of the performance evaluation criteria of interest--multivariate performance measures such as the AUC and NDCG. Our column generation based method can be further generalized from the triplet loss to a general structured learning based framework that allows one to directly optimize multivariate performance measures. For optimizing general ranking measures, the resulting optimization problem can involve exponentially or infinitely many variables and constraints, which is more challenging than standard structured output learning. We use a combination of column generation and cutting-plane techniques to solve the optimization problem. To speed-up the training we further explore stage-wise training and propose to optimize a simplified NDCG loss for efficient inference. We demonstrate the generality of our method by applying it to ranking prediction and image retrieval, and show that it outperforms several state-of-the-art hashing methods.", "We suggest a new loss for learning deep embeddings. The key characteristics of the new loss is the absence of tunable parameters and very good results obtained across a range of datasets and problems. The loss is computed by estimating two distribution of similarities for positive (matching) and negative (non-matching) point pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on these probability estimates. We show that these operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. The experiments reveal favourable results compared to recently proposed loss functions.", "Binary vector embeddings enable fast nearest neighbor retrieval in large databases of high-dimensional objects, and play an important role in many practical applications, such as image and video retrieval. We study the problem of learning binary vector embeddings under a supervised setting, also known as hashing. We propose a novel supervised hashing method based on optimizing an information-theoretic quantity, mutual information. We show that optimizing mutual information can reduce ambiguity in the induced neighborhood structure in learned Hamming space, which is essential in obtaining high retrieval performance. 
To this end, we optimize mutual information in deep neural networks with minibatch stochastic gradient descent, with a formulation that maximally and efficiently utilizes available supervision. Experiments on four image retrieval benchmarks, including ImageNet, confirm the effectiveness of our method in learning high-quality binary embeddings for nearest neighbor retrieval.", "Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.", "", "" ] }
1705.08590
2617152883
One of the bottlenecks in acquiring a perfect database for deep learning is the tedious process of collecting and labeling data. In this paper, we propose a generative model trained with synthetic images rendered from 3D models which can reduce the burden on collecting real training data and make the background conditions more realistic. Our architecture is composed of two sub-networks: a semantic foreground object reconstruction network based on Bayesian inference and a classification network based on multi-triplet cost training for avoiding overfitting on the monotone synthetic object surface and utilizing accurate information of synthetic images like object poses and lighting conditions which are helpful for recognizing regular photos. First, our generative model with metric learning utilizes additional foreground object channels generated from semantic foreground object reconstruction sub-network for recognizing the original input images. Multi-triplet cost function based on poses is used for metric learning which makes it possible to train an effective categorical classifier purely based on synthetic data. Second, we design a coordinate training strategy with the help of adaptive noise applied on the inputs of both of the concatenated sub-networks to make them benefit from each other and avoid inharmonious parameter tuning due to different convergence speeds of two sub-networks. Our architecture achieves the state-of-the-art accuracy of 50.5% on the ShapeNet database with data migration obstacle from synthetic images to real images. This pipeline makes it applicable to do recognition on real images only based on 3D models. Our codes are available at https://github.com/wangyida/gm-cml .
Metric learning is helpful for learning and exploiting implicit relationships among data samples. Descriptor-based metric learning methods such as Triplet training @cite_33 and Siamese training @cite_6 utilize additional information for person re-identification, 3D object pose estimation and kinship verification by learning specific manifolds in the embedded descriptor space. Metric learning on embeddings such as @cite_2 uses pairwise distances, which is helpful for retrieval tasks. Joint Siamese and Triplet training has also been applied to local descriptor optimization for patch matching @cite_39 . Here we utilize a multi-triplet loss for optimizing a concatenated architecture, making it applicable for training on synthetic data and testing on real data.
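For reference, a minimal PyTorch sketch of the standard margin-based triplet loss follows; the multi-triplet variant used here additionally selects triplets according to pose labels, which this generic form (our own simplification) does not show.

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull each anchor-positive pair at least `margin` closer than the
    # corresponding anchor-negative pair in the embedded descriptor space.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```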
{ "cite_N": [ "@cite_39", "@cite_33", "@cite_6", "@cite_2" ], "mid": [ "2219193941", "1909903157", "2145773134", "2963026686" ], "abstract": [ "Recent innovations in training deep convolutional neural network (ConvNet) models have motivated the design of new methods to automatically learn local image descriptors. The latest deep ConvNets proposed for this task consist of a siamese network that is trained by penalising misclassification of pairs of local image patches. Current results from machine learning show that replacing this siamese by a triplet network can improve the classification accuracy in several problems, but this has yet to be demonstrated for local image descriptor learning. Moreover, current siamese and triplet networks have been trained with stochastic gradient descent that computes the gradient from individual pairs or triplets of local image patches, which can make them prone to overfitting. In this paper, we first propose the use of triplet networks for the problem of local image descriptor learning. Furthermore, we also propose the use of a global loss that minimises the overall classification error in the training set, which can improve the generalisation capability of the model. Using the UBC benchmark dataset for comparing local image descriptors, we show that the triplet network produces a more accurate embedding than the siamese network in terms of the UBC dataset errors. Moreover, we also demonstrate that a combination of the triplet and global losses produces the best embedding in the field, using this triplet network. Finally, we also show that the use of the central-surround siamese network trained with the global loss produces the best result of the field on the UBC dataset. Pre-trained models are available online at this https URL", "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.", "Kinship verification from facial images is an interesting and challenging problem in computer vision, and there are very limited attempts on tackle this problem in the literature. In this paper, we propose a new neighborhood repulsed metric learning (NRML) method for kinship verification. 
Motivated by the fact that interclass samples (without a kinship relation) with higher similarity usually lie in a neighborhood and are more easily misclassified than those with lower similarity, we aim to learn a distance metric under which the intraclass samples (with a kinship relation) are pulled as close as possible and interclass samples lying in a neighborhood are repulsed and pushed away as far as possible, simultaneously, such that more discriminative information can be exploited for verification. To make better use of multiple feature descriptors to extract complementary information, we further propose a multiview NRML (MNRML) method to seek a common distance metric to perform multiple feature fusion to improve the kinship verification performance. Experimental results are presented to demonstrate the efficacy of our proposed methods. Finally, we also test human ability in kinship verification from facial images and our experimental results show that our methods are comparable to that of human observers.", "Learning the distance metric between pairs of examples is of great importance for learning and visual recognition. With the remarkable success from the state of the art convolutional neural networks, recent works [1, 31] have shown promising results on discriminatively training the networks to learn semantic feature embeddings where similar examples are mapped close to each other and dissimilar examples are mapped farther apart. In this paper, we describe an algorithm for taking full advantage of the training batches in the neural network training by lifting the vector of pairwise distances within the batch to the matrix of pairwise distances. This step enables the algorithm to learn the state of the art feature embedding by optimizing a novel structured prediction objective on the lifted problem. Additionally, we collected Stanford Online Products dataset: 120k images of 23k classes of online products for metric learning. Our experiments on the CUB-200-2011 [37], CARS196 [19], and Stanford Online Products datasets demonstrate significant improvement over existing deep feature embedding methods on all experimented embedding sizes with the GoogLeNet [33] network. The source code and the dataset are available at: https: github.com rksltnl Deep-Metric-Learning-CVPR16." ] }
1705.08563
2963344811
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
Much recent work has focused on designing online scheduling mechanisms with good welfare guarantees and incentive properties. One line of work exhibited a truthful mechanism for batch jobs on cloud systems where jobs are allocated non-preemptively, and the same group of authors later proposed mechanisms for deadline-sensitive jobs in large computing clusters @cite_12 . Others also considered the problem of scheduling deadline-sensitive jobs; they circumvented known lower bounds by assuming that jobs could be delayed and still finish by their deadline. A further study developed a framework for truthful online cloud auctions where users with heterogeneous demands can come and leave on the fly. More recently, one approach constructed a truthful mechanism that achieves a constant competitive ratio provided that slackness is allowed, while another assumed a stochastic model and developed a truthful mechanism that approximates the expected maximum welfare up to a constant factor. Mechanisms have also been designed for selling reserved instances, where users are allowed to reserve resources of any length and from any time point in the future. These mechanisms determine the acceptance and payment immediately when a job arrives, and achieve a competitive ratio that is optimal to within a constant factor with respect to welfare.
{ "cite_N": [ "@cite_12" ], "mid": [ "2122985136" ], "abstract": [ "We consider a market-based resource allocation model for batch jobs in cloud computing clusters. In our model, we incorporate the importance of the due date of a job rather than the number of servers allocated to it at any given time. Each batch job is characterized by the work volume of total computing units (e.g., CPU hours) along with a bound on maximum degree of parallelism. Users specify, along with these job characteristics, their desired due date and a value for finishing the job by its deadline. Given this specification, the primary goal is to determine the scheduling of cloud computing instances under capacity constraints in order to maximize the social welfare (i.e., sum of values gained by allocated users). Our main result is a new ( C (C-k) ⋅ s (s-1))-approximation algorithm for this objective, where C denotes cloud capacity, k is the maximal bound on parallelized execution (in practical settings, k l C) and s is the slackness on the job completion time i.e., the minimal ratio between a specified deadline and the earliest finish time of a job. Our algorithm is based on utilizing dual fitting arguments over a strengthened linear program to the problem. Based on the new approximation algorithm, we construct truthful allocation and pricing mechanisms, in which reporting the job true value and properties (deadline, work volume and the parallelism bound) is a dominant strategy for all users. To that end, we provide a general framework for transforming allocation algorithms into truthful mechanisms in domains of single-value and multi-properties. We then show that the basic mechanism can be extended under proper Bayesian assumptions to the objective of maximizing revenues, which is important for public clouds. We empirically evaluate the benefits of our approach through simulations on data-center job traces, and show that the revenues obtained under our mechanism are comparable with an ideal fixed-price mechanism, which sets an on-demand price using oracle knowledge of users' valuations. Finally, we discuss how our model can be extended to accommodate uncertainties in job work volumes, which is a practical challenge in cloud settings." ] }
1705.08563
2963344811
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
Other work in this space has dealt with comparing pricing mechanisms such as the on-demand market and the spot market @cite_15 @cite_3 @cite_8 , achieving fairness in job allocation @cite_13 , and studying models of real-time pricing with budget constraints @cite_1 . A survey covers the current state of research in economics and computer science with respect to cloud pricing.
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_3", "@cite_15", "@cite_13" ], "mid": [ "2560202379", "1039000036", "", "2953154399", "1996998821" ], "abstract": [ "In Amazon EC2, cloud resources are sold through a combination of an on-demand market, in which customers buy resources at a fixed price, and a spot market, in which customers bid for an uncertain supply of excess resources. Standard market environments suggest that an optimal design uses just one type of market. We show the prevalence of a dual market system can be explained by heterogeneous risk attitudes of customers. In our stylized model, we consider unit demand risk-averse bidders. We show the model admits a unique equilibrium, with higher revenue and higher welfare than using only spot markets. Furthermore, as risk aversion increases, the usage of the on-demand market increases. We conclude that risk attitudes are an important factor in cloud resource allocation and should be incorporated into models of cloud markets.", "We introduce a new model of user-based dynamic pricing in which decisions occur in real time and are strongly influenced by the budget constraints of users. This model captures the fundamental operation of many electronic markets that are used for allocating resources. In particular, we focus on those used in data centers and cloud computing where pricing is often an internal mechanism used to efficiently allocate virtual machines. We study the allocative properties and dynamic stability of this pricing model under a standard framework of cloud computing systems which leads to highly degenerate systems of prices. We show that as the size of the system grows the user-based budget-constrained dynamic pricing mechanism converges to the standard Walrasian prices. However, for finite systems, the prices can be non-degenerate and the allocations unfair, with large groups of users receiving allocations significantly below their fair share. In addition, we show that improper choice of price update parameters can lead to significant instabilities in prices, which could be problematic in real cloud computing systems, by inducing system instabilities and allowing manipulations by users. We construct scaling rules for parameters that reduce these instabilities.", "", "We study a model of congestible resources, where pricing and scheduling are intertwined. Motivated by the problem of pricing cloud instances, we model a cloud computing service as linked @math queuing systems where the provider chooses to offer a fixed pricing service, a dynamic market based service, or a hybrid of both, where jobs can be preempted in the market-based service. Users (jobs), who are heterogeneous in both the value they place on service and their cost for waiting, then choose between the services offered. Combining insights from auction theory with queuing theory we are able to characterize user equilibrium behavior, and show its insensitivity to the precise market design mechanism used. We then provide theoretical and simulation based evidence suggesting that a fixed price typically, though not always, generates a higher expected revenue than the hybrid system for the provider.", "We present a model for fair strategyproof allocations in a realistic model of cloud computing centers. This model has the standard Leontief preferences but also captures a key property of virtualization, the use of containers to isolate jobs. We first present several impossibility results for deterministic mechanisms in this setting. 
We then construct an extension of the well known dominant resource fairness mechanism (DRF), which somewhat surprisingly does not involve the notion of a dominant resource. Our mechanism relies on the connection between the DRF mechanism and the Kalai-Smorodinsky bargaining solution; by computing a weighted max-min over the convex hull of the feasible region we can obtain an ex-ante fair, efficient and strategyproof randomized allocation. This randomized mechanism can be used to construct other mechanisms which do not rely on users' being expected (ex-ante) utility maximizers, in several ways. First, for the case of @math identical machines one can use the convex structure of the mechanism to get a simple mechanism which is approximately ex-post fair, efficient and strategyproof. Second, we present a more subtle construction for an arbitrary set of machines, using the Shapley-Folkman-Starr theorem to show the existence of an allocation which is approximately ex-post fair, efficient and strategyproof. This paper provides both a rigorous foundation for developing protocols that explicitly utilize the detailed structure of the modern cloud computing hardware and software, and a general method for extending the dominant resource fairness mechanism to more complex settings." ] }
1705.08563
2963344811
The problem of pricing the cloud has attracted much recent attention due to the widespread use of cloud computing and cloud services. From a theoretical perspective, several mechanisms that provide strong efficiency or fairness guarantees and desirable incentive properties have been designed. However, these mechanisms often rely on a rigid model, with several parameters needing to be precisely known in order for the guarantees to hold. In this paper, we consider a stochastic model and show that it is possible to obtain good welfare and revenue guarantees with simple mechanisms that do not make use of the information on some of these parameters. In particular, we prove that a mechanism that sets the same price per time step for jobs of any length achieves at least 50% of the welfare and revenue obtained by a mechanism that can set different prices for jobs of different lengths, and the ratio can be improved if we have more specific knowledge of some parameters. Similarly, a mechanism that sets the same price for all servers even though the servers may receive different kinds of jobs can provide a reasonable welfare and revenue approximation compared to a mechanism that is allowed to set different prices for different servers.
Our work falls into the broader area of the design and analysis of simple mechanisms, particularly posted price mechanisms. One motivation for studying simple mechanisms is that in practice, designers are often willing to partially give up optimality in return for simplicity. Mechanisms that simply post prices on goods have received significant attention since they reflect perhaps the most common way of selling goods in the real world, and moreover they leave no room for strategizing, making them easy for agents to participate in. A long line of work has investigated how well such mechanisms can approximate optimal mechanisms with respect to various objectives, including welfare @cite_17 @cite_7 @cite_18 , revenue @cite_10 @cite_16 @cite_6 , and social costs @cite_4 . We later show that techniques from this literature can recover some of our results under relaxed assumptions.
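To illustrate the flavor of posted-price mechanisms, here is a toy Python simulation (all names, the uniform-value distribution, and the specific price are our own assumptions, not the model of any cited paper): a single anonymous price is posted for identical items, agents arrive in order and buy if their value clears the price, and the realized welfare is compared against the optimal allocation.

```python
import random

def posted_price_welfare(values, price, supply):
    # Agents arrive sequentially; each buys one unit iff its value is at
    # least the posted price and supply remains.
    welfare = 0.0
    for v in values:
        if supply == 0:
            break
        if v >= price:
            welfare += v
            supply -= 1
    return welfare

random.seed(0)
values = [random.random() for _ in range(100)]    # i.i.d. U[0,1] values (assumed)
optimal = sum(sorted(values, reverse=True)[:20])  # welfare of serving the top 20
posted = posted_price_welfare(values, price=0.8, supply=20)
print(posted / optimal)                           # realized fraction of optimal welfare
```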
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_6", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2615064102", "1827565498", "", "2010377201", "161187638", "2077124610", "2950351404" ], "abstract": [ "We study the power and limitations of posted prices in markets with identical items, where agents arrive sequentially in an arbitrary order. We prove upper and lower bounds on the largest fraction of the optimal social welfare that can be guaranteed with posted prices, under a range of assumptions about the designer's information and agents' valuations. For example, we prove constant-factor guarantees for agents with (symmetric) subadditive valuations, even in an incomplete-information setting and with uniform prices.", "We consider dynamic pricing schemes in online settings where selfish agents generate online events. Previous work on online mechanisms has dealt almost entirely with the goal of maximizing social welfare or revenue in an auction settings. This paper deals with quite general settings and minimizing social costs. We show that appropriately computed posted prices allow one to achieve essentially the same performance as the best online algorithm. This holds in a wide variety of settings. Unlike online algorithms that learn about the event, and then make enforcable decisions, prices are posted without knowing the future events or even the current event, and are thus inherently dominant strategy incentive compatible. In particular we show that one can give efficient posted price mechanisms for metrical task systems, some instances of the k-server problem, and metrical matching problems. We give both deterministic and randomized algorithms. Such posted price mechanisms decrease the social cost dramatically over selfish behavior where no decision incurs a charge. One alluring application of this is reducing the social cost of free parking exponentially.", "", "The design of optimal auctions focuses on ways to negotiate with the bidders for eliciting relevant information that they hold. Sometimes, however, decisions should be made very quickly, and the auctioneer cannot allow a costly iterative procedure of negotiation or waiting for bidders to determine their exact valuation. One solution that has been used in practice is to post prices for the bidders, without collecting any information from the bidders, and ask for their immediate take-it-or-leave-it response. Our paper compares the expected revenue in full-revelation auctions to that in posted-price auctions. We focus on single-item auctions in a Bayesian model where the values that the bidders are willing to pay for the item are independently identically distributed according to a known distribution. This paper provides an exact asymptotic characterization of the optimal expected revenue achieved by each one of the above auctions. For posted price auctions, we also present the exact prices that achieve the optimal results.Our results are given up to terms with lower asymptotic order, that is, up to factor of 1--o(1). We provide two sets of results; one for distributions on a support that is bounded from above, and a second set for distributions on unbounded supports. In the first case we require a mild assumption on the way the distribution function approaches the end of the support, called the first von Mises condition; the latter case requires a similar mild condition called the second von Mises condition. 
These very weak conditions are taken from works on extreme-value theory and highest-order statistics.", "We consider a dynamic auction model, where bidders sequentially arrive to the market. The values of the bidders for the item for sale are independently drawn from a distribution, but this distribution is unknown to the seller. The seller offers a personalized take-it-or-leave-it price for each arriving bidder and aims to maximize revenue. We study how well can such sequential posted-price mechanisms approximate the optimal revenue that would be achieved if the distribution was known to the seller. On the negative side, we show that sequential posted-price mechanisms cannot guarantee a constant fraction of this revenue when the class of candidate distributions is unrestricted. We show that this impossibility holds even for randomized mechanisms and even if the set of possible distributions is very small or when the seller has a prior distribution over the candidate distributions. On the positive side, we devise a simple posted-price mechanism that guarantees a constant fraction of the known-distribution revenue when all candidate distributions exhibit the monotone hazard rate property.", "We study the classic mathematical economics problem of Bayesian optimal mechanism design where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served) this problem is solved [20]. Unfortunately, these single parameter optimal mechanisms are impractical and rarely employed [1], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [25]. In contrast to the theory of optimal mechanisms we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. We prove that these mechanisms are approximately optimal in single-dimensional settings. These posted-price mechanisms avoid many of the properties of optimal mechanisms that make the latter impractical. Furthermore, these mechanisms generalize naturally to multi-dimensional settings where they give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time. This work can be viewed as an extension and improvement of the single-agent algorithmic pricing work of [9] to the setting of multiple agents where the designer has combinatorial feasibility constraints on which agents can simultaneously obtain each service.", "We study anonymous posted price mechanisms for combinatorial auctions in a Bayesian framework. In a posted price mechanism, item prices are posted, then the consumers approach the seller sequentially in an arbitrary order, each purchasing her favorite bundle from among the unsold items at the posted prices. These mechanisms are simple, transparent and trivially dominant strategy incentive compatible (DSIC). 
We show that when agent preferences are fractionally subadditive (which includes all submodular functions), there always exist prices that, in expectation, obtain at least half of the optimal welfare. Our result is constructive: given black-box access to a combinatorial auction algorithm A, sample access to the prior distribution, and appropriate query access to the sampled valuations, one can compute, in polytime, prices that guarantee at least half of the expected welfare of A. As a corollary, we obtain the first polytime (in n and m) constant-factor DSIC mechanism for Bayesian submodular combinatorial auctions, given access to demand query oracles. Our results also extend to valuations with complements, where the approximation factor degrades linearly with the level of complementarity." ] }
1705.08918
2619326017
This paper presents two unsupervised learning layers (UL layers) for label-free video analysis: one for fully connected layers, and the other for convolutional ones. The proposed UL layers can play two roles: they can be the cost function layer for providing global training signal; meanwhile they can be added to any regular neural network layers for providing local training signals and combined with the training signals backpropagated from upper layers for extracting both slow and fast changing features at layers of different depths. Therefore, the UL layers can be used in either pure unsupervised or semi-supervised settings. Both a closed-form solution and an online learning algorithm for two UL layers are provided. Experiments with unlabeled synthetic and real-world videos demonstrated that the neural networks equipped with UL layers and trained with the proposed online learning algorithm can extract shape and motion information from video sequences of moving objects. The experiments demonstrated the potential applications of UL layers and online learning algorithm to head orientation estimation and moving object localization.
Unsupervised learning of visual representations is a broad area with a rich history and a large volume of work -- ranging from classical K-means clustering @cite_23 , dimensionality reduction @cite_34 @cite_8 , sparse coding @cite_40 @cite_33 , RBMs (Restricted Boltzmann Machines) @cite_1 , and autoencoders @cite_38 @cite_6 , through single-layer analysis @cite_3 , to more recent work such as VAEs (Variational Auto-Encoders) @cite_27 , GANs (Generative Adversarial Networks) @cite_18 , pixelCNNs @cite_9 , and pixelRNNs @cite_17 . Increases in computational capability @cite_24 have also made large-scale deep unsupervised learning possible.
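As one concrete representative of the single-layer pipelines analyzed in @cite_3 , the following NumPy sketch (our own simplification) runs plain K-means on data vectors and then encodes them with the "triangle" soft-assignment activation from that line of work.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Lloyd's algorithm on the rows of X.
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.stack([X[assign == j].mean(0) if np.any(assign == j) else C[j]
                      for j in range(k)])
    return C

def triangle_encode(X, C):
    # f_j(x) = max(0, mean_k dist(x, c_k) - dist(x, c_j)): a sparse,
    # nonnegative soft assignment of each sample to the learned centroids.
    d = np.sqrt(((X[:, None, :] - C[None]) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)
```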
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_33", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_24", "@cite_40", "@cite_27", "@cite_23", "@cite_34", "@cite_17" ], "mid": [ "2110798204", "1710476689", "", "", "2423557781", "2136922672", "", "2118858186", "2120432001", "2145889472", "", "", "2109312", "2953318193" ], "abstract": [ "Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "", "", "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.", "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. 
Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.", "", "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several othe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the eect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the eect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6 and 97.2 respectively).", "The promise of unsupervised learning methods lies in their potential to use vast amounts of unlabeled data to learn complex, highly nonlinear models with millions of free parameters. We consider two well-known unsupervised learning models, deep belief networks (DBNs) and sparse coding, that have recently been applied to a flurry of machine learning applications (Hinton & Salakhutdinov, 2006; , 2007). Unfortunately, current learning algorithms for both models are too slow for large-scale applications, forcing researchers to focus on smaller-scale models, or to use fewer training examples. In this paper, we suggest massively parallel methods to help resolve these problems. We argue that modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods. We develop general principles for massively parallelizing unsupervised learning tasks using graphics processors. 
We show that these principles can be applied to successfully scaling up learning algorithms for both DBNs and sparse coding. Our implementation of DBN learning is up to 70 times faster than a dual-core CPU implementation for large models. For example, we are able to reduce the time required to learn a four-layer DBN with 100 million free parameters from several weeks to around a single day. For sparse coding, we develop a simple, inherently parallel algorithm, that leads to a 5 to 15-fold speedup over previous methods.", "THE receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented1–4 and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms5,6. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding7–12. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties13–18, but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal8,12 that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs.", "", "", "", "Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent." ] }
1705.08918
2619326017
This paper presents two unsupervised learning layers (UL layers) for label-free video analysis: one for fully connected layers, and the other for convolutional ones. The proposed UL layers can play two roles: they can be the cost function layer for providing global training signal; meanwhile they can be added to any regular neural network layers for providing local training signals and combined with the training signals backpropagated from upper layers for extracting both slow and fast changing features at layers of different depths. Therefore, the UL layers can be used in either pure unsupervised or semi-supervised settings. Both a closed-form solution and an online learning algorithm for two UL layers are provided. Experiments with unlabeled synthetic and real-world videos demonstrated that the neural networks equipped with UL layers and trained with the proposed online learning algorithm can extract shape and motion information from video sequences of moving objects. The experiments demonstrated the potential applications of UL layers and online learning algorithm to head orientation estimation and moving object localization.
Our work is closely related to unsupervised learning from label-free video. Videos provide important temporal constraints for machine learning algorithms, and they are abundant on the web. Most recent work employs video prediction as an auxiliary task for learning useful representations. The argument is that if an algorithm can predict the next few frames well, it has to learn features related to objects' shape, motion, etc. These algorithms usually predict next frames at the pixel @cite_19 , internal state @cite_35 , or object location @cite_21 level, using language models @cite_15 , motion transformations @cite_10 , LSTMs @cite_19 , GANs @cite_39 , or probabilistic models @cite_7 .
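A minimal PyTorch sketch of pixel-level next-frame prediction with an MSE objective follows; the architecture, shapes, and names are our own toy choices, and published systems add multi-scale, adversarial, or gradient-difference losses on top of this basic setup.

```python
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    # Predict the next grayscale frame from a stack of past frames.
    def __init__(self, in_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, past):        # past: (B, in_frames, H, W)
        return self.net(past)       # predicted next frame: (B, 1, H, W)

model = FramePredictor()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
past = torch.rand(8, 4, 64, 64)    # toy batch of 4-frame clips
target = torch.rand(8, 1, 64, 64)  # ground-truth next frames
loss = nn.functional.mse_loss(model(past), target)
loss.backward()
optim.step()
```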
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_21", "@cite_39", "@cite_19", "@cite_15", "@cite_10" ], "mid": [ "1599058448", "2470475590", "2521461771", "2248556341", "", "1568514080", "2400532028" ], "abstract": [ "In many computer vision applications, machines will need to reason beyond the present, and predict the future. This task is challenging because it requires leveraging extensive commonsense knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently obtaining this knowledge is through the massive amounts of readily available unlabeled video. In this paper, we present a large scale framework that capitalizes on temporal structure in unlabeled video to learn to anticipate both actions and objects in the future. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. We experimentally validate this idea on two challenging \"in the wild\" video datasets, and our results suggest that learning with unlabeled videos significantly helps forecast actions and anticipate objects.", "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-wold videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.", "In many machine learning applications, labeled data is scarce and obtaining more labels is expensive. We introduce a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs. These constraints are derived from prior domain knowledge, e.g., from known laws of physics. We demonstrate the effectiveness of this approach on real world and simulated computer vision tasks. We are able to train a convolutional neural network to detect and track objects without any labeled examples. Our approach can significantly reduce the need for labeled training data, but introduces new challenges for encoding prior knowledge into appropriate loss functions.", "Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. 
In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset", "", "We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods." ] }
1705.08926
2617547828
Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
@cite_1 concurrently propose a multi-agent policy-gradient algorithm using centralised critics. Their approach does not address multi-agent credit assignment. Unlike our work, it learns a separate centralised critic for each agent and is applied to competitive environments with continuous action spaces.
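For concreteness, here is a small PyTorch sketch of the counterfactual advantage described in the abstract above; tensor names and shapes are our own assumptions, and the full method also has to train the centralised critic itself.

```python
import torch

def coma_advantage(q_values, policy, actions):
    # q_values: (B, n_actions) critic estimates Q(s, (u'_a, u_{-a})) for each
    #           of agent a's actions, with the other agents' actions held fixed.
    # policy:   (B, n_actions) agent a's action probabilities pi(u'_a | tau_a).
    # actions:  (B,) long tensor of the actions agent a actually took.
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    baseline = (policy * q_values).sum(dim=1)  # counterfactual baseline
    return q_taken - baseline                  # advantage for the policy gradient
```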
{ "cite_N": [ "@cite_1" ], "mid": [ "2623431351" ], "abstract": [ "We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies." ] }
1705.08620
2616910642
Domain adaptation (DA) is transfer learning which aims to leverage labeled data in a related source domain to achieve informed knowledge transfer and help the classification of unlabeled data in a target domain. In this paper, we propose a novel DA method, namely Robust Data Geometric Structure Aligned, Close yet Discriminative Domain Adaptation (RSA-CDDA), which brings closer, in a latent joint subspace, both source and target data distributions, and aligns inherent hidden source and target data geometric structures while performing discriminative DA in repulsing both interclass source and target data. The proposed method performs domain adaptation between source and target in solving a unified model, which incorporates data distribution constraints, in particular via a nonparametric distance, i.e., Maximum Mean Discrepancy (MMD), as well as constraints on inherent hidden data geometric structure segmentation and alignment between source and target, through low rank and sparse representation. RSA-CDDA achieves the search of a joint subspace in solving the proposed unified model through iterative optimization, alternating Rayleigh quotient algorithm and inexact augmented Lagrange multiplier algorithm. Extensive experiments carried out on standard DA benchmarks, i.e., 16 cross-domain image classification tasks, verify the effectiveness of the proposed method, which consistently outperforms the state-of-the-art methods.
The state of the art in DA has so far featured two main approaches: 1) the search for a novel joint subspace where the source and target domains have similar data distributions @cite_15 @cite_7 @cite_8 ; or 2) adaptation of the prediction model trained using the source data @cite_4 @cite_16 @cite_5 . In the first approach, source and target data are transformed by projection into the novel joint subspace. A prediction model trained on labeled source data in this subspace can thus be applied to the projected target data, because the two domains are aligned or have similar data distributions there. The second approach modifies the parameters of the prediction model trained on the source domain so that its decision boundaries are adapted to the target data, which remain unchanged.
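Since the method summarized in the abstract above constrains distributions via the Maximum Mean Discrepancy (MMD), here is a small NumPy sketch of the empirical linear-kernel MMD (our own simplification); kernelized variants replace the raw feature means with kernel mean embeddings (e.g., with an RBF kernel).

```python
import numpy as np

def linear_mmd2(Xs, Xt):
    # Squared MMD with a linear kernel: the squared distance between the
    # empirical feature means of the source (Xs) and target (Xt) samples.
    diff = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(diff @ diff)
```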
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "2240559667", "2096943734", "", "2089685866", "2115403315", "2157032359" ], "abstract": [ "In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http: www.yongxu.org lunwen.html .", "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.", "", "Visual domain adaptation addresses the problem of adapting the sample distribution of the source domain to the target domain, where the recognition task is intended but the data distributions are different. In this paper, we present a low-rank reconstruction method to reduce the domain distribution disparity. Specifically, we transform the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain. Unlike the existing work, our method captures the intrinsic relatedness of the source samples during the adaptation process while uncovering the noises and outliers in the source domain that cannot be adapted, making it more robust than previous methods. We formulate our problem as a constrained nuclear norm and l 2, 1 norm minimization objective and then adopt the Augmented Lagrange Multiplier (ALM) method for the optimization. 
Extensive experiments on various visual adaptation tasks show that the proposed method consistently and significantly beats the state-of-the-art domain adaptation methods.", "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods." ] }
1705.08620
2616910642
Domain adaptation (DA) is transfer learning which aims to leverage labeled data in a related source domain to achieve informed knowledge transfer and help the classification of unlabeled data in a target domain. In this paper, we propose a novel DA method, namely Robust Data Geometric Structure Aligned, Close yet Discriminative Domain Adaptation (RSA-CDDA), which brings closer, in a latent joint subspace, both source and target data distributions, and aligns inherent hidden source and target data geometric structures while performing discriminative DA in repulsing both interclass source and target data. The proposed method performs domain adaptation between source and target in solving a unified model, which incorporates data distribution constraints, in particular via a nonparametric distance, i.e., Maximum Mean Discrepancy (MMD), as well as constraints on inherent hidden data geometric structure segmentation and alignment between source and target, through low rank and sparse representation. RSA-CDDA achieves the search of a joint subspace in solving the proposed unified model through iterative optimization, alternating Rayleigh quotient algorithm and inexact augmented Lagrange multiplier algorithm. Extensive experiments carried out on standard DA benchmarks, i.e., 16 cross-domain image classification tasks, verify the effectiveness of the proposed method, which consistently outperforms the state-of-the-art methods.
Because the target domain contains only unlabeled data, modifying a prediction model's parameters proves difficult, and research focus has recently been shifting toward the first approach. One can distinguish two lines of work in this direction: 1) data distribution-oriented methods @cite_18 @cite_15 @cite_7 @cite_8 ; and 2) data reconstruction-based methods @cite_4 @cite_16 @cite_5 @cite_3 .
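The distribution-oriented methods above typically quantify the domain gap with the Maximum Mean Discrepancy (MMD). As a hedged sketch (illustrative code of ours, not from the cited papers), the biased empirical estimate of the squared MMD with an RBF kernel is:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2) for all pairs (x, y).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased empirical squared MMD between a source sample Xs and
    a target sample Xt (rows = feature vectors); it vanishes when
    the two empirical kernel mean embeddings coincide."""
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())
```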
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_3", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "2107298017", "2240559667", "2096943734", "", "2283717164", "2089685866", "2115403315", "2157032359" ], "abstract": [ "Transfer learning addresses the problem of how to utilize plenty of labeled data in a source domain to solve related but different problems in a target domain, even when the training and testing problems have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. To solve this problem, we learn a low-dimensional latent feature space where the distributions between the source domain data and the target domain data are the same or close to each other. Onto this latent feature space, we project the data in related domains where we can apply standard learning algorithms to train classification or regression models. Thus, the latent feature space can be treated as a bridge of transferring knowledge from the source domain to the target domain. The main contribution of our work is that we propose a new dimensionality reduction method to find a latent space, which minimizes the distance between distributions of the data in different domains in a latent space. The effectiveness of our approach to transfer learning is verified by experiments in two real world applications: indoor WiFi localization and binary text classification.", "In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http: www.yongxu.org lunwen.html .", "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). 
Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.", "", "We propose a novel reconstruction-based transfer learning method called latent sparse domain transfer (LSDT) for domain adaptation and visual categorization of heterogeneous data. For handling cross-domain distribution mismatch, we advocate reconstructing the target domain data with the combined source and target domain data points based on @math -norm sparse coding. Furthermore, we propose a joint learning model for simultaneous optimization of the sparse coding and the optimal subspace representation. In addition, we generalize the proposed LSDT model into a kernel-based linear nonlinear basis transformation learning framework for tackling nonlinear subspace shifts in reproduced kernel Hilbert space. The proposed methods have three advantages: 1) the latent space and the reconstruction are jointly learned for pursuit of an optimal subspace transfer; 2) with the theory of sparse subspace clustering, a few valuable source and target data points are formulated to reconstruct the target data with noise (outliers) from source domain removed during domain adaptation, such that the robustness is guaranteed; and 3) a nonlinear projection of some latent space with kernel is easily generalized for dealing with highly nonlinear domain shift (e.g., face poses). Extensive experiments on several benchmark vision data sets demonstrate that the proposed approaches outperform other state-of-the-art representation-based domain adaptation methods.", "Visual domain adaptation addresses the problem of adapting the sample distribution of the source domain to the target domain, where the recognition task is intended but the data distributions are different. In this paper, we present a low-rank reconstruction method to reduce the domain distribution disparity. Specifically, we transform the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain. Unlike the existing work, our method captures the intrinsic relatedness of the source samples during the adaptation process while uncovering the noises and outliers in the source domain that cannot be adapted, making it more robust than previous methods. We formulate our problem as a constrained nuclear norm and l 2, 1 norm minimization objective and then adopt the Augmented Lagrange Multiplier (ALM) method for the optimization. Extensive experiments on various visual adaptation tasks show that the proposed method consistently and significantly beats the state-of-the-art domain adaptation methods.", "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean miscrepancy. 
In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization. The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods." ] }
1705.08620
2616910642
Domain adaptation (DA) is transfer learning which aims to leverage labeled data in a related source domain to achieve informed knowledge transfer and help the classification of unlabeled data in a target domain. In this paper, we propose a novel DA method, namely Robust Data Geometric Structure Aligned, Close yet Discriminative Domain Adaptation (RSA-CDDA), which brings closer, in a latent joint subspace, both source and target data distributions, and aligns inherent hidden source and target data geometric structures while performing discriminative DA in repulsing both interclass source and target data. The proposed method performs domain adaptation between source and target in solving a unified model, which incorporates data distribution constraints, in particular via a nonparametric distance, i.e., Maximum Mean Discrepancy (MMD), as well as constraints on inherent hidden data geometric structure segmentation and alignment between source and target, through low rank and sparse representation. RSA-CDDA achieves the search of a joint subspace in solving the proposed unified model through iterative optimization, alternating Rayleigh quotient algorithm and inexact augmented Lagrange multiplier algorithm. Extensive experiments carried out on standard DA benchmarks, i.e., 16 cross-domain image classification tasks, verify the effectiveness of the proposed method, which consistently outperforms the state-of-the-art methods.
This drawback has been addressed precisely by data reconstruction-based methods, e.g., RDALR @cite_5 , LTSL @cite_16 , LRSR @cite_4 , and LSDT @cite_3 , which search for a learned subspace that minimizes the reconstruction error of the target data using the source data, or both source and target data, and which make use of low-rank and sparse representations to segment the inherent hidden data structure and to account for data outliers. While these methods ensure that source and target data are well aligned and interleaved, they offer no theoretical guarantee that the aligned source and target data have similar distributions.
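Schematically, and in notation of our own choosing (the exact regularizers, constraints, and whether the projection is applied to one or both domains vary across RDALR, LTSL, LRSR, and LSDT), these methods solve a problem of the form

```latex
\min_{P,\,Z,\,E}\; \|Z\|_{*} + \lambda \|Z\|_{1} + \beta \|E\|_{2,1}
\quad \text{s.t.} \quad P X_{t} = P X_{s} Z + E ,
```

where the nuclear norm on the reconstruction coefficients Z favors a low-rank (block-structured) alignment, the l1 term favors sparsity, and the l2,1 term absorbs sample-wise outliers E.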
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_4", "@cite_3" ], "mid": [ "2089685866", "2157032359", "2240559667", "2283717164" ], "abstract": [ "Visual domain adaptation addresses the problem of adapting the sample distribution of the source domain to the target domain, where the recognition task is intended but the data distributions are different. In this paper, we present a low-rank reconstruction method to reduce the domain distribution disparity. Specifically, we transform the visual samples in the source domain into an intermediate representation such that each transformed source sample can be linearly reconstructed by the samples of the target domain. Unlike the existing work, our method captures the intrinsic relatedness of the source samples during the adaptation process while uncovering the noises and outliers in the source domain that cannot be adapted, making it more robust than previous methods. We formulate our problem as a constrained nuclear norm and l 2, 1 norm minimization objective and then adopt the Augmented Lagrange Multiplier (ALM) method for the optimization. Extensive experiments on various visual adaptation tasks show that the proposed method consistently and significantly beats the state-of-the-art domain adaptation methods.", "It is expensive to obtain labeled real-world visual data for use in training of supervised algorithms. Therefore, it is valuable to leverage existing databases of labeled data. However, the data in the source databases is often obtained under conditions that differ from those in the new task. Transfer learning provides techniques for transferring learned knowledge from a source domain to a target domain by finding a mapping between them. In this paper, we discuss a method for projecting both source and target data to a generalized subspace where each target sample can be represented by some combination of source samples. By employing a low-rank constraint during this transfer, the structure of source and target domains are preserved. This approach has three benefits. First, good alignment between the domains is ensured through the use of only relevant data in some subspace of the source domain in reconstructing the data in the target domain. Second, the discriminative power of the source domain is naturally passed on to the target domain. Third, noisy information will be filtered out during knowledge transfer. Extensive experiments on synthetic data, and important computer vision problems such as face recognition application and visual domain adaptation for object recognition demonstrate the superiority of the proposed approach over the existing, well-established methods.", "In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. 
To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http: www.yongxu.org lunwen.html .", "We propose a novel reconstruction-based transfer learning method called latent sparse domain transfer (LSDT) for domain adaptation and visual categorization of heterogeneous data. For handling cross-domain distribution mismatch, we advocate reconstructing the target domain data with the combined source and target domain data points based on @math -norm sparse coding. Furthermore, we propose a joint learning model for simultaneous optimization of the sparse coding and the optimal subspace representation. In addition, we generalize the proposed LSDT model into a kernel-based linear nonlinear basis transformation learning framework for tackling nonlinear subspace shifts in reproduced kernel Hilbert space. The proposed methods have three advantages: 1) the latent space and the reconstruction are jointly learned for pursuit of an optimal subspace transfer; 2) with the theory of sparse subspace clustering, a few valuable source and target data points are formulated to reconstruct the target data with noise (outliers) from source domain removed during domain adaptation, such that the robustness is guaranteed; and 3) a nonlinear projection of some latent space with kernel is easily generalized for dealing with highly nonlinear domain shift (e.g., face poses). Extensive experiments on several benchmark vision data sets demonstrate that the proposed approaches outperform other state-of-the-art representation-based domain adaptation methods." ] }
1705.08182
2619514024
We propose a novel framework for abnormal event detection in video that requires no training sequences. Our framework is based on unmasking, a technique previously used for authorship verification in text documents, which we adapt to our task. We iteratively train a binary classifier to distinguish between two consecutive video sequences while removing at each step the most discriminant features. Higher training accuracy rates of the intermediately obtained classifiers represent abnormal events. To the best of our knowledge, this is the first work to apply unmasking for a computer vision task. We compare our method with several state-of-the-art supervised and unsupervised methods on four benchmark data sets. The empirical results indicate that our abnormal event detection framework can achieve state-of-the-art results, while running in real-time at 20 frames per second.
Abnormal event detection is usually formalized as an outlier detection task @cite_1 @cite_13 @cite_0 @cite_3 @cite_6 @cite_2 @cite_17 @cite_8 @cite_24 @cite_7 @cite_28 @cite_19 @cite_11 @cite_4 , in which the general approach is to learn a model of normality from training data and to label the detected outliers as abnormal events. Some abnormal event detection approaches @cite_13 @cite_0 @cite_3 @cite_17 @cite_7 are based on learning a dictionary of normal events, labeling the events not represented by the dictionary as abnormal. Other approaches have employed deep features @cite_19 or locality-sensitive hashing filters @cite_11 to achieve better results.
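As a minimal sketch of the dictionary-based family (our own simplification; the cited methods differ in how the dictionary is learned, pruned, and weighted), a test event descriptor x can be scored by how poorly a dictionary D of normal events reconstructs it:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def anomaly_score(x, D, n_nonzero=5):
    """Sparse reconstruction cost: encode x using a few atoms of
    the normal-event dictionary D (columns = atoms) and return
    the residual; a large residual indicates an event that the
    dictionary of normality cannot explain, i.e., an outlier."""
    code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
    return np.linalg.norm(x - D @ code)
```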
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_7", "@cite_8", "@cite_28", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_24", "@cite_19", "@cite_2", "@cite_13", "@cite_17" ], "mid": [ "", "2021659075", "", "2122361470", "2531279268", "2015461918", "2138092272", "2254513182", "2012931101", "2164489414", "2194550927", "", "1957718552", "2163612318" ], "abstract": [ "", "Real-time unusual event detection in video stream has been a difficult challenge due to the lack of sufficient training information, volatility of the definitions for both normality and abnormality, time constraints, and statistical limitation of the fitness of any parametric models. We propose a fully unsupervised dynamic sparse coding approach for detecting unusual events in videos based on online sparse re-constructibility of query signals from an atomically learned event dictionary, which forms a sparse coding bases. Based on an intuition that usual events in a video are more likely to be reconstructible from an event dictionary, whereas unusual events are not, our algorithm employs a principled convex optimization formulation that allows both a sparse reconstruction code, and an online dictionary to be jointly inferred and updated. Our algorithm is completely un-supervised, making no prior assumptions of what unusual events may look like and the settings of the cameras. The fact that the bases dictionary is updated in an online fashion as the algorithm observes more data, avoids any issues with concept drift. Experimental results on hours of real world surveillance video and several Youtube videos show that the proposed algorithm could reliably locate the unusual events in the video sequence, outperforming the current state-of-the-art methods.", "", "A novel framework for anomaly detection in crowded scenes is presented. Three properties are identified as important for the design of a localized video representation suitable for anomaly detection in such scenes: 1) joint modeling of appearance and dynamics of the scene, and the abilities to detect 2) temporal, and 3) spatial abnormalities. The model for normal crowd behavior is based on mixtures of dynamic textures and outliers under this model are labeled as anomalies. Temporal anomalies are equated to events of low-probability, while spatial anomalies are handled using discriminant saliency. An experimental evaluation is conducted with a new dataset of crowded scenes, composed of 100 video sequences and five well defined abnormality categories. The proposed representation is shown to outperform various state of the art anomaly detection techniques.", "Anomaly detection is still a challenging task for video surveillance due to complex environments and unpredictable human behaviors. Most existing approaches train offline detectors using manually labeled data and predefined parameters, and are hard to model changing scenes. This paper introduces a neural network based model called online Growing Neural Gas (online GNG) to perform an unsupervised learning. Unlike a parameter-fixed GNG, our model updates learning parameters continuously, for which we propose several online neighbor-related strategies. Specific operations, namely neuron insertion, deletion, learning rate adaptation and stopping criteria selection, get upgraded to online modes. In the anomaly detection stage, the behavior patterns far away from our model are labeled as anomalous, for which far away is measured by a time-varying threshold. 
Experiments are implemented on three surveillance datasets, namely UMN, UCSD Ped1 Ped2 and Avenue dataset. All datasets have changing scenes due to mutable crowd density and behavior types. Anomaly detection results show that our model can adapt to the current scene rapidly and reduce false alarms while still detecting most anomalies. Quantitative comparisons with 12 recent approaches further confirm our superiority. HighlightsOnline GNG employs neighbor-related strategies to adjust dominant learning parameters in a fully automated way.Online GNG can estimate the learning efficiency and adjust the network size to fit the unstable data space dynamically.Online GNG effectively reduces the false alarms and leak detections caused by model aging which frequently happens in changing surveillance scenes.", "Detecting abnormalities in video is a challenging problem since the class of all irregular objects and behaviors is infinite and thus no (or by far not enough) abnormal training samples are available. Consequently, a standard setting is to find abnormalities without actually knowing what they are because we have not been shown abnormal examples during training. However, although the training data does not define what an abnormality looks like, the main paradigm in this field is to directly search for individual abnormal local patches or image regions independent of another. To address this problem we parse video frames by establishing a set of hypotheses that jointly explain all the foreground while, at same time, trying to find normal training samples that explain the hypotheses. Consequently, we can avoid a direct detection of abnormalities. They are discovered indirectly as those hypotheses which are needed for covering the foreground without finding an explanation by normal samples for themselves. We present a probabilistic model that localizes abnormalities using statistical inference. On the challenging dataset of [15] it outperforms the state-of-the-art by 7 to achieve a frame-based abnormality classification performance of 91 and the localization performance improves by 32 to 76 .", "We propose a space-time Markov random field (MRF) model to detect abnormal activities in video. The nodes in the MRF graph correspond to a grid of local regions in the video frames, and neighboring nodes in both space and time are associated with links. To learn normal patterns of activity at each local node, we capture the distribution of its typical optical flow with a mixture of probabilistic principal component analyzers. For any new optical flow patterns detected in incoming video clips, we use the learned model and MRF graph to compute a maximum a posteriori estimate of the degree of normality at each local node. Further, we show how to incrementally update the current model parameters as new video observations stream in, so that the model can efficiently adapt to visual context changes over a long period of time. Experimental results on surveillance videos show that our space-time MRF model robustly detects abnormal activities both in a local and global sense: not only does it accurately localize the atomic abnormal activities in a crowded video, but at the same time it captures the global-level abnormalities caused by irregular interactions between local activities.", "We present an unsupervised approach for abnormal event detection in videos. 
We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art.", "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given an over-complete normal basis set (e.g., an image sequence or a collection of local spatio-temporal patches), we introduce the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. To condense the size of the dictionary, a novel dictionary selection method is designed with sparsity consistency constraint. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. Our method provides a unified solution to detect both local abnormal events (LAE) and global abnormal events (GAE). We further extend it to support online abnormal event detection by updating the dictionary incrementally. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our algorithm.", "In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow.", "We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. 
To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state of the art approaches.", "", "This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression. While local anomaly is typically detected as a 3D pattern matching problem, we are more interested in global anomaly that involves multiple normal events interacting in an unusual manner such as car accident. To simultaneously detect local and global anomalies, we formulate the extraction of normal interactions from training video as the problem of efficiently finding the frequent geometric relations of the nearby sparse spatio-temporal interest points. A codebook of interaction templates is then constructed and modeled using Gaussian process regression. A novel inference method for computing the likelihood of an observed interaction is also proposed. As such, our model is robust to slight topological deformations and can handle the noise and data unbalance problems in the training data. Simulations show that our system outperforms the main state-of-the-art methods on this topic and achieves at least 80 detection rates based on three challenging datasets.", "Speedy abnormal event detection meets the growing demand to process an enormous number of surveillance videos. Based on inherent redundancy of video structures, we propose an efficient sparse combination learning framework. It achieves decent performance in the detection phase without compromising result quality. The short running time is guaranteed because the new method effectively turns the original complicated problem to one in which only a few costless small-scale least square optimization steps are involved. Our method reaches high detection rates on benchmark datasets at a speed of 140-150 frames per second on average when computing on an ordinary desktop PC using MATLAB." ] }
1705.08182
2619514024
We propose a novel framework for abnormal event detection in video that requires no training sequences. Our framework is based on unmasking, a technique previously used for authorship verification in text documents, which we adapt to our task. We iteratively train a binary classifier to distinguish between two consecutive video sequences while removing at each step the most discriminant features. Higher training accuracy rates of the intermediately obtained classifiers represent abnormal events. To the best of our knowledge, this is the first work to apply unmasking for a computer vision task. We compare our method with several state-of-the-art supervised and unsupervised methods on four benchmark data sets. The empirical results indicate that our abnormal event detection framework can achieve state-of-the-art results, while running in real-time at 20 frames per second.
There have been some approaches that employ unsupervised steps for abnormal event detection @cite_3 @cite_7 @cite_28 @cite_19 , but these approaches are not fully unsupervised. The approach presented in @cite_3 is to build a model of familiar events from training data and to incrementally update the model in an unsupervised manner as new patterns are observed in the test data. In a similar fashion, @cite_28 train a Growing Neural Gas model starting from training videos and continue the training process as they analyze the test videos for anomaly detection. @cite_7 use an unsupervised approach, spectral clustering, to build a dictionary of atoms, each representing one type of normal behavior; their approach requires training videos of normal events to construct the dictionary. @cite_19 use Stacked Denoising Auto-Encoders to learn deep feature representations in an unsupervised way, but they still employ multiple one-class SVM models to predict the anomaly scores.
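For concreteness, a minimal sketch of the unmasking loop described in the paper's abstract above (our own rendering of the general idea; the feature extraction and the decision threshold are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unmasking_curve(A, B, n_iters=10, k_drop=10):
    """Separate two consecutive video windows A and B (rows =
    feature vectors), repeatedly dropping the most discriminant
    features. Training accuracies that stay high across the
    iterations mean the windows remain easy to separate, which
    signals a possible abnormal event."""
    X = np.vstack([A, B])
    y = np.r_[np.zeros(len(A)), np.ones(len(B))]
    mask = np.ones(X.shape[1], dtype=bool)
    accs = []
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X[:, mask], y)
        accs.append(clf.score(X[:, mask], y))
        # Drop the k features with the largest absolute weights.
        top = np.argsort(np.abs(clf.coef_[0]))[-k_drop:]
        mask[np.flatnonzero(mask)[top]] = False
    return accs
```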
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_7", "@cite_3" ], "mid": [ "2531279268", "2194550927", "", "2254513182" ], "abstract": [ "Anomaly detection is still a challenging task for video surveillance due to complex environments and unpredictable human behaviors. Most existing approaches train offline detectors using manually labeled data and predefined parameters, and are hard to model changing scenes. This paper introduces a neural network based model called online Growing Neural Gas (online GNG) to perform an unsupervised learning. Unlike a parameter-fixed GNG, our model updates learning parameters continuously, for which we propose several online neighbor-related strategies. Specific operations, namely neuron insertion, deletion, learning rate adaptation and stopping criteria selection, get upgraded to online modes. In the anomaly detection stage, the behavior patterns far away from our model are labeled as anomalous, for which far away is measured by a time-varying threshold. Experiments are implemented on three surveillance datasets, namely UMN, UCSD Ped1 Ped2 and Avenue dataset. All datasets have changing scenes due to mutable crowd density and behavior types. Anomaly detection results show that our model can adapt to the current scene rapidly and reduce false alarms while still detecting most anomalies. Quantitative comparisons with 12 recent approaches further confirm our superiority. HighlightsOnline GNG employs neighbor-related strategies to adjust dominant learning parameters in a fully automated way.Online GNG can estimate the learning efficiency and adjust the network size to fit the unstable data space dynamically.Online GNG effectively reduces the false alarms and leak detections caused by model aging which frequently happens in changing surveillance scenes.", "We present a novel unsupervised deep learning framework for anomalous event detection in complex video scenes. While most existing works merely use hand-crafted appearance and motion features, we propose Appearance and Motion DeepNet (AMDN) which utilizes deep neural networks to automatically learn feature representations. To exploit the complementary information of both appearance and motion patterns, we introduce a novel double fusion framework, combining both the benefits of traditional early fusion and late fusion strategies. Specifically, stacked denoising autoencoders are proposed to separately learn both appearance and motion features as well as a joint representation (early fusion). Based on the learned representations, multiple one-class SVM models are used to predict the anomaly scores of each input, which are then integrated with a late fusion strategy for final anomaly detection. We evaluate the proposed method on two publicly available video surveillance datasets, showing competitive performance with respect to state of the art approaches.", "", "We present an unsupervised approach for abnormal event detection in videos. We propose, given a dictionary of features learned from local spatiotemporal cuboids using the sparse coding objective, the abnormality of an event depends jointly on two factors: the frequency of each feature in reconstructing all events (or, rarity of a feature) and the strength by which it is used in reconstructing the current event (or, the absolute coefficient). The Incremental Coding Length (ICL) of a feature is a measure of its entropy gain. 
Given a dictionary, the ICL computation does not involve any parameter, is computationally efficient and has been used for saliency detection in images with impressive results. In this paper, the rarity of a dictionary feature is learned online as its average energy, a function of its ICL. The proposed approach is applicable to real world streaming videos. Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art." ] }
1705.08498
2616901848
Real-time prediction of clinical interventions remains a challenge within intensive care units (ICUs). This task is complicated by data sources that are noisy, sparse, heterogeneous and outcomes that are imbalanced. In this paper, we integrate data from all available ICU sources (vitals, labs, notes, demographics) and focus on learning rich representations of this data to predict onset and weaning of multiple invasive interventions. In particular, we compare both long short-term memory networks (LSTM) and convolutional neural networks (CNN) for prediction of five intervention tasks: invasive ventilation, non-invasive ventilation, vasopressors, colloid boluses, and crystalloid boluses. Our predictions are done in a forward-facing manner to enable "real-time" performance, and predictions are made with a six hour gap time to support clinically actionable planning. We achieve state-of-the-art results on our predictive tasks using deep architectures. We explore the use of feature occlusion to interpret LSTM models, and compare this to the interpretability gained from examining inputs that maximally activate CNN outputs. We show that our models are able to significantly outperform baselines in intervention prediction, and provide insight into model learning, which is crucial for the adoption of such models in practice.
Recent studies have applied recurrent neural networks (RNNs) to model sequential EHR data, e.g., to tag ICU signals with billing code labels , and to identify the impact of different drugs for diabetes . @cite_2 compared CNNs to LSTMs for longitudinal outcome prediction from lab tests, with outcomes derived from billing codes. With regard to interpretability, @cite_3 used temporal attention to identify important features in early diagnostic prediction of chronic diseases from time-ordered billing codes. Others have focused on using representations of clinical notes or patient physiological signals to predict mortality .
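To make the modeling setup concrete, here is a hedged PyTorch sketch of a forward-facing recurrent predictor (dimensions, the output head, and the class name are our own illustrative choices, not the architecture of any cited paper):

```python
import torch
import torch.nn as nn

class InterventionLSTM(nn.Module):
    """Consumes a (batch, time, n_features) tensor of hourly ICU
    measurements and emits per-step logits, so a prediction can
    be read off at every hour in a forward-facing fashion."""
    def __init__(self, n_features, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h, _ = self.lstm(x)   # h: (batch, time, hidden)
        return self.head(h)   # per-step class logits
```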
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "2517259736", "2487589373" ], "abstract": [ "Accuracy and interpretation are two goals of any successful predictive models. Most existing works have to suffer the tradeoff between the two by either picking complex black box models such as recurrent neural networks (RNN) or relying on less accurate traditional models with better interpretation such as logistic regression. To address this dilemma, we present REverse Time AttentIoN model (RETAIN) for analyzing Electronic Health Records (EHR) data that achieves high accuracy while remaining clinically interpretable. RETAIN is a two-level neural attention model that can find influential past visits and significant clinical variables within those visits (e.g,. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that more recent clinical visits will likely get higher attention. Experiments on a large real EHR dataset of 14 million visits from 263K patients over 8 years confirmed the comparable predictive accuracy and computational scalability to the state-of-the-art methods such as RNN. Finally, we demonstrate the clinical interpretation with concrete examples from RETAIN.", "Disparate areas of machine learning have benefited from models that can take raw data with little preprocessing as input and learn rich representations of that raw data in order to perform well on a given prediction task. We evaluate this approach in healthcare by using longitudinal measurements of lab tests, one of the more raw signals of a patient's health state widely available in clinical data, to predict disease onsets. In particular, we train a Long Short-Term Memory (LSTM) recurrent neural network and two novel convolutional neural networks for multi-task prediction of disease onset for 133 conditions based on 18 common lab tests measured over time in a cohort of 298K patients derived from 8 years of administrative claims data. We compare the neural networks to a logistic regression with several hand-engineered, clinically relevant features. We find that the representation-based learning approaches significantly outperform this baseline. We believe that our work suggests a new avenue for patient risk stratification based solely on lab results." ] }
1705.08422
2619989803
Sepsis is a leading cause of mortality in intensive care units (ICUs) and costs hospitals billions annually. Treating a septic patient is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. Understanding more about a patient's physiological state at a given time could hold the key to effective treatment policies. In this work, we propose a new approach to deduce optimal treatment policies for septic patients by using continuous state-space models and deep reinforcement learning. Learning treatment policies over continuous spaces is important, because we retain more of the patient's physiological information. Our model is able to learn clinically interpretable treatment policies, similar in important aspects to the treatment policies of physicians. Evaluating our algorithm on past ICU patient data, we find that our model could reduce patient mortality in the hospital by up to 3.6% over observed clinical policies, from a baseline mortality of 13.7%. The learned treatment policies could be used to aid intensive care clinicians in medical decision making and improve the likelihood of patient survival.
@cite_0 applied deep RL techniques to model ICU heparin dosing as a Partially Observable Markov Decision Process (POMDP), using both discriminative Hidden Markov Models and Q-networks to discover the optimal policy. Their investigation was made more challenging by the relatively small amount of available data. @cite_2 learned optimal treatment policies for schizophrenic patients, and quantified the uncertainty around the expected outcome for patients who followed the policies. @cite_1 use off-policy reinforcement learning algorithms to determine ICU strategies for mechanical ventilation administration and weaning, but focus on simpler learning algorithms and a heuristic action space. We experiment with using a sparse autoencoder to generate latent representations of the patient state, which likely yields an easier learning problem. We also propose neural network architectures that provide more robust optimal-policy deduction.
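A minimal sketch of the sparse-autoencoder component (layer sizes and the penalty weight are our own assumptions; the point is the L1 pressure on the latent code):

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Compress a raw physiological state vector into a latent
    code; the L1 term in the loss pushes the code toward
    sparsity, giving a compact patient-state representation."""
    def __init__(self, n_in, n_latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_latent), nn.ReLU())
        self.dec = nn.Linear(n_latent, n_in)

    def loss(self, x, l1=1e-3):
        z = self.enc(x)
        recon = self.dec(z)
        return ((recon - x) ** 2).mean() + l1 * z.abs().mean()
```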
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_2" ], "mid": [ "2535327675", "2609169452", "2151161180" ], "abstract": [ "Misdosing medications with sensitive therapeutic windows, such as heparin, can place patients at unnecessary risk, increase length of hospital stay, and lead to wasted hospital resources. In this work, we present a clinician-in-the-loop sequential decision making framework, which provides an individualized dosing policy adapted to each patient's evolving clinical phenotype. We employed retrospective data from the publicly available MIMIC II intensive care unit database, and developed a deep reinforcement learning algorithm that learns an optimal heparin dosing policy from sample dosing trails and their associated outcomes in large electronic medical records. Using separate training and testing datasets, our model was observed to be effective in proposing heparin doses that resulted in better expected outcomes than the clinical guidelines. Our results demonstrate that a sequential modeling approach, learned from retrospective data, could potentially be used at the bedside to derive individualized patient dosing policies.", "The management of invasive mechanical ventilation, and the regulation of sedation and analgesia during ventilation, constitutes a major part of the care of patients admitted to intensive care units. Both prolonged dependence on mechanical ventilation and premature extubation are associated with increased risk of complications and higher hospital costs, but clinical opinion on the best protocol for weaning patients off of a ventilator varies. This work aims to develop a decision support tool that uses available patient information to predict time-to-extubation readiness and to recommend a personalized regime of sedation dosage and ventilator support. To this end, we use off-policy reinforcement learning algorithms to determine the best action at a given patient state from sub-optimal historical ICU data. We compare treatment policies from fitted Q-iteration with extremely randomized trees and with feedforward neural networks, and demonstrate that the policies learnt show promise in recommending weaning protocols with improved outcomes, in terms of minimizing rates of reintubation and regulating physiological stability.", "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia." ] }
1705.08422
2619989803
Sepsis is a leading cause of mortality in intensive care units (ICUs) and costs hospitals billions annually. Treating a septic patient is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. Understanding more about a patient's physiological state at a given time could hold the key to effective treatment policies. In this work, we propose a new approach to deduce optimal treatment policies for septic patients by using continuous state-space models and deep reinforcement learning. Learning treatment policies over continuous spaces is important, because we retain more of the patient's physiological information. Our model is able to learn clinically interpretable treatment policies, similar in important aspects to the treatment policies of physicians. Evaluating our algorithm on past ICU patient data, we find that our model could reduce patient mortality in the hospital by up to 3.6% over observed clinical policies, from a baseline mortality of 13.7%. The learned treatment policies could be used to aid intensive care clinicians in medical decision making and improve the likelihood of patient survival.
Optimal sepsis treatment strategy was tackled most recently by @cite_3 , using a discretized state and action space to deduce optimal treatment policies for septic patients. Their work applied on-policy SARSA learning to fit an action-value function to the physician policy, and value-iteration techniques to find an optimal policy. The optimal policy was then evaluated by comparing the Q-values that would have been obtained by following the chosen actions to the Q-values obtained by the physicians. We reproduce a similar model as our baseline, using related data pre-processing and clustering techniques. We additionally build on this approach by extending the results to the continuous domain, where policies are learned directly from the physiological state data, without discretization. We also propose a novel evaluation metric, different from the ones used in @cite_3 . We focus on in-hospital mortality instead of 90-day mortality (used in @cite_3 ) because of the other unobserved factors that could affect mortality over a 3-month timeframe.
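The tabular baseline rests on the standard on-policy SARSA update; a one-function sketch over a discretized state/action grid (variable names are ours):

```python
def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.99):
    """On-policy SARSA step on a logged (s, a, r, s', a')
    transition: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a)).
    Replaying physician trajectories this way fits an
    action-value function to the physician policy."""
    Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
    return Q
```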
{ "cite_N": [ "@cite_3" ], "mid": [ "2580396496" ], "abstract": [ "Purpose The optimal strategy of fluid resuscitation in the early hours of severe sepsis and septic shock is controversial, with both an aggressive and conservative approach being recommended." ] }
1705.08056
2620088061
Construction of ambiguity set in robust optimization relies on the choice of divergences between probability distributions. In distribution learning, choosing appropriate probability distributions based on observed data is critical for approximating the true distribution. To improve the performance of machine learning models, there has recently been interest in designing objective functions based on Lp-Wasserstein distance rather than the classical Kullback-Leibler (KL) divergence. In this paper, we derive concentration and asymptotic results using Bregman divergence. We propose a novel asymmetric statistical divergence called Wasserstein-Bregman divergence as a generalization of L2-Wasserstein distance. We discuss how these results can be applied to the construction of ambiguity set in robust optimization.
In @cite_22 , the authors formulate a robust optimization problem in terms of a KL divergence constraint and show that it can be converted into a convex optimization problem that can be solved analytically. In @cite_35 , it is shown that chance constraints with KL divergence ambiguity sets can be reformulated as a traditional chance constraint with an adjusted risk level.
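In schematic notation of our own (not copied from either paper), the KL-constrained distributionally robust problem reads

```latex
\min_{x}\; \sup_{P \,:\, D_{\mathrm{KL}}(P \,\|\, P_{0}) \le \rho}\; \mathbb{E}_{P}\!\left[ f(x,\xi) \right],
```

and the chance-constraint result states that requiring a constraint to hold with probability at least 1 - alpha under every P in the KL ball around the nominal distribution P_0 is equivalent to a classical chance constraint under P_0 itself, at a suitably adjusted (smaller) risk level alpha' <= alpha.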
{ "cite_N": [ "@cite_35", "@cite_22" ], "mid": [ "760153864", "2562747313" ], "abstract": [ "In this paper, we study data-driven chance constrained stochastic programs, or more specifically, stochastic programs with distributionally robust chance constraints (DCCs) in a data-driven setting to provide robust solutions for the classical chance constrained stochastic program facing ambiguous probability distributions of random parameters. We consider a family of density-based confidence sets based on a general @math ź-divergence measure, and formulate DCC from the perspective of robust feasibility by allowing the ambiguous distribution to run adversely within its confidence set. We derive an equivalent reformulation for DCC and show that it is equivalent to a classical chance constraint with a perturbed risk level. We also show how to evaluate the perturbed risk level by using a bisection line search algorithm for general @math ź-divergence measures. In several special cases, our results can be strengthened such that we can derive closed-form expressions for the perturbed risk levels. In addition, we show that the conservatism of DCC vanishes as the size of historical data goes to infinity. Furthermore, we analyze the relationship between the conservatism of DCC and the size of historical data, which can help indicate the value of data. Finally, we conduct extensive computational experiments to test the performance of the proposed DCC model and compare various @math ź-divergence measures based on a capacitated lot-sizing problem with a quality-of-service requirement.", "In this paper we study distributionally robust optimization (DRO) problems where the ambiguity set of the probability distribution is defined by the Kullback-Leibler (KL) divergence. We consider DRO problems where the ambiguity is in the objective function, which takes a form of an expectation, and show that the resulted minimax DRO problems can be formulated as a one-layer convex minimization problem. We also consider DRO problems where the ambiguity is in the constraint. We show that ambiguous expectation-constrained programs may be reformulated as a one-layer convex optimization problem that takes the form of the Benstein approximation of Nemirovski and Shapiro (2006). We further consider distributionally robust probabilistic programs. We show that the optimal solution of a probability minimization problem is also optimal for the distributionally robust version of the same problem, and also show that the ambiguous chance-constrained programs (CCPs) may be reformulated as the original CCP with an adjusted confidence level. A number of examples and special cases are also discussed in the paper to show that the reformulated problems may take simple forms that can be solved easily. The main contribution of the paper is to show that the KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears a good candidate in modeling distribution ambiguities in mathematical programming." ] }
1705.08037
2618935028
Millimeter-wave (mmWave) propagation is known to be severely affected by the blockage of the line-of-sight (LoS) path. In contrast to microwave systems, at shorter mmWave wavelengths such blockage can be caused by human bodies, whose mobility within the environment makes the wireless channel alternate between the blocked and non-blocked LoS states. Following the recent 3GPP requirements on modeling the dynamic blockage as well as the temporal consistency of the channel at mmWave frequencies, in this paper a new model for predicting the state of a user in the presence of mobile blockers is developed for representative 3GPP scenarios: urban micro cell (UMi) street canyon and park/stadium/square. It is demonstrated that the blockage effects produce an alternating renewal process with exponentially distributed non-blocked intervals, and blocked durations that follow a general distribution. The following metrics are derived: (i) the mean and the fraction of time spent in the blocked/non-blocked state, (ii) the residual blocked/non-blocked time, and (iii) the time-dependent conditional probability of having blockage/no blockage at time t1 given that there was blockage/no blockage at time t0. The latter is a function of the arrival rate (intensity), width, and height of moving blockers, distance to the mmWave access point (AP), as well as the heights of the AP and the user device. The proposed model can be used for system-level characterization of mmWave cellular communication systems. For example, the optimal height and the maximum coverage radius of the mmWave APs are derived, while satisfying the required mean data rate constraint. The system-level simulations corroborate that the use of the proposed method considerably reduces the modeling complexity.
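The alternating renewal structure claimed above is easy to illustrate by simulation; in this sketch the blocked durations are drawn from a lognormal distribution as a stand-in for the "general distribution", and all parameter values are illustrative rather than the paper's fitted ones. The long-run fraction of time blocked should match the renewal-reward ratio E[B] / (E[B] + E[NB]).

```python
import numpy as np

rng = np.random.default_rng(0)

def blocked_fraction(t_end, lam=0.5, mu=-0.5, sigma=0.3):
    """Alternating renewal process: Exp(lam) non-blocked intervals,
    lognormal(mu, sigma) blocked intervals. Returns fraction of time blocked."""
    t, blocked = 0.0, 0.0
    while t < t_end:
        t += rng.exponential(1.0 / lam)   # non-blocked interval ends
        b = rng.lognormal(mu, sigma)      # blocked interval begins
        if t < t_end:
            blocked += min(b, t_end - t)
        t += b
    return blocked / t_end

e_b = np.exp(-0.5 + 0.3 ** 2 / 2)  # mean blocked duration
e_nb = 1.0 / 0.5                   # mean non-blocked duration
print(blocked_fraction(1e5), e_b / (e_b + e_nb))  # both approximately 0.24
```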
The LoS blockage by humans in mmWave systems has been evaluated through simulation studies in @cite_3 . In @cite_27 , a LoS blockage model was proposed in which humans are represented as cylinders of random width and height. However, the authors there assumed that both the users and the blockers are stationary. In addition to academic work, the 3GPP community is currently exploring various options for modeling the impact of human blockage appropriately @cite_41 @cite_16 .
{ "cite_N": [ "@cite_41", "@cite_27", "@cite_16", "@cite_3" ], "mid": [ "2275969031", "2337330101", "2463747392", "2089106496" ], "abstract": [ "For the development of new 5G systems to operate in bands up to 100 GHz, there is a need for accurate radio propagation models at these bands that currently are not addressed by existing channel models developed for bands below 6 GHz. This document presents a preliminary overview of 5G channel models for bands up to 100 GHz. These have been derived based on extensive measurement and ray tracing results across a multitude of frequencies from 6 GHz to 100 GHz, and this document describes an initial 3D channel model which includes: 1) typical deployment scenarios for urban microcells (UMi) and urban macrocells (UMa), and 2) a baseline model for incorporating path loss, shadow fading, line of sight probability, penetration and blockage models for the typical scenarios. Various processing methodologies such as clustering and antenna decoupling algorithms are also presented.", "The use of extremely high frequency (EHF) or millimeter-wave (mmWave) band has attracted significant attention for the next generation wireless access networks. As demonstrated by recent measurements, mmWave frequencies render themselves quite sensitive to “blocking” caused by obstacles like foliage, humans, vehicles, etc. However, there is a dearth of analytical models for characterizing such blocking and the consequent effect on the signal reliability. In this paper, we propose a novel, general, and tractable model for characterizing the blocking caused by humans (assuming them to be randomly located in the environment) to mmWave propagation as a function of system parameters like transmitter-receiver locations and dimensions, as well as density and dimensions of humans. Moreover, the proposed model is validated using a ray-launcher tool. Utilizing the proposed model, the blockage probability is shown to increase with human density and separation between the transmitter-receiver pair. Furthermore, the developed analysis is shown to demonstrate the existence of a transmitter antenna height that maximizes the received signal strength, which in turn is a function of the transmitter-receiver distance and their dimensions.", "This paper presents a 3-D statistical channel impulse response (IR) model for urban line of sight (LOS) and non-LOS channels developed from 28- and 73-GHz ultrawideband propagation measurements in New York City, useful in the design of 5G wireless systems that will operate in both the ultra-high frequency microwave and millimeter-wave (mmWave) spectrum to increase channel capacities. A 3GPP-like stochastic IR channel model is developed from measured power delay profiles, angle of departure, and angle of arrival power spectra. The extracted statistics are used to implement a channel model and simulator capable of generating 3-D mmWave temporal and spatial channel parameters for arbitrary mmWave carrier frequency, signal bandwidth, and antenna beamwidth. The model presented here faithfully reproduces realistic IRs of measured urban channels, supporting air interface design of mmWave transceivers, filters, and multi-element antenna arrays.", "In this paper we study the effect of human blockage on a new cellular architecture that incorporates millimeter wave technology for the access link. Assumptions, methodology, and simulation results are discussed to analyze the effect of human blockage on this type of deployment. 
A statistical model for human blockage and self blockage of the millimeter wave (mmW) signal is presented and analyzed. Our simulations show that even with a simple modulation scheme, an average network throughput of 400 Gbps/km2 is achieved. The objective is to propose and demonstrate that such an architecture is capable of providing high throughput services to outdoor users even in heavily crowded areas." ] }
1705.08037
2618935028
Millimeter-wave (mmWave) propagation is known to be severely affected by the blockage of the line-of-sight (LoS) path. In contrast to microwave systems, at shorter mmWave wavelengths such blockage can be caused by human bodies, whose mobility within the environment makes the wireless channel alternate between the blocked and non-blocked LoS states. Following the recent 3GPP requirements on modeling the dynamic blockage as well as the temporal consistency of the channel at mmWave frequencies, in this paper a new model for predicting the state of a user in the presence of mobile blockers is developed for representative 3GPP scenarios: urban micro cell (UMi) street canyon and park/stadium/square. It is demonstrated that the blockage effects produce an alternating renewal process with exponentially distributed non-blocked intervals, and blocked durations that follow a general distribution. The following metrics are derived: (i) the mean and the fraction of time spent in the blocked/non-blocked state, (ii) the residual blocked/non-blocked time, and (iii) the time-dependent conditional probability of having blockage/no blockage at time t1 given that there was blockage/no blockage at time t0. The latter is a function of the arrival rate (intensity), width, and height of moving blockers, distance to the mmWave access point (AP), as well as the heights of the AP and the user device. The proposed model can be used for system-level characterization of mmWave cellular communication systems. For example, the optimal height and the maximum coverage radius of the mmWave APs are derived, while satisfying the required mean data rate constraint. The system-level simulations corroborate that the use of the proposed method considerably reduces the modeling complexity.
In @cite_31 , the authors contributed a model for the temporal correlation of interference in a mobile network with a certain density of users. It was demonstrated that the correlated propagation states across the users significantly impact the temporal interference statistics. Analytically tractable models for correlated outdoor and indoor shadowing have been proposed in @cite_24 and @cite_37 , thus accentuating the high correlation between the locations of the nodes and the shadowing effects. An analytical expression characterizing the correlation between the signals of two antennas was given in @cite_11 .
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_37", "@cite_11" ], "mid": [ "2290648561", "2415418270", "1556372134", "1989192631" ], "abstract": [ "A Manhattan Poisson line process divides the plane into an infinite number of rectangular rooms with walls extending infinitely along the axes. When the path loss is dominated by the penetration through each of the walls, a Poisson field of transmitters creates a heavy tailed interference at a randomly picked room, whose distribution is tractable in the Laplace domain. Interference correlation at different rooms is explicitly available. This model gives the first tractable mathematical abstraction to indoor physical environments where wireless signals are shadowed by (common) walls. Applying the analytical results leads to a formula for success probabilities of a transmission attempt between two given rooms.", "In mobile wireless networks with blockage, different users, and or a single user at different time slots, may be blocked by some common obstacles. Therefore the temporal correlation of interference does not depend only on the user displacement law but also on the spatial correlation introduced by the obstacles. In this letter, we show that in mobile networks with a high density of users, blockage increases the temporal correlation of interference, while in sparse networks blockage has the opposite effect.", "This paper presents an analytically tractable stochastic geometry model for urban wireless networks, where the locations of the nodes and the shadowing are highly correlated and different path loss functions can be applied to line-of-sight (LOS) and non-line-of-sight (NLOS) links. Using a distance-based LOS path loss model and a blockage (shadowing)-based NLOS path loss model, we are able to derive the distribution of the interference observed at a typical location and the joint distribution at different locations. When applied to cellular networks, this model leads to tractable expressions for the coverage probability (SINR distribution). We show that this model captures important features of urban wireless networks, which cannot be analyzed using existing models. The numerical results also suggest that even in the presence of significant penetration loss, ignoring the NLOS interference can lead to erroneous estimations on coverage. They also suggest that allowing users to be associated with NLOS BSs may provide a non-trivial gain on coverage.", "This paper derives a general analytical expression for the correlation coefficient of signals received by two identical spaced antennas in a radar system used to detect meteorological phenomena such as rain and clouds. Such a correlation coefficient is found as a function of the relevant physical parameters, namely, distance between antennas (baseline distance), signal bandwidth, and antenna directivity. One of the antennas transmits the signal and receives its echoes from the scatterers, characterizing a monostatic radar, whereas the other antenna only receives these echoes, characterizing a bistatic radar. Our approach can be easily extended to nonidentical antennas." ] }
1705.07818
2618009704
Action segmentation, as a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem but exhibits intrinsic differences from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.
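As a rough structural sketch of the encoder-decoder idea in this abstract (temporal convolutions for local motion, recurrent layers for long-range dependencies), here is a hypothetical PyTorch module; the layer counts, widths, and nearest-neighbor upsampling are placeholders, not the published TricorNet configuration.

```python
import torch
import torch.nn as nn

class TinyTricorNet(nn.Module):
    """Illustrative hybrid temporal-conv/recurrent model for frame labeling."""
    def __init__(self, in_dim, hidden=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(              # temporal conv hierarchy
            nn.Conv1d(in_dim, hidden, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(hidden, hidden, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, time, features)
        z = self.encoder(x.transpose(1, 2))        # (batch, hidden, time // 4)
        z = nn.functional.interpolate(z, size=x.shape[1])  # back to full length
        out, _ = self.decoder(z.transpose(1, 2))   # (batch, time, 2 * hidden)
        return self.head(out)                      # per-frame class scores

print(TinyTricorNet(128)(torch.randn(2, 100, 128)).shape)  # (2, 100, 10)
```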
For action segmentation, many existing works use frame-level features as the input and then build temporal models on the whole video sequence. @cite_24 propose an attention LSTM network to model the dependencies of the input frame features in a fixed-length window. @cite_1 consider the weakly-supervised action labeling problem. @cite_6 present a multi-stream bi-directional recurrent neural network for fine-grained action detection task. @cite_12 propose two temporal convolutional networks for action segmentation and detection. The design of our model is inspired by @cite_6 and @cite_12 and we compare with them in the experiments.
{ "cite_N": [ "@cite_24", "@cite_1", "@cite_12", "@cite_6" ], "mid": [ "2952835694", "2951144893", "2550143307", "" ], "abstract": [ "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory (LSTM) deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.", "We propose a weakly-supervised framework for action labeling in video, where only the order of occurring actions is required during training time. The key challenge is that the per-frame alignments between the input (video) and label (action) sequences are unknown during training. We address this by introducing the Extended Connectionist Temporal Classification (ECTC) framework to efficiently evaluate all possible alignments via dynamic programming and explicitly enforce their consistency with frame-to-frame visual similarities. This protects the model from distractions of visually inconsistent or degenerated alignments without the need of temporal supervision. We further extend our framework to the semi-supervised case when a few frames are sparsely annotated in a video. With less than 1 of labeled frames per video, our method is able to outperform existing semi-supervised approaches and achieve comparable performance to that of fully supervised approaches.", "The ability to identify and temporally segment fine-grained human actions throughout a video is crucial for robotics, surveillance, education, and beyond. Typical approaches decouple this problem by first extracting local spatiotemporal features from video frames and then feeding them into a temporal classifier that captures high-level temporal patterns. We describe a class of temporal models, which we call Temporal Convolutional Networks (TCNs), that use a hierarchy of temporal convolutions to perform fine-grained action segmentation or detection. Our Encoder-Decoder TCN uses pooling and upsampling to efficiently capture long-range temporal patterns whereas our Dilated TCN uses dilated convolutions. We show that TCNs are capable of capturing action compositions, segment durations, and long-range dependencies, and are over a magnitude faster to train than competing LSTM-based Recurrent Neural Networks. We apply these models to three challenging fine-grained datasets and show large improvements over the state of the art.", "" ] }
1705.07818
2618009704
Action segmentation, as a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem but exhibits intrinsic differences from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that are able to learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective in terms of video sequence labeling. The experimental results on three public action segmentation datasets have shown that the proposed model achieves superior performance over the state of the art.
@cite_7 introduce a spatial CNN and a spatiotemporal CNN; the latter is an end-to-end approach to model the whole sequence from frames. Here, we use the output features of their spatial CNN as the input to our TricorNet, and compare with their results obtained by the spatiotemporal CNN. @cite_17 use a statistical language model to focus on localizing and classifying segments in videos of varying lengths. @cite_18 propose an end-to-end generative approach for action segmentation, using Hidden Markov Models on dense trajectory features. We also compare with their results on the 50 Salads dataset.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_17" ], "mid": [ "2952349270", "", "2463824207" ], "abstract": [ "We describe an end-to-end generative approach for the segmentation and recognition of human activities. In this approach, a visual representation based on reduced Fisher Vectors is combined with a structured temporal model for recognition. We show that the statistical properties of Fisher Vectors make them an especially suitable front-end for generative models such as Gaussian mixtures. The system is evaluated for both the recognition of complex activities as well as their parsing into action units. Using a variety of video datasets ranging from human cooking activities to animal behaviors, our experiments demonstrate that the resulting architecture outperforms state-of-the-art approaches for larger datasets, i.e. when sufficient amount of data is available for training structured generative models.", "", "While current approaches to action recognition on presegmented video clips already achieve high accuracies, temporal action detection is still far from comparably good results. Automatically locating and classifying the relevant action segments in videos of varying lengths proves to be a challenging task. We propose a novel method for temporal action detection including statistical length and language modeling to represent temporal and contextual structure. Our approach aims at globally optimizing the joint probability of three components, a length and language model and a discriminative action model, without making intermediate decisions. The problem of finding the most likely action sequence and the corresponding segment boundaries in an exponentially large search space is addressed by dynamic programming. We provide an extensive evaluation of each model component on Thumos 14, a large action detection dataset, and report state-of-the-art results on three datasets." ] }
1705.07831
2618858825
Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated samples as fake, leaving the generator without meaningful gradients and causing it to deteriorate after a point in training. In this work, we propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. Individual discriminators, now provided with restricted views of the input, are unable to reject generated samples perfectly and continue to provide meaningful gradients to the generator throughout training. Meanwhile, the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators simultaneously. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.
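A minimal sketch of the core mechanism in this abstract: one generator trained against an array of discriminators, each restricted to a fixed random low-dimensional projection of the data. The dimensions, network sizes, and plain BCE objective below are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

data_dim, proj_dim, K, z_dim = 784, 32, 8, 64  # hypothetical sizes

# Fixed random projections; each discriminator only ever sees its own view.
projections = [torch.randn(data_dim, proj_dim) / proj_dim ** 0.5 for _ in range(K)]
discriminators = [nn.Sequential(nn.Linear(proj_dim, 128), nn.ReLU(),
                                nn.Linear(128, 1)) for _ in range(K)]
generator = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim))
bce = nn.BCEWithLogitsLoss()

def generator_loss(batch_size=16):
    """The generator must fool every restricted discriminator at once, so at
    least some of them keep supplying informative gradients during training."""
    fake = generator(torch.randn(batch_size, z_dim))
    losses = [bce(d(fake @ W), torch.ones(batch_size, 1))
              for W, d in zip(projections, discriminators)]
    return torch.stack(losses).mean()

print(generator_loss())
```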
Indeed, the idea of combining multiple simple classifiers, with boosting , or mixture-of-experts @cite_0 , has a rich history in machine learning, including recent work where the simple classifiers act on random lower-dimensional projections of their inputs (additionally benefiting from the regularization effect of such projections ). While these combinations have been aimed at achieving high classification accuracy (i.e., achieve accurate discrimination), our objective is to maintain a flow of gradients from individual discriminators to the generator. It is also worth noting here the work of , who also use low-dimensional projections to deal with high-dimensional probability distributions. They use such projections to efficiently compute Wasserstein distances and optimal transport between two distributions, while we seek to enable stable GAN training in high dimensions.
{ "cite_N": [ "@cite_0" ], "mid": [ "2095539364" ], "abstract": [ "This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distributions into a single distribution that can be used for decision making. Two classes of aggregation methods are reviewed. When using a supra Bayesian procedure, the decision maker treats the expert opinions as data that may be combined with its own prior distribution via Bayes' rule. When using a linear opinion pool, the decision maker forms a linear combination of the expert opinions. The major feature that makes the aggregation of expert opinions difficult is the high correlation or dependence that typically occurs among these opinions. A theme of this paper is the need for training procedures that result in experts with relatively independent opinions or for aggregation methods that implici..." ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
Over the past few years, a few privacy attacks on machine learning have been proposed. For instance, attacks targeting distributed recommender systems @cite_23 have focused on inferring which inputs cause output changes by looking at temporal patterns in the model's public outputs.
{ "cite_N": [ "@cite_23" ], "mid": [ "2123147099" ], "abstract": [ "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon." ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
Specific to membership inference are attacks against supervised models by @cite_46 . Their approach exploits differences in the model's response to inputs that were or were not seen during training. For each class of the targeted black-box model, they train a shadow model with the same machine learning technique. In contrast, our approach targets generative models and relies on GANs to provide a general framework for measuring the information leakage. As mentioned earlier, membership inference on generative models is much more challenging than on discriminative models: in the former, the attacker cannot exploit confidence values on inputs belonging to the same classes, thus it is more difficult to detect overfitting and mount the attack. As a matter of fact, detecting overfitting in generative models is regarded as one of the most important research problems in machine learning @cite_56 . Overall, our work presents black-box attacks that do not rely on any prediction vectors from the target model, as generic generative models output synthetic samples.
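To illustrate the flavor of the white-box setting described in the abstract (an overfitted discriminator assigns higher confidence to records it was trained on), here is a schematic ranking attack; the scorer, data, and top-n decision rule are illustrative stand-ins rather than the paper's exact procedure.

```python
import numpy as np

def membership_attack(score, candidates, n_members):
    """Rank candidates by the target discriminator's confidence and flag
    the n_members highest-scoring records as suspected training members."""
    scores = np.array([score(x) for x in candidates])
    predicted = np.zeros(len(scores), dtype=bool)
    predicted[np.argsort(-scores)[:n_members]] = True
    return predicted

# Toy sanity check with a synthetic scorer that leaks membership slightly.
rng = np.random.default_rng(1)
is_member = rng.random(1000) < 0.5
leaky_score = lambda i: 0.3 * is_member[i] + rng.normal(0.0, 0.2)
pred = membership_attack(leaky_score, range(1000), int(is_member.sum()))
print("attack accuracy:", (pred == is_member).mean())  # well above chance
```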
{ "cite_N": [ "@cite_46", "@cite_56" ], "mid": [ "2949777041", "2561050497" ], "abstract": [ "We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial \"machine learning as a service\" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.", "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of some popular models such as generative adversarial networks and generative moment matching networks, is a decoder network, a parametric deep neural net that defines a generative distribution. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution." ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
The attacks in @cite_59 force a machine learning model to memorize the training data in such a way that an adversary can later extract training inputs with only black-box access to the model. Then, @cite_9 show that deep learning-based language models trained on text data can unintentionally memorize specific training inputs, which can then be extracted with black-box access; however, they demonstrate this only for simple sequences of digits artificially introduced into the text. @cite_16 present a few attacks against SVM and HMM classifiers aimed at reconstructing properties of training sets, by exploiting knowledge of model parameters.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_59" ], "mid": [ "2788502731", "2962835266", "2949338291" ], "abstract": [ "Machine learning models based on neural networks and deep learning are being rapidly adopted for many purposes. What those models learn, and what they may share, is a significant concern when the training data may contain secrets and the models are public -- e.g., when a model helps users compose text messages using models trained on all users' messages. This paper presents exposure: a simple-to-compute metric that can be applied to any deep learning model for measuring the memorization of secrets. Using this metric, we show how to extract those secrets efficiently using black-box API access. Further, we show that unintended memorization occurs early, is not due to over-fitting, and is a persistent issue across different types of models, hyperparameters, and training strategies. We experiment with both real-world models (e.g., a state-of-the-art translation model) and datasets (e.g., the Enron email dataset, which contains users' credit card numbers) to demonstrate both the utility of measuring exposure and the ability to extract secrets. Finally, we consider many defenses, finding some ineffective (like regularization), and others to lack guarantees. However, by instantiating our own differentially-private recurrent model, we validate that by appropriately investing in the use of state-of-the-art techniques, the problem can be resolved, with high utility.", "Machine-learning ML enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.", "Machine learning (ML) is becoming a commodity. Numerous ML frameworks and services are available to data holders who are not ML experts but want to train predictive models on their data. It is important that ML models trained on sensitive inputs (e.g., personal images or documents) not leak too much information about the training data. We consider a malicious ML provider who supplies model-training code to the data holder, does not observe the training, but then obtains white- or black-box access to the resulting model. In this setting, we design and implement practical algorithms, some of them very similar to standard ML techniques such as regularization and data augmentation, that \"memorize\" information about the training dataset in the model yet the model is as accurate and predictive as a conventionally trained model. We then explain how the adversary can extract memorized information from the model. 
We evaluate our techniques on standard ML tasks for image classification (CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20 Newsgroups and IMDB). In all cases, we show how our algorithms create models that have high predictive power yet allow accurate extraction of subsets of their training data." ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
Privacy-enhancing tools based on secure multiparty computation and homomorphic encryption have been proposed to securely train supervised machine learning models, such as decision trees @cite_26 , linear regressors @cite_33 , and neural networks @cite_35 @cite_18 . However, these mechanisms do not prevent an attacker from running inference attacks on the privately trained models as the final parameters are left unchanged.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_33" ], "mid": [ "2767079719", "2435473771", "2612279584", "" ], "abstract": [ "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers $1.73 x communication expansion for 210 users and 220-dimensional vectors, and 1.98 x expansion for 214 users and 224-dimensional vectors over sending data in the clear.", "Applying machine learning to a problem which involves medical, financial, or other types of sensitive data, not only requires accurate predictions but also careful attention to maintaining data privacy and security. Legal and ethical requirements may prevent the use of cloud-based machine learning solutions for such tasks. In this work, we will present a method to convert learned neural networks to CryptoNets, neural networks that can be applied to encrypted data. This allows a data owner to send their data in an encrypted form to a cloud service that hosts the network. The encryption ensures that the data remains confidential since the cloud does not have access to the keys needed to decrypt it. Nevertheless, we will show that the cloud service is capable of applying the neural network to the encrypted data to make encrypted predictions, and also return them in encrypted form. These encrypted predictions can be sent back to the owner of the secret key who can decrypt them. Therefore, the cloud service does not gain any information about the raw data nor about the prediction it made. We demonstrate CryptoNets on the MNIST optical character recognition tasks. CryptoNets achieve 99 accuracy and can make around 59000 predictions per hour on a single PC. Therefore, they allow high throughput, accurate, and private predictions.", "In this paper we introduce the concept of privacy preserving data mining. In our model, two parties owning confidential databases wish to run a data mining algorithm on the union of their databases, without revealing any unnecessary information. This problem has many practical and important applications, such as in medical research with confidential patient records. Data mining algorithms are usually complex, especially as the size of the input is measured in megabytes, if not gigabytes. A generic secure multi-party computation solution, based on evaluation of a circuit computing the algorithm on the entire input, is therefore of no practical use. We focus on the problem of decision tree learning and use ID3, a popular and widely used algorithm for this problem. We present a solution that is considerably more efficient than generic solutions. It demands very few rounds of communication and reasonable bandwidth. 
In our solution, each party performs by itself a computation of the same order as computing the ID3 algorithm for its own database. The results are then combined using efficient cryptographic protocols, whose overhead is only logarithmic in the number of transactions in the databases. We feel that our result is a substantial contribution, demonstrating that secure multi-party computation can be made practical, even for complex problems and large inputs.", "" ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
Differential Privacy @cite_3 can be used to mitigate inference attacks, and it has been widely applied to various machine learning models @cite_57 @cite_51 @cite_10 @cite_30 @cite_34 @cite_41 . Shokri and Shmatikov @cite_10 support distributed training of deep learning networks in a privacy-preserving way, where independent entities collaboratively build a model without sharing their training data, but selectively share subsets of noisy model parameters during training. @cite_57 show how to train deep neural networks (DNNs) with non-convex objectives under an acceptable privacy budget, while @cite_37 show that this approach partially mitigates the effects of the membership inference attack of @cite_46 .
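The noisy training referenced here (e.g., in @cite_57) clips each example's gradient and adds calibrated Gaussian noise before the update; the sketch below shows one such step with illustrative hyperparameters and no formal privacy accounting, which a real implementation must also track.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, noise_mult=1.1):
    """One DP-SGD-style update: clip each per-example gradient to norm `clip`,
    average, then add Gaussian noise scaled to the clipping bound."""
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.mean(clipped, axis=0)
                  + np.random.normal(0.0, noise_mult * clip / n, size=params.shape))
    return params - lr * noisy_mean

params = np.zeros(5)
grads = [np.random.randn(5) for _ in range(32)]  # stand-in per-example gradients
print(dp_sgd_step(params, grads))
```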
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_41", "@cite_3", "@cite_57", "@cite_51", "@cite_46", "@cite_34", "@cite_10" ], "mid": [ "2951114885", "2798657499", "2294907105", "2109426455", "2473418344", "2950602864", "2949777041", "2949274245", "" ], "abstract": [ "We study statistical risk minimization problems under a privacy model in which the data is kept confidential even from the learner. In this local privacy framework, we establish sharp upper and lower bounds on the convergence rates of statistical estimation procedures. As a consequence, we exhibit a precise tradeoff between the amount of privacy the data preserves and the utility, as measured by convergence rate, of any statistical estimator or learning procedure.", "", "Private regression has received attention from both database and security communities. Recent work by (USENIX Security 2014) analyzed the functional mechanism ( VLDB 2012) for training linear regression models over medical data. Unfortunately, they found that model accuracy is already unacceptable with differential privacy when @math . We address this issue, presenting an explicit connection between differential privacy and stable learning theory through which a substantially better privacy utility tradeoff can be obtained. Perhaps more importantly, our theory reveals that the most basic mechanism in differential privacy, output perturbation, can be used to obtain a better tradeoff for all convex-Lipschitz-bounded learning tasks. Since output perturbation is simple to implement, it means that our approach is potentially widely applicable in practice. We go on to apply it on the same medical data as used by Encouragingly, we achieve accurate models even for @math . In the last part of this paper, we study the impact of our improved differentially private mechanisms on model inversion attacks, a privacy attack introduced by We observe that the improved tradeoff makes the resulting differentially private model more susceptible to inversion attacks. We analyze this phenomenon formally.", "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. 
Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.", "Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as \"teachers\" for a \"student\" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.", "We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial \"machine learning as a service\" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.", "Bayesian optimization is a powerful tool for fine-tuning the hyper-parameters of a wide variety of machine learning models. The success of machine learning has led practitioners in diverse real-world settings to learn classifiers for practical problems. As machine learning becomes commonplace, Bayesian optimization becomes an attractive method for practitioners to automate the process of classifier hyper-parameter tuning. A key observation is that the data used for tuning models in these settings is often sensitive. 
Certain data such as genetic predisposition, personal email statistics, and car accident history, if not properly private, may be at risk of being inferred from Bayesian optimization outputs. To address this, we introduce methods for releasing the best hyper-parameters and classifier accuracy privately. Leveraging the strong theoretical guarantees of differential privacy and known Bayesian optimization convergence bounds, we prove that under a GP assumption these private quantities are also near-optimal. Finally, even if this assumption is not satisfied, we can use different smoothness guarantees to protect privacy.", "" ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
@cite_51 @cite_58 combine multiple models trained with disjoint datasets without exposing the models, while, in @cite_32 , the authors present ``defensive distillation'' to reduce the effectiveness of adversarial samples on DNNs.
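Defensive distillation rests on a temperature-scaled softmax: the teacher's smoothed outputs at high temperature train a student whose decision surface then exhibits much smaller input gradients at test time. A minimal sketch of the temperature softmax (values illustrative):

```python
import numpy as np

def softmax_T(logits, T):
    """Temperature-scaled softmax; higher T yields smoother soft labels."""
    z = logits / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([8.0, 2.0, 1.0])
print(softmax_T(logits, T=1))   # near one-hot teacher output
print(softmax_T(logits, T=20))  # smooth soft labels used to train the student
```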
{ "cite_N": [ "@cite_58", "@cite_51", "@cite_32" ], "mid": [ "", "2950602864", "2174868984" ], "abstract": [ "", "Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE). The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users. Because they rely directly on sensitive data, these models are not published, but instead used as \"teachers\" for a \"student\" model. The student learns to predict an output chosen by noisy voting among all of the teachers, and cannot directly access an individual teacher or the underlying data or parameters. The student's privacy properties can be understood both intuitively (since no single teacher and thus no single dataset dictates the student's training) and formally, in terms of differential privacy. These properties hold even if an adversary can not only query the student but also inspect its internal workings. Compared with previous work, the approach imposes only weak assumptions on how teachers are trained: it applies to any model, including non-convex models like DNNs. We achieve state-of-the-art privacy utility trade-offs on MNIST and SVHN thanks to an improved privacy analysis and semi-supervised learning.", "Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs crafted to force a deep neural network (DNN) to provide adversary-selected outputs. Such attacks can seriously undermine the security of the system supported by the DNN, sometimes with devastating consequences. For example, autonomous vehicles can be crashed, illicit or illegal content can bypass content filters, or biometric authentication systems can be manipulated to allow improper access. In this work, we introduce a defensive mechanism called defensive distillation to reduce the effectiveness of adversarial samples on DNNs. We analytically investigate the generalizability and robustness properties granted by the use of defensive distillation when training DNNs. We also empirically study the effectiveness of our defense mechanisms on two DNNs placed in adversarial settings. The study shows that defensive distillation can reduce effectiveness of sample creation from 95 to less than 0.5 on a studied DNN. Such dramatic gains can be explained by the fact that distillation leads gradients used in adversarial sample creation to be reduced by a factor of 10^30. We also find that distillation increases the average minimum number of features that need to be modified to create adversarial samples by about 800 on one of the DNNs we tested." ] }
1705.07663
2952492946
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and or sample quality.
Then, @cite_62 apply the noisy gradient descent from @cite_57 to train the discriminator of a Generative Adversarial Network under differential privacy. The resulting model is then used to generate synthetic subjects based on the population of clinical trial data. Finally, @cite_17 use adversarial machine learning to defend against attribute inference attacks, in the setting where an attacker trains a classifier to infer a target user's sensitive attributes from their public data, while @cite_27 leverage adversarial regularizers to design a privacy-preserving training mechanism with provable protections against membership inference attacks against discriminative models.
{ "cite_N": [ "@cite_57", "@cite_27", "@cite_62", "@cite_17" ], "mid": [ "2473418344", "2949492662", "", "2799694080" ], "abstract": [ "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.", "Machine learning models leak information about the datasets on which they are trained. An adversary can build an algorithm to trace the individual members of a model's training dataset. As a fundamental inference attack, he aims to distinguish between data points that were part of the model's training set and any other data points from the same distribution. This is known as the tracing (and also membership inference) attack. In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters. This is the current setting of machine learning as a service in the Internet. We introduce a privacy mechanism to train machine learning models that provably achieve membership privacy: the model's predictions on its training data are indistinguishable from its predictions on other data points from the same distribution. We design a strategic mechanism where the privacy mechanism anticipates the membership inference attacks. The objective is to train a model such that not only does it have the minimum prediction error (high utility), but also it is the most robust model against its corresponding strongest inference attack (high privacy). We formalize this as a min-max game optimization problem, and design an adversarial training algorithm that minimizes the classification loss of the model as well as the maximum gain of the membership inference attack against it. This strategy, which guarantees membership privacy (as prediction indistinguishability), acts also as a strong regularizer and significantly generalizes the model. We evaluate our privacy mechanism on deep neural networks using different benchmark datasets. We show that our min-max strategy can mitigate the risk of membership inference attacks (close to the random guess) with a negligible cost in terms of the classification error.", "", "Users in various web and mobile applications are vulnerable to attribute inference attacks, in which an attacker leverages a machine learning classifier to infer a target user's private attributes (e.g., location, sexual orientation, political view) from its public data (e.g., rating scores, page likes). Existing defenses leverage game theory or heuristics based on correlations between the public data and attributes. These defenses are not practical. Specifically, game-theoretic defenses require solving intractable optimization problems, while correlation-based defenses incur large utility loss of users' public data. In this paper, we present AttriGuard, a practical defense against attribute inference attacks. AttriGuard is computationally tractable and has small utility loss. 
Our AttriGuard works in two phases. Suppose we aim to protect a user's private attribute. In Phase I, for each value of the attribute, we find a minimum noise such that if we add the noise to the user's public data, then the attacker's classifier is very likely to infer the attribute value for the user. We find the minimum noise via adapting existing evasion attacks in adversarial machine learning. In Phase II, we sample one attribute value according to a certain probability distribution and add the corresponding noise found in Phase I to the user's public data. We formulate finding the probability distribution as solving a constrained convex optimization problem. We extensively evaluate AttriGuard and compare it with existing methods using a real-world dataset. Our results show that AttriGuard substantially outperforms existing methods. Our work is the first one that shows evasion attacks can be used as defensive techniques for privacy protection." ] }
1705.07565
2618305643
How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.
Regarding pruning for modern deep models, Han et al. @cite_6 proposed to delete unimportant parameters based on the magnitude of their absolute values, and to retrain the remaining ones to recover the original prediction performance. This method achieves a considerable compression ratio in practice. However, as pointed out by pioneering work @cite_12 @cite_3 , parameters with small absolute values can still be necessary for achieving low error. Therefore, magnitude-based approaches may eliminate the wrong parameters, resulting in a large drop in prediction performance right after pruning and poor robustness before retraining @cite_9 . Though some variants have tried to find better magnitude-based criteria @cite_21 @cite_7 , the significant drop in prediction performance after pruning remains. To avoid pruning the wrong parameters, Guo et al. @cite_15 introduced a mask matrix to indicate the state of network connections for dynamic pruning after each gradient descent step. Jin et al. @cite_23 proposed an iterative hard thresholding approach to re-activate the pruned parameters after each pruning phase.
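To make the magnitude-based criterion concrete, the sketch below (our own illustration in NumPy, not any cited paper's exact procedure) zeroes the weights with the smallest absolute values and returns a mask so that retraining can keep pruned connections at zero:

```python
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the `sparsity` fraction of weights with smallest |value|.
    Returns pruned weights and a binary mask for masked retraining."""
    k = int(sparsity * W.size)
    threshold = np.sort(np.abs(W), axis=None)[k]  # magnitude at the cutoff rank
    mask = (np.abs(W) >= threshold).astype(W.dtype)
    return W * mask, mask

# During retraining, gradients are masked too (grad *= mask),
# so pruned connections stay at zero.
```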
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_21", "@cite_3", "@cite_6", "@cite_23", "@cite_15", "@cite_12" ], "mid": [ "2515385951", "2572009792", "2495425901", "", "2963674932", "2499540656", "", "2114766824" ], "abstract": [ "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34 and ResNet-110 by up to 38 on CIFAR10 while regaining close to the original accuracy by retraining the networks.", "How much can pruning algorithms teach us about the fundamentals of learning representations in neural networks? A lot, it turns out. Neural network model compression has become a topic of great interest in recent years, and many different techniques have been proposed to address this problem. In general, this is motivated by the idea that smaller models typically lead to better generalization. At the same time, the decision of what to prune and when to prune necessarily forces us to confront our assumptions about how neural networks actually learn to represent patterns in data. In this work we set out to test several long-held hypotheses about neural network learning representations and numerical approaches to pruning. To accomplish this we first reviewed the historical literature and derived a novel algorithm to prune whole neurons (as opposed to the traditional method of pruning weights) from optimally trained networks using a second-order Taylor method. We then set about testing the performance of our algorithm and analyzing the quality of the decisions it made. As a baseline for comparison we used a first-order Taylor method based on the Skeletonization algorithm and an exhaustive brute-force serial pruning algorithm. Our proposed algorithm worked well compared to a first-order method, but not nearly as well as the brute-force method. Our error analysis led us to question the validity of many widely-held assumptions behind pruning algorithms in general and the trade-offs we often make in the interest of reducing computational complexity. We discovered that there is a straightforward way, however expensive, to serially prune 40-70 of the neurons in a trained network with minimal effect on the learning representation and without any re-training.", "State-of-the-art neural networks are getting deeper and wider. While their performance increases with the increasing number of layers and neurons, it is crucial to design an efficient deep architecture in order to reduce computational and memory costs. 
Designing an efficient neural network, however, is labor intensive requiring many experiments, and fine-tunings. In this paper, we introduce network trimming which iteratively optimizes the network by pruning unimportant neurons based on analysis of their outputs on a large dataset. Our algorithm is inspired by an observation that the outputs of a significant portion of neurons in a large network are mostly zero, regardless of what inputs the network received. These zero activation neurons are redundant, and can be removed without affecting the overall accuracy of the network. After pruning the zero activation neurons, we retrain the network using the weights before pruning as initialization. We alternate the pruning and retraining to further reduce zero activations in a network. Our experiments on the LeNet and VGG-16 show that we can achieve high compression ratio of parameters without losing or even achieving higher accuracy than the original network.", "", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "Deep neural networks have achieved remarkable success in a wide range of practical problems. However, due to the inherent large parameter space, deep models are notoriously prone to overfitting and difficult to be deployed in portable devices with limited memory. In this paper, we propose an iterative hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs). An SDNN has much fewer parameters yet can achieve competitive or even better performance than its full CNN counterpart. More concretely, the IHT approach trains an SDNN through following two alternative phases: (I) perform hard thresholding to drop connections with small activations and fine-tune the other significant filters; (II) re-activate the frozen connections and train the entire network to improve its overall discriminative capability. We verify the superiority of SDNNs in terms of efficiency and classification performance on four benchmark object recognition datasets, including CIFAR-10, CIFAR-100, MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be applied for training SDNN based on various CNN architectures such as NIN and AlexNet.", "", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. 
The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application." ] }
1705.07768
2963377008
We develop an approach for unsupervised learning of associations between co-occurring perceptual events using a large graph. We applied this approach to successfully solve the image captcha of China's railroad system. The approach is based on the principle of suspicious coincidence. In this particular problem, a user is presented with a deformed picture of a Chinese phrase and eight low-resolution images. They must quickly select the relevant images in order to purchase their train tickets. This problem presents several challenges: (1) the teaching labels for both the Chinese phrases and the images were not available for supervised learning, (2) no pre-trained deep convolutional neural networks are available for recognizing these Chinese phrases or the presented images, and (3) each captcha must be solved within a few seconds. We collected 2.6 million captchas, with 2.6 million deformed Chinese phrases and over 21 million images. From these data, we constructed an association graph, composed of over 6 million vertices, and linked these vertices based on co-occurrence information and feature similarity between pairs of images. We then trained a deep convolutional neural network to learn a projection of the Chinese phrases onto a 230-dimensional latent space. Using label propagation, we computed the likelihood of each of the eight images conditioned on the latent space projection of the deformed phrase for each captcha. The resulting system solved captchas with 77% accuracy in 2 seconds on average. Our work, in answering this practical challenge, illustrates the power of this class of unsupervised association learning techniques, which may be related to the brain's general strategy for associating language stimuli with visual objects on the principle of suspicious coincidence.
Although our work bears strong resemblance to that of @cite_19 , there are key differences. First, they derive similarity scores for each pair of labels based on term proximity in text-based web data. However, since images present our primary challenge, we instead derive the similarity between pairs of images based on their feature representations and co-occurrence statistics. Second, they use a pre-trained, external classifier to initialize a vector of object scores for each image. We, on the other hand, leverage image-code co-occurrence statistics. Although it is difficult to compare the performance of these models due to differences in the data sets, we believe our results are promising.
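As a rough sketch of how feature similarity and co-occurrence statistics might be blended into one pairwise score (the weighting, the normalization, and the data layout here are all our own assumptions, not the paper's formulation):

```python
import numpy as np

def image_similarity(feat, cooc, i, j, alpha=0.5):
    """Blend cosine similarity of feature vectors with a normalized
    co-occurrence score for images i and j.
    feat: (n, d) feature matrix; cooc: (n, n) co-occurrence counts."""
    cos = feat[i] @ feat[j] / (np.linalg.norm(feat[i]) * np.linalg.norm(feat[j]) + 1e-12)
    co = cooc[i, j] / (cooc[i].sum() + 1e-12)  # share of i's co-occurrences with j
    return alpha * cos + (1.0 - alpha) * co
```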
{ "cite_N": [ "@cite_19" ], "mid": [ "2160676519" ], "abstract": [ "Object recognition and localization are important tasks in computer vision. The focus of this work is the incorporation of contextual information in order to improve object recognition and localization. For instance, it is natural to expect not to see an elephant to appear in the middle of an ocean. We consider a simple approach to encapsulate such common sense knowledge using co-occurrence statistics from web documents. By merely counting the number of times nouns (such as elephants, sharks, oceans, etc.) co-occur in web documents, we obtain a good estimate of expected co-occurrences in visual data. We then cast the problem of combining textual co-occurrence statistics with the predictions of image-based classifiers as an optimization problem. The resulting optimization problem serves as a surrogate for our inference procedure. Albeit the simplicity of the resulting optimization problem, it is effective in improving both recognition and localization accuracy. Concretely, we observe significant improvements in recognition and localization rates for both ImageNet Detection 2012 and Sun 2012 datasets." ] }
1705.07704
2619122421
Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.
Replacing the @math operator by a differentiable approximation based on the @math has been exploited in numerous works. Regularizing the @math operator with a squared @math -norm is less frequent, but has been used to obtain a smoothed multiclass hinge loss @cite_42 or smoothed linear programming relaxations for maximum a-posteriori inference @cite_36 . Our work differs from these in two main aspects. First, we are less interested in the @math operator itself than in its gradient, which we use as a mapping from @math to @math . Second, since we use this mapping in neural networks trained with backpropagation, we study and compute the mapping's Jacobian (the Hessian of the regularized @math operator), in contrast with previous works.
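Schematically, the framework regularizes the max operator over the probability simplex and uses the gradient of the result as the attention mapping; in generic notation (which may differ from the paper's own):

```latex
\max{}_{\Omega}(\mathbf{x}) = \max_{\mathbf{y} \in \Delta^{d}} \langle \mathbf{y}, \mathbf{x} \rangle - \Omega(\mathbf{y}),
\qquad
\nabla \max{}_{\Omega}(\mathbf{x}) \in \Delta^{d}.
% With \Omega(\mathbf{y}) = \sum_i y_i \log y_i (negative entropy), the gradient is the softmax;
% with \Omega(\mathbf{y}) = \tfrac{1}{2}\|\mathbf{y}\|_2^2, it is the sparsemax.
```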
{ "cite_N": [ "@cite_36", "@cite_42" ], "mid": [ "2192675411", "2950080435" ], "abstract": [ "Maximum a-posteriori (MAP) inference is an important task for many applications. Although the standard formulation gives rise to a hard combinatorial optimization problem, several effective approximations have been proposed and studied in recent years. We focus on linear programming (LP) relaxations, which have achieved state-of-the-art performance in many applications. However, optimization of the resulting program is in general challenging due to non-smoothness and complex non-separable constraints. Therefore, in this work we study the benefits of augmenting the objective function of the relaxation with strong convexity. Specifically, we introduce strong convexity by adding a quadratic term to the LP relaxation objective. We provide theoretical guarantees for the resulting programs, bounding the difference between their optimal value and the original optimum. Further, we propose suitable optimization algorithms and analyze their convergence.", "We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings." ] }
1705.07704
2619122421
Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.
A different approach to incorporating structure into attention uses the posterior marginal probabilities from a conditional random field as attention weights @cite_37 . While this approach takes into account structural correlations, the marginal probabilities are generally dense and different from each other. Our proposed mechanisms produce sparse and clustered attention weights, a visible benefit in interpretability. The idea of deriving constrained alternatives to softmax has been independently explored for differentiable easy-first decoding @cite_10 . Finally, sparsity-inducing penalties have been used to obtain convex relaxations of neural networks @cite_38 or to compress models @cite_3 @cite_27 @cite_9 . These works differ from ours in that sparsity is enforced on the network parameters, while our approach can produce sparse and structured outputs from neural attention layers.
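For concreteness, the sparsemax mapping mentioned above equals the Euclidean projection onto the probability simplex, which has a closed-form thresholding solution; a minimal NumPy sketch of that projection (our own, for illustration):

```python
import numpy as np

def sparsemax(z):
    """argmax_{p in simplex} p.z - 0.5*||p||^2, i.e. the Euclidean
    projection of z onto the simplex; yields sparse probability vectors."""
    z_sorted = np.sort(z)[::-1]            # descending
    cssv = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    k = ks[1 + ks * z_sorted > cssv][-1]   # size of the support
    tau = (cssv[k - 1] - 1.0) / k          # threshold
    return np.maximum(z - tau, 0.0)
```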
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_9", "@cite_3", "@cite_27", "@cite_10" ], "mid": [ "2167967601", "2586050494", "2460144244", "1935978687", "2963000224", "2758087204" ], "abstract": [ "Convexity has recently received a lot of attention in the machine learning community, and the lack of convexity has been seen as a major disadvantage of many learning algorithms, such as multi-layer artificial neural networks. We show that training multi-layer neural networks in which the number of hidden units is learned can be viewed as a convex optimization problem. This problem involves an infinite number of variables, but can be solved by incrementally inserting a hidden unit at a time, each time finding a linear classifier that minimizes a weighted sum of errors.", "Attention networks have proven to be an effective approach for embedding categorical inference within a deep neural network. However, for many tasks we may want to model richer structural dependencies without abandoning end-to-end training. In this work, we experiment with incorporating richer structural distributions, encoded using graphical models, within deep networks. We show that these structured attention networks are simple extensions of the basic attention procedure, and that they allow for extending attention beyond the standard soft-selection approach, such as attending to partial segmentations or to subtrees. We experiment with two different classes of structured attention networks: a linear-chain conditional random field and a graph-based parsing model, and describe how these models can be practically implemented as neural network layers. Experiments show that this approach is effective for incorporating structural biases, and structured attention networks outperform baseline attention models on a variety of synthetic and real tasks: tree transduction, neural machine translation, question answering, and natural language inference. We further find that models trained in this way learn interesting unsupervised hidden representations that generalize simple attention.", "In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation enabling their simultaneous parallel execution, using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the networks connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing simultaneously all the aforementioned tasks in order to obtain a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition, and multiple realistic mid-scale classification benchmarks. 
Comparative results demonstrate the potential of our proposed sparse group Lasso penalty in producing extremely compact networks, with a significantly lower number of input features, with a classification accuracy which is equal or only slightly inferior to standard regularization terms.", "Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimize the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90 of parameters, with a drop of accuracy that is less than 1 on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models. Our CPU implementation demonstrates much higher efficiency than the off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.", "High demand for computation resources severely hinders deployment of large-scale Deep Neural Networks (DNN) in resource constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1 × and 3.1 × speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improves the accuracy from 91.25 to 92.60 , which is still higher than that of original ResNet with 32 layers. For AlexNet, SSL reduces the error by 1 .", "" ] }
1705.07359
2951213295
When interacting with their software systems, users may have to deal with problems like crashes, failures, and program instability. Faulty software running in the field is not only the consequence of ineffective in-house verification and validation techniques, but is also due to the complexity and diversity of the interactions between an application and its environment. Many of these interactions can hardly be predicted at testing time, and even when they can be predicted, there are often so many cases to be tested that they cannot all be feasibly addressed before the software is released. This Ph.D. thesis investigates the idea of addressing the faults that cannot be effectively addressed in house directly in the field, exploiting the field itself as a testbed for running the test cases. An enormous number of diverse environments would then be available for testing, giving the possibility to run many test cases in many different situations, promptly revealing the many failures that would otherwise be hard to detect.
Existing studies draw conclusions on characteristics such as the distribution of fault types @cite_14 @cite_0 , the locality of faults @cite_14 , the distribution of faults across source files @cite_3 , the root cause, that is, the development phase in which a fault was introduced and the human error that caused it @cite_15 , the relation between fault types, failure detection, and failure severity @cite_7 , and the evolution of faults during bug fixing @cite_8 . However, none of these studies has focused on the causes of field failures, the reasons why those failures were not discovered at testing time, or the common characteristics of field failures, which is the starting point of our research.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_8", "@cite_3", "@cite_0", "@cite_15" ], "mid": [ "2099593355", "1983639634", "1753527524", "2154816175", "2081000701", "" ], "abstract": [ "The benefits of the analysis of software faults and failures have been widely recognized. However, detailed studies based on empirical data are rare. In this paper, we analyze the fault and failure data from two large, real-world case studies. Specifically, we explore: 1) the localization of faults that lead to individual software failures and 2) the distribution of different types of software faults. Our results show that individual failures are often caused by multiple faults spread throughout the system. This observation is important since it does not support several heuristics and assumptions used in the past. In addition, it clearly indicates that finding and fixing faults that lead to such software failures in large, complex systems are often difficult and challenging tasks despite the advances in software development. Our results also show that requirement faults, coding faults, and data problems are the three most common types of software faults. Furthermore, these results show that contrary to the popular belief, a significant percentage of failures are linked to late life cycle activities. Another important aspect of our work is that we conduct intra- and interproject comparisons, as well as comparisons with the findings from related studies. The consistency of several main trends across software systems in this paper and several related research efforts suggests that these trends are likely to be intrinsic characteristics of software faults and failures rather than project specific.", "Many papers have been published on analysis and prediction of software faults and or failures, but few established the links from software faults (i.e., the root causes) to (potential or observed) failures and addressed multiple attributes. This paper aims at filling this gap by studying types of faults that caused software failures, activities taking place when faults were detected or failures were reported, and the severity of failures. Furthermore, it explores the associations among these attributes and the trends within releases (i.e., pre-release and post-release) and across releases. The results are based on the data extracted from a safety-critical NASA mission, which follows an evolutionary development process. In particular, we analyzed 21 large-scale software components, which together constitute over 8,000 files and millions of lines of code. The main insights include: (1) only a few fault types were responsible for the majority of failures pre-release and post-release, and across releases. Analysis and testing activities detected the majority of failures caused by each fault type. (2) The distributions of fault types differed for pre-release and post-release failures. (3) The percentage of safety-critical failures was small overall, and their relative contribution increased on-orbit. (4) Both post-release failures and safety-critical failures were more heavily associated with coding faults than with any other type of faults. (5) Components that experienced high number of failures in one release were not necessarily among high failure-prone components in the subsequent release. 
(6) Components that experienced more failures pre-release were more likely to fail post-release, overall and for each release.", "A large part of software engineering research suffers from a major problem-there are insufficient data to test software hypotheses, or to estimate parameters in models. To obtain statistically significant results, a large set of programs is needed, each set comprising many programs built to the same specification. We have gained access to such a large body of programs (written in C, C++, Java or Pascal) and in this paper we present the results of an exploratory analysis of around 29,000 C programs written to a common specification. The objectives of this study were to characterise the types of fault that are present in these programs; to characterise how programs are debugged during development; and to assess the effectiveness of diverse programming. The findings are discussed, together with the potential limitations on the realism of the findings.", "A case study is presented using thirteen releases of a large industrial inventory tracking system. Several types of questions are addressed in this study. The first involved examining how faults are distributed over the different files. This included making a distinction between the release during which they were discovered, the lifecycle stage at which they were first detected, and the severity of the fault. The second category of questions we considered involved studying how the size of modules affected their fault density. This included looking at questions like whether or not files with high fault densities at early stages of the lifecycle also had high fault densities during later stages. A third type of question we considered was whether files that contained large numbers of faults during early stages of development, also had large numbers of faults during later stages, and whether faultiness persisted from release to release. Finally, we examined whether newly written files were more fault-prone than ones that were written for earlier releases of the product. The ultimate goal of this study is to help identify characteristics of files that can be used as predictors of fault-proneness, thereby helping organizations determine how best to use their testing resources.", "Lessons from safety-critical anomalies during operation provide important information for constructing safer systems. To assist anomaly analysis, this research develops an integrated Failure Mode and Effect Analysis (FMEA) model to analyze causal scenarios and a Three-Frame Mode model to analyze the working mode inconsistencies of failure cases. The models are used to analyze 180 digital Instrumentation and Control (IC therefore, recovery should be emphasized and planned.", "" ] }
1705.07359
2951213295
When interacting with their software systems, users may have to deal with problems like crashes, failures, and program instability. Faulty software running in the field is not only the consequence of ineffective in-house verification and validation techniques, but is also due to the complexity and diversity of the interactions between an application and its environment. Many of these interactions can hardly be predicted at testing time, and even when they can be predicted, there are often so many cases to be tested that they cannot all be feasibly addressed before the software is released. This Ph.D. thesis investigates the idea of addressing the faults that cannot be effectively addressed in house directly in the field, exploiting the field itself as a testbed for running the test cases. An enormous number of diverse environments would then be available for testing, giving the possibility to run many test cases in many different situations, promptly revealing the many failures that would otherwise be hard to detect.
The problem of running test cases in the field has been preliminarily studied by other researchers. The approach that is closest to the work developed in this thesis is in-vivo testing @cite_11 . In-vivo testing consists of deploying unit test cases that are executed with a user-defined probability while the application is running, to detect failures. However, in-vivo testing does not take into account the specific elements (e.g., resources in the environment) that should be exploited in a field testing strategy to detect the faults that are hard to reveal in house, and does not address the problem of the non-intrusiveness of the testing process, which is assumed to be guaranteed by construction by the tests. Another attempt to move the testing process into the field is the work by @cite_16 . However, they focus on moving acceptance testing to the field, which is a different task from designing a technique for revealing the faults that are hard to reveal in house.
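As a rough illustration of the in-vivo idea (unit tests executed with a user-defined probability while the application runs), consider the following Python sketch; the decorator name and mechanism are our own simplification:

```python
import functools
import random

def in_vivo_test(test_fn, probability=0.01):
    """Wrap a function so that, with the given probability, an associated
    test runs against the live application state after the call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            if random.random() < probability:
                test_fn(result)  # e.g. check invariants on the current state
            return result
        return wrapper
    return decorator
```

The cited technique additionally isolates test execution so that, from the end-user's perspective, the application state is not altered; this sketch omits that isolation.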
{ "cite_N": [ "@cite_16", "@cite_11" ], "mid": [ "2108057276", "2130169574" ], "abstract": [ "Quality assurance (QA) tasks, such as testing, profiling, and performance evaluation, have historically been done in-house on developer-generated workloads and regression suites. Since this approach is inadequate for many systems, tools and processes are being developed to improve software quality by increasing user participation in the QA process. A limitation of these approaches is that they focus on isolated mechanisms, not on the coordination and control policies and tools needed to make the global QA process efficient, effective, and scalable. To address these issues, we have initiated the Skoll project, which is developing and validating novel software QA processes and tools that leverage the extensive computing resources of worldwide user communities in a distributed, continuous manner to significantly and rapidly improve software quality. This paper provides several contributions to the study of distributed continuous QA. First, it illustrates the structure and functionality of a generic around-the-world, around-the-clock QA process and describes several sophisticated tools that support this process. Second, it describes several QA scenarios built using these tools and process. Finally, it presents a feasibility study applying these scenarios to a 1MLOC+ software package called ACE+TAO. While much work remains to be done, the study suggests that the Skoll process and tools effectively manage and control distributed, continuous QA processes. Using Skoll we rapidly identified problems that had taken the ACE+TAO developers substantially longer to find and several of which had previously not been found. Moreover, automatic analysis of QA task results often provided developers information that quickly led them to the root cause of the problems.", "Software products released into the field typically have some number of residual defects that either were not detected or could not have been detected during testing. This may be the result of flaws in the test cases themselves, incorrect assumptions made during the creation of test cases, or the infeasibility of testing the sheer number of possible configurations for a complex system; these defects may also be due to application states that were not considered during lab testing, or corrupted states that could arise due to a security violation. One approach to this problem is to continue to test these applications even after deployment, in hopes of finding any remaining flaws. In this paper, we present a testing methodology we call in vivo testing, in which tests are continuously executed in the deployment environment. We also describe a type of test we call in vivo tests that are specifically designed for use with such an approach: these tests execute within the current state of the program (rather than by creating a clean slate) without affecting or altering that state from the perspective of the end-user. We discuss the approach and the prototype testing framework for Java applications called Invite. We also provide the results of case studies that demonstrate Invite's effectiveness and efficiency." ] }
1705.07359
2951213295
When interacting with their software systems, users may have to deal with problems like crashes, failures, and program instability. Faulty software running in the field is not only the consequence of ineffective in-house verification and validation techniques, but is also due to the complexity and diversity of the interactions between an application and its environment. Many of these interactions can hardly be predicted at testing time, and even when they can be predicted, there are often so many cases to be tested that they cannot all be feasibly addressed before the software is released. This Ph.D. thesis investigates the idea of addressing the faults that cannot be effectively addressed in house directly in the field, exploiting the field itself as a testbed for running the test cases. An enormous number of diverse environments would then be available for testing, giving the possibility to run many test cases in many different situations, promptly revealing the many failures that would otherwise be hard to detect.
Finally, there are analysis techniques that exploit data collected in the field. For instance, residual test coverage monitoring @cite_2 collects coverage data from the field to complement the results of structural testing obtained in-house. Recently, Ohmann et al. @cite_18 proposed a novel approach to limit the amount of instrumentation necessary to collect coverage data, increasing the applicability of residual test coverage monitoring in the field.
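A minimal sketch of the residual-coverage strategy described above, under our own assumptions about probe identifiers: only the statements not covered in-house (the residue) keep their probes in the deployed build, and field executions record which residue elements are eventually exercised.

```python
class ResidueMonitor:
    """Track field coverage of the residue of in-house structural testing."""

    def __init__(self, all_probes, covered_in_house):
        self.residue = set(all_probes) - set(covered_in_house)
        self.field_hits = set()

    def hit(self, probe_id):
        # Called by the sparse instrumentation left in the field build.
        if probe_id in self.residue:
            self.field_hits.add(probe_id)  # residue element exercised in the field
```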
{ "cite_N": [ "@cite_18", "@cite_2" ], "mid": [ "2507618760", "2155891645" ], "abstract": [ "Program coverage is used across many stages of software development. While common during testing, program coverage has also found use outside the test lab, in production software. However, production software has stricter requirements on run-time overheads, and may limit possible program instrumentation. Thus, optimizing the placement of probes to gather program coverage is important. We introduce and study the problem of customized program coverage optimization. We generalize previous work that optimizes for complete coverage instrumentation with a system that adapts optimization to customizable program coverage requirements. Specifically, our system allows a user to specify desired coverage locations and to limit legal instrumentation locations. We prove that the problem of determining optimal coverage probes is NP-hard, and we present a solution based on mixed integer linear programming. Due to the computational complexity of the problem, we also provide two practical approximation approaches. We evaluate the effectiveness of our approximations across a diverse set of benchmarks, and show that our techniques can substantially reduce instrumentation while allowing the user immense freedom in defining coverage requirements. When naive instrumentation is dense or expensive, our optimizations succeed in lowering execution time overheads.", "Structural coverage criteria are often used as an indicator of the thoroughness of testing, but complete satisfaction of a criterion is seldom achieved. When a software product is released with less than 100 coverage, testers are explicitly or implicitly assuming that executions satisfying the remaining test obligations (the residue) are either infeasible or occur so rarely that they have negligible impact on quality. Violation of this assumption indicates shortcomings in the testing process. Monitoring in the deployed environment, even in the beta test phase, is typically limited to error and sanity checks. Monitoring the residue of test coverage in actual use can provide additional useful information, but it is unlikely to be accepted by users unless its performance impact is very small. Experience with a prototype tool for residual test coverage monitoring of Java programs suggests that, at least for statement coverage, the simple strategy of removing all probes except those corresponding to the residue of coverage testing reduces execution overhead to acceptably low levels." ] }
1705.07594
2619538598
Recurrent feedback connections in the mammalian visual system have been hypothesized to play a role in synthesizing input in the theoretical framework of analysis by synthesis. The comparison of internally synthesized representation with that of the input provides a validation mechanism during perceptual inference and learning. Inspired by these ideas, we proposed that the synthesis machinery can compose new, unobserved images by imagination to train the network itself, so as to increase the robustness of the system in novel scenarios. As a proof of concept, we investigated whether images composed by imagination could help an object recognition system to deal with occlusion, which is challenging for the current state-of-the-art deep convolutional neural networks. We fine-tuned a network on images containing objects in various occlusion scenarios, imagined or self-generated through a deep generator network. Trained on imagined occluded scenarios under the object persistence constraint, our network discovered more subtle and localized image features that were neglected by the original network for object classification, obtaining better separability of different object classes in the feature space. This leads to a significant improvement of object recognition under occlusion for our network relative to the original network trained only on un-occluded images. In addition to providing practical benefits in object recognition under occlusion, this work demonstrates that the use of self-generated composition of visual scenes through the synthesis loop, combined with the object persistence constraint, can provide opportunities for neural networks to discover new relevant patterns in the data and become more flexible in dealing with novel situations.
Generally, domain-specific methods deal with occlusion of a certain kind of object by acquiring prior knowledge from people. For pedestrian detection, @cite_25 employs a Boltzmann distribution to learn a prior of local deformation. Occlusion is also a challenging problem in face recognition: @cite_6 compares three different PCA strategies for robust face recognition, including holistic PCA, component-based PCA, and near-holistic PCA. While domain-specific methods work well in certain subfields of computer vision, it is hard to apply these ad hoc solutions to problems in other domains, let alone to transfer a model's abilities from one domain to another.
{ "cite_N": [ "@cite_25", "@cite_6" ], "mid": [ "2148754237", "2545755311" ], "abstract": [ "This paper presents a new statistical model for detecting and tracking deformable objects such as pedestrians, where large shape variations induced by local shape deformation can not be well captured by global methods such as PCA. The proposed model employs a Boltzmann distribution to capture the prior of local deformation, and embeds it into a Markov network which can be learned from data. A mean field variational analysis of this model provides computationally efficient algorithms for computing the likelihood of image observations and facilitate fast model training. Based on that, effective detection and tracking algorithms for deformable objects are proposed and applied to pedestrian detection and tracking. The proposed method has several advantages. Firstly, it captures local deformation well and thus is robust to occlusions and clutter. In addition, it is computationally tractable. Moreover, it divides deformation into local deformation and global deformation, then conquers them by combining bottom-up and top-down methodologies. Extensive experiments demonstrate the effectiveness of the proposed model for deformable objects.", "This paper addresses one of the main challenges of face recognition (FR): facial occlusions. Currently, the human brain is the most robust known FR approach towards partially occluded faces. Nevertheless, it is still not clear if humans recognize faces using a holistic or a component-based strategy, or even a combination of both. In this paper, three different approaches based on principal component analysis (PCA) are analyzed. The first one, a holistic approach, is the well-known eigenface approach. The second one, a component-based method, is a variation of the eigenfeatures approach, and finally, the third one, a near-holistic method, is an extension of the lophoscopic principal component analysis (LPCA). So the main contributions of this paper are: The three different strategies are compared and analyzed for identifying partially occluded faces and furthermore it explores how a priori knowledge about present occlusions can be used to improve the recognition performance." ] }
1705.07594
2619538598
Recurrent feedback connections in the mammalian visual system have been hypothesized to play a role in synthesizing input in the theoretical framework of analysis by synthesis. The comparison of internally synthesized representation with that of the input provides a validation mechanism during perceptual inference and learning. Inspired by these ideas, we proposed that the synthesis machinery can compose new, unobserved images by imagination to train the network itself, so as to increase the robustness of the system in novel scenarios. As a proof of concept, we investigated whether images composed by imagination could help an object recognition system to deal with occlusion, which is challenging for the current state-of-the-art deep convolutional neural networks. We fine-tuned a network on images containing objects in various occlusion scenarios, imagined or self-generated through a deep generator network. Trained on imagined occluded scenarios under the object persistence constraint, our network discovered more subtle and localized image features that were neglected by the original network for object classification, obtaining better separability of different object classes in the feature space. This leads to a significant improvement of object recognition under occlusion for our network relative to the original network trained only on un-occluded images. In addition to providing practical benefits in object recognition under occlusion, this work demonstrates that the use of self-generated composition of visual scenes through the synthesis loop, combined with the object persistence constraint, can provide opportunities for neural networks to discover new relevant patterns in the data and become more flexible in dealing with novel situations.
As occlusion normally does not block every distinct component of an object, part-based models try to exploit this property by treating objects as compositions of several parts. Part-based models usually require prior knowledge of the set of parts for each class. The deformable part model @cite_8 is a successful approach in this area; inspired by it, @cite_12 proposes a hierarchical DPM to localize occluded faces. Although part-based models succeed in certain tasks, they have some drawbacks. Besides possibly requiring data sets with hand-annotated parts to handle occlusion, a considerable number of objects are not easily decomposable into distinct parts, such as flexible animals like snakes, or objects without parts, like potatoes. Thus, part-based models are not suitable for all kinds of objects and lack scalability.
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2005264304", "2168356304" ], "abstract": [ "The presence of occluders significantly impacts performance of systems for object recognition. However, occlusion is typically treated as an unstructured source of noise and explicit models for occluders have lagged behind those for object appearance and shape. In this paper we describe a hierarchical deformable part model for face detection and keypoint localization that explicitly models occlusions of parts. The proposed model structure makes it possible to augment positive training data with large numbers of synthetically occluded instances. This allows us to easily incorporate the statistics of occlusion patterns in a discriminatively trained model. We test the model on several benchmarks for keypoint localization including challenging sets featuring significant occlusion. We find that the addition of an explicit model of occlusion yields a system that outperforms existing approaches in keypoint localization accuracy.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
1705.07461
2618527520
Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyperparameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach -- the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We tested LS-DQN on five Atari games and demonstrate significant improvement over vanilla DQN and Double-DQN. We also investigated the reasons for the superior performance of our method. Interestingly, we found that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.
The general idea of applying regularization for feature selection, and to avoid over-fitting, is a common theme in machine learning. However, applying it to RL algorithms is challenging due to the fact that these algorithms are based on finding a fixed point rather than optimizing a loss function. Value-based DRL approaches do not use regularization layers (e.g. pooling, dropout and batch normalization), which are popular in other deep learning methods. The DQN, for example, has a relatively shallow architecture (three convolutional layers, followed by two fully connected layers) without any regularization layers. Recently, regularization was introduced in problems that combine value-based RL with other learning objectives. For example, @cite_7 combine RL with supervised learning from expert demonstration, and introduce regularization to avoid over-fitting the expert data; and @cite_1 introduces regularization to avoid catastrophic forgetting in transfer learning. SRL methods, on the other hand, perform well with regularization and have been shown to converge @cite_3 .
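To make the last-layer least-squares idea concrete, here is a schematic NumPy version of a regularized batch solve in which the regularizer shrinks the solution toward the network's current last-layer weights; this is our reading of the Bayesian prior term, and the exact targets and scaling in the paper may differ:

```python
import numpy as np

def regularized_ls_last_layer(phi, y, w_net, lam=1.0):
    """Solve min_w ||phi @ w - y||^2 + lam * ||w - w_net||^2.
    phi: (n, d) last-hidden-layer features; y: (n,) regression targets
    (e.g. Q-learning targets); w_net: (d,) current last-layer weights."""
    d = phi.shape[1]
    A = phi.T @ phi + lam * np.eye(d)   # normal equations with ridge term
    b = phi.T @ y + lam * w_net         # prior pulls toward current weights
    return np.linalg.solve(A, b)
```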
{ "cite_N": [ "@cite_3", "@cite_1", "@cite_7" ], "mid": [ "2158738729", "2560647685", "2788862220" ], "abstract": [ "In this paper we consider approximate policy-iteration-based reinforcement learning algorithms. In order to implement a flexible function approximation scheme we propose the use of non-parametric methods with regularization, providing a convenient way to control the complexity of the function approximator. We propose two novel regularized policy iteration algorithms by adding L2-regularization to two widely-used policy evaluation methods: Bellman residual minimization (BRM) and least-squares temporal difference learning (LSTD). We derive efficient implementation for our algorithms when the approximate value-functions belong to a reproducing kernel Hilbert space. We also provide finite-sample performance bounds for our algorithms and show that they are able to achieve optimal rates of convergence under the studied conditions.", "Abstract The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.", "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN." ] }
1705.07461
2618527520
Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyperparameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach -- the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We test LS-DQN on five Atari games and demonstrate significant improvement over vanilla DQN and Double-DQN. We also investigate the reasons for the superior performance of our method. Interestingly, we find that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.
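A minimal sketch of the regularized batch least-squares update described above may help. It solves a ridge regression whose penalty pulls the new last-layer weights toward the current ones, which is one plausible reading of the Bayesian regularization term mentioned in the abstract; the feature matrix, targets, and the coefficient lam are placeholder assumptions, not the paper's exact formulation.

    import numpy as np

    def ls_last_layer_update(phi, y, w_prior, lam=1.0):
        """Regularized batch least squares for a network's last layer.

        phi     : (N, d) features from the last hidden layer
        y       : (N,)   regression targets (e.g., TD targets)
        w_prior : (d,)   current last-layer weights, used as a prior
        """
        d = phi.shape[1]
        # Solves (phi^T phi + lam*I) w = phi^T y + lam * w_prior,
        # i.e., ridge regression centered on the prior weights.
        A = phi.T @ phi + lam * np.eye(d)
        b = phi.T @ y + lam * w_prior
        return np.linalg.solve(A, b)

Periodically overwriting the last layer with such a solution, while keeping the learned features fixed, matches the hybrid scheme the abstract describes.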
Moreover, it was recently observed that flat minima in practical deep learning model classes can be turned into sharp minima via re-parameterization without changing the generalization gap, so the flatness hypothesis requires further investigation @cite_4 . In addition, Hoffer et al. showed that large-batch training can generalize as well as small-batch training by adapting the number of iterations @cite_2 . Thus, we strongly believe that our findings on combining large and small batches in DRL are in agreement with recent results of other deep learning research groups.
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2950928354", "2617242334" ], "abstract": [ "Despite their overwhelming capacity to overfit, deep learning architectures tend to generalize relatively well to unseen data, allowing them to be deployed in practice. However, explaining why this is the case is still an open area of research. One standing hypothesis that is gaining popularity, e.g. Hochreiter & Schmidhuber (1997); (2017), is that the flatness of minima of the loss function found by stochastic gradient based methods results in good generalization. This paper argues that most notions of flatness are problematic for deep models and can not be directly applied to explain generalization. Specifically, when focusing on deep networks with rectifier units, we can exploit the particular geometry of parameter space induced by the inherent symmetries that these architectures exhibit to build equivalent models corresponding to arbitrarily sharper minima. Furthermore, if we allow to reparametrize a function, the geometry of its parameters can change drastically without affecting its generalization properties.", "Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance - known as the \"generalization gap\" phenomena. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a \"random walk on random landscape\" statistical model which is known to exhibit similar \"ultra-slow\" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the \"generalization gap\" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named \"Ghost Batch Normalization\" which enables significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization." ] }
1705.07383
2964157370
To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.
In 2015, Long et al. proposed a fully convolutional network (FCN) model @cite_21 with an encoder-decoder structure. In this work, a skip architecture was designed that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer. The skip architecture is able to take advantage of the full feature spectrum and produced accurate segmentation results (a minimal sketch of such a skip connection follows the reference entries below). As a further development, @cite_5 proposed a novel FCN structure that removes the limitation of a fixed-size receptive field. In the decoding step, it applies unpooling and transposed convolution, allowing the network to learn the upsampling weights. With a network structure similar to these two models, @cite_8 presented another architecture, called SegNet, which comprises unpooling as well as the skip architecture. In addition, dropout @cite_27 and batch normalization @cite_28 can further improve segmentation accuracy at test time @cite_8 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_28", "@cite_21", "@cite_27", "@cite_5" ], "mid": [ "2419448466", "2963881378", "2949117887", "1903029394", "2095705004", "1745334888" ], "abstract": [ "The ability to perform pixel-wise semantic segmentation in real-time is of paramount importance in practical mobile applications. Recent deep neural networks aimed at this task have the disadvantage of requiring a large number of floating point operations and have long run-times that hinder their usability. In this paper, we propose a novel deep neural network architecture named ENet (efficient neural network), created specifically for tasks requiring low latency operation. ENet is up to 18x faster, requires 75x less FLOPs, has 79x less parameters, and provides similar or better accuracy to existing models. We have tested it on CamVid, Cityscapes and SUN datasets and report on comparisons with existing state-of-the-art methods, and the trade-offs between accuracy and processing time of a network. We present performance measurements of the proposed architecture on embedded systems and suggest possible software improvements that could make ENet even faster.", "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http: mi.eng.cam.ac.uk projects segnet .", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. 
We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net.
The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks; by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network." ] }
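As a rough illustration of the skip architecture discussed above, the sketch below upsamples a deep, coarse score map and sums it with scores from a shallower, finer feature map, in the spirit of the FCN skip connections of @cite_21 . The module name, channel arguments, and bilinear upsampling are illustrative assumptions rather than the exact FCN configuration.

    import torch.nn as nn
    import torch.nn.functional as F

    class SkipFusion(nn.Module):
        """Fuse a deep, coarse score map with a shallow, fine one."""
        def __init__(self, deep_ch, shallow_ch, n_classes):
            super().__init__()
            self.score_deep = nn.Conv2d(deep_ch, n_classes, 1)
            self.score_shallow = nn.Conv2d(shallow_ch, n_classes, 1)

        def forward(self, deep_feat, shallow_feat):
            coarse = self.score_deep(deep_feat)      # semantic, low-res
            fine = self.score_shallow(shallow_feat)  # appearance, high-res
            # Upsample coarse semantic scores to the finer resolution
            # and sum with the appearance-rich shallow scores.
            coarse_up = F.interpolate(coarse, size=fine.shape[2:],
                                      mode="bilinear", align_corners=False)
            return coarse_up + fine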
1705.07383
2964157370
To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.
Meanwhile, another approach for contextual reasoning, called dilated convolution, was proposed. As shown in Fig. , dilated convolution expands the receptive field on the input image exponentially without losing resolution in the result (a small numerical sketch follows the reference entries below). Based on this architecture, Chen et al. proposed the DeepLab system, which conducts contextual reasoning on images while keeping the spatial information @cite_26 . Yu and Koltun further applied dilated convolution in a novel FCN architecture with no pooling layers @cite_12 . In our work, we design the core network with both pooling and dilated convolution. The fundamental structure of the proposed DFCN-DCRF is similar to the DeepLab-LargeFOV architecture @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_12" ], "mid": [ "2412782625", "2286929393" ], "abstract": [ "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy." ] }
1705.07383
2964157370
To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.
Recently, some CNN-based semantic segmentation algorithms combine the FCN with conditional random fields (CRFs). A CRF can model the contextual relationships between different pixels so as to maximize their label agreement. Krähenbühl and Koltun presented an efficient inference algorithm for fully-connected CRFs with Gaussian edge potentials @cite_23 . This inference method allows a fully-connected CRF, with pairwise connections over all pairs of pixels, to be inferred in reasonable time (a simplified mean-field update is sketched after the reference entries below). It has been shown that the poor boundary accuracy in the output of an FCN can be addressed by combining the responses in the last layer of the CNN with a fully-connected CRF model @cite_26 @cite_12 @cite_11 . In particular, Zheng et al. proposed a novel architecture in which the mean-field approximation is modeled as a recurrent neural network and integrated as a part of the deep neural network @cite_17 .
{ "cite_N": [ "@cite_26", "@cite_11", "@cite_23", "@cite_12", "@cite_17" ], "mid": [ "2412782625", "2296478878", "2161236525", "2286929393", "2124592697" ], "abstract": [ "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "We propose an approach for exploiting contextual information in semantic image segmentation, and particularly investigate the use of patch-patch context and patch-background context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied in order to avoid repeated expensive CRF inference during the course of back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform comprehensive evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets.", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. 
Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark." ] }
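The following is a deliberately naive sketch of one mean-field update for a fully-connected CRF with a single Gaussian pairwise kernel over pixel positions, written with an explicit O(N^2) kernel matrix instead of the efficient high-dimensional filtering of @cite_23 . The kernel bandwidth, Potts weight, and variable names are illustrative assumptions, so this conveys only the structure of the update.

    import numpy as np

    def mean_field_step(q, pos, unary, w=1.0, sigma=1.0):
        """One mean-field update of a dense CRF with a Gaussian kernel.

        q     : (N, L) current label marginals per pixel
        pos   : (N, 2) pixel coordinates (the kernel's feature space)
        unary : (N, L) unary potentials (negative log scores)
        """
        d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(k, 0.0)          # no self-interaction
        msg = k @ q                       # message passing
        # Potts compatibility: disagreeing labels are penalized.
        logits = -unary - w * (msg.sum(1, keepdims=True) - msg)
        p = np.exp(logits - logits.max(1, keepdims=True))
        return p / p.sum(1, keepdims=True)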
1705.07383
2964157370
To improve segmentation performance, a novel neural network architecture (termed DFCN-DCRF) is proposed, which combines an RGB-D fully convolutional neural network (DFCN) with a depth-sensitive fully-connected conditional random field (DCRF). First, a DFCN architecture which fuses depth information into the early layers and applies dilated convolution for later contextual reasoning is designed. Then, a depth-sensitive fully-connected conditional random field (DCRF) is proposed and combined with the previous DFCN to refine the preliminary result. Comparative experiments show that the proposed DFCN-DCRF achieves competitive performance compared with state-of-the-art methods.
Based on several labeled RGB-D image datasets @cite_4 @cite_20 @cite_3 @cite_2 , many studies have tried to incorporate depth information to obtain better performance. @cite_21 shows that simply stacking depth with RGB as a 4-channel input does not improve performance significantly. Gupta et al. @cite_25 presented a different representation of depth information, referred to as HHA, which comprises the horizontal disparity, the height above the ground, and the angle between the local surface normal and the gravity direction; it achieved good results @cite_21 @cite_25 . However, @cite_14 argued that the HHA representation does not contain more information than the raw depth itself, while requiring a high computational cost. In their work, a fusion-based CNN architecture was presented. The network consists of two encoder branches, a depth branch and an RGB branch, whose feature representations are fused into the master branch. Of the two fusion strategies considered, sparse fusion and dense fusion, the sparse one was shown to be better. Therefore, in our work, we fuse the RGB and depth feature representations in a sparse manner from Conv1 to Conv4 (a minimal two-branch sketch follows the reference entries below).
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_21", "@cite_3", "@cite_2", "@cite_25", "@cite_20" ], "mid": [ "2587989515", "125693051", "1903029394", "1985238052", "1923184257", "1565402342", "2098883970" ], "abstract": [ "In this paper we address the problem of semantic labeling of indoor scenes on RGB-D data. With the availability of RGB-D cameras, it is expected that additional depth measurement will improve the accuracy. Here we investigate a solution how to incorporate complementary depth information into a semantic segmentation framework by making use of convolutional neural networks (CNNs). Recently encoder-decoder type fully convolutional CNN architectures have achieved a great success in the field of semantic segmentation. Motivated by this observation we propose an encoder-decoder type network, where the encoder part is composed of two branches of networks that simultaneously extract features from RGB and depth images and fuse depth features into the RGB feature maps as the network goes deeper. Comprehensive experimental evaluations demonstrate that the proposed fusion-based architecture achieves competitive results with the state-of-the-art methods on the challenging SUN RGB-D benchmark obtaining 76.27 global accuracy, 48.30 average class accuracy and 37.29 average intersection-over-union score.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. 
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Existing scene understanding datasets contain only a limited set of views of a place, and they lack representations of complete 3D spaces. In this paper, we introduce SUN3D, a large-scale RGB-D video database with camera pose and object labels, capturing the full 3D extent of many places. The tasks that go into constructing such a dataset are difficult in isolation -- hand-labeling videos is painstaking, and structure from motion (SfM) is unreliable for large spaces. But if we combine them together, we make the dataset construction task much easier. First, we introduce an intuitive labeling tool that uses a partial reconstruction to propagate labels from one frame to another. Then we use the object labels to fix errors in the reconstruction. For this, we introduce a generalization of bundle adjustment that incorporates object-to-object correspondences. This algorithm works by constraining points for the same object from different frames to lie inside a fixed-size bounding box, parameterized by its rotation and translation. The SUN3D database, the source code for the generalized bundle adjustment, and the web-based 3D annotation tool are all available at http://sun3d.cs.princeton.edu.", "Although RGB-D sensors have enabled major break-throughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite for the goal of advancing the state-of-the-arts in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.", "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features.
Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "Recent proliferation of a cheap but quality depth sensor, the Microsoft Kinect, has brought the need for a challenging category-level 3D object detection dataset to the fore. We review current 3D datasets and find them lacking in variation of scenes, categories, instances, and viewpoints. Here we present our dataset of color and depth image pairs, gathered in real domestic and office environments. It currently includes over 50 classes, with more images added continuously by a crowd-sourced collection effort. We establish baseline performance in a PASCAL VOC-style detection task, and suggest two ways that inferred world size of the object may be used to improve detection. The dataset and annotations can be downloaded at http://www.kinectdata.com." ] }
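As a rough illustration of the sparse fusion strategy discussed above, the sketch below adds depth-branch features into the RGB branch once per encoder stage (rather than after every convolution, as dense fusion would); the stage definition, channel widths, and element-wise addition as the fusion operator are illustrative assumptions rather than the exact architecture of @cite_14 .

    import torch.nn as nn

    def stage(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class SparseFusionEncoder(nn.Module):
        """Two-branch RGB-D encoder with per-stage (sparse) fusion."""
        def __init__(self, widths=(64, 128, 256, 512)):
            super().__init__()
            rgb_ch = [3] + list(widths)    # RGB input has 3 channels
            dep_ch = [1] + list(widths)    # raw depth has 1 channel
            self.rgb = nn.ModuleList(stage(rgb_ch[i], rgb_ch[i + 1])
                                     for i in range(4))
            self.dep = nn.ModuleList(stage(dep_ch[i], dep_ch[i + 1])
                                     for i in range(4))

        def forward(self, rgb, depth):
            for r, d in zip(self.rgb, self.dep):
                depth = d(depth)
                rgb = r(rgb) + depth       # fuse depth into RGB branch
            return rgb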
1705.07640
2950050397
Tracking the full skeletal pose of the hands and fingers is a challenging problem that has a plethora of applications for user interaction. Existing techniques either require wearable hardware, add restrictions to user pose, or require significant computation resources. This research explores a new approach to tracking hands, or any articulated model, by using an augmented rigid body simulation. This allows us to phrase 3D object tracking as a linear complementarity problem with a well-defined solution. Based on a depth sensor's samples, the system generates constraints that limit motion orthogonal to the rigid body model's surface. These constraints, along with prior motion, collision contact constraints, and joint mechanics, are resolved with a projected Gauss-Seidel solver. Due to camera noise properties and attachment errors, the numerous surface constraints are impulse capped to avoid overpowering mechanical constraints. To improve tracking accuracy, multiple simulations are spawned at each frame and fed a variety of heuristics, constraints and poses. A 3D error metric selects the best-fit simulation, helping the system handle challenging hand motions. Such an approach enables real-time, robust, and accurate 3D skeletal tracking of a user's hand on a variety of depth cameras, while only utilizing a single x86 CPU core for processing.
Hand tracking began with the VPL dataglove @cite_14 . More recently, a classifier was trained to recognize the pose of a specifically colored glove as seen by a camera - a much less expensive setup @cite_38 . Microsoft Digits avoids using special markers on the fingers themselves, instead placing a camera on the wrist @cite_0 . To maximize consumer convenience, the eventual goal is to be able to track the hand without any apparatus @cite_17 .
{ "cite_N": [ "@cite_0", "@cite_38", "@cite_14", "@cite_17" ], "mid": [ "2099800354", "2114663654", "2113123816", "2137940226" ], "abstract": [ "Digits is a wrist-worn sensor that recovers the full 3D pose of the user's hand. This enables a variety of freehand interactions on the move. The system targets mobile settings, and is specifically designed to be low-power and easily reproducible using only off-the-shelf hardware. The electronics are self-contained on the user's wrist, but optically image the entirety of the user's hand. This data is processed using a new pipeline that robustly samples key parts of the hand, such as the tips and lower regions of each finger. These sparse samples are fed into new kinematic models that leverage the biomechanical constraints of the hand to recover the 3D pose of the user's hand. The proposed system works without the need for full instrumentation of the hand (for example using data gloves), additional sensors in the environment, or depth cameras which are currently prohibitive for mobile scenarios due to power and form-factor considerations. We demonstrate the utility of Digits for a variety of application scenarios, including 3D spatial interaction with mobile devices, eyes-free interaction on-the-move, and gaming. We conclude with a quantitative and qualitative evaluation of our system, and discussion of strengths, limitations and future work.", "Articulated hand-tracking systems have been widely used in virtual reality but are rarely deployed in consumer applications due to their price and complexity. In this paper, we propose an easy-to-use and inexpensive system that facilitates 3-D articulated user-input using the hands. Our approach uses a single camera to track a hand wearing an ordinary cloth glove that is imprinted with a custom pattern. The pattern is designed to simplify the pose estimation problem, allowing us to employ a nearest-neighbor approach to track hands at interactive rates. We describe several proof-of-concept applications enabled by our system that we hope will provide a foundation for new interactions in modeling, animation control and augmented reality.", "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable. Applications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.", "Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. 
This technology, however, has several drawbacks including that it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two types of research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The second is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review on the latter research direction, which is a very challenging problem in the context of HCI." ] }
1705.07640
2950050397
Tracking the full skeletal pose of the hands and fingers is a challenging problem that has a plethora of applications for user interaction. Existing techniques either require wearable hardware, add restrictions to user pose, or require significant computation resources. This research explores a new approach to tracking hands, or any articulated model, by using an augmented rigid body simulation. This allows us to phrase 3D object tracking as a linear complementarity problem with a well-defined solution. Based on a depth sensor's samples, the system generates constraints that limit motion orthogonal to the rigid body model's surface. These constraints, along with prior motion, collision contact constraints, and joint mechanics, are resolved with a projected Gauss-Seidel solver. Due to camera noise properties and attachment errors, the numerous surface constraints are impulse capped to avoid overpowering mechanical constraints. To improve tracking accuracy, multiple simulations are spawned at each frame and fed a variety of heuristics, constraints and poses. A 3D error metric selects the best-fit simulation, helping the system handle challenging hand motions. Such an approach enables real-time, robust, and accurate 3D skeletal tracking of a user's hand on a variety of depth cameras, while only utilizing a single x86 CPU core for processing.
There are many works that focus on gesture recognition @cite_40 . We instead focus on tracking, noting that gesture recognition is then easily achieved by comparing the relevant joint orientations against a reference pose or animation (a toy version of this comparison is sketched after the reference entries below). Instead of simply providing collision geometry to a physical environment @cite_41 @cite_2 , we provide knowledge of the specific hand and finger pose, with application to gesture recognition.
{ "cite_N": [ "@cite_41", "@cite_40", "@cite_2" ], "mid": [ "2113696470", "", "2109618101" ], "abstract": [ "Instrumented with a single depth camera, a stereoscopic projector, and a curved screen, MirageTable is an interactive system designed to merge real and virtual worlds into a single spatially registered experience on top of a table. Our depth camera tracks the user's eyes and performs a real-time capture of both the shape and the appearance of any object placed in front of the camera (including user's body and hands). This real-time capture enables perspective stereoscopic 3D visualizations to a single user that account for deformations caused by physical objects on the table. In addition, the user can interact with virtual objects through physically-realistic freehand actions without any gloves, trackers, or instruments. We illustrate these unique capabilities through three application examples: virtual 3D model creation, interactive gaming with real and virtual objects, and a 3D teleconferencing experience that not only presents a 3D view of a remote person, but also a seamless 3D shared task space. We also evaluated the user's perception of projected 3D objects in our system, which confirmed that the users can correctly perceive such objects even when they are projected over different background colors and geometries (e.g., gaps, drops).", "", "HoloDesk is an interactive system combining an optical see through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with an spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants." ] }
1705.07640
2950050397
Tracking the full skeletal pose of the hands and fingers is a challenging problem that has a plethora of applications for user interaction. Existing techniques either require wearable hardware, add restrictions to user pose, or require significant computation resources. This research explores a new approach to tracking hands, or any articulated model, by using an augmented rigid body simulation. This allows us to phrase 3D object tracking as a linear complementarity problem with a well-defined solution. Based on a depth sensor's samples, the system generates constraints that limit motion orthogonal to the rigid body model's surface. These constraints, along with prior motion, collision contact constraints, and joint mechanics, are resolved with a projected Gauss-Seidel solver. Due to camera noise properties and attachment errors, the numerous surface constraints are impulse capped to avoid overpowering mechanical constraints. To improve tracking accuracy, multiple simulations are spawned at each frame and fed a variety of heuristics, constraints and poses. A 3D error metric selects the best-fit simulation, helping the system handle challenging hand motions. Such an approach enables real-time, robust, and accurate 3D skeletal tracking of a user's hand on a variety of depth cameras, while only utilizing a single x86 CPU core for processing.
Physically motivated approaches have been applied to the reverse problem of generating realistic animations from a known user intent @cite_10 . For tracking, previous work using a constrained, mechanical model of the hand showed an improved ability to track by limiting the degrees of freedom @cite_25 . Rather than limiting ourselves to a set of valid or invalid poses, we use range limits on joints, along with collision detection between bodies, to avoid implausible hand states. Allowing additional degrees of freedom enables more subtle and complete motions of the hand @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_25" ], "mid": [ "2079512211", "2157139838", "1815774045" ], "abstract": [ "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.", "This paper introduces an optimization-based approach to synthesizing hand manipulations from a starting grasping pose. We describe an automatic method that takes as input an initial grasping pose and partial object trajectory, and produces as output physically plausible hand animation that effects the desired manipulation. In response to different dynamic situations during manipulation, our algorithm can generate a range of possible hand manipulations including changes in joint configurations, changes in contact points, and changes in the grasping force. Formulating hand manipulation as an optimization problem is key to our algorithm's ability to generate a large repertoire of hand motions from limited user input. We introduce an objective function that accentuates the detailed hand motion and contacts adjustment. Furthermore, we describe an optimization method that solves for hand motion and contacts efficiently while taking into account long-term planning of contact forces. Our algorithm does not require any tuning of parameters, nor does it require any prescribed hand motion sequences.", "Hand motion capture is one of the most important parts of gesture interfaces. Many current approaches to this task generally involve a formidable nonlinear optimization problem in a large search space. Motion capture can be achieved more cost-efficiently when considering the motion constraints of a hand. Although some constraints can be represented as equalities or inequalities, there exist many constraints which cannot be explicitly represented. In this paper, we propose a learning approach to model the hand configuration space directly. The redundancy of the configuration space can be eliminated by finding a lower-dimensional subspace of the original space. Finger motion is modeled in this subspace based on the linear behavior observed in the real motion data collected by a CyberGlove. Employing the constrained motion model, we are able to efficiently capture finger motion from video inputs. Several experiments show that our proposed model is helpful for capturing articulated motion." ] }
1705.07640
2950050397
Tracking the full skeletal pose of the hands and fingers is a challenging problem that has a plethora of applications for user interaction. Existing techniques either require wearable hardware, add restrictions to user pose, or require significant computation resources. This research explores a new approach to tracking hands, or any articulated model, by using an augmented rigid body simulation. This allows us to phrase 3D object tracking as a linear complementarity problem with a well-defined solution. Based on a depth sensor's samples, the system generates constraints that limit motion orthogonal to the rigid body model's surface. These constraints, along with prior motion, collision contact constraints, and joint mechanics, are resolved with a projected Gauss-Seidel solver. Due to camera noise properties and attachment errors, the numerous surface constraints are impulse capped to avoid overpowering mechanical constraints. To improve tracking accuracy, multiple simulations are spawned at each frame and fed a variety of heuristics, constraints and poses. A 3D error metric selects the best-fit simulation, helping the system handle challenging hand motions. Such an approach enables real-time, robust, and accurate 3D skeletal tracking of a user's hand on a variety of depth cameras, while only utilizing a single x86 CPU core for processing.
By leveraging depth cameras and recursive state tracking, sequential Monte Carlo methods can search a large hand-pose state space for valid solutions @cite_22 . More recently, a particle swarm approach has demonstrated the ability to fully track a temporally-coherent 3D skeletal hand model @cite_12 . Many sample configurations are rendered to depth buffers, which are compared to the sensor's depth data, enabling iteration toward an error-minimizing pose (a generic version of this loop is sketched after the reference entries below). In contrast to these approaches, which are very compute intensive, our approach comfortably runs over 60 Hz on a single core of an x86 processor. At each frame we move directly toward the best local fit (the pose of minimum error) using proven techniques for rigid body simulation, rather than rendering-based trial and error. Robustness and recovery from loss of tracking are a significant challenge for any markerless system, given the large, high-dimensional space of hand poses. To increase the likelihood that the system predicts the correct pose and avoids loss of tracking, we need only spawn a few different simulations rather than test hundreds of candidate poses. Furthermore, the system is able to incorporate information, when available, such as feature identification or known pose information acquired through other methods.
{ "cite_N": [ "@cite_22", "@cite_12" ], "mid": [ "2168392347", "2100642335" ], "abstract": [ "Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted", "We present a novel solution to the problem of recovering and tracking the 3D position, orientation and full articulation of a human hand from markerless visual observations obtained by a Kinect sensor. We treat this as an optimization problem, seeking for the hand model parameters that minimize the discrepancy between the appearance and 3D structure of hypothesized instances of a hand model and actual hand observations. This optimization problem is effectively solved using a variant of Particle Swarm Optimization (PSO). The proposed method does not require special markers and or a complex image acquisition setup. Being model based, it provides continuous solutions to the problem of tracking hand articulations. Extensive experiments with a prototype GPU-based implementation of the proposed method demonstrate that accurate and robust 3D tracking of hand articulations can be achieved in near real-time (15Hz)." ] }
1705.07637
2618140352
This paper presents a motion planner for systems subject to kinematic and dynamic constraints. The former appear when kinematic loops are present in the system, such as in parallel manipulators, in robots that cooperate to achieve a given task, or in situations involving contacts with the environment. The latter are necessary to obtain realistic trajectories, taking into account the forces acting on the system. The kinematic constraints make the state space become an implicitly-defined manifold, which complicates the application of common motion planning techniques. To address this issue, the planner constructs an atlas of the state space manifold incrementally, and uses this atlas both to generate random states and to dynamically simulate the steering of the system towards such states. The resulting tools are then exploited to construct a rapidly-exploring random tree (RRT) over the state space. To the best of our knowledge, this is the first randomized kinodynamic planner for implicitly-defined state spaces. The test cases presented in this paper validate the approach in significantly-complex systems.
From constrained multibody dynamics it is well known that, when simulating a system's motion, it is advantageous to deal directly with the DAE of the system, rather than converting it into its ODE form @cite_18 . Several techniques have been used to this end @cite_53 @cite_48 . In the popular Baumgarte method the drift is alleviated with control techniques @cite_2 , but the control parameters are problem-dependent and there is no general method to tune them. Another way to reduce the drift is to use violation suppression techniques @cite_33 @cite_7 , but they do not guarantee a drift-free integration. A better alternative is offered by methods relying on local parameterizations @cite_36 , since they cancel the drift to machine accuracy (a toy drift-and-projection example follows this record). To the best of our knowledge, this approach has never been applied in the context of kinodynamic planning. However, it nicely complements an existing planning method for implicitly-defined configuration spaces @cite_11 @cite_52 , which also relies on local parameterizations. The planner introduced in this paper can be seen as an extension of the latter method to also deal with dynamics, or an extension of @cite_37 to include kinematic constraints.
{ "cite_N": [ "@cite_18", "@cite_37", "@cite_33", "@cite_7", "@cite_36", "@cite_48", "@cite_53", "@cite_52", "@cite_2", "@cite_11" ], "mid": [ "", "2000359213", "1993995584", "194097108", "2024521997", "2039847020", "1977512600", "2103776982", "1975537977", "2133631661" ], "abstract": [ "", "This paper presents the first randomized approach to kinodynamic planning (also known as trajectory planning or trajectory design). The task is to determine control inputs to drive a robot from an ini ial configuration and velocity to a goal configuration and velocity while obeying physically based dynamical models and avoiding obstacles in the robot’s environment. The authors consider generic systems that express the nonlinear dynamics of a robot in terms of the robot’s high-dimensional configuration space. Kinodynamic planning is treated as a motion-planning problem in a higher dimensional state space that has both first-order differential constraints and obstacle-based global constraints. The state space serves the same role as the configuration space for basic path planning; however, standard randomized path-planning techniques do not directly apply to planning trajectories in the state space. The authors have developed a randomized planning approach that is particularly tailored to trajectory plannin...", "By means of the Udwadia-Kalaba approach we propose an explicit equation of constrained motion developed to simulate constrained dynamical systems without error accumulation due to constraint drift. The basic idea is to embed a small virtual force and a small virtual impulse to the equation of motion, in order to avoid the drift typically experienced in constrained multibody simulations. The embedded correction terms are selected to minimally alter the dynamics in an acceleration and kinetic energy norm sense. The formulation allows one to use a standard ODE solver, avoiding the need for iterative constraint stabilization. The equation is based on the pseudoinverse of a constraint matrix such that it can be used under redundant constraints and kinematic singularities. The proposed method takes into account the finite word-length of the computational environment, and also accommodates possibly inconsistent initial conditions.", "Multibody systems are often modeled as constrained systems, and theconstraint equations are involved in the dynamics formulations. To makethe arising governing equations more tractable, the constraint equationsare differentiated with respect to time, and this results in unstablenumerical solutions which may violate the lower-order constraintequations. In this paper we develop a methodology for numerically exactelimination of the constraint violations, based on appropriatecorrections of the state variables (after each integration step) withoutany modification in the motion equations. While the elimination ofviolation of position constraints may require few iterations, theviolation of velocity constraints is removed in one step. The totalenergy of the system is sometimes treated as another measure of theintegration process inaccuracy. An improved scheme for one-stepelimination of the energy constraint violation is proposed as well. Theconclusion of this paper is, however, that the energy conservation is ofminor importance as concerns the improvement of accuracy of numericalsimulations. Some test calculations are reported.", "ABSTRACT ABSTRACT A new class of methods for solving the equations of motion of constrained mechanical system dynamics is presented. 
The tangent space local parametrization is used to form an index one system of mixed differential-algebraic equations (DAEs) that describes the constrained motion. Implicit numerical integration formulas are applied to the reduced ordinary differential equations (ODEs) in the tangent space. The resulting system of nonlinear equations is solved by Broyden's method. In the framework presented, the computational complexity of solving the implicit integration equations for the reduced ODE is about same as that of some explicit integration methods for the reduced ODE. In fact, in some cases the implicit method for solving the Euler-Lagrange equations can be implemented more efficiently than the explicit scheme.", "A hallmark of multibody dynamics is that most formulations involve a number of constraints. Typically, when redundant generalized coordinates are used, equations of motion are simpler to derive but constraint equations are present. While the dynamic behavior of constrained systems is well understood, the numerical solution of the resulting equations, potentially of differential-algebraic nature, remains problematic. Many different approaches have been proposed over the years, all presenting advantages and drawbacks: the sheer number and variety of methods that have been proposed indicate the difficulty of the problem. A cursory survey of the literature reveals that the various methods fall within broad categories sharing common theoretical foundations. This paper summarizes the theoretical foundations to the enforcement in constraints in multibody dynamics problems. Next, methods based on the use of Lagrange’s equation of the first kind, which are index-3 differential algebraic equations in the presence of holonomic constraints, are reviewed. Methods leading to a minimum set of equations are discussed; in view of the numerical difficulties associated with index-3 approaches, reduction to a minimum set is often performed, leading to a number of practical algorithms using methods developed for ordinary differential equations. The goal of this paper is to review the features of these methods, assess their accuracy and efficiency, underline the relationship among the methods, and recommend approaches that seem to perform better than others.", "A hallmark of multibody dynamics is that most formulations involve a number of constraints. Typically, when redundant generalized coordinates are used, equations of motion are simpler to derive but constraint equations are present. Approaches to dealing with high index differential algebraic equations, based on index reduction techniques, are reviewed and discussed. Constraint violation stabilization techniques that have been developed to control constraint drift are also reviewed. These techniques are used in conjunction with algorithms that do not exactly enforce the constraints. Control theory forms the basis for a number of these methods. Penalty based techniques have also been developed, but the augmented Lagrangian formulation presents a more solid theoretical foundation. In contrast to constraint violation stabilization techniques, constraint violation elimination techniques enforce exact satisfaction of the constraints, at least to machine accuracy. Finally, as the finite element method has gained popularity for the solution of multibody systems, new techniques for the enforcement of constraints have been developed in that framework. 
The goal of this paper is to review the features of these methods, assess their accuracy and efficiency, underline the relationship among the methods, and recommend approaches that seem to perform better than others.", "The situation arising in path planning under kinematic constraints, where the valid configurations define a manifold embedded in the joint ambient space, can be seen as a limit case of the well-known narrow corridor problem. With kinematic constraints, the probability of obtaining a valid configuration by sampling in the joint ambient space is not low but null, which complicates the direct application of sampling-based path planners. This paper presents the AtlasRRT algorithm, which is a planner especially tailored for such constrained systems that builds on recently developed tools for higher-dimensional continuation. These tools provide procedures to define charts that locally parametrize a manifold and to coordinate the charts, forming an atlas that fully covers it. AtlasRRT simultaneously builds an atlas and a bidirectional rapidly exploring random tree (RRT), using the atlas to sample configurations and to grow the branches of the RRTs, and the RRTs to devise directions of expansion for the atlas. The efficiency of AtlasRRT is evaluated in several benchmarks involving high-dimensional manifolds embedded in large ambient spaces. The results show that the combined use of the atlas and the RRTs produces a more rapid exploration of the configuration space manifolds than existing approaches.", "Abstract When a given system of differential equations is integrated by numerical and automatic integration it may occur that the solution at hand satisfies an analytical relation which is a corollary of the differential equations but which is unknown to the automatic computer. An example of such a relation is the energy relation in conservative systems or the analytical relation generated by an outer holonomic or non-holonomic constraint provided the Lagrange equations of the first kind are used. It is shown that, in general, the computed numerical values of the solution satisfy such analytic relations with poor accuracy. The aim of the paper is to show how the analytical relations can be satisfied in a stabilized manner in order to improve the numerical accuracy of the solution of the differential equations. The proposed method leads to a modified differential system which is often stable in the sense of Ljapunov, whereas the original system is unstable.", "Despite the significant advances in path planning methods, highly constrained problems are still challenging. In some situations, the presence of constraints defines a configuration space that is a non-parametrizable manifold embedded in a high-dimensional ambient space. In these cases, the use of sampling-based path planners is cumbersome since samples in the ambient space have low probability to lay on the configuration space manifold. In this paper, we present a new path planning algorithm specially tailored for highly constrained systems. The proposed planner builds on recently developed tools for higher-dimensional continuation, which provide numerical procedures to describe an implicitly defined manifold using a set of local charts. We propose to extend these methods focusing the generation of charts on the path between the two configurations to connect and randomizing the process to find alternative paths in the presence of obstacles. 
The advantage of this planner comes from the fact that it directly operates into the configuration space and not into the higher-dimensional ambient space, as most of the existing methods do." ] }
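The constraint-drift problem and the projection remedy discussed in the related-work paragraph above can be shown in a few lines. This is a toy illustration only, not the planner's actual integrator: a particle confined to the unit circle f(q) = x^2 + y^2 - 1 = 0 is integrated with explicit Euler; naive integration drifts off the manifold, while re-projecting after each step, in the spirit of local-parameterization methods, keeps the violation near machine precision.

```python
import numpy as np

def project(q, iters=5):
    # Newton steps toward f(q) = 0 along the constraint gradient 2q.
    for _ in range(iters):
        f = q @ q - 1.0
        grad = 2.0 * q
        q = q - f * grad / (grad @ grad)
    return q

h = 0.05
q_naive = np.array([1.0, 0.0])
q_proj = q_naive.copy()
for _ in range(2000):
    # Exact dynamics rotate the point; explicit Euler spirals outward.
    q_naive = q_naive + h * np.array([-q_naive[1], q_naive[0]])
    q_proj = project(q_proj + h * np.array([-q_proj[1], q_proj[0]]))

print("constraint drift, naive    :", abs(q_naive @ q_naive - 1.0))  # grows steadily
print("constraint drift, projected:", abs(q_proj @ q_proj - 1.0))    # ~1e-16
```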
1705.07483
2620249651
Many infrastructure-free indoor positioning systems rely on fine-grained location-dependent fingerprints to train models for localization. The site survey process to collect fingerprints is laborious and is considered one of the major obstacles to deploying such systems. In this paper, we propose TuRF, a fast path-based fingerprint collection mechanism for site survey. We demonstrate the feasibility to collect fingerprints for indoor localization during walking along predefined paths. A step counter is utilized to accommodate the variations in walking speed. Approximate location labels inferred from the steps are then used to train a Gaussian Process regression model. Extensive experiments show that TuRF can significantly reduce the required time for site survey, without compromising the localization performance.
In fingerprint-based IPS, although any location-dependent environmental measure can be utilized as a fingerprint, magnetic field magnitudes and WiFi RSSes are most often used. Traditionally, magnetic field measurements are used for heading estimation. Magnetic field anomalies caused by building materials and magnetic interference from machinery and IT equipment make them unsuitable for heading but attractive for fingerprinting or landmark identification @cite_13 @cite_9 . Special devices have been developed to utilize the magnetic field for small-range indoor navigation @cite_9 . In general, magnetic field vectors are not unique in a large area, but can be used to differentiate locations within a small area. As a result, recent studies @cite_13 @cite_14 combine WiFi RSSes and magnetic field measurements for indoor localization. Moreover, magnetic field readings have been shown to reduce the search space in fingerprint-based indoor localization @cite_14 (a minimal fingerprint-regression sketch follows this record).
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_13" ], "mid": [ "2116068514", "2553223022", "" ], "abstract": [ "We present an indoor positioning system that measures location using disturbances of the Earth's magnetic field caused by structural steel elements in a building. The presence of these large steel members warps the geomagnetic field in a way that is spatially varying but temporally stable. To localize, we measure the magnetic field using an array of e-compasses and compare the measurement with a previously obtained magnetic map. We demonstrate accuracy within 1 meter 88 of the time in experiments in two buildings and across multiple floors within the buildings. We discuss several constraint techniques that can maintain accuracy as the sample space increases.", "As sensor-rich mobile devices became a commodity, more opportunities appeared for the creation of location-aware services. While GPS is a well established solution for outdoor localization, there is still no standard solution for localization indoors. This paper presents a novel accurate indoor positioning mechanism that is meant to run in common smartphones to be a readily and widely available solution. The system is based on multiple gait-model based filtering techniques for accurate movement quantification in combination with an advanced fused positioning mechanism that leverages sequences of opportunistic observations towards an accurate localization process. Magnetic field fluctuations, Wi-Fi readings and movement data are incrementally matched with a feature spot map containing multi-dimensional spatially-related features that characterize the building. A novel and convenient way of mapping the architectural and environmental properties of buildings is also introduced, which avoids the burden normally associated with the process. The system has been evaluated by multiple users in open and crowded spaces where overall median localization errors between 1.11 m and 1.68 m were obtained. While the reported errors are already satisfactory in the context of indoor localization, improvements may be readily achieved through the inclusion of additional reference features. High accuracy performance coupled with an opportunistic and infrastructure-free approach creates a very desirable solution for the indoor localization market doge.", "" ] }
1705.07483
2620249651
Many infrastructure-free indoor positioning systems rely on fine-grained location-dependent fingerprints to train models for localization. The site survey process to collect fingerprints is laborious and is considered one of the major obstacles to deploying such systems. In this paper, we propose TuRF, a fast path-based fingerprint collection mechanism for site survey. We demonstrate the feasibility to collect fingerprints for indoor localization during walking along predefined paths. A step counter is utilized to accommodate the variations in walking speed. Approximate location labels inferred from the steps are then used to train a Gaussian Process regression model. Extensive experiments show that TuRF can significantly reduce the required time for site survey, without compromising the localization performance.
Another line of work considers AP selection for localization. It has been found that not all APs contribute to indoor positioning in fingerprint-based IPS, since APs differ in beacon intervals and power-saving modes. Previous studies show that judiciously selecting a subset of APs (e.g., those with the strongest RSSes) can improve positioning accuracy, and that 6 to 10 APs distributed around the area are often sufficient @cite_12 @cite_11 (a one-function selection sketch follows this record). AP selection schemes are complementary to fingerprint collection, as we generally have no control over when RSSes can be collected from an AP. AP selection can be used during both the training and operational stages of fingerprint-based localization, and is adopted in TuRF as well.
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "2011502582", "2143243918" ], "abstract": [ "The recent growing interest for indoor Location-Based Services (LBSs) has created a need for more accurate and real-time indoor positioning solutions. The sparse nature of location finding makes the theory of Compressive Sensing (CS) desirable for accurate indoor positioning using Received Signal Strength (RSS) from Wireless Local Area Network (WLAN) Access Points (APs). We propose an accurate RSS-based indoor positioning system using the theory of compressive sensing, which is a method to recover sparse signals from a small number of noisy measurements by solving an 1-minimization problem. Our location estimator consists of a coarse localizer, where the RSS is compared to a number of clusters to detect in which cluster the node is located, followed by a fine localization step, using the theory of compressive sensing, to further refine the location estimation. We have investigated different coarse localization schemes and AP selection approaches to increase the accuracy. We also show that the CS theory can be used to reconstruct the RSS radio map from measurements at only a small number of fingerprints, reducing the number of measurements significantly. We have implemented the proposed system on a WiFi-integrated mobile device and have evaluated the performance. Experimental results indicate that the proposed system leads to substantial improvement on localization accuracy and complexity over the widely used traditional fingerprinting methods.", "We present a WLAN location determination technique, the Joint Clustering technique, that uses: (1) signal strength probability distributions to address the noisy wireless channel, and (2) clustering of locations to reduce the computational cost of searching the radio map. The Joint Clustering technique reduces computational cost by more than an order of magnitude, compared to the current state of the art techniques, allowing non-centralized implementation on mobile clients. Results from 802.11-equipped iPAQ implementations show that the new technique gives user location to within 7 feet with over 90 accuracy." ] }
1705.07369
2620502216
The widely-studied radio network model [Chlamtac and Kutten, 1985] is a graph-based description that captures the inherent impact of collisions in wireless communication. In this model, the strong assumption is made that node v receives a message from a neighbor if and only if exactly one of its neighbors broadcasts. We relax this assumption by introducing a new noisy radio network model in which random faults occur at senders or receivers. Specifically, for a constant noise parameter p ∈ [0,1), either every sender has probability p of transmitting noise or every receiver of a single transmission in its neighborhood has probability p of receiving noise. We first study single-message broadcast algorithms in noisy radio networks and show that the Decay algorithm [Bar-Yehuda et al., 1992] remains robust in the noisy model while the diameter-linear algorithm of , 2007 does not. We give a modified version of the algorithm of , 2007 that is robust to sender and receiver faults, and extend both this modified algorithm and the Decay algorithm to robust multi-message broadcast algorithms, broadcasting Ω(1/(log n log log n)) and Ω(1/log n) messages per round, respectively. We next investigate the extent to which (network) coding improves throughput in noisy radio networks. In particular, we study the coding gap -- the ratio of the throughput of coding to that of routing -- in noisy radio networks. We address the previously perplexing result of 2014 that worst case coding throughput is no better than worst case routing throughput up to constants: we show that the worst case throughput performance of coding is, in fact, superior to that of routing -- by a Θ(log(n)) gap -- provided receiver faults are introduced. However, we show that sender faults have little effect on throughput. In particular, we show that any coding or routing scheme for the noiseless setting can be transformed to be robust to sender faults with only a constant throughput overhead. These transformations imply that the results of , 2014 carry over to noisy radio networks with sender faults as well. As a result, if sender faults are introduced then there exist topologies for which there is a Θ(log log n) gap, but the worst case throughput across all topologies is Θ(1/log n) for both coding and routing.
The noisy broadcast model, introduced by , resembles our model as it assumes random errors. This model was mostly studied in the context of computing functions of the inputs of nodes @cite_5 @cite_25 @cite_16 @cite_15 . However, it assumes a complete communication network and single-bit transmissions. An extension of this line of work to random planar networks has also been studied @cite_1 @cite_6 @cite_2 @cite_9 . Unlike in our model, a node there can receive a message from multiple neighbors in a single round (a small simulation of the underlying noisy-channel primitive follows this record).
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "2095713941", "2022456395", "2103163002", "2140601517", "2053269541", "", "", "" ], "abstract": [ "A radio network is a synchronous network of processors that communicate by transmitting messages to their neighbors, where a processor receives a message in a given step if and only if it is silent in this step and precisely one of its neighbors transmits. In this paper we prove the existence of a family of radius-2 networks on n vertices for which any broadcast schedule requires at least Omega((log n log log n)2) rounds of transmissions. This almost matches an upper bound of O(log2 n) rounds for networks of radius 2 proved earlier by Bar-Yehuda, Goldreich, and Itai.", "We consider distributed computation of functions of distributed data in random planar networks with noisy wireless links. We present a new algorithm for computation of the maximum value which is order optimal in the number of transmissions and computation time. We also adapt the histogram computation algorithm of [1] to make the histogram computation time optimal.", "We consider a wireless sensor network consisting of n sensors, each having a recorded bit, the sensor’s measurement, which has been set to either “0” or “1”. The network has a special node called the fusion center whose goal is to compute a symmetric function of these bits; i.e., a function that depends only on the number of sensors that have a “1.” The sensors convey information to the fusion center in a multi-hop fashion to enable the function computation. The problem studied is to minimize the total transmission energy used by the network when computing this function, subject to the constraint that this computation is correct with high probability. We assume the wireless channels are binary symmetric channels with a probability of error p, and that each sensor uses rαunits of energy to transmit each bit, where r is the transmission range of the sensor. The main result in this paper is an algorithm whose energy usage is Θ (n(loglogn)(√logn n)α), and we also show that any algorithm satisfying the performance constraints must necessarily have energy usage Ω (n(√logn n)α). Then, we consider the case where the sensor network observes N events, and each node records one bit per event, thus having N bits to convey. The fusion center now wants to compute N symmetric functions, one for each of the events.", "We show a tight lower bound of Omega(N N) on the number of transmissions required to compute several functions (including the parity function and the majority function) in a network of N randomly placed sensors, communicating using local transmissions, and operating with power near the connectivity threshold. This result considerably simplifies and strengthens an earlier result of Dutta, Kanoria Manjunath and Radhakrishnan (SODA 08) that such networks cannot compute the parity function reliably with significantly fewer than N N transmissions, thereby showing that the protocol with O(N N) transmissions due to Ying, Srikant and Dullerud (WiOpt 06) is optimal. We also observe that all the lower bounds shown by Evans and Pippenger (SIAM J. 
on Computing, 1999) on the average noisy decision tree complexity for several functions can be derived using our technique simply and in a unified way.", "A broadcast network of N+1 nodes is considered in which each binary digit transmitted by each node is received by every other node via a binary symmetric channel of given transition probability. The errors on these channels are independent over transmitters, receivers and time. Each node has a binary state, and the problem is to construct a distributed algorithm to find the parity of the set of states with some given reliability. It is shown that this can be done with O(ln(lnN)) bits of communication from each node. Communicating all the node states to one node can be accomplished with only marginally more communication. >", "", "", "" ] }
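A minimal simulation of the noisy-channel primitive behind the models above: each transmitted bit passes through a binary symmetric channel with crossover probability p, and repeating a bit k times with majority decoding drives the error probability down exponentially in k, which is roughly the effect the O(log log N)-bit protocols referenced above rely on. The parameters below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.2  # per-bit crossover probability of the binary symmetric channel

def send_with_repetition(bit, k):
    flips = (rng.random(k) < p).astype(int)   # independent BSC flips
    received = flips ^ bit                    # the k received copies of the bit
    return int(received.sum() * 2 > k)        # majority decoding

for k in (1, 3, 7, 15):
    decoded = np.array([send_with_repetition(1, k) for _ in range(20000)])
    print(f"k={k:2d}  empirical error = {(decoded != 1).mean():.4f}")
```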
1705.07369
2620502216
The widely-studied radio network model [Chlamtac and Kutten, 1985] is a graph-based description that captures the inherent impact of collisions in wireless communication. In this model, the strong assumption is made that node v receives a message from a neighbor if and only if exactly one of its neighbors broadcasts. We relax this assumption by introducing a new noisy radio network model in which random faults occur at senders or receivers. Specifically, for a constant noise parameter p ∈ [0,1), either every sender has probability p of transmitting noise or every receiver of a single transmission in its neighborhood has probability p of receiving noise. We first study single-message broadcast algorithms in noisy radio networks and show that the Decay algorithm [Bar-Yehuda et al., 1992] remains robust in the noisy model while the diameter-linear algorithm of , 2007 does not. We give a modified version of the algorithm of , 2007 that is robust to sender and receiver faults, and extend both this modified algorithm and the Decay algorithm to robust multi-message broadcast algorithms, broadcasting Ω(1/(log n log log n)) and Ω(1/log n) messages per round, respectively. We next investigate the extent to which (network) coding improves throughput in noisy radio networks. In particular, we study the coding gap -- the ratio of the throughput of coding to that of routing -- in noisy radio networks. We address the previously perplexing result of 2014 that worst case coding throughput is no better than worst case routing throughput up to constants: we show that the worst case throughput performance of coding is, in fact, superior to that of routing -- by a Θ(log(n)) gap -- provided receiver faults are introduced. However, we show that sender faults have little effect on throughput. In particular, we show that any coding or routing scheme for the noiseless setting can be transformed to be robust to sender faults with only a constant throughput overhead. These transformations imply that the results of , 2014 carry over to noisy radio networks with sender faults as well. As a result, if sender faults are introduced then there exist topologies for which there is a Θ(log log n) gap, but the worst case throughput across all topologies is Θ(1/log n) for both coding and routing.
Another model that captures unreliability in radio networks is the dual graph model @cite_14 @cite_3 @cite_26 @cite_11 @cite_10 . In this graph-based model, there is a static set of edges that form the network graph @math as well as an additional graph @math consisting of edges that may or may not be present in each round, as chosen by an adversary. While this is an excellent way to capture an environment where some links are reliable and others are not, the dual graph model fails to capture random noise (a toy one-round simulation of the model follows this record).
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_3", "@cite_10", "@cite_11" ], "mid": [ "2756497387", "2135699088", "2043964337", "2127629521", "2950965411" ], "abstract": [ "A diversity of possible communication assumptions complicates the study of algorithms and lower bounds for radio networks. We address this problem by defining an abstract MAC layer. This service provides reliable local broadcast communication, with timing guarantees stated in terms of a collection of abstract delay functions applied to the relevant contention. Algorithm designers can analyze their algorithms in terms of these functions, independently of specific channel behavior. Concrete implementations of the abstract MAC layer over basic radio network models generate concrete definitions for these delay functions, automatically adapting bounds proven for the abstract service to bounds for the specific radio network under consideration. To illustrate this approach, we use the abstract MAC layer to study the new problem of Multi-Message Broadcast, a generalization of standard single-message broadcast in which multiple messages can originate at different times and locations in the network. We present and analyze two algorithms for Multi-Message Broadcast in static networks: a simple greedy algorithm and one that uses regional leaders. We then indicate how these results can be extended to mobile networks.", "Practitioners agree that unreliable links, which sometimes deliver messages and sometime do not, are an important characteristic of wireless networks. In contrast, most theoretical models of radio networks fix a static set of links and assume that these links are reliable. This gap between theory and practice motivates us to investigate how unreliable links affect theoretical bounds on broadcast in radio networks. To that end we consider a model that includes two types of links: reliable links, which always deliver messages, and unreliable links, which sometimes fail to deliver messages. We assume that the reliable links induce a connected graph, and that unreliable links are controlled by a worst-case adversary. In the new model we show an Ω(n log n) lower bound on deterministic broadcast in undirected graphs, even when all processes are initially awake and have collision detection, and an Ω(n) lower bound on randomized broadcast in undirected networks of constant diameter. This separates the new model from the classical, reliable model. On the positive side, we give two algorithms that tolerate unreliability: an O(n3 2 √log n)-time deterministic algorithm and a randomized algorithm which terminates in O(n log2 n) rounds with high probability.", "In this paper we study the problem of building a constant-degree connected dominating set (CCDS), a network structure that can be used as a communication backbone, in the dual graph radio network model ( in J Parallel Distrib Comput 64:89–96, 2004; in Proceedings of the international symposium on principles of distributed computing 2009, Distrib Comput 24(3–4):187–206 2011, Proceedings of the international symposium on principles of distributed computing 2010). This model includes two types of links: reliable, which always deliver messages, and unreliable, which sometimes fail to deliver messages. Real networks compensate for this differing quality by deploying low-layer detection protocols to filter unreliable from reliable links. 
With this in mind, we begin by presenting an algorithm that solves the CCDS problem in the dual graph model under the assumption that every process (u ) is provided with a local link detector set consisting of every neighbor connected to (u ) by a reliable link. The algorithm solves the CCDS problem in (O ( ^2 n b + ^3 n ) ) rounds, with high probability, where ( ) is the maximum degree in the reliable link graph, (n ) is the network size, and (b ) is an upper bound in bits on the message size. The algorithm works by first building a Maximal Independent Set (MIS) in ( ^3 n ) time, and then leveraging the local topology knowledge to efficiently connect nearby MIS processes. A natural follow-up question is whether the link detector must be perfectly reliable to solve the CCDS problem. With this in mind, we first describe an algorithm that builds a CCDS in (O( )polylog ((n)) ) time under the assumption of (O(1) ) unreliable links included in each link detector set. We then prove this algorithm to be (almost) tight by showing that the possible inclusion of only a single unreliable link in each process’s local link detector set is sufficient to require ( ( ) ) rounds to solve the CCDS problem, regardless of message size. We conclude by discussing how to apply our algorithm in the setting where the topology of reliable and unreliable links can change over time.", "We study upper and lower bounds for the global and local broadcast problems in the dual graph model combined with different strength adversaries. The dual graph model is a generalization of the standard graph-based radio network model that includes unreliable links controlled by an adversary. It is motivated by the ubiquity of unreliable links in real wireless networks. Existing results in this model [11, 12, 3, 8] assume an offline adaptive adversary - the strongest type of adversary considered in standard randomized analysis. In this paper, we study the two other standard types of adversaries: online adaptive and oblivious. Our goal is to find a model that captures the unpredictable behavior of real networks while still allowing for efficient broadcast solutions. For the online adaptive dual graph model, we prove a lower bound that shows the existence of constant-diameter graphs in which both types of broadcast require Ω(n log n) rounds, for network size n. This result is within log-factors of the (near) tight upper bound for the offline adaptive setting. For the oblivious dual graph model, we describe a global broadcast algorithm that solves the problem in O(Dlog n + log2 n) rounds for network diameter D, but prove a lower bound of Ω(√n= log n) rounds for local broadcast in this same setting. Finally, under the assumption of geographic constraints on the network graph, we describe a local broadcast algorithm that requires only O(log2 n logΔ) rounds in the oblivious model, for maximum degree Δ. In addition to the theoretical interest of these results, we argue that the oblivious model (with geographic constraints) captures enough behavior of real networks to render our efficient algorithms useful for real deployments.", "The local broadcast problem assumes that processes in a wireless network are provided messages, one by one, that must be delivered to their neighbors. In this paper, we prove tight bounds for this problem in two well-studied wireless network models: the classical model, in which links are reliable and collisions consistent, and the more recent dual graph model, which introduces unreliable edges. 
Our results prove that the Decay strategy, commonly used for local broadcast in the classical setting, is optimal. They also establish a separation between the two models, proving that the dual graph setting is strictly harder than the classical setting, with respect to this primitive." ] }
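To make the dual graph reception rule concrete, here is a toy one-round simulation: reliable edges always carry transmissions, the adversary activates an arbitrary subset of the unreliable edges, and a listening node receives only when exactly one of its active neighbors transmits. The graph, edge sets, and adversary choices below are invented for illustration only.

```python
def round_receive(n, reliable, unreliable, adversary_on, senders):
    # Edges present this round: all reliable edges plus whichever unreliable
    # edges the adversary chose to activate.
    active = set(reliable) | (set(unreliable) & set(adversary_on))
    received = {}
    for v in range(n):
        if v in senders:                      # a transmitting node cannot listen
            continue
        neighbors = ({u for (u, w) in active if w == v}
                     | {w for (u, w) in active if u == v})
        transmitting = neighbors & senders
        if len(transmitting) == 1:            # exactly one sender: clean reception
            received[v] = next(iter(transmitting))
    return received

reliable, unreliable = [(0, 1), (1, 2)], [(0, 2)]
print(round_receive(3, reliable, unreliable, adversary_on=[(0, 2)], senders={0}))
print(round_receive(3, reliable, unreliable, adversary_on=[], senders={0, 2}))  # node 1 hears a collision
```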
1705.07450
2617781042
Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables. This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution. This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption. An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields (CRFs), is that it is not any more necessary to define an explicit energy function linking the output variables. To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique. We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs.
On the one hand, recent advances in semantic segmentation mainly focus on improving the architecture design , increasing the context understanding capabilities and building processing modules to enforce structure consistency on segmentation outputs . Here, we are interested in this last research direction. CRFs are among the most popular choices to impose structured information at the output of a segmentation network, with fully connected CRFs and CRFs as RNNs @cite_6 among the best-performing variants. More recently, an alternative that promotes structure consistency by decomposing the prediction process into multiple steps and iteratively adding structure information was introduced in . Another iterative approach was introduced in @cite_2 to tackle image semantic segmentation by repeatedly detecting, replacing and refining segmentation masks. Finally, the reinterpretation of residual networks @cite_0 @cite_1 was exploited in @cite_8 , in the context of biomedical image segmentation, by iteratively refining learned pre-normalized images to generate segmentation predictions (a one-dimensional sketch of the score-stepping idea behind this paper follows this record).
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_6", "@cite_0", "@cite_2" ], "mid": [ "2949650786", "1913356549", "2147880316", "2337199865", "2584471766" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance.", "We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "We discuss relations between Residual Networks (ResNet), Recurrent Neural Networks (RNNs) and the primate visual cortex. 
We begin with the observation that a shallow RNN is exactly equivalent to a very deep ResNet with weight sharing among the layers. A direct implementation of such a RNN, although having orders of magnitude fewer parameters, leads to a performance similar to the corresponding ResNet. We propose 1) a generalization of both RNN and ResNet architectures and 2) the conjecture that a class of moderately deep RNNs is a biologically-plausible model of the ventral stream in visual cortex. We demonstrate the eectiveness of the architectures by testing them on the CIFAR-10 dataset.", "Pixel wise image labeling is an interesting and challenging problem with great significance in the computer vision community. In order for a dense labeling algorithm to be able to achieve accurate and precise results, it has to consider the dependencies that exist in the joint space of both the input and the output variables. An implicit approach for modeling those dependencies is by training a deep neural network that, given as input an initial estimate of the output labels and the input image, it will be able to predict a new refined estimate for the labels. In this context, our work is concerned with what is the optimal architecture for performing the label improvement task. We argue that the prior approaches of either directly predicting new label estimates or predicting residual corrections w.r.t. the initial labels with feed-forward deep network architectures are sub-optimal. Instead, we propose a generic architecture that decomposes the label improvement task to three steps: 1) detecting the initial label estimates that are incorrect, 2) replacing the incorrect labels with new ones, and finally 3) refining the renewed labels by predicting residual corrections w.r.t. them. Furthermore, we explore and compare various other alternative architectures that consist of the aforementioned Detection, Replace, and Refine components. We extensively evaluate the examined architectures in the challenging task of dense disparity estimation (stereo matching) and we report both quantitative and qualitative results on three different datasets. Finally, our dense disparity estimation network that implements the proposed generic architecture, achieves state-of-the-art results in the KITTI 2015 test surpassing prior approaches by a significant margin. We plan to release the Torch code that implements the paper in: https: github.com gidariss DRR_struct_pred ." ] }
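The iterative-inference idea in the record above, i.e. stepping along a denoising autoencoder's reconstruction direction r(y) - y, which approximates the score (gradient of log-density) of the learned distribution, can be demonstrated in one dimension. Here the "DAE" is the closed-form optimal denoiser for Gaussian data under Gaussian corruption, so the score estimate is exact; in the paper a learned conditional network plays this role, and the distribution parameters below are arbitrary.

```python
mu, sigma2, noise2 = 2.0, 1.0, 0.25

def dae_reconstruct(y):
    # Closed-form optimal denoiser E[x | y] for x ~ N(mu, sigma2) observed
    # under corruption y = x + e, e ~ N(0, noise2).
    return (sigma2 * y + noise2 * mu) / (sigma2 + noise2)

y = -4.0                               # a deliberately poor initial prediction
for _ in range(30):
    y += dae_reconstruct(y) - y        # r(y) - y = noise2 * (estimated score)
print("estimate after iterative inference:", round(y, 4), "(mode =", mu, ")")
```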
1705.07422
2619445021
This paper proposes a new Generative Partition Network (GPN) to address the challenging multi-person pose estimation problem. Different from existing models that are either completely top-down or bottom-up, the proposed GPN introduces a novel strategy--it generates partitions for multiple persons from their global joint candidates and infers instance-specific joint configurations simultaneously. The GPN is favorably featured by low complexity and high accuracy of joint detection and re-organization. In particular, GPN designs a generative model that performs one feed-forward pass to efficiently generate robust person detections with joint partitions, relying on dense regressions from global joint candidates in an embedding space parameterized by centroids of persons. In addition, GPN formulates the inference procedure for joint configurations of human poses as a graph partition problem, and conducts local optimization for each person detection with reliable global affinity cues, leading to complexity reduction and performance improvement. GPN is implemented with the Hourglass architecture as the backbone network to simultaneously learn joint detector and dense regressor. Extensive experiments on benchmarks MPII Human Pose Multi-Person, extended PASCAL-Person-Part, and WAF, show the efficiency of GPN with new state-of-the-art performance.
Existing approaches following the top-down strategy sequentially perform person detection and single-person pose estimation. In @cite_27 , Gkioxari et al. proposed to adopt the Generalized Hough Transform framework to first generate person proposals and then classify joint candidates based on the poselets. Sun @cite_13 presented a hierarchical part-based model for joint person detection and pose estimation. Recently, deep learning techniques have been exploited to improve both person detection and single-person pose estimation. In @cite_6 , Iqbal and Gall adopted a Faster-RCNN-based person detector @cite_2 and a convolutional-pose-machine-based joint detector @cite_15 for this task. Later, Fang et al. @cite_19 utilized a spatial transformer network @cite_25 and the Hourglass network @cite_23 to further improve the quality of joint detections and partitions. Despite remarkable success, these methods suffer from the limitations of early commitment and high joint-detection complexity (a skeletal top-down pipeline is sketched after this record). Differently, the proposed GPN adopts a one-pass generative process for efficiently producing person detections with partitioned joint candidates, offering robustness to early commitment as well as low joint-detection complexity.
{ "cite_N": [ "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "2509865052", "2559320744", "2031004336", "2950762923", "2613718673", "2255781698", "1993149133", "2951005624" ], "abstract": [ "Despite of the recent success of neural networks for human pose estimation, current approaches are limited to pose estimation of a single person and cannot handle humans in groups or crowds. In this work, we propose a method that estimates the poses of multiple persons in an image in which a person can be occluded by another person or might be truncated. To this end, we consider multi-person pose estimation as a joint-to-person association problem. We construct a fully connected graph from a set of detected joint candidates in an image and resolve the joint-to-person association and outlier detection using integer linear programming. Since solving joint-to-person association jointly for all persons in an image is an NP-hard problem and even approximations are expensive, we solve the problem locally for each person. On the challenging MPII Human Pose Dataset for multiple persons, our approach achieves the accuracy of a state-of-the-art method, but it is 6,000 to 19,000 times faster.", "Multi-person pose estimation in the wild is challenging. Although state-of-the-art human detectors have demonstrated good performance, small errors in localization and recognition are inevitable. These errors can cause failures for a single-person pose estimator (SPPE), especially for methods that solely depend on human detection results. In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes. Our framework consists of three components: Symmetric Spatial Transformer Network (SSTN), Parametric Pose Non-Maximum-Suppression (NMS), and Pose-Guided Proposals Generator (PGPG). Our method is able to handle inaccurate bounding boxes and redundant detections, allowing it to achieve a 17 increase in mAP over the state-of-the-art methods on the MPII (multi person) dataset.Our model and source codes are publicly available.", "A k-poselet is a deformable part model (DPM) with k parts, where each of the parts is a poselet, aligned to a specific configuration of keypoints based on ground-truth annotations. A separate template is used to learn the appearance of each part. The parts are allowed to move with respect to each other with a deformation cost that is learned at training time. This model is richer than both the traditional version of poselets and DPMs. It enables a unified approach to person detection and keypoint prediction which, barring contemporaneous approaches based on CNN features, achieves state-of-the-art keypoint prediction while maintaining competitive detection performance.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. 
State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "Despite recent successes, pose estimators are still somewhat fragile, and they frequently rely on a precise knowledge of the location of the object. Unfortunately, articulated objects are also very difficult to detect. Knowledge about the articulated nature of these objects, however, can substantially contribute to the task of finding them in an image. It is somewhat surprising, that these two tasks are usually treated entirely separately. In this paper, we propose an Articulated Part-based Model (APM) for jointly detecting objects and estimating their poses. APM recursively represents an object as a collection of parts at multiple levels of detail, from coarse-to-fine, where parts at every level are connected to a coarser level through a parent-child relationship (Fig. 1(b)-Horizontal). Parts are further grouped into part-types (e.g., left-facing head, long stretching arm, etc) so as to model appearance variations (Fig. 1(b)-Vertical). 
By having the ability to share appearance models of part types and by decomposing complex poses into parent-child pairwise relationships, APM strikes a good balance between model complexity and model richness. Extensive quantitative and qualitative experiment results on public datasets show that APM outperforms state-of-the-art methods. We also show results on PASCAL 2007 - cats and dogs - two highly challenging articulated object categories.", "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations." ] }
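A skeletal version of the two-stage top-down pipeline described above: a person detector proposes boxes, a single-person pose estimator (SPPE) runs on each crop, and joint coordinates are mapped back to image space. The toy `detect_persons` and `estimate_pose` below are invented stand-ins for, e.g., a Faster-RCNN detector and an Hourglass-style joint regressor; the comments mark where the early commitment criticized above happens.

```python
import numpy as np

def detect_persons(image):
    # Toy detector returning one fixed box; a real system would use, e.g.,
    # a Faster-RCNN person detector here.
    h, w = image.shape[:2]
    return [(w // 4, h // 4, 3 * w // 4, 3 * h // 4)]

def estimate_pose(crop):
    # Toy SPPE: argmax of K fake heatmaps; a real system would use an
    # Hourglass-style network producing one heatmap per joint.
    rng = np.random.default_rng(crop.size)
    heatmaps = rng.random((3,) + crop.shape[:2])
    return [tuple(np.unravel_index(hm.argmax(), hm.shape))[::-1] for hm in heatmaps]

def top_down_pose(image):
    poses = []
    for (x0, y0, x1, y1) in detect_persons(image):
        crop = image[y0:y1, x0:x1]     # early commitment: a missed or merged
        local = estimate_pose(crop)    # box cannot be recovered downstream
        poses.append([(x + x0, y + y0) for (x, y) in local])
    return poses

print(top_down_pose(np.zeros((200, 100))))
```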
1705.07422
2619445021
This paper proposes a new Generative Partition Network (GPN) to address the challenging multi-person pose estimation problem. Different from existing models that are either completely top-down or bottom-up, the proposed GPN introduces a novel strategy--it generates partitions for multiple persons from their global joint candidates and infers instance-specific joint configurations simultaneously. The GPN is favorably featured by low complexity and high accuracy of joint detection and re-organization. In particular, GPN designs a generative model that performs one feed-forward pass to efficiently generate robust person detections with joint partitions, relying on dense regressions from global joint candidates in an embedding space parameterized by centroids of persons. In addition, GPN formulates the inference procedure for joint configurations of human poses as a graph partition problem, and conducts local optimization for each person detection with reliable global affinity cues, leading to complexity reduction and performance improvement. GPN is implemented with the Hourglass architecture as the backbone network to simultaneously learn joint detector and dense regressor. Extensive experiments on benchmarks MPII Human Pose Multi-Person, extended PASCAL-Person-Part, and WAF, show the efficiency of GPN with new state-of-the-art performance.
The bottom-up strategy provides robustness to early commitment and low joint detection complexity. Previous bottom-up approaches @cite_12 @cite_1 @cite_4 @cite_24 mainly focus on improving either the joint detector or the joint affinity cues, benefiting the subsequent joint partition and configuration inference. For the joint detector, fully convolutional neural networks, e.g., Residual networks @cite_5 and Hourglass networks @cite_23, have been widely exploited. As for joint affinity cues, Insafutdinov et al. @cite_1 explored geometric and appearance constraints among joint candidates. Cao et al. @cite_12 proposed part affinity fields to encode the location and orientation of limbs. Newell and Deng @cite_4 presented associative embedding for grouping joint candidates. Nevertheless, all these approaches partition joints by partitioning a graph covering the whole image, resulting in high inference complexity. In contrast, GPN performs local inference with robust global affinity cues that are efficiently generated by dense regressions from the centroid embedding space, which reduces the complexity of joint partitioning and improves pose estimation.
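To make the grouping step above concrete, here is a minimal sketch, assuming toy 2D inputs, of centroid-based joint grouping in the spirit of GPN's dense regressions: each joint candidate votes for its person centroid, and candidates whose votes fall close together are assigned to the same person. The function name and the clustering radius are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' GPN code): group joint candidates by the
# person centroids they regress to; grouping is local around centroid votes
# rather than a graph partition over the whole image.
import numpy as np

def group_joints_by_centroid(joints_xy, offsets_xy, radius=32.0):
    """joints_xy: (N, 2) joint candidate positions.
    offsets_xy: (N, 2) regressed offsets pointing to the person centroid.
    Returns one array of joint indices per inferred person."""
    votes = joints_xy + offsets_xy          # each joint votes for a centroid
    groups, centroids = [], []
    for i, v in enumerate(votes):
        if centroids:                       # nearest existing centroid, if any
            d = np.linalg.norm(np.asarray(centroids) - v, axis=1)
            j = int(d.argmin())
            if d[j] < radius:               # close enough: same person
                groups[j].append(i)
                centroids[j] = votes[groups[j]].mean(axis=0)  # running mean
                continue
        groups.append([i])                  # otherwise: new person hypothesis
        centroids.append(v.copy())
    return [np.asarray(g) for g in groups]

# toy usage: two joints voting near (10, 10), one voting near (100, 50)
joints = np.array([[8., 9.], [12., 11.], [98., 52.]])
offsets = np.array([[2., 1.], [-2., -1.], [2., -2.]])
print(group_joints_by_centroid(joints, offsets))  # [array([0, 1]), array([2])]
```

The greedy nearest-centroid assignment stands in for GPN's per-person graph-partition inference; the point being illustrated is only that grouping can happen locally around centroid votes.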
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_24", "@cite_23", "@cite_5", "@cite_12" ], "mid": [ "2952819818", "2382036597", "2951256101", "2950762923", "2949650786", "" ], "abstract": [ "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to both multi-person pose estimation and instance segmentation and report state-of-the-art performance for multi-person pose on the MPII and MS-COCO datasets.", "The goal of this paper is to advance the state-of-the-art of articulated pose estimation in scenes with multiple people. To that end we contribute on three fronts. We propose (1) improved body part detectors that generate effective bottom-up proposals for body parts; (2) novel image-conditioned pairwise terms that allow to assemble the proposals into a variable number of consistent body part configurations; and (3) an incremental optimization strategy that explores the search space more efficiently thus leading both to better performance and significant speed-up factors. Evaluation is done on two single-person and two multi-person pose estimation benchmarks. The proposed approach significantly outperforms best known multi-person pose estimation results while demonstrating competitive performance on the task of single person pose estimation (Models and code available at http: pose.mpi-inf.mpg.de).", "This paper considers the task of articulated human pose estimation of multiple people in real world images. We propose an approach that jointly solves the tasks of detection and pose estimation: it infers the number of persons in a scene, identifies occluded body parts, and disambiguates body parts between people in close proximity of each other. This joint formulation is in contrast to previous strategies, that address the problem by first detecting people and subsequently estimating their body pose. We propose a partitioning and labeling formulation of a set of body-part hypotheses generated with CNN-based part detectors. Our formulation, an instance of an integer linear program, implicitly performs non-maximum suppression on the set of part candidates and groups them to form configurations of body parts respecting geometric and appearance constraints. Experiments on four different datasets demonstrate state-of-the-art results for both single person and multi person pose estimation. Models and code available at this http URL.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. 
State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "" ] }
1705.06947
2952799392
As the number and the diversity of news outlets on the Web grow, so does the opportunity for "alternative" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole. In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that "fringe" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.
Disinformation dynamics. @cite_35 study the characteristics of rumor propagation on Twitter. They analyze a corpus of 1.7B tweets, covering three and a half years of Twitter data, and extract 104 viral events, which are then annotated by human coders. They study debunked stories from false-information-busting sites such as Snopes and compare their temporal, linguistic, and structural characteristics to those of legitimate information. @cite_23 also use stories that Snopes determined to be false to study the propagation and evolution of false information on Facebook, finding its spread to be quite bursty, and that posts containing a comment with a link to Snopes are more likely to get deleted. @cite_31 study the presence of hoaxes in Wikipedia articles. They report that while most are detected quickly and have little impact, some end up cited widely on the Web.
{ "cite_N": [ "@cite_35", "@cite_31", "@cite_23" ], "mid": [ "2032897813", "2340254858", "" ], "abstract": [ "The problem of identifying rumors is of practical importance especially in online social networks, since information can diffuse more rapidly and widely than the offline counterpart. In this paper, we identify characteristics of rumors by examining the following three aspects of diffusion: temporal, structural, and linguistic. For the temporal characteristics, we propose a new periodic time series model that considers daily and external shock cycles, where the model demonstrates that rumor likely have fluctuations over time. We also identify key structural and linguistic differences in the spread of rumors and non-rumors. Our selected features classify rumors with high precision and recall in the range of 87 to 92 , that is higher than other states of the arts on rumor classification.", "Wikipedia is a major source of information for many people. However, false information on Wikipedia raises concerns about its credibility. One way in which false information may be presented on Wikipedia is in the form of hoax articles, i.e., articles containing fabricated facts about nonexistent entities or events. In this paper we study false information on Wikipedia by focusing on the hoax articles that have been created throughout its history. We make several contributions. First, we assess the real-world impact of hoax articles by measuring how long they survive before being debunked, how many pageviews they receive, and how heavily they are referred to by documents on the Web. We find that, while most hoaxes are detected quickly and have little impact on Wikipedia, a small number of hoaxes survive long and are well cited across the Web. Second, we characterize the nature of successful hoaxes by comparing them to legitimate articles and to failed hoaxes that were discovered shortly after being created. We find characteristic differences in terms of article structure and content, embeddedness into the rest of Wikipedia, and features of the editor who created the hoax. Third, we successfully apply our findings to address a series of classification tasks, most notably to determine whether a given article is a hoax. And finally, we describe and evaluate a task involving humans distinguishing hoaxes from non-hoaxes. We find that humans are not particularly good at the task and that our automated classifier outperforms them by a big margin.", "" ] }
1705.06947
2952799392
As the number and the diversity of news outlets on the Web grow, so does the opportunity for "alternative" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole. In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that "fringe" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.
@cite_19 introduce Hoaxy, a platform providing information about the dynamics of false information propagation on Twitter. They also study a sample of 1.4M tweets, finding that the diffusion of fact-checking content lags that of false information by 10-20 hours, and that false information is dominated by a few very active accounts, whereas fact-checking is a more grass-roots activity. @cite_28 present TwitterTrails, a website allowing users to study the propagation of false information on Twitter, i.e., to visualize indications of bursty activity, community skepticism, temporal characteristics of propagation, as well as re-tweet networks. Also, Del Vicario @cite_4 analyze how Facebook users perceive and react to conspiracy theories vs. scientific stories, finding two polarized and homogeneous communities that have similar content consumption patterns but exhibit different cascade dynamics.
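As a rough illustration of the kind of lag measurement reported above (not the Hoaxy platform's actual code; the function name and the synthetic series are assumptions), the delay between misinformation and fact-checking shares can be estimated by maximizing the normalized cross-correlation of two hourly time series:

```python
# Hedged sketch: estimate how many hours a fact-checking series trails a
# misinformation series by maximizing their cross-correlation.
import numpy as np

def estimate_lag_hours(misinfo, factcheck, max_lag=48):
    """Both inputs are equal-length hourly share counts. Returns the shift
    (in hours) that best aligns factcheck to misinfo; a positive value means
    fact-checking lags misinformation."""
    m = (misinfo - misinfo.mean()) / (misinfo.std() + 1e-9)
    f = (factcheck - factcheck.mean()) / (factcheck.std() + 1e-9)
    best_lag, best_corr = 0, -np.inf
    for lag in range(max_lag + 1):
        # correlate misinfo[t] with factcheck[t + lag]
        c = np.dot(m[: len(m) - lag], f[lag:]) / (len(m) - lag)
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

# synthetic example: the fact-checking burst trails misinformation by 15 hours
rng = np.random.default_rng(0)
t = np.arange(240)
burst = np.exp(-((t - 60) ** 2) / 200.0)
misinfo = burst + 0.01 * rng.random(240)
factcheck = np.roll(burst, 15) + 0.01 * rng.random(240)
print(estimate_lag_hours(misinfo, factcheck))  # -> 15
```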
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_4" ], "mid": [ "", "2296706733", "2232384272" ], "abstract": [ "", "Massive amounts of misinformation have been observed to spread in uncontrolled fashion across social media. Examples include rumors, hoaxes, fake news, and conspiracy theories. At the same time, several journalistic organizations devote significant efforts to high-quality fact checking of online claims. The resulting information cascades contain instances of both accurate and inaccurate information, unfold over multiple time scales, and often reach audiences of considerable size. All these factors pose challenges for the study of the social dynamics of online news sharing. Here we introduce Hoaxy, a platform for the collection, detection, and analysis of online misinformation and its related fact-checking efforts. We discuss the design of the platform and present a preliminary analysis of a sample of public tweets containing both fake news and fact checking. We find that, in the aggregate, the sharing of fact-checking content typically lags that of misinformation by 10-20 hours. Moreover, fake news are dominated by very active users, while fact checking is a more grass-roots activity. With the increasing risks connected to massive online misinformation, social news observatories have the potential to help researchers, journalists, and the general public understand the dynamics of real and fake news sharing.", "The wide availability of user-provided content in online social media facilitates the aggregation of people around common interests, worldviews, and narratives. However, the World Wide Web (WWW) also allows for the rapid dissemination of unsubstantiated rumors and conspiracy theories that often elicit rapid, large, but naive social responses such as the recent case of Jade Helm 15––where a simple military exercise turned out to be perceived as the beginning of a new civil war in the United States. In this work, we address the determinants governing misinformation spreading through a thorough quantitative analysis. In particular, we focus on how Facebook users consume information related to two distinct narratives: scientific and conspiracy news. We find that, although consumers of scientific and conspiracy stories present similar consumption patterns with respect to content, cascade dynamics differ. Selective exposure to content is the primary driver of content diffusion and generates the formation of homogeneous clusters, i.e., “echo chambers.” Indeed, homogeneity appears to be the primary driver for the diffusion of contents and each echo chamber has its own cascade dynamics. Finally, we introduce a data-driven percolation model mimicking rumor spreading and we show that homogeneity and polarization are the main determinants for predicting cascades’ size." ] }
1705.06947
2952799392
As the number and the diversity of news outlets on the Web grow, so does the opportunity for "alternative" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole. In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that "fringe" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.
Situngkir @cite_8 empirically studies an Indonesian hoax on Twitter, finding that it spread broadly and quickly (within two hours), and that it would have spread further had a conventional media outlet not publicly denied it. He also argues that hoaxes can propagate easily if the recipients of the hoax collaborate. @cite_9 present a case study based on a hostage crisis in Sydney, analyzing 5.4M tweets from three main perspectives: 1) volume (i.e., the number of rumor-related messages per time interval), 2) exposure (i.e., the number of individuals exposed to the rumor), and 3) content production (i.e., whether the content is written by the user or re-shared). @cite_12 study two crisis-related incidents on Twitter, aiming to determine the effect of "official" accounts on the containment of rumors. The authors show that official accounts can significantly contribute to stopping the propagation of a rumor by actively engaging in conversations related to the incidents. Finally, @cite_2 study the dissemination of false rumors vs. confirmed news on Twitter in the days following the 2010 earthquake in Chile, concluding that an aggregate analysis of the flow of tweets can distinguish the former from the latter.
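A toy computation of the three perspectives just mentioned may help fix ideas; the record fields and the exposure proxy (follower counts of first-time posters) are illustrative assumptions, not the cited papers' exact methodology:

```python
# Hedged illustration of the volume / exposure / content-production lenses
# on toy tweet records; field names and the exposure proxy are assumptions.
from collections import Counter

tweets = [  # (hour, user, followers, is_retweet)
    (0, "a", 120, False), (0, "b", 40, True),
    (1, "c", 900, True), (1, "a", 120, True), (2, "d", 15, False),
]

volume = Counter(h for h, *_ in tweets)            # messages per time interval
seen, exposure = set(), 0
for _, user, followers, _ in tweets:               # exposure: followers of
    if user not in seen:                           # first-time posters only
        seen.add(user)
        exposure += followers
original = sum(1 for *_, rt in tweets if not rt)   # content production
print(dict(volume), exposure, original)  # {0: 2, 1: 2, 2: 1} 1075 2
```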
{ "cite_N": [ "@cite_9", "@cite_2", "@cite_12", "@cite_8" ], "mid": [ "2290484796", "2050619059", "2295750479", "1547176573" ], "abstract": [ "In this paper we highlight three distinct approaches to studying rumor dynamicsâ volume, exposure, and content production. Expanding upon prior work, which has focused on rumor volume, we argue that considering the size of the exposed population is a vital component of understanding rumoring. Additionally, by combining all three approaches we discover subtle features of rumoring behavior that would have been missed by applying each approach in isolation. Using a case study of rumoring on Twitter during a hostage crisis in Sydney, Australia, we apply a mixed-methods framework to explore rumoring and its consequences through these three lenses, focusing on the added dimension of exposure in particular. Our approach demonstrates the importance of considering both rumor content and the people engaging with rumor content to arrive at a more holistic understanding of communication dynamics. These results have implications for emergency responders and official use of social media during crisis management.", "In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomenons, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is posible to detect rumors by using aggregate analysis on tweets.", "This paper examines how â officialâ accounts participate in the propagation and correction of online rumors in the context of crisis events. Using an emerging method for interpretive analysis of â bigâ social data, we investigate the spread of online rumors through digital tracesâ in this case, tweets. Our study suggests that official accounts can help to slow the spread of a rumor by posting a denial, andâ supported by reflections from an organization that recently dealt with a rumor-crisisâ offers best practices for organizations around social media strategies and protocols. Based on tweet data and connections to existing literature, we also demonstrate and discuss how mainstream media participate in rumoring, and note the role of a new breed of online media, â breaking newsâ accounts. This analysis offers a complementary perspective to existing studies that use surveys and interviews to characterize the role official accounts play in online rumoring.", "We discuss the way of hoax spreading as gossip and rumor throughout the social media, i.e.: Twitter, by observing an empirical case in Indonesia. We discuss the spreading factor of the gossip in the social media and see the epidemiology of the propagation hoax before and after the hoax being clarified in the conventional mass media. The discussions brought us to the open enrchiment analysis of the sociology of gossip and rumors within the online services like Twitter for future observation of human behavior." ] }
1705.06947
2952799392
As the number and the diversity of news outlets on the Web grow, so does the opportunity for "alternative" sources of information to emerge. Using large social networks like Twitter and Facebook, misleading, false, or agenda-driven information can quickly and seamlessly spread online, deceiving people or influencing their opinions. Also, the increased engagement of tightly knit communities, such as Reddit and 4chan, further compounds the problem, as their users initiate and propagate alternative information, not only within their own communities, but also to different ones as well as various social media. In fact, these platforms have become an important piece of the modern information ecosystem, which, thus far, has not been studied as a whole. In this paper, we begin to fill this gap by studying mainstream and alternative news shared on Twitter, Reddit, and 4chan. By analyzing millions of posts around several axes, we measure how mainstream and alternative news flows between these platforms. Our results indicate that alt-right communities within 4chan and Reddit can have a surprising level of influence on Twitter, providing evidence that "fringe" communities often succeed in spreading alternative news to mainstream social networks and the greater Web.
Detecting false information sources. @cite_32 formulate the problem of finding the source of false information as a maximum likelihood estimation problem, using a metric called rumor centrality. They evaluate it for all nodes in the network using a simple linear-time message-passing algorithm, and the node with the highest rumor centrality is deemed the most likely source. The authors show experimentally that the model can locate the source of false information with a maximum error of 7 hops for general networks and 4 hops for tree networks. @cite_10 study the problem from a statistical point of view, proposing a source detection framework, also based on rumor centrality, which supports multiple snapshots of the network during the spread of the false information. They show that using two network snapshots instead of one can significantly improve detection.
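For intuition, here is a compact sketch of rumor centrality on a tree in the spirit of the estimator described above (an illustration, not the papers' reference implementation): one DFS pass computes subtree sizes, the root's score is n! divided by the product of all subtree sizes, and the standard recurrence R(child) = R(parent) * t_child / (n - t_child) propagates scores to the remaining nodes, all in log space to avoid factorial overflow.

```python
# Hedged sketch of rumor centrality on a tree; the argmax node is the
# maximum-likelihood source estimate for regular trees under the SI model.
import math
from collections import defaultdict

def rumor_centers(edges):
    """edges: list of (u, v) pairs of an undirected tree.
    Returns {node: log rumor centrality}."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    nodes = list(adj)
    n = len(nodes)
    root = nodes[0]
    size, order, parent = {}, [], {root: None}
    stack = [root]
    while stack:                      # iterative DFS: record visit order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if w != parent[u]:
                parent[w] = u
                stack.append(w)
    for u in reversed(order):         # subtree sizes, leaves first
        size[u] = 1 + sum(size[w] for w in adj[u] if parent.get(w) == u)
    # rumor centrality of the root: n! / product of all subtree sizes
    log_r = {root: math.lgamma(n + 1) - sum(math.log(size[u]) for u in order)}
    for u in order[1:]:               # R(child) = R(parent) * t_u / (n - t_u)
        log_r[u] = log_r[parent[u]] + math.log(size[u]) - math.log(n - size[u])
    return log_r

# star with center 0: the center is the unique rumor center
scores = rumor_centers([(0, 1), (0, 2), (0, 3)])
print(max(scores, key=scores.get))    # -> 0
```

On the 3-leaf star the center scores log 6 and each leaf log 2, matching the closed-form values 4!/4 = 6 and 4!/(4*3) = 2.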
{ "cite_N": [ "@cite_10", "@cite_32" ], "mid": [ "2112515575", "2111772797" ], "abstract": [ "This paper addresses the problem of a single rumor source detection with multiple observations, from a statistical point of view of a spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.", "We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like." ] }