Columns: aid — string (9 to 15 chars); mid — string (7 to 10 chars); abstract — string (78 to 2.56k chars); related_work — string (92 to 1.77k chars); ref_abstract — dict.
1706.03016
2917092067
Electronic tickets (e-tickets) are electronic versions of paper tickets, which enable users to access intended services and improve services' efficiency. However, privacy may be a concern for e-ticket users. In this paper, a privacy-preserving electronic ticket scheme with attribute-based credentials is proposed to protect users' privacy and facilitate ticketing based on a user's attributes. Our proposed scheme makes the following contributions: (1) users can buy different tickets from ticket sellers without releasing their exact attributes; (2) two tickets of the same user cannot be linked; (3) a ticket cannot be transferred to another user; (4) a ticket cannot be double spent; (5) the security of the proposed scheme is formally proven and reduced to the well-known q-strong Diffie-Hellman complexity assumption; (6) the scheme has been implemented and its performance empirically evaluated. To the best of our knowledge, our privacy-preserving attribute-based e-ticket scheme is the first to provide all of these features. Application areas of our scheme include event or transport tickets where users must convince ticket sellers that their attributes (e.g. age, profession, location) satisfy the ticket price policies in order to buy discounted tickets. More generally, our scheme can be used in any system where access to services depends only on a user's attributes (or entitlements) and not on their identity.
E-Tickets from Special Devices. Other e-ticket schemes are designed around special devices, including personal trusted devices (PTDs) @cite_50, trusted platform modules (TPMs) @cite_48, and mobile handsets @cite_19. Unlike our scheme, these schemes require special devices, do not enable de-anonymisation after a ticket is double spent, and do not support privacy-preserving attribute-based ticketing.
{ "cite_N": [ "@cite_19", "@cite_48", "@cite_50" ], "mid": [ "2344800964", "1522968316", "2125027978" ], "abstract": [ "The mobile ticket dispenser system (MTDS) allows customers to remotely draw tickets for service orders anywhere through a mobile handset. In our previous work, the MTDS was applied to a restaurant scenario in which both clerks and customers are patient, i.e., once a mobile ticketing (MT) customer remotely draws the ticket, his request can be served by the clerk when the clerk is available, regardless of when the customer arrives at the restaurant. In this paper, the MTDS is applied to a post office scenario in which the customers are patient, but the clerk is impatient since the original ticket drawn by the MT customer may be invalid if he she does not arrive at the post office before his her turn. In this case, the behavior of the MT customer is the same as the so-called in-person ticketing customer who needs to draw a ticket in person when he she arrives at the service counter. We propose an analytical model to derive the probability that an MT customer misses his her turn when he she arrives at the post office. A discrete-event simulation model is developed to investigate the performance of the predicted time adjustment mechanism for the MTDS. We also use real data collected at a post office to observe the queuing behavior. Our study provides guidelines for arranging the time for an MT customer to arrive at the MTDS server.", "Trusted Computing is a security base technology that will perhaps be ubiquitous in a few years in personal computers and mobile devices alike. Despite its neutrality with respect to applications, it has raised some privacy concerns. We show that trusted computing can be applied for service access control in a manner protecting users’ privacy. 
We construct a ticket system, a concept at the heart of Identity Management, relying solely on the capabilities of the trusted platform module and the Trusted Computing Group’s standards. Two examples show how it can be used for pseudonymous, protected service access.", "Advances in wireless network technology and the continuously increasing users of Personal Trusted Device (PTD) make the latter an ideal channel for offering personalized services to mobile users. In this paper, we apply some cryptology (such as public key infrastructure, hashing chain and digital signature) to propose a realistic mobile ticket system such that fairness, non-repudiation, anonymity, no forging, efficient verification, simplicity, practicability and obviate the embezzlement issues can be guaranteed. On the basis of PTD is more portable and personal than personal computer, we gradually perceived that the widely used PTD will present huge commerce profits for mobile ticket service provider and it is convenient to the PTD user." ] }
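The hash-chain technique mentioned in @cite_50 (alongside public-key infrastructure and digital signatures) can be illustrated with a minimal sketch. This is this example's own framing of an n-ride ticket; the function names and the exact protocol details are assumptions, not taken from the cited paper:

```python
import hashlib

def h(x: bytes) -> bytes:
    """One step of the hash chain."""
    return hashlib.sha256(x).digest()

def issue(seed: bytes, n: int) -> bytes:
    """Issuer publishes the chain tip h^n(seed); the user keeps the seed."""
    tip = seed
    for _ in range(n):
        tip = h(tip)
    return tip

def spend(seed: bytes, n: int, i: int) -> bytes:
    """To spend the i-th ride (1-based), the user reveals h^(n-i)(seed)."""
    token = seed
    for _ in range(n - i):
        token = h(token)
    return token

def verify(tip: bytes, token: bytes, i: int) -> bool:
    """Verifier hashes the revealed token i times and compares with the tip.
    Revealing a preimage cannot be forged without the seed, and reusing a
    token for a later index fails, giving a simple double-spend check."""
    x = token
    for _ in range(i):
        x = h(x)
    return x == tip
```

For example, `issue(b"seed", 10)` commits to ten rides, and each `spend` reveals exactly one more preimage of the chain, so tokens are verifiable in order but unforgeable in advance.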
1706.03015
2626325227
Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams, but recognizing dynamic textures is challenging due to their complex temporal variations. In this paper, we present a novel approach stemming from slow feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly varying features from fast-varying signals and is therefore able to extract invariant representations from dynamic textures. However, complex temporal variations require high-level semantic representations to fully achieve temporal slowness, so it is impractical to learn a high-level representation from dynamic textures directly with SFA. In order to learn a robust low-level feature that copes with the complexity of dynamic textures, we propose manifold regularized SFA (MR-SFA), which explores the neighbor relationship of the initial state of each temporal transition and retains the locality of their variations. The learned features are therefore not only slowly varying but also partly predictable. MR-SFA for dynamic texture recognition proceeds in the following steps: 1) learning feature extraction functions as convolution filters by MR-SFA, 2) extracting local features by convolution and pooling, and 3) employing Fisher vectors to form a video-level representation for classification. Experimental results on dynamic texture and dynamic scene recognition datasets validate the effectiveness of the proposed approach.
A linear dynamical systems (LDS) approach to dynamic texture recognition was proposed under the assumption that dynamic textures are stationary stochastic processes @cite_22 . LDS is a statistical generative model, and it can also be used for dynamic texture synthesis @cite_30 . Recognition is performed by comparing the parameters of the fitted LDSs. Kernel methods and distance learning approaches were then proposed to improve this comparison @cite_47 @cite_17 ; however, their results are still limited by LDS-based features, which cannot handle changes in viewpoint, scale, or other imaging conditions. A bag-of-words model built on LDS features was proposed to improve conventional LDS-based approaches @cite_46 , and the bag-of-systems tree was subsequently proposed for better efficiency @cite_37 . Extreme learning machine (ELM) was applied to construct a codebook of LDS features while preserving the spatial and temporal characteristics of dynamic textures @cite_18 . A hierarchical expectation-maximization algorithm was proposed to cluster dynamic textures using LDS features @cite_48 , and mixtures of LDSs were exploited for modeling, clustering, and segmenting dynamic textures @cite_31 . Although the LDS model is reasonable and intuitive, it struggles with the complex temporal variations of real sequences.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_18", "@cite_31", "@cite_22", "@cite_46", "@cite_48", "@cite_47", "@cite_17" ], "mid": [ "2097769097", "2008082651", "18637838", "2162616721", "", "1992960277", "2095463701", "2142172505", "2146966357" ], "abstract": [ "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the reconstructed HR details using dynamic texture synthesis (DTS). Most existing multiframe-based video superresolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate subpixel motion estimation between frames in an LR video. To achieve high-quality reconstruction of HR details for an LR video, we propose a texture-synthesis (TS)-based video SR method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a temporally coherent way, which effectively addresses the temporal incoherence problem caused by traditional TS-based image SR methods. To further reduce the complexity of the proposed method, our method only performs the TS-based SR on a set of key frames, while the HR details of the remaining nonkey frames are simply predicted using the bidirectional overlapped block motion compensation. After all frames are upscaled, the proposed DTS-SR is applied to maintain the temporal coherence in the HR video. Experimental results demonstrate that the proposed method achieves significant subjective and objective visual quality improvement over state-of-the-art video SR methods.", "The bag-of-systems (BoS) representation is a descriptor of motion in a video, where dynamic texture (DT) codewords represent the typical motion patterns in spatio-temporal patches extracted from the video. The efficacy of the BoS descriptor depends on the richness of the codebook, which depends on the number of codewords in the codebook. 
However, for even modest sized codebooks, mapping videos onto the codebook results in a heavy computational load. In this paper we propose the BoS Tree, which constructs a bottom-up hierarchy of codewords that enables efficient mapping of videos to the BoS codebook. By leveraging the tree structure to efficiently index the codewords, the BoS Tree allows for fast look-ups in the codebook and enables the practical use of larger, richer codebooks. We demonstrate the effectiveness of BoS Trees on classification of four video datasets, as well as on annotation of a video dataset and a music dataset. Finally, we show that, although the fast look-ups of BoS Tree result in different descriptors than BoS for the same video, the overall distance (and kernel) matrices are highly correlated resulting in similar classification performance.", "Recognition of complex dynamic texture is a difficult task and captures the attention of the computer vision community for several decades. Essentially the dynamic texture recognition is a multi-class classification problem that has become a real challenge for computer vision and machine learning techniques. Due to the reason that the dynamic textures lie in non-Euclidean manifold, existing classifier such as extreme learning machine cannot effectively deal with this problem. In this paper, we propose a new approach to tackle the dynamic texture recognition problem. First, we utilize the affinity propagation clustering technology to design a codebook, and then construct a soft coding feature to represent the whole dynamic texture sequence. This new coding strategy preserves spatial and temporal characteristics of dynamic texture. Finally, by evaluating the proposed approach on the DynTex dataset, we show the effectiveness of the proposed strategy.", "A dynamic texture is a spatio-temporal generative model for video, which represents video sequences as observations from a linear dynamical system. 
This work studies the mixture of dynamic textures, a statistical model for an ensemble of video sequences that is sampled from a finite collection of visual processes, each of which is a dynamic texture. An expectation-maximization (EM) algorithm is derived for learning the parameters of the model, and the model is related to previous works in linear systems, machine learning, time-series clustering, control theory, and computer vision. Through experimentation, it is shown that the mixture of dynamic textures is a suitable representation for both the appearance and dynamics of a variety of visual processes that have traditionally been challenging for computer vision (for example, fire, steam, water, vehicle and pedestrian traffic, and so forth). When compared with state-of-the-art methods in motion segmentation, including both temporal texture methods and traditional representations (for example, optical flow or other localized motion representations), the mixture of dynamic textures achieves superior performance in the problems of clustering and segmenting video of such processes.", "", "We consider the problem of categorizing video sequences of dynamic textures, i.e., nonrigid dynamical objects such as fire, water, steam, flags, etc. This problem is extremely challenging because the shape and appearance of a dynamic texture continuously change as a function of time. State-of-the-art dynamic texture categorization methods have been successful at classifying videos taken from the same viewpoint and scale by using a Linear Dynamical System (LDS) to model each video, and using distances or kernels in the space of LDSs to classify the videos. However, these methods perform poorly when the video sequences are taken under a different viewpoint or scale. In this paper, we propose a novel dynamic texture categorization framework that can handle such changes. 
We model each video sequence with a collection of LDSs, each one describing a small spatiotemporal patch extracted from the video. This Bag-of-Systems (BoS) representation is analogous to the Bag-of-Features (BoF) representation for object recognition, except that we use LDSs as feature descriptors. This choice poses several technical challenges in adopting the traditional BoF approach. Most notably, the space of LDSs is not euclidean; hence, novel methods for clustering LDSs and computing codewords of LDSs need to be developed. We propose a framework that makes use of nonlinear dimensionality reduction and clustering techniques combined with the Martin distance for LDSs to tackle these issues. Our experiments compare the proposed BoS approach to existing dynamic texture categorization methods and show that it can be used for recognizing dynamic textures in challenging scenarios which could not be handled by existing methods.", "Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. 
Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.", "We present a framework for the classification of visual processes that are best modeled with spatio-temporal autoregressive models. The new framework combines the modeling power of a family of models known as dynamic textures and the generalization guarantees, for classification, of the support vector machine classifier. This combination is achieved by the derivation of a new probabilistic kernel based on the Kullback-Leibier divergence (KL) between Gauss-Markov processes. In particular, we derive the KL-kernel for dynamic textures in both 1) the image space, which describes both the motion and appearance components of the spatio-temporal process, and 2) the hidden state space, which describes the temporal component alone. Together, the two kernels cover a large variety of video classification problems, including the cases where classes can differ in both appearance and motion and the cases where appearance is similar for all classes and only motion is discriminant. Experimental evaluation on two databases shows that the new classifier achieves superior performance over existing solutions.", "The range space of dynamic textures spans spatiotemporal phenomena that vary along three fundamental dimensions: spatial texture, spatial texture layout, and dynamics. By describing each dimension with appropriate spatial or temporal features and by equipping it with a suitable distance measure, elementary distances (one for each dimension) between dynamic texture sequences can be computed. In this paper, we address the problem of dynamic texture (DT) recognition by learning linear combinations of these elementary distances. 
By learning weights to these distances, we shed light on how \"salient\" (in a discriminative manner) each DT dimension is in representing classes of dynamic textures. To do this, we propose an efficient maximum margin distance learning (MMDL) method based on the Pegasos algorithm [1], for both classindependent and class-dependent weight learning. In contrast to popular MMDL methods, which enforce restrictive distance constraints and have a computational complexity that is cubic in the number of training samples, we show that our method, called DL-PEGASOS, can handle more general distance constraints with a computational complexity that can be made linear. When class dependent weights are learned, we show that, for certain classes of DTs, spatial texture features are dominantly \"salient\", while for other classes, this \"saliency\" lies in their temporal features. Furthermore, DL-PEGASOS outperforms state-of-the-art recognition methods on the UCLA benchmark DT dataset. By learning class independent weights, we show that this benchmark does not offer much variety along the three DT dimensions, thus, motivating the proposal of a new DT dataset, called DynTex++." ] }
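The LDS model behind the approaches above represents a video as observations of a linear state process, x_{t+1} = A x_t + v_t, y_t = C x_t + w_t, with recognition done by comparing fitted parameters. A minimal scalar sketch (pure Python; assuming, as a simplification of this example, that the state is observed directly, i.e. C = 1, so the transition parameter can be estimated by least squares):

```python
import random

def simulate_lds(a, c, n, x0=1.0, noise=0.05, seed=0):
    """Generate y_t = c * x_t with x_{t+1} = a * x_t + v_t (scalar LDS)."""
    rng = random.Random(seed)
    x, ys = x0, []
    for _ in range(n):
        ys.append(c * x)
        x = a * x + rng.gauss(0.0, noise)
    return ys

def fit_transition(ys):
    """Least-squares estimate of the transition parameter a from
    consecutive observations (assumes c = 1)."""
    num = sum(ys[t] * ys[t + 1] for t in range(len(ys) - 1))
    den = sum(y * y for y in ys[:-1])
    return num / den
```

Comparing two sequences by the distance between their fitted parameters is the crude one-dimensional analogue of the LDS parameter comparison described above; real systems use multivariate A and C matrices fitted via SVD, and distances such as the Martin distance.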
1706.03015
2626325227
Local features have been successfully applied to dynamic texture recognition. Local binary patterns on three orthogonal planes (LBP-TOP) were proposed for dynamic texture and facial expression recognition @cite_42 . Instead of processing the entire video volume, this approach extracts features from three orthogonal planes of the video cube. LBP-TOP has been generalized to the tensor orthogonal LBP for micro-expression recognition @cite_3 . Similarly, multiscale binarized statistical image features on three orthogonal planes (MBSIF-TOP) were proposed, using binarized responses of filters learned by independent component analysis on each plane @cite_40 . By capturing the direction of natural flows, the spatiotemporal directional number transitional graph (DNG) was proposed to encode the spatial structure and motion of each local region @cite_10 . Although these approaches work well, they discard a large amount of spatio-temporal information.
{ "cite_N": [ "@cite_40", "@cite_42", "@cite_10", "@cite_3" ], "mid": [ "2002195055", "2139916508", "2150515037", "2093033615" ], "abstract": [ "A spatio-temporal descriptor for representation and recognition of time-varying textures is proposed (binarized statis- tical image features on three orthogonal planes (BSIF-TOP)) in this paper. The descriptor, similar in spirit to the well known local binary patterns on three orthogonal planes approach, estimates histograms of binary coded image sequences on three orthogonal planes corresponding to spatial spatio-temporal dimensions. However, unlike some other methods which generate the code in a heuristic fashion, binary code generation in the BSIF-TOP approach is realized by filtering operations on different regions of spatial spatio-temporal support and by binarizing the filter responses. The filters are learnt via independent component analysis on each of three planes after preprocessing using a whitening transformation. By extending the BSIF-TOP descriptor to a multiresolution scheme, the descriptor is able to capture the spatio-temporal content of an image sequence at multiple scales, improving its representation capacity. In the evaluations on the UCLA, Dyntex, and Dyntex dynamic texture databases, the proposed method achieves very good performance compared to existing approaches.", "Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. 
To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation", "Spatiotemporal image descriptors are gaining attention in the image research community for better representation of dynamic textures. In this paper, we introduce a dynamic-micro-texture descriptor, i.e., spatiotemporal directional number transitional graph (DNG), which describes both the spatial structure and motion of each local neighborhood by capturing the direction of natural flow in the temporal domain. We use the structure of the local neighborhood, given by its principal directions, and compute the transition of such directions between frames. Moreover, we present the statistics of the direction transitions in a transitional graph, which acts as a signature for a given spatiotemporal region in the dynamic texture. Furthermore, we create a sequence descriptor by dividing the spatiotemporal volume into several regions, computing a transitional graph for each of them, and represent the sequence as a set of graphs. Our results validate the robustness of the proposed descriptor in different scenarios for expression recognition and dynamic texture analysis.", "Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. 
Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimension array. The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves a higher accuracy than does that of RGB. In addition, we define a set of regions of interests (ROIs) based on the facial action coding system and calculated the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray." ] }
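The LBP building block behind LBP-TOP @cite_42 can be sketched on a single plane; LBP-TOP then concatenates histograms computed this way on the XY, XT, and YT planes of the video cube. The function names below are this example's own, and it uses the basic 8-neighbour, radius-1 pattern:

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code at pixel (r, c): each
    neighbour >= centre contributes one bit."""
    center = img[r][c]
    # neighbours clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offs):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over all interior pixels of one plane.
    LBP-TOP applies this to the three orthogonal planes of a video and
    concatenates the three histograms into one descriptor."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

Because each code depends only on sign comparisons with the centre pixel, the descriptor is robust to monotonic grey-scale changes, which is one of the advantages the LBP-TOP paper emphasizes.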
1706.03015
2626325227
Several approaches have been proposed to utilize spatio-temporal information more fully. Spatio-temporal fractal analysis was proposed with a dynamic fractal spectrum (DFS) comprising both volumetric and multi-slice components @cite_24 . Space-time orientation distributions generated by 3D Gaussian derivative filters were used for dynamic texture recognition @cite_26 @cite_7 and were successfully extended to bag-of-words models for dynamic scene recognition @cite_34 . Although both space and time are considered, the performance of these approaches is still affected by the complexity of spatio-temporal variations. Recently, a high-order hidden Markov model was employed to model dynamic textures @cite_33 . A dynamic shape and appearance model was proposed that learns a statistical model of the variability directly with a Gauss-Markov model @cite_19 . A motion estimation approach based on locally and globally varying models was proposed to estimate optical flow in dynamic texture videos @cite_12 . Beyond the pixel domain, a wavelet-domain multi-fractal analysis for dynamic texture recognition was proposed, which achieves good results even when simply using frame averages @cite_36 .
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_7", "@cite_36", "@cite_24", "@cite_19", "@cite_34", "@cite_12" ], "mid": [ "2071524685", "", "2141859737", "2082265779", "2006963646", "2114858244", "2006656585", "1484247901" ], "abstract": [ "Natural scene classification is a fundamental challenge in computer vision. By far, the majority of studies have limited their scope to scenes from single image stills and thereby ignore potentially informative temporal cues. The current paper is concerned with determining the degree of performance gain in considering short videos for recognizing natural scenes. Towards this end, the impact of multiscale orientation measurements on scene classification is systematically investigated, as related to: (i) spatial appearance, (ii) temporal dynamics and (iii) joint spatial appearance and dynamics. These measurements in visual space, x-y, and spacetime, x-y-t, are recovered by a bank of spatiotemporal oriented energy filters. In addition, a new data set is introduced that contains 420 image sequences spanning fourteen scene categories, with temporal scene information due to objects and surfaces decoupled from camera-induced ones. This data set is used to evaluate classification performance of the various orientation-related representations, as well as state-of-the-art alternatives. It is shown that a notable performance increase is realized by spatiotemporal approaches in comparison to purely spatial or purely temporal methods.", "", "This paper is concerned with the representation and recognition of the observed dynamics (i.e., excluding purely spatial appearance cues) of spacetime texture based on a spatiotemporal orientation analysis. 
The term “spacetime texture” is taken to refer to patterns in visual spacetime, (x,y,t), that primarily are characterized by the aggregate dynamic properties of elements or local measurements accumulated over a region of spatiotemporal support, rather than in terms of the dynamics of individual constituents. Examples include image sequences of natural processes that exhibit stochastic dynamics (e.g., fire, water, and windblown vegetation) as well as images of simpler dynamics when analyzed in terms of aggregate region properties (e.g., uniform motion of elements in imagery, such as pedestrians and vehicular traffic). Spacetime texture representation and recognition is important as it provides an early means of capturing the structure of an ensuing image stream in a meaningful fashion. Toward such ends, a novel approach to spacetime texture representation and an associated recognition method are described based on distributions (histograms) of spacetime orientation structure. Empirical evaluation on both standard and original image data sets shows the promise of the approach, including significant improvement over alternative state-of-the-art approaches in recognizing the same pattern from different viewpoints.", "In this paper, we propose a new texture descriptor for both static and dynamic textures. The new descriptor is built on the wavelet-based spatial-frequency analysis of two complementary wavelet pyramids: standard multiscale and wavelet leader. These wavelet pyramids essentially capture the local texture responses in multiple high-pass channels in a multiscale and multiorientation fashion, in which there exists a strong power-law relationship for natural images. Such a power-law relationship is characterized by the so-called multifractal analysis. In addition, two more techniques, scale normalization and multiorientation image averaging, are introduced to further improve the robustness of the proposed descriptor. 
Combining these techniques, the proposed descriptor enjoys both high discriminative power and robustness against many environmental changes. We apply the descriptor for classifying both static and dynamic textures. Our method has demonstrated excellent performance in comparison with the state-of-the-art approaches in several public benchmark datasets.", "The large-scale images and videos are one kind of the main source of big data. Dynamic texture (DT) is essential for understanding the video sequences with spatio-temporal similarities. This paper presents a powerful tool called dynamic fractal analysis to DT description and classification, which integrates rich description of DT with strong robustness to environmental changes. The proposed dynamic fractal spectrum (DFS) for DT sequences is composed of two components. The first one is a volumetric dynamic fractal spectrum component (V-DFS) that captures the stochastic self-similarities of DT sequences by treating them as 3D volumes; the second one is a multi-slice dynamic fractal spectrum component (S-DFS) that encodes fractal structures of repetitive DT patterns on 2D slices along different views of the 3D volume. To fully exploit various types of dynamic patterns in DT, five measurements of DT pixels are collected for the analysis on DT sequences from different perspectives. We evaluated our method on four publicly available benchmark datasets. All the experimental results have demonstrated the excellent performance of our method in comparison with state-of-the-art approaches. HighlightsThe dynamic multi-fractal analysis is developed for DT description.Our method is very discriminative and robust to environmental changes.A computational acceleration scheme is provided for the proposed descriptor.Our method exhibits excellent performance on four benchmark datasets.", "We propose a model of the joint variation of shape and appearance of portions of an image sequence. 
The model is conditionally linear, and can be thought of as an extension of active appearance models to exploit the temporal correlation of adjacent image frames. Inference of the model parameters can be performed efficiently using established numerical optimization techniques borrowed from finite-element analysis and system identification techniques", "This paper presents a unified bag of visual word (BoW) framework for dynamic scene recognition. The approach builds on primitive features that uniformly capture spatial and temporal orientation structure of the imagery (e.g., video), as extracted via application of a bank of spatiotemporally oriented filters. Various feature encoding techniques are investigated to abstract the primitives to an intermediate representation that is best suited to dynamic scene representation. Further, a novel approach to adaptive pooling of the encoded features is presented that captures spatial layout of the scene even while being robust to situations where camera motion and scene dynamics are confounded. The resulting overall approach has been evaluated on two standard, publically available dynamic scene datasets. The results show that in comparison to a representative set of alternatives, the proposed approach outperforms the previous state-of-the-art in classification accuracy by 10 .", "Motion estimation, i.e., optical flow, of fluid-like and dynamic texture (DT) images videos is an important challenge, particularly for understanding outdoor scene changes created by objects and or natural phenomena. Most optical flow models use smoothness-based constraints using terms such as fluidity from the fluid dynamics framework, with constraints typically being incompressibility and low Reynolds numbers ( @math ). Such constraints are assumed to impede the clear capture of locally abrupt image intensity and motion changes, i.e., discontinuities and or high @math over time. 
This paper exploits novel physics-based optical flow models constraints for both smooth and discontinuous changes using a wave generation theory that imposes no constraint on @math or compressibility of an image sequence. Iterated two-step optimization between local and global optimization is also used: first, an objective function with varying multiple sine cosine bases with new local image properties, i.e., orientation and frequency, and with a novel transformed dispersion relationship equation are used. Second, the statistical property of image features is used to globally optimize model parameters. Experiments on synthetic and real DT image sequences with smooth and discontinuous motions demonstrate that the proposed locally and globally varying models outperform the previous optical flow models." ] }
1706.03015
2626325227
Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams, but recognizing dynamic texture is challenging due to the complex temporal variations. In this paper, we present a novel approach stemming from slow feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly varying features from fast varying signals. Fortunately, SFA is capable of extracting invariant representations from dynamic textures. However, complex temporal variations require high-level semantic representations to fully achieve temporal slowness, and thus it is impractical to learn a high-level representation from dynamic textures directly by SFA. In order to learn a robust low-level feature to resolve the complexity of dynamic textures, we propose manifold regularized SFA (MR-SFA) by exploring the neighbor relationship of the initial state of each temporal transition and retaining the locality of their variations. Therefore, the learned features are not only slowly varying, but also partly predictable. MR-SFA for dynamic texture recognition is proposed in the following steps: 1) learning feature extraction functions as convolution filters by MR-SFA, 2) extracting local features by convolution and pooling, and 3) employing Fisher vectors to form a video-level representation for classification. Experimental results on dynamic texture and dynamic scene recognition datasets validate the effectiveness of the proposed approach.
High-level features have also been exploited for dynamic texture recognition. Deep learning has been successfully applied to general object recognition and detection, and it has also been applied to dynamic texture recognition. A 3D convolutional neural network (CNN) was trained on a very large number of videos @cite_25 . This 3D CNN has been used as a general video feature extractor and achieved a good result on dynamic scene recognition. Many approaches use a pre-trained CNN as a high-level feature extractor @cite_16 @cite_32 @cite_8 . These approaches outperform most existing dynamic texture recognition approaches. Besides CNNs, a complex network was proposed to extract features from dynamic textures directly @cite_21 , and a deep belief network was used to extract features from conventional features @cite_29 . In contrast to all of the above-mentioned approaches that are based on deeply learned networks, MR-SFA extracts features without using deep networks.
{ "cite_N": [ "@cite_8", "@cite_29", "@cite_21", "@cite_32", "@cite_16", "@cite_25" ], "mid": [ "2950366620", "2002128052", "2042248823", "1946072890", "2951619115", "2952633803" ], "abstract": [ "State-of-the-art image-set matching techniques typically implicitly model each image-set with a Gaussian distribution. Here, we propose to go beyond these representations and model image-sets as probability distribution functions (PDFs) using kernel density estimators. To compare and match image-sets, we exploit Csiszar f-divergences, which bear strong connections to the geodesic distance defined on the space of PDFs, i.e., the statistical manifold. Furthermore, we introduce valid positive definite kernels on the statistical manifolds, which let us make use of more powerful classification schemes to match image-sets. Finally, we introduce a supervised dimensionality reduction technique that learns a latent space where f-divergences reflect the class labels of the data. Our experiments on diverse problems, such as video-based face recognition and dynamic texture classification, evidence the benefits of our approach over the state-of-the-art image-set matching methods.", "In this paper, a novel framework is proposed for dynamic textures (DTs) recognition by learning a high level feature using deep neural network (DNN). The insight behind the method is that a DT appearing in different videos should share similar features, which can be learned for better recognition performance. Unlike many prior works only focus on low level or middle level features, we propose a novel high level feature learning method using DNN. Our goal is to construct a compact and discriminative semantic feature. The conventional bag of features approach using k-means is not semantically meaningful since the clustering criterion is based on appearance similarity. The proposed framework can effectively overcome the problem by capturing the semantic relations of the middle level by DNN. 
Extensive experiments with qualitative and quantitative results demonstrate the efficacy of our approach.", "Abstract In this paper, we propose a novel approach for dynamic texture representation based on complex networks. In the proposed approach, each pixel of the video is mapped into a node of the complex network. Initially, a regular complex network is obtained by connecting two nodes if the Euclidean distance between their related pixels is equal or less than a given radius. For each connection, a weight is defined by the difference of the pixel intensities. Given the regular complex network, a function is applied to remove connections whose weight is equal to or below a given threshold. Finally, a feature vector is obtained by calculating the spatial and temporal average degree for networks transformed by different values of threshold and radius. The number of connections of pixels from the same frame and from different frames, respectively, gives the spatial and temporal degrees. Experimental results using synthetic and real dynamic textures have demonstrated the effectiveness of the proposed approach.", "The task of classifying videos of natural dynamic scenes into appropriate classes has gained lot of attention in recent years. The problem especially becomes challenging when the camera used to capture the video is dynamic. In this paper, we analyse the performance of statistical aggregation (SA) techniques on various pre-trained convolutional neural network(CNN) models to address this problem. The proposed approach works by extracting CNN activation features for a number of frames in a video and then uses an aggregation scheme in order to obtain a robust feature descriptor for the video. We show through results that the proposed approach performs better than the-state-of-the arts for the Maryland and YUPenn dataset. 
The final descriptor obtained is powerful enough to distinguish among dynamic scenes and is even capable of addressing the scenario where the camera motion is dominant and the scene dynamics are complex. Further, this paper shows an extensive study on the performance of various aggregation methods and their combinations. We compare the proposed approach with other dynamic scene classification algorithms on two publicly available datasets - Maryland and YUPenn to demonstrate the superior performance of the proposed approach.", "Dynamic texture and scene classification are two fundamental problems in understanding natural video content. Extracting robust and effective features is a crucial step towards solving these problems. However the existing approaches suffer from the sensitivity to either varying illumination, or viewpoint changing, or even camera motion, and or the lack of spatial information. Inspired by the success of deep structures in image classification, we attempt to leverage a deep structure to extract feature for dynamic texture and scene classification. To tackle with the challenges in training a deep structure, we propose to transfer some prior knowledge from image domain to video domain. To be specific, we propose to apply a well-trained Convolutional Neural Network (ConvNet) as a mid-level feature extractor to extract features from each frame, and then form a representation of a video by concatenating the first and the second order statistics over the mid-level features. We term this two-level feature extraction scheme as a Transferred ConvNet Feature (TCoF). Moreover we explore two different implementations of the TCoF scheme, i.e., the TCoF and the TCoF, in which the mean-removed frames and the difference between two adjacent frames are used as the inputs of the ConvNet, respectively. 
We evaluate systematically the proposed spatial TCoF and the temporal TCoF schemes on three benchmark data sets, including DynTex, YUPENN, and Maryland, and demonstrate that the proposed approach yields superior performance.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
1706.03015
2626325227
Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams, but recognizing dynamic texture is challenging due to the complex temporal variations. In this paper, we present a novel approach stemming from slow feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly varying features from fast varying signals. Fortunately, SFA is capable of extracting invariant representations from dynamic textures. However, complex temporal variations require high-level semantic representations to fully achieve temporal slowness, and thus it is impractical to learn a high-level representation from dynamic textures directly by SFA. In order to learn a robust low-level feature to resolve the complexity of dynamic textures, we propose manifold regularized SFA (MR-SFA) by exploring the neighbor relationship of the initial state of each temporal transition and retaining the locality of their variations. Therefore, the learned features are not only slowly varying, but also partly predictable. MR-SFA for dynamic texture recognition is proposed in the following steps: 1) learning feature extraction functions as convolution filters by MR-SFA, 2) extracting local features by convolution and pooling, and 3) employing Fisher vectors to form a video-level representation for classification. Experimental results on dynamic texture and dynamic scene recognition datasets validate the effectiveness of the proposed approach.
Slow feature analysis (SFA) was proposed as an unsupervised learning approach @cite_0 . Inspired by the temporal slowness principle, SFA extracts slowly varying features from fast varying signals. It has been proven that the properties of feature extraction functions learned by SFA are similar to those of complex cells in the primary visual cortex (V1) of the brain @cite_11 . SFA has been successfully applied to tasks such as human action recognition @cite_2 @cite_27 , dynamic scene recognition @cite_44 , and blind source separation @cite_20 @cite_38 .
{ "cite_N": [ "@cite_38", "@cite_0", "@cite_44", "@cite_27", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "2169314529", "2146444479", "", "1897029691", "1977814411", "", "2148553367" ], "abstract": [ "We present and test an extension of slow feature analysis as a novel approach to nonlinear blind source separation. The algorithm relies on temporal correlations and iteratively reconstructs a set of statistically independent sources from arbitrary nonlinear instantaneous mixtures. Simulations show that it is able to invert a complicated nonlinear mixture of two audio signals with a high reliability. The algorithm is based on a mathematical analysis of slow feature analysis for the case of input data that are generated from statistically independent sources.", "Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. 
Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.", "", "Slow feature analysis (SFA) extracts slowly varying signals from input data and has been used to model complex cells in the primary visual cortex (V1). It transmits information to both ventral and dorsal pathways to process appearance and motion information, respectively. However, SFA only uses slowly varying features for local feature extraction, because they represent appearance information more effectively than motion information. To better utilize temporal information, we propose temporal variance analysis (TVA) as a generalization of SFA. TVA learns a linear transformation matrix that projects multidimensional temporal data to temporal components with temporal variance. Inspired by the function of V1, we learn receptive fields by TVA and apply convolution and pooling to extract local features. Embedded in the improved dense trajectory framework, TVA for action recognition is proposed to: 1) extract appearance and motion features from gray using slow and fast filters, respectively; 2) extract additional motion features using slow filters from horizontal and vertical optical flows; and 3) separately encode extracted local features with different temporal variances and concatenate all the encoded features as final features. We evaluate the proposed TVA features on several challenging data sets and show that both slow and fast features are useful in the low-level feature extraction. Experimental results show that the proposed TVA features outperform the conventional histogram-based features, and excellent results can be achieved by combining all TVA features.", "Slow Feature Analysis (SFA) extracts slowly varying features from a quickly varying input signal [1]. 
It has been successfully applied to modeling the visual receptive fields of the cortical neurons. Sufficient experimental results in neuroscience suggest that the temporal slowness principle is a general learning principle in visual perception. In this paper, we introduce the SFA framework to the problem of human action recognition by incorporating the discriminative information with SFA learning and considering the spatial relationship of body parts. In particular, we consider four kinds of SFA learning strategies, including the original unsupervised SFA (U-SFA), the supervised SFA (S-SFA), the discriminative SFA (D-SFA), and the spatial discriminative SFA (SD--SFA), to extract slow feature functions from a large amount of training cuboids which are obtained by random sampling in motion boundaries. Afterward, to represent action sequences, the squared first order temporal derivatives are accumulated over all transformed cuboids into one feature vector, which is termed the Accumulated Squared Derivative (ASD) feature. The ASD feature encodes the statistical distribution of slow features in an action sequence. Finally, a linear support vector machine (SVM) is trained to classify actions represented by ASD features. We conduct extensive experiments, including two sets of control experiments, two sets of large scale experiments on the KTH and Weizmann databases, and two sets of experiments on the CASIA and UT-interaction databases, to demonstrate the effectiveness of SFA for human action recognition. 
Experimental results suggest that the SFA-based approach (1) is able to extract useful motion patterns and improves the recognition performance, (2) requires less intermediate processing steps but achieves comparable or even better performance, and (3) has good potential to recognize complex multiperson activities.", "", "In this study we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals from the input data. We find a good qualitative and quantitative match between the set of learned functions trained on image sequences and the population of complex cells in the primary visual cortex (V1). The functions show many properties found also experimentally in complex cells, such as direction selectivity, non-orthogonal inhibition, end-inhibition, and side-inhibition. Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties." ] }
1706.03015
2626325227
Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams, but recognizing dynamic texture is challenging due to the complex temporal variations. In this paper, we present a novel approach stemming from slow feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly varying features from fast varying signals. Fortunately, SFA is capable of extracting invariant representations from dynamic textures. However, complex temporal variations require high-level semantic representations to fully achieve temporal slowness, and thus it is impractical to learn a high-level representation from dynamic textures directly by SFA. In order to learn a robust low-level feature to resolve the complexity of dynamic textures, we propose manifold regularized SFA (MR-SFA) by exploring the neighbor relationship of the initial state of each temporal transition and retaining the locality of their variations. Therefore, the learned features are not only slowly varying, but also partly predictable. MR-SFA for dynamic texture recognition is proposed in the following steps: 1) learning feature extraction functions as convolution filters by MR-SFA, 2) extracting local features by convolution and pooling, and 3) employing Fisher vectors to form a video-level representation for classification. Experimental results on dynamic texture and dynamic scene recognition datasets validate the effectiveness of the proposed approach.
Many other improvements to SFA have also been proposed. A regularized sparse kernel SFA was proposed to generate feature spaces for linear algorithms @cite_6 . A change detection algorithm based on an online kernel SFA was proposed for video segmentation and tracking @cite_5 . Although kernel methods can handle nonlinear data, they introduce more noise and computational complexity than linear approaches. Minh and Wiskott @cite_20 proposed a multivariate SFA for blind source separation. A probabilistic SFA was proposed for facial behavior analysis @cite_15 . Slow feature discriminant analysis (SFDA) was proposed as a supervised learning approach that maximizes the inter-class temporal variance while minimizing the intra-class temporal variance @cite_39 . These approaches cannot be applied to dynamic texture recognition directly.
{ "cite_N": [ "@cite_6", "@cite_39", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "23058082", "2115744897", "2047037844", "2342036932", "" ], "abstract": [ "This paper develops a kernelized slow feature analysis (SFA) algorithm. SFA is an unsupervised learning method to extract features which encode latent variables from time series. Generative relationships are usually complex, and current algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to provide a powerful function class for large data sets. Sparsity is achieved by a novel matching pursuit approach that can be applied to other tasks as well. For small but complex data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization to the SFA objective. Versatility and performance of our method are demonstrated on audio and video data sets.", "Slow Feature Analysis (SFA) is an unsupervised algorithm by extracting the slowly varying features from time series and has been used to pattern recognition successfully. Based on SFA, this paper develops a new algorithm, Slow Feature Discriminant Analysis (SFDA), which can maximize the temporal variation of between-class time series, and minimize the temporal variation of within-class time series simultaneously. Due to adoption of discrimination power, the performance on pattern recognition is improved compared to SFA. The experiments results on MNIST digit handwritten database also show that the proposed algorithm is in particular attractive.", "Slow feature analysis (SFA) is a dimensionality reduction technique which has been linked to how visual brain cells work. In recent years, the SFA was adopted for computer vision tasks. In this paper, we propose an exact kernel SFA (KSFA) framework for positive definite and indefinite kernels in Krein space. 
We then formulate an online KSFA which employs a reduced set expansion. Finally, by utilizing a special kind of kernel family, we formulate exact online KSFA for which no reduced set is required. We apply the proposed system to develop a SFA-based change detection algorithm for stream data. This framework is employed for temporal video segmentation and tracking. We test our setup on synthetic and real data streams. When combined with an online learning tracking system, the proposed change detection approach improves upon tracking setups that do not utilize change detection.", "A recently introduced latent feature learning technique for time-varying dynamic phenomena analysis is the so-called slow feature analysis (SFA). SFA is a deterministic component analysis technique for multidimensional sequences that, by minimizing the variance of the first-order time derivative approximation of the latent variables, finds uncorrelated projections that extract slowly varying features ordered by their temporal consistency and constancy. In this paper, we propose a number of extensions in both the deterministic and the probabilistic SFA optimization frameworks. In particular, we derive a novel deterministic SFA algorithm that is able to identify linear projections that extract the common slowest varying features of two or more sequences. In addition, we propose an expectation maximization (EM) algorithm to perform inference in a probabilistic formulation of SFA and similarly extend it in order to handle two and more time-varying data sequences. Moreover, we demonstrate that the probabilistic SFA (EM-SFA) algorithm that discovers the common slowest varying latent space of multiple sequences can be combined with dynamic time warping techniques for robust sequence time-alignment. The proposed SFA algorithms were applied for facial behavior analysis, demonstrating their usefulness and appropriateness for this task.", "" ] }
1706.02863
2623177501
In this paper, we share our experience in designing a convolutional network-based face detector that could handle faces of an extremely wide range of scales. We show that faces with different scales can be modeled through a specialized set of deep convolutional networks with different structures. These detectors can be seamlessly integrated into a single unified network that can be trained end-to-end. In contrast to existing deep models that are designed for wide scale range, our network does not require an image pyramid input and the model is of modest complexity. Our network, dubbed ScaleFace, achieves promising performance on WIDER FACE and FDDB datasets with practical runtime speed. Specifically, our method achieves 76.4% average precision on the challenging WIDER FACE dataset and a 96% recall rate on the FDDB dataset at 7 frames per second (fps) for a 900 * 1300 input image.
. Following the remarkable performance of deep convolutional networks on image classification @cite_22 and object detection @cite_0 , recent face detection studies @cite_7 @cite_13 @cite_30 @cite_18 @cite_17 also embrace deep learning for improved performance. These methods use deep convolutional networks as the backbone structure to learn highly discriminative representations from data and achieve impressive results on benchmark datasets such as FDDB and AFW. Among these methods, Faceness-Net @cite_17 , STN @cite_7 , and Grid-Loss @cite_18 are designed to detect faces under occlusions and large pose variations, while Cascade-CNN @cite_13 and its variants @cite_2 achieve a good trade-off between speed and accuracy. Meanwhile, the unsatisfactory performance of existing methods on recent benchmark datasets in object detection @cite_14 and face detection @cite_20 reveals a new challenge: detecting tiny objects in uncontrolled environments.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_0", "@cite_2", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "2417750831", "2949232997", "", "2117539524", "", "2102605133", "2473640056", "1934410531", "2963566548", "2950557924" ], "abstract": [ "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling. The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth (l_1 )-losses of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.", "Detection of partially occluded objects is a challenging computer vision problem. 
Standard Convolutional Neural Network (CNN) detectors fail if parts of the detection window are occluded, since not every sub-part of the window is discriminative on its own. To address this issue, we propose a novel loss layer for CNNs, named grid loss, which minimizes the error rate on sub-blocks of a convolution layer independently rather than over the whole feature map. This results in parts being more discriminative on their own, enabling the detector to recover if the detection window is partially occluded. By mapping our loss layer back to a regular fully connected layer, no additional computational cost is incurred at runtime compared to standard CNNs. We demonstrate our method for face detection on several public face detection benchmarks and show that our method outperforms regular CNNs, is suitable for realtime applications and achieves state-of-the-art performance.", "", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. 
The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. 
Experiments on face detection benchmarks verify the advantages of the joint training.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks.", "Face detection is one of the most studied topics in the computer vision community. Much of the progresses have been made by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset1, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. 
Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that worth to be further investigated.", "In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99 on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91 . Importantly, we consider finding faces from a new perspective through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variation, which are the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of DCN, our network can achieve practical runtime speed." ] }
1706.02863
2623177501
In this paper, we share our experience in designing a convolutional network-based face detector that could handle faces of an extremely wide range of scales. We show that faces with different scales can be modeled through a specialized set of deep convolutional networks with different structures. These detectors can be seamlessly integrated into a single unified network that can be trained end-to-end. In contrast to existing deep models that are designed for wide scale range, our network does not require an image pyramid input and the model is of modest complexity. Our network, dubbed ScaleFace, achieves promising performance on WIDER FACE and FDDB datasets with practical runtime speed. Specifically, our method achieves 76.4 average precision on the challenging WIDER FACE dataset and 96 recall rate on the FDDB dataset with 7 frames per second (fps) for 900 * 1300 input image.
There are unique and inherent challenges in multi-scale face detection that require special and systematic analysis. In this study, we try to detect faces over an extremely large range of scales, whose variance is much larger than in generic object detection, e.g., the target scale of object detection lies in [30-300] @cite_21 while that of face detection is [10-1000]. In addition, tiny faces usually appear very close to each other in a crowded scene. The design of appropriate receptive fields for different face scales thus becomes essential.
{ "cite_N": [ "@cite_21" ], "mid": [ "2193145675" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd." ] }
1706.02863
2623177501
In this paper, we share our experience in designing a convolutional network-based face detector that could handle faces of an extremely wide range of scales. We show that faces with different scales can be modeled through a specialized set of deep convolutional networks with different structures. These detectors can be seamlessly integrated into a single unified network that can be trained end-to-end. In contrast to existing deep models that are designed for wide scale range, our network does not require an image pyramid input and the model is of modest complexity. Our network, dubbed ScaleFace, achieves promising performance on WIDER FACE and FDDB datasets with practical runtime speed. Specifically, our method achieves 76.4 average precision on the challenging WIDER FACE dataset and 96 recall rate on the FDDB dataset with 7 frames per second (fps) for 900 * 1300 input image.
. A number of approaches @cite_6 have been proposed to address multi-scale face detection. Recent deep learning-based methods can be categorized into two classes: scale-invariant based methods @cite_13 @cite_29 and scale-variant based methods @cite_31 @cite_21 @cite_2 .
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_6", "@cite_2", "@cite_31", "@cite_13" ], "mid": [ "2438869444", "2193145675", "", "2473640056", "2951230065", "1934410531" ], "abstract": [ "The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A.", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. 
Code is available at https: github.com weiliu89 caffe tree ssd.", "", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. 
We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99 of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82 while prior art ranges from 29-64 ).", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks." ] }
1706.02863
2623177501
In this paper, we share our experience in designing a convolutional network-based face detector that could handle faces of an extremely wide range of scales. We show that faces with different scales can be modeled through a specialized set of deep convolutional networks with different structures. These detectors can be seamlessly integrated into a single unified network that can be trained end-to-end. In contrast to existing deep models that are designed for wide scale range, our network does not require an image pyramid input and the model is of modest complexity. Our network, dubbed ScaleFace, achieves promising performance on WIDER FACE and FDDB datasets with practical runtime speed. Specifically, our method achieves 76.4 average precision on the challenging WIDER FACE dataset and 96 recall rate on the FDDB dataset with 7 frames per second (fps) for 900 * 1300 input image.
(1) Scale-invariant based methods: The vast majority of face detection pipelines focus on learning scale-invariant representations. The seminal Faster-RCNN @cite_29 subscribes to this philosophy by extracting scale-invariant features through region-of-interest (ROI) pooling. Cascade-CNN @cite_13 normalizes the target object to a fixed scale and conducts multi-scale detection through an image pyramid. However, neither Faster-RCNN nor Cascade-CNN is specifically designed for finding faces across a wide range of scales. Specifically, the foreground and background ROIs of Faster-RCNN map to the same location on deep features, causing ambiguity for the classifier. Cascade-CNN is mainly formed by a set of three-layer CNNs, so its limited capacity prevents it from handling large appearance and scale variations at the same time.
{ "cite_N": [ "@cite_29", "@cite_13" ], "mid": [ "2438869444", "1934410531" ], "abstract": [ "The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A.", "In real-world face detection, large visual variations, such as those due to pose, expression, and lighting, demand an advanced discriminative model to accurately differentiate faces from the backgrounds. Consequently, effective models for the problem tend to be computationally prohibitive. To address these two conflicting challenges, we propose a cascade architecture built on convolutional neural networks (CNNs) with very powerful discriminative capability, while maintaining high performance. The proposed CNN cascade operates at multiple resolutions, quickly rejects the background regions in the fast low resolution stages, and carefully evaluates a small number of challenging candidates in the last high resolution stage. To improve localization effectiveness, and reduce the number of candidates at later stages, we introduce a CNN-based calibration stage after each of the detection stages in the cascade. The output of each calibration stage is used to adjust the detection window position for input to the subsequent stage. The proposed method runs at 14 FPS on a single CPU core for VGA-resolution images and 100 FPS using a GPU, and achieves state-of-the-art detection performance on two public face detection benchmarks." ] }
1706.02863
2623177501
In this paper, we share our experience in designing a convolutional network-based face detector that could handle faces of an extremely wide range of scales. We show that faces with different scales can be modeled through a specialized set of deep convolutional networks with different structures. These detectors can be seamlessly integrated into a single unified network that can be trained end-to-end. In contrast to existing deep models that are designed for wide scale range, our network does not require an image pyramid input and the model is of modest complexity. Our network, dubbed ScaleFace, achieves promising performance on WIDER FACE and FDDB datasets with practical runtime speed. Specifically, our method achieves 76.4 average precision on the challenging WIDER FACE dataset and 96 recall rate on the FDDB dataset with 7 frames per second (fps) for 900 * 1300 input image.
(2) Scale-variant based methods: In contrast to learning scale-invariant representations, Qin et al. @cite_2 propose a joint cascade network for learning scale-variant features. Samples from different scales are modeled separately by different networks, and the detection results are generated by merging predictions across networks. Similar to Cascade-CNN, the capacity of each individual network in the joint cascade network is insufficient to handle large scale and appearance variances. SSD @cite_21 performs object detection with scale-variant templates on deep features, essentially detecting objects of various scales at different layers of the network. Nevertheless, a direct application of SSD to small face detection still does not return satisfactory results (see Fig. ), since the scale-variant templates at early layers cannot cope well with large scale variance, while at later stages SSD suffers from the same overlapped-mapping problem as Faster-RCNN.
{ "cite_N": [ "@cite_21", "@cite_2" ], "mid": [ "2193145675", "2473640056" ], "abstract": [ "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. 
However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training." ] }
1706.03042
2963732333
Computer science marches towards energy-aware practices. This trend impacts not only the design of computer architectures, but also the design of programs. However, developers still lack affordable and accurate technology to measure energy consumption in computing systems. The goal of this paper is to mitigate such problem. To this end, we introduce JetsonLEAP, a framework that supports the implementation of energy-aware programs. JetsonLEAP consists of an embedded hardware, in our case, the Nvidia Tegra TK1 System-on-a-chip device, a circuit to control the flow of energy, of our own design, plus a library to instrument program parts. We discuss two different circuit setups. The most precise setup lets us reliably measure the energy spent by 225,000 instructions, the least precise, although more affordable setup, gives us a window of 975,000 instructions. To probe the precision of our system, we use it in tandem with a high-precision, high-cost acquisition system, and show that results do not differ in any significant way from those that we get using our simpler apparatus. Our entire infrastructure - board, power meter and both circuits - can be reproduced with about $500.00. To demonstrate the efficacy of our framework, we have used it to measure the energy consumed by programs running on ARM cores, on the GPU, and on a remote server. Furthermore, we have studied the impact of OpenACC directives on the energy efficiency of high-performance applications.
Much has been done recently to enable the reliable acquisition of power data from computing machinery. In this section we go over a few related works, focusing on the unique characteristics of our JetsonLEAP. Before we commence our discussion, we emphasize a point: much of the related literature uses energy models to derive metrics [ @cite_13 @cite_0 @cite_24 ]. Even though we do not contest the validity of these results, we are interested in direct energy probing. Thus, models, i.e., indirect estimation, are not part of this survey. Nevertheless, we believe that an infrastructure such as JetsonLEAP can be used to calibrate new analytical models.
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_24" ], "mid": [ "2105778948", "2106710314", "" ], "abstract": [ "The number of embedded systems is increasing and a remarkable percentage is designed as mobile applications. For the latter, energy consumption is a limiting factor because of today's battery capacities. Besides the processor, memory accesses consume a high amount of energy. The use of additional less power hungry memories like caches or scratchpads is thus common. Caches incorporate the hardware control logic for moving data in and out automatically. On the other hand, this logic requires chip area and energy. A scratchpad memory is much more energy efficient, but there is a need for software control of its content. In this paper, an algorithm integrated into a compiler is presented which analyses the application and selects program and data parts which are placed into the scratchpad. Comparisons against a cache solution show remarkable advantages between 12 and 43 in energy consumption for designs of the same memory size.", "Energy is of primary importance in wireless sensor networks. By being able to estimate the energy consumption of the sensor nodes, applications and routing protocols are able to make informed decisions that increase the lifetime of the sensor network. However, it is in general not possible to measure the energy consumption on popular sensor node platforms. In this paper, we present and evaluate a software-based on-line energy estimation mechanism that estimates the energy consumption of a sensor node. We evaluate the mechanism by comparing the estimated energy consumption with the lifetime of capacitor-powered sensor nodes. By implementing and evaluating the X-MAC protocol, we show how software-based on-line energy estimation can be used to empirically evaluate the energy efficiency of sensor network protocols.", "" ] }
1706.03148
2625398819
The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics. In this paper, we propose a suite of techniques to trim and improve it. First, we validate a hypothesis that, given a current sentence, inferring the previous and inferring the next sentence provide similar supervision power, therefore only one decoder for predicting the next sentence is preserved in our trimmed skip-thought model. Second, we present a connection layer between encoder and decoder to help the model to generalize better on semantic relatedness tasks. Third, we found that a good word embedding initialization is also essential for learning better sentence representations. We train our model unsupervised on a large corpus with contiguous sentences, and then evaluate the trained model on 7 supervised tasks, which includes semantic relatedness, paraphrase detection, and text classification benchmarks. We empirically show that, our proposed model is a faster, lighter-weight and equally powerful alternative to the original skip-thought model.
Previously, @cite_15 proposed the continuous bag-of-words (CBOW) model and the skip-gram model for distributed word representation learning. The main idea is to learn a word representation by exploiting the context information provided by the surrounding words. @cite_2 improved the skip-gram model, and empirically showed that additive composition of the learned word representations successfully captures contextual information of phrases and sentences, which is a strong baseline for NLP tasks. Similarly, @cite_17 proposed a method to learn a fixed-dimension vector for each sentence by predicting the words within the given sentence. However, after training, the representation for a new sentence is hard to derive, since it requires optimizing the sentence representation towards an objective.
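The additive-composition baseline mentioned above can be sketched in a few lines: a sentence representation is simply the (element-wise) average of its word vectors. This is an illustrative toy example, not the cited authors' implementation; the 3-dimensional embeddings below are made up for demonstration.

```python
# Toy word embeddings (hypothetical values, 3 dimensions for illustration).
embeddings = {
    "the":  [0.1, 0.0, 0.2],
    "cat":  [0.9, 0.3, 0.1],
    "sits": [0.2, 0.7, 0.5],
}

def sentence_vector(tokens, emb):
    """Additive composition: average the word vectors of the in-vocabulary
    tokens; returns None if no token is in the vocabulary."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

vec = sentence_vector(["the", "cat", "sits"], embeddings)
# vec is approximately [0.4, 0.33, 0.27]
```

Despite ignoring word order, this kind of bag-of-vectors average is the "strong baseline" the survey refers to, and unlike the paragraph-vector approach it yields a representation for a new sentence without any further optimization.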
{ "cite_N": [ "@cite_15", "@cite_17", "@cite_2" ], "mid": [ "1614298861", "2949547296", "2950133940" ], "abstract": [ "", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". 
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1706.03148
2625398819
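As an illustrative aside to the skip-gram model with negative sampling summarized above: the sketch below trains centre-word and context-word vectors on a toy corpus. The corpus, embedding size, window, learning rate and number of negatives are invented toy values, not anything from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus and made-up hyper-parameters, purely for illustration.
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window, lr = len(vocab), 16, 2, 0.05

W_in = rng.normal(scale=0.1, size=(V, D))   # centre-word vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch():
    """One pass of skip-gram with negative sampling; returns the summed loss."""
    loss = 0.0
    for pos, word in enumerate(corpus):
        c = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            targets = [(idx[corpus[pos + off]], 1.0)]             # observed pair
            targets += [(n, 0.0) for n in rng.integers(0, V, 3)]  # random negatives
            for t, label in targets:
                score = sigmoid(W_in[c] @ W_out[t])
                grad = score - label
                loss += -np.log(score + 1e-9) if label else -np.log(1.0 - score + 1e-9)
                g_in = grad * W_out[t].copy()   # cache before updating W_out
                W_out[t] -= lr * grad * W_in[c]
                W_in[c] -= lr * g_in
    return loss

losses = [train_epoch() for _ in range(20)]
```

Additive composition of the resulting rows of `W_in` is then the simple phrase representation the paragraph refers to.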
The skip-thought model has been proven to be effective at learning sentence representations and capturing sentence semantics. In this paper, we propose a suite of techniques to trim and improve it. First, we validate a hypothesis that, given a current sentence, inferring the previous and inferring the next sentence provide similar supervision power, therefore only one decoder for predicting the next sentence is preserved in our trimmed skip-thought model. Second, we present a connection layer between encoder and decoder to help the model to generalize better on semantic relatedness tasks. Third, we found that a good word embedding initialization is also essential for learning better sentence representations. We train our model unsupervised on a large corpus with contiguous sentences, and then evaluate the trained model on 7 supervised tasks, which includes semantic relatedness, paraphrase detection, and text classification benchmarks. We empirically show that, our proposed model is a faster, lighter-weight and equally powerful alternative to the original skip-thought model.
Instead of learning to reconstruct the sentences adjacent to the current sentence, @cite_12 proposed a model that learns to categorize manually defined relationships between two input sentences. The model encodes the two sentences into two representations, and a classifier on top of the representations judges 1) whether the two sentences are adjacent to each other, 2) whether they are in the correct order, and 3) whether the second sentence starts with a conjunction phrase. The proposed model runs faster than the skip-thought model, since it only contains an encoder and requires no decoder. However, only its result on the Microsoft paraphrase detection task is comparable to that of the skip-thought model; the results on other tasks are not as good.
{ "cite_N": [ "@cite_12" ], "mid": [ "2610858497" ], "abstract": [ "This work presents a novel objective function for the unsupervised training of neural network sentence encoders. It exploits signals from paragraph-level discourse coherence to train these models to understand text. Our objective is purely discriminative, allowing us to train models many times faster than was possible under prior methods, and it yields models which perform well in extrinsic evaluations." ] }
1706.02387
2623780535
Android is designed with a number of built-in security features such as app sandboxing and permission-based access controls. Android supports multiple communication methods for apps to cooperate. This creates a security risk of app collusion. For instance, a sandboxed app with permission to access sensitive data might leak that data to another sandboxed app with access to the internet. In this paper, we present a method to detect potential collusion between apps. First, we extract from apps all information about their accesses to protected resources and communications. Then we identify sets of apps that might be colluding by using rules in first order logic codified in Prolog. After these, more computationally demanding approaches like taint analysis can focus on the identified sets that show collusion potential. This "filtering" approach is validated against a dataset of manually crafted colluding apps. We also demonstrate that our tool scales by running it on a set of more than 50,000 apps collected in the wild. Our tool allowed us to detect a large set of real apps that used collusion as a synchronization method to maximize the effects of a payload that was injected into all of them via the same SDK.
@cite_19 propose a tool that joins two apps into a single APK file. In this way, a security analyst can use IPC analyzers to analyze the IAC mechanisms. Their evaluation over a set of 3,000 apps shows that the approach is valid, as it is capable of joining together 88% of them.
{ "cite_N": [ "@cite_19" ], "mid": [ "769484497" ], "abstract": [ "Android apps are made of components which can leak information between one another using the ICC mechanism. With the growing momentum of Android, a number of research contributions have led to tools for the intra-app analysis of Android apps. Unfortunately, these state-of-the-art approaches, and the associated tools, have long left out the security flaws that arise across the boundaries of single apps, in the interaction between several apps. In this paper, we present a tool called ApkCombiner which aims at reducing an inter-app communication problem to an intra-app inter-component communication problem. In practice, ApkCombiner combines different apps into a single apk on which existing tools can indirectly perform inter-app analysis. We have evaluated ApkCombiner on a dataset of 3,000 real-world Android apps, to demonstrate its capability to support static context-aware inter-app analysis scenarios." ] }
1706.02093
2622338386
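The "filtering" step described in the record above (identify app sets with collusion *potential* before running expensive taint analysis) can be sketched with a simple rule in Python rather than Prolog. The app descriptions, permission names and channel identifiers below are invented for illustration; they are not the paper's actual rule set.

```python
# Hypothetical app metadata: declared permissions and shared communication channels.
apps = {
    "app_a": {"perms": {"READ_CONTACTS"}, "channels": {"com.example.shared"}},
    "app_b": {"perms": {"INTERNET"}, "channels": {"com.example.shared"}},
    "app_c": {"perms": {"INTERNET"}, "channels": {"com.other.channel"}},
}

SENSITIVE = {"READ_CONTACTS", "READ_SMS", "ACCESS_FINE_LOCATION"}

def collusion_candidates(apps):
    """Return app pairs with collusion potential, to be confirmed by deeper analysis."""
    pairs = []
    names = sorted(apps)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = apps[a]["channels"] & apps[b]["channels"]
            if not shared:
                continue
            # One side can access sensitive data, the other can exfiltrate it.
            if (apps[a]["perms"] & SENSITIVE and "INTERNET" in apps[b]["perms"]) or \
               (apps[b]["perms"] & SENSITIVE and "INTERNET" in apps[a]["perms"]):
                pairs.append((a, b))
    return pairs

pairs = collusion_candidates(apps)
```

Here only `app_a`/`app_b` are flagged: they share a channel, and together they span the sensitive-source-to-internet path; `app_c` uses a different channel and is filtered out.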
In the 'Big Data' era, many real-world applications like search involve the ranking problem for a large number of items. It is important to obtain effective ranking results and at the same time obtain the results efficiently in a timely manner for providing good user experience and saving computational costs. Valuable prior research has been conducted for learning to efficiently rank like the cascade ranking (learning) model, which uses a sequence of ranking functions to progressively filter some items and rank the remaining items. However, most existing research of learning to efficiently rank in search is studied in a relatively small computing environments with simulated user queries. This paper presents novel research and thorough study of designing and deploying a Cascade model in a Large-scale Operational E-commerce Search application (CLOES), which deals with hundreds of millions of user queries per day with hundreds of servers. The challenge of the real-world application provides new insights for research: 1). Real-world search applications often involve multiple factors of preferences or constraints with respect to user experience and computational costs such as search accuracy, search latency, size of search results and total CPU cost, while most existing search solutions only address one or two factors; 2). Effectiveness of e-commerce search involves multiple types of user behaviors such as click and purchase, while most existing cascade ranking in search only models the click behavior. Based on these observations, a novel cascade ranking model is designed and deployed in an operational e-commerce search application. An extensive set of experiments demonstrate the advantage of the proposed work to address multiple factors of effectiveness, efficiency and user experience in the real-world application.
There are other approaches for improving efficiency in ranking, such as caching @cite_0 and index pruning @cite_18 . The caching approach can cache the results of common queries or the posting lists of query terms. The index pruning approach creates a compact indexing structure offline (e.g., by removing unimportant text terms) and allows more efficient search over that structure. These approaches are complementary to ours: the research in this work focuses on the ranking model and can benefit from them.
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "2072156548", "2006997130" ], "abstract": [ "In this paper we study the trade-offs in designing efficient caching systems for Web search engines. We explore the impact of different approaches, such as static vs. dynamic caching, and caching query results vs.caching posting lists. Using a query log spanning a whole year we explore the limitations of caching and we demonstrate that caching posting lists can achieve higher hit rates than caching query answers. We propose a new algorithm for static caching of posting lists, which outperforms previous methods. We also study the problem of finding the optimal way to split the static cache between answers and posting lists. Finally, we measure how the changes in the query log affect the effectiveness of static caching, given our observation that the distribution of the queries changes slowly over time. Our results and observations are applicable to different levels of the data-access hierarchy, for instance, for a memory disk layer or a broker remote server layer.", "We introduce static index pruning methods that significantly reduce the index size in information retrieval systems.We investigate uniform and term-based methods that each remove selected entries from the index and yet have only a minor effect on retrieval results. In uniform pruning, there is a fixed cutoff threshold, and all index entries whose contribution to relevance scores is bounded above by a given threshold are removed from the index. In term-based pruning, the cutoff threshold is determined for each term, and thus may vary from term to term. We give experimental evidence that for each level of compression, term-based pruning outperforms uniform pruning, under various measures of precision. 
We present theoretical and experimental evidence that under our term-based pruning scheme, it is possible to prune the index greatly and still get retrieval results that are almost as good as those based on the full index." ] }
1706.02021
2622213512
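The core of a cascade ranking model, as summarized above, is a sequence of ranking functions that progressively filter items so that the expensive scorer only sees a small survivor set. The sketch below is a minimal two-stage version; both scoring functions and the item features are toy stand-ins, not the paper's learned models.

```python
def cheap_score(item):
    """Stage-1 scorer: e.g. a precomputed static feature, cheap to evaluate."""
    return item["popularity"]

def expensive_score(item):
    """Stage-2 scorer: stands in for a heavy model combining many features."""
    return 0.3 * item["popularity"] + 0.7 * item["relevance"]

def cascade_rank(items, keep=3, top=2):
    # Stage 1: the cheap function prunes the candidate set to `keep` items.
    stage1 = sorted(items, key=cheap_score, reverse=True)[:keep]
    # Stage 2: the expensive function ranks only the survivors.
    return sorted(stage1, key=expensive_score, reverse=True)[:top]

items = [
    {"id": 1, "popularity": 9, "relevance": 1},
    {"id": 2, "popularity": 8, "relevance": 9},
    {"id": 3, "popularity": 7, "relevance": 5},
    {"id": 4, "popularity": 1, "relevance": 10},  # pruned by the cheap stage
]
result = cascade_rank(items)
```

Note the trade-off the paper studies: item 4 has the highest relevance but never reaches stage 2, which is the price paid for bounding the expensive model's workload.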
Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to be deployed on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency to the community, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique of pursuing binary-weight CNNs, targeting at more faithful inference and better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil drawing steps of outlining and shading. To further speedup the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms the existing binary-weight models by large margins on the ImageNet large scale classification task, while the committed memory for network parameters only exceeds a little.
The deployment problem of deep CNNs has been a concern for years. Efficient models can be learnt either from scratch or from pre-trained models. Generally, training from scratch demands a tight integration of network architecture and training policy @cite_31 , so here we mainly discuss representative works on the latter strategy.
{ "cite_N": [ "@cite_31" ], "mid": [ "2952936791" ], "abstract": [ "In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we propose a quantizer design for fixed point implementation of DCNs. We formulate and solve an optimization problem to identify optimal fixed point bit-width allocation across DCN layers. Our experiments show that in comparison to equal bit-width settings, the fixed point DCNs with optimized bit width allocation offer >20 reduction in the model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78 error-rate on CIFAR-10 benchmark." ] }
1706.02021
2622213512
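The fixed-point quantization idea referenced above (@cite_31) boils down to mapping floating-point weights onto a small grid and measuring the approximation error per bit-width. The sketch below uses uniform symmetric quantization on random weights; the bit-widths and tensor shape are illustrative, not the paper's per-layer optimized allocation.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                          # de-quantized stand-in for inference

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))                 # toy weight matrix
err = {b: np.mean((w - quantize_uniform(w, b)) ** 2) for b in (2, 4, 8)}
```

The mean squared error shrinks as the bit-width grows, which is exactly the accuracy/storage trade-off such methods optimize across layers.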
Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to be deployed on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency to the community, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique of pursuing binary-weight CNNs, targeting at more faithful inference and better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil drawing steps of outlining and shading. To further speedup the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms the existing binary-weight models by large margins on the ImageNet large scale classification task, while the committed memory for network parameters only exceeds a little.
Early works are usually hardware-specific. Not restricted to CNNs, @cite_5 take advantage of programmatic optimizations to produce a @math speedup on x86 CPUs. On the other hand, @cite_23 perform fast Fourier transform (FFT) on GPUs and propose to compute convolutions efficiently in the frequency domain. Additionally, @cite_17 introduce two new FFT-based implementations for more significant speedups.
{ "cite_N": [ "@cite_5", "@cite_23", "@cite_17" ], "mid": [ "587794757", "1922123711", "1789336918" ], "abstract": [ "Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model neural network (HMM NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.", "Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. 
This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.", "We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided." ] }
1706.02021
2622213512
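The FFT-based convolution referenced above rests on the convolution theorem: convolution in the time domain is a pointwise product in the frequency domain. A minimal 1-D check, with zero-padding to the full output length to avoid circular wrap-around:

```python
import numpy as np

def conv1d_direct(x, k):
    """Direct (time-domain) linear convolution."""
    n = len(x) + len(k) - 1
    out = np.zeros(n)
    for i, xi in enumerate(x):
        out[i:i + len(k)] += xi * k
    return out

def conv1d_fft(x, k):
    """Linear convolution via pointwise product in the Fourier domain."""
    n = len(x) + len(k) - 1
    return np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([0.5, -1.0, 0.25])
```

The speedups reported in @cite_23 and @cite_17 come from amortizing the transforms: each transformed feature map is reused across many filters, so the pointwise products dominate.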
Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to be deployed on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency to the community, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique of pursuing binary-weight CNNs, targeting at more faithful inference and better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil drawing steps of outlining and shading. To further speedup the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms the existing binary-weight models by large margins on the ImageNet large scale classification task, while the committed memory for network parameters only exceeds a little.
More recently, low-rank based matrix (or tensor) decomposition has been used as an alternative way to accomplish this task. Mainly inspired by the seminal works of @cite_11 and @cite_7 , low-rank based methods attempt to exploit the parameter redundancy among different feature channels and filters. By properly decomposing pre-trained filters, these methods @cite_21 @cite_13 @cite_15 @cite_22 @cite_29 can achieve appealing speedups ( @math to @math ) with acceptable accuracy drops (within 1%). Unlike the above mentioned ones, some research works regard memory saving as the top priority. To tackle the storage issue of deep networks, @cite_1 , @cite_16 and @cite_31 consider applying quantization techniques to pre-trained CNNs, compressing the networks with only minor concessions on inference accuracy. Another powerful category of methods in this scope is network pruning. Starting from the early works of LeCun et al. @cite_26 and Hassibi & Stork @cite_10 , pruning methods have delivered surprisingly good compressions on a range of CNNs, including advanced ones like AlexNet and VGGNet @cite_3 @cite_14 @cite_20 . In addition, due to the reduction in model complexity, a fair speedup can be observed as a byproduct.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_14", "@cite_22", "@cite_7", "@cite_10", "@cite_29", "@cite_21", "@cite_1", "@cite_3", "@cite_15", "@cite_16", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2952936791", "2114766824", "", "", "", "2125389748", "", "2167215970", "1724438581", "2963674932", "", "2233116163", "", "", "2952899695" ], "abstract": [ "In recent years increasingly complex architectures for deep convolution networks (DCNs) have been proposed to boost the performance on image recognition tasks. However, the gains in performance have come at a cost of substantial increase in computation and model storage resources. Fixed point implementation of DCNs has the potential to alleviate some of these complexities and facilitate potential deployment on embedded hardware. In this paper, we propose a quantizer design for fixed point implementation of DCNs. We formulate and solve an optimization problem to identify optimal fixed point bit-width allocation across DCN layers. Our experiments show that in comparison to equal bit-width settings, the fixed point DCNs with optimized bit width allocation offer >20 reduction in the model size without any loss in accuracy on CIFAR-10 benchmark. We also demonstrate that fine-tuning can further enhance the accuracy of fixed point DCNs beyond that of the original floating point model. In doing so, we report a new state-of-the-art fixed point performance of 6.78 error-rate on CIFAR-10 benchmark.", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. 
Experiments confirm the usefulness of the methods on a real-world application.", "", "", "", "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90 , a 76 , and a 62 reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "", "We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. 
Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2 x, while keeping the accuracy within 1 of the original model.", "Deep convolutional neural networks (CNN) has become the most promising method for object recognition, repeatedly demonstrating record breaking results for image classification and object detection in recent years. However, a very deep CNN generally involves many layers with millions of parameters, making the storage of the network model to be extremely large. This prohibits the usage of deep CNNs on resource limited hardware, especially cell phones or other embedded devices. In this paper, we tackle this model storage issue by investigating information theoretical vector quantization methods for compressing the parameters of CNNs. In particular, we have found in terms of compressing the most storage demanding dense connected layers, vector quantization methods have a clear gain over existing matrix factorization methods. Simply applying k-means clustering to the weights or conducting product quantization can lead to a very good balance between model size and recognition accuracy. For the 1000-category classification task in the ImageNet challenge, we are able to achieve 16-24 times compression of the network with only 1 loss of classification accuracy using the state-of-the-art CNN.", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. 
First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.", "", "Recently, convolutional neural networks (CNN) have demonstrated impressive performance in various computer vision tasks. However, high performance hardware is typically indispensable for the application of CNN models due to the high computation complexity, which prohibits their further extensions. In this paper, we propose an efficient framework, namely Quantized CNN, to simultaneously speed-up the computation and reduce the storage and memory overhead of CNN models. Both filter kernels in convolutional layers and weighting matrices in fully-connected layers are quantized, aiming at minimizing the estimation error of each layer's response. Extensive experiments on the ILSVRC-12 benchmark demonstrate 4 6× speed-up and 15 20× compression with merely one percentage loss of classification accuracy. With our quantized CNN model, even mobile devices can accurately classify images within one second.", "", "", "We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. 
In the best case we are able to predict more than 95 of the weights of a network without any drop in accuracy." ] }
1706.02083
2622544372
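The two compression families discussed in the record above can each be sketched in a few lines: low-rank methods replace a weight matrix with a truncated SVD, and magnitude pruning zeroes the smallest weights. The matrix size, ranks and keep-fraction below are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(32, 32))  # toy pre-trained weight matrix

def low_rank(W, r):
    """Best rank-r approximation of W (Eckart-Young, via truncated SVD)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def magnitude_prune(W, keep_frac):
    """Keep only the largest-magnitude weights; zero out the rest."""
    thresh = np.quantile(np.abs(W), 1.0 - keep_frac)
    return np.where(np.abs(W) >= thresh, W, 0.0)

lr_err = {r: np.linalg.norm(W - low_rank(W, r)) for r in (4, 16, 32)}
sparse = magnitude_prune(W, 0.1)  # retain roughly 10% of the weights
```

Retraining (fine-tuning) after either step, as in the pruning pipeline of @cite_3, is what recovers most of the lost accuracy in practice.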
Closeness centrality is one way of measuring how central a node is in the given network. The closeness centrality measure assigns a centrality value to each node based on its accessibility to the whole network. In real life applications, we are mainly interested in ranking nodes based on their centrality values. The classical method to compute the rank of a node first computes the closeness centrality of all nodes and then compares them to get its rank. Its time complexity is @math , where @math represents total number of nodes, and @math represents total number of edges in the network. In the present work, we propose a heuristic method to fast estimate the closeness rank of a node in @math time complexity, where @math . We also propose an extended improved method using uniform sampling technique. This method better estimates the rank and it has the time complexity @math , where @math . This is an excellent improvement over the classical centrality ranking method. The efficiency of the proposed methods is verified on real world scale-free social networks using absolute and weighted error functions.
Real world networks are highly dynamic, and recomputing the closeness centrality of all nodes after each change in the network would be a cumbersome task. In dynamic networks, the closeness centrality of some nodes may remain unaffected by a given update. The authors of @cite_27 proposed a method to update closeness centrality in dynamic networks. The method uses the set of affected nodes to update closeness centrality whenever nodes or edges are added, removed, or modified. Yen also proposed an algorithm called CENDY (Closeness centrality and avErage path leNgth in DYnamic networks) to update closeness centrality whenever an edge is updated @cite_7 . The authors of @cite_12 proposed a method to update closeness centrality using the level difference information of breadth-first traversal.
{ "cite_N": [ "@cite_27", "@cite_12", "@cite_7" ], "mid": [ "2067674168", "1605385825", "" ], "abstract": [ "The increasing availability of dynamically growing digital data that can be used for extracting social networks has led to an upsurge of interest in the analysis of dynamic social networks. One key aspect of social network analysis is to understand the central nodes in a network. However, dynamic calculation of centrality values for rapidly growing networks might be unfeasibly expensive, especially if it involves recalculation from scratch for each time period. This paper proposes an incremental algorithm that effectively updates betweenness centralities of nodes in dynamic social networks while avoiding re-computations by exploiting information from earlier computations. Our performance results suggest that our incremental betweenness algorithm can achieve substantial performance speedup, on the order of thousands of times, over the state of the art, including the best-performing non-incremental betweenness algorithm and a recently proposed betweenness update algorithm.", "Analyzing networks requires complex algorithms to extract meaningful information. Centrality metrics have shown to be correlated with the importance and loads of the nodes in network traffic. Here, we are interested in the problem of centrality-based network management. The problem has many applications such as verifying the robustness of the networks and controlling or improving the entity dissemination. It can be defined as finding a small set of topological network modifications which yield a desired closeness centrality configuration. As a fundamental building block to tackle that problem, we propose incremental algorithms which efficiently update the closeness centrality values upon changes in network topology, i.e., edge insertions and deletions. 
Our algorithms are proven to be efficient on many real-life networks, especially on small-world networks, which have a small diameter and a spike-shaped shortest distance distribution. In addition to closeness centrality, they can also be a great arsenal for the shortest-path-based management and analysis of the networks. We experimentally validate the efficiency of our algorithms on large networks and show that they update the closeness centrality values of the temporal DBLP-coauthorship network of 1.2 million users 460 times faster than it would take to compute them from scratch. To the best of our knowledge, this is the first work which can yield practical large-scale network management based on closeness centrality values.", "" ] }
1706.02083
2622544372
Closeness centrality is one way of measuring how central a node is in a given network. The closeness centrality measure assigns a centrality value to each node based on its accessibility to the whole network. In real-life applications, we are mainly interested in ranking nodes based on their centrality values. The classical method to compute the rank of a node first computes the closeness centrality of all nodes and then compares them to obtain its rank. Its time complexity is @math , where @math represents the total number of nodes and @math represents the total number of edges in the network. In the present work, we propose a heuristic method to quickly estimate the closeness rank of a node in @math time complexity, where @math . We also propose an extended, improved method using a uniform sampling technique. This method estimates the rank more accurately and has time complexity @math , where @math . This is an excellent improvement over the classical centrality ranking method. The efficiency of the proposed methods is verified on real-world scale-free social networks using absolute and weighted error functions.
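The classical baseline that the abstract contrasts against can be sketched in a few lines of plain Python: one BFS per node gives every closeness value, and a node's rank follows by comparison. This is only the baseline, not the paper's heuristic; the star graph is a toy example.

```python
from collections import deque

def closeness(adj, s):
    """Closeness of s: (n - 1) / (sum of shortest-path distances from s)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    total = sum(dist.values())
    return (len(adj) - 1) / total if total else 0.0

def closeness_rank(adj, v):
    """Classical ranking: compute all n centralities (one BFS each), then compare."""
    scores = {u: closeness(adj, u) for u in adj}
    return 1 + sum(1 for u in adj if scores[u] > scores[v])

# Star graph: the hub is the most central node.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(closeness_rank(star, 0))  # hub has rank 1
```

The paper's contribution replaces the all-nodes pass with an estimate of the rank from far fewer traversals; that part is not reproduced here.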
Most real-life applications focus on identifying the few top nodes with the highest closeness centrality.  proposed a method to rank the @math nodes with the highest closeness centrality using a hybrid of approximate and exact algorithms @cite_39 . Ufimtsev  proposed an algorithm to identify high closeness centrality nodes using group testing @cite_0 .  presented an efficient technique to find the @math most central nodes based on closeness centrality @cite_14 . They used intermediate results of the centrality computation to minimize the computation time.  proposed a faster method to identify the top- @math nodes in undirected networks by approximating an upper bound on closeness centrality using BFT @cite_5 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_14", "@cite_39" ], "mid": [ "1973450591", "2295695614", "", "1536060130" ], "abstract": [ "The significance of an entity in a network is generally given by the centrality value of its vertex. For most analysis purposes, only the high-ranked vertices are required. However, most algorithms calculate the centrality values of all the vertices. We present an extremely fast and scalable algorithm for identifying the high closeness centrality vertices, using group testing. We show that our approach is significantly faster (best-case over 50 times, worst-case over 7 times) than the currently used methods. We can also use group testing to identify networks that are sensitive to edge perturbation.", "Given a connected graph G = (V,E), the closeness centrality of a vertex v is defined as (n-1) / Σ_{w ∈ V} d(v,w). This measure is widely used in the analysis of real-world complex networks, and the problem of selecting the k most central vertices has been deeply analysed in the last decade. However, this problem is computationally not easy, especially for large networks: in the first part of the paper, we prove that it is not solvable in time O(|E|^{2-ε}) on directed graphs, for any constant ε > 0, under reasonable complexity assumptions. Furthermore, we propose a new algorithm for selecting the k most central nodes in a graph: we experimentally show that this algorithm improves significantly both the textbook algorithm, which is based on computing the distance between all pairs of vertices, and the state of the art. For example, we are able to compute the top k nodes in a few dozen seconds in real-world networks with millions of nodes and edges.
Finally, as a case study, we compute the 10 most central actors in the IMDB collaboration network, where two actors are linked if they played together in a movie, and in the Wikipedia citation network, which contains a directed edge from a page p to a page q if p contains a link to q.", "", "Closeness centrality is an important concept in social network analysis. In a graph representing a social network, closeness centrality measures how close a vertex is to all other vertices in the graph. In this paper, we combine existing methods on calculating exact values and approximate values of closeness centrality and present new algorithms to rank the top-kvertices with the highest closeness centrality. We show that under certain conditions, our algorithm is more efficient than the algorithm that calculates the closeness-centralities of all vertices." ] }
1706.02083
2622544372
Closeness centrality is one way of measuring how central a node is in a given network. The closeness centrality measure assigns a centrality value to each node based on its accessibility to the whole network. In real-life applications, we are mainly interested in ranking nodes based on their centrality values. The classical method to compute the rank of a node first computes the closeness centrality of all nodes and then compares them to obtain its rank. Its time complexity is @math , where @math represents the total number of nodes and @math represents the total number of edges in the network. In the present work, we propose a heuristic method to quickly estimate the closeness rank of a node in @math time complexity, where @math . We also propose an extended, improved method using a uniform sampling technique. This method estimates the rank more accurately and has time complexity @math , where @math . This is an excellent improvement over the classical centrality ranking method. The efficiency of the proposed methods is verified on real-world scale-free social networks using absolute and weighted error functions.
studied the correlation of closeness centrality with the local neighborhood volume of a node @cite_25 . The ranking based on local neighborhood volume is named DACCER (Distributed Assessment of the Closeness CEntrality Ranking) and is highly correlated with the closeness centrality ranking in both real-world and synthetic networks.
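A minimal sketch of the DACCER idea: score each node by the total degree found within an h-hop neighborhood, which needs only local information, then rank by that score. The graph and the choice h = 1 are toy values for illustration.

```python
from collections import deque

def neighborhood_volume(adj, s, h):
    """DACCER-style local score: sum of degrees of all nodes within h hops of s."""
    dist = {s: 0}
    q = deque([s])
    vol = 0
    while q:
        u = q.popleft()
        vol += len(adj[u])          # degree of u contributes to the volume
        if dist[u] < h:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return vol

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
ranking = sorted(star, key=lambda u: -neighborhood_volume(star, u, 1))
print(ranking[0])  # the hub comes first, matching its closeness rank
```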
{ "cite_N": [ "@cite_25" ], "mid": [ "2044141740" ], "abstract": [ "We propose a method for the Distributed Assessment of the Closeness CEntrality Ranking (DACCER) in complex networks. DACCER computes centrality based only on localized information restricted to a limited neighborhood around each node, thus not requiring full knowledge of the network topology. We indicate that the node centrality ranking computed by DACCER is highly correlated with the node ranking based on the traditional closeness centrality, which requires high computational costs and full knowledge of the network topology by the entity responsible for calculating the centrality. This outcome is quite useful given the vast potential applicability of closeness centrality, which is seldom applied to large-scale networks due to its high computational costs. Results indicate that DACCER is simple, yet efficient, in assessing node centrality while allowing a distributed implementation that contributes to its performance. This also contributes to the practical applicability of DACCER to the analysis of large complex networks, as indicated in our experimental evaluation using both synthetically generated networks and real-world network traces of different kinds and scales." ] }
1706.01966
2623871108
In this paper, we address the problem of controlling a mobile stereo camera under image quantization noise. Assuming that a pair of images of a set of targets is available, the camera moves through a sequence of Next-Best-Views (NBVs), i.e., a sequence of views that minimize the trace of the targets' cumulative state covariance, constructed using a realistic model of the stereo rig that captures image quantization noise and a Kalman Filter (KF) that fuses the observation history with new information. The proposed algorithm decomposes control into two stages: first the NBV is computed in the camera relative coordinates, and then the camera moves to realize this view in the fixed global coordinate frame. This decomposition allows the camera to drive to a new pose that effectively realizes the NBV in camera coordinates while satisfying Field-of-View constraints in global coordinates, a task that is particularly challenging using complex sensing models. We provide simulations and real experiments that illustrate the ability of the proposed mobile camera system to accurately localize sets of targets. We also propose a novel data-driven technique to characterize unmodeled uncertainty, such as calibration errors, at the pixel level and show that this method ensures stability of the KF.
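The view-selection criterion in the abstract, minimizing the trace of the targets' covariance after fusing the next observation, can be illustrated with a standard Kalman-filter covariance update. The candidate views and their noise matrices below are hypothetical stand-ins for the paper's quantization-noise model.

```python
import numpy as np

def kf_posterior_cov(P, R, H=None):
    """Covariance after fusing one measurement with noise R (standard KF update)."""
    n = P.shape[0]
    H = np.eye(n) if H is None else H
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    return (np.eye(n) - K @ H) @ P

P = np.diag([4.0, 4.0])                  # prior covariance of a target's position
views = {"near": np.diag([0.1, 2.0]),    # hypothetical measurement noise per view
         "far":  np.diag([1.0, 1.0])}
# Next-Best-View: the candidate whose observation shrinks the trace the most.
nbv = min(views, key=lambda v: np.trace(kf_posterior_cov(P, views[v])))
print(nbv)
```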
Our work is relevant to a growing body of literature that addresses control for one or several mobile sensors for the purpose of target localization or tracking @cite_42 @cite_34 @cite_1 @cite_13 @cite_41 @cite_8 @cite_11 @cite_21 . These methods use sensor models that are based only on range and viewing angle. These models, if used for stereo triangulation, cannot accurately capture the covariance among errors in the measurement coordinates, nor can they capture its dependence on range and viewing angle. It is also common to ignore directional field-of-view constraints by assuming omnidirectional sensing. In this paper, we derive the covariance specifically for triangulation with a calibrated stereo rig. The derived measurement covariance, when fused with a prior distribution, provides our controller with critical directional information that enables the mobile robot to find the NBV, defined as the vantage point from which new information will reduce the posterior variance of the targets' distribution by the maximum amount.
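To illustrate why a triangulation-specific model carries directional information that range-and-angle models miss, the sketch below propagates pixel quantization noise to first order through the standard stereo equations X = b·u/d, Z = f·b/d. The focal length, baseline, and noise level are made-up values, and this is a simplification, not the paper's exact derivation.

```python
import numpy as np

def stereo_point_cov(u, d, f, b, sigma_px):
    """First-order covariance of a triangulated point (X, Z).

    u: left-image column (pixels), d: disparity (pixels),
    f: focal length (pixels), b: baseline (m),
    sigma_px: std. dev. of pixel noise on u and d, assumed independent.
    """
    J = np.array([[b / d, -b * u / d**2],    # dX/du, dX/dd
                  [0.0,  -f * b / d**2]])    # dZ/du, dZ/dd
    return (sigma_px**2) * (J @ J.T)         # J C J^T with C = sigma^2 I

near = stereo_point_cov(u=50, d=40, f=500, b=0.1, sigma_px=0.5)
far = stereo_point_cov(u=50, d=10, f=500, b=0.1, sigma_px=0.5)
print(far[1, 1] > near[1, 1])   # depth variance blows up as disparity shrinks
print(abs(near[0, 1]) > 0)      # X and Z errors are correlated, unlike range/bearing models
```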
{ "cite_N": [ "@cite_8", "@cite_41", "@cite_42", "@cite_1", "@cite_21", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "", "2160540358", "2128453677", "1989923929", "2114144879", "2138890480", "1984747682", "1975511713" ], "abstract": [ "", "This paper presents a decentralized motion planning algorithm for the distributed sensing of a noisy dynamical process by multiple cooperating mobile sensor agents. This problem is motivated by localization and tracking tasks of dynamic targets. Our gradient-descent method is based on a cost function that measures the overall quality of sensing. We also investigate the role of imperfect communication between sensor agents in this framework, and examine the trade-offs in performance between sensing and communication. Simulations illustrate the basic characteristics of the algorithms", "This paper presents a statistical algorithm for collaborative mobile robot localization. Our approach uses a sample-based version of Markov localization, capable of localizing mobile robots in an any-time fashion. When teams of robots localize themselves in the same environment, probabilistic methods are employed to synchronize each robot's belief whenever one robot detects another. As a result, the robots localize themselves faster, maintain higher accuracy, and high-cost sensors are amortized across multiple robot platforms. The technique has been implemented and tested using two mobile robots equipped with cameras and laser range-finders for detecting other robots. The results, obtained with the real robots and in series of simulation runs, illustrate drastic improvements in localization speed and accuracy when compared to conventional single-robot localization. 
A further experiment demonstrates that under certain conditions, successful localization is only possible if teams of heterogeneous robots collaborate during localization.", "In this paper, we present an approach to the problem of actively controlling the configuration of a team of mobile agents equipped with cameras so as to optimize the quality of the estimates derived from their measurements. The issue of optimizing the robots' configuration is particularly important in the context of teams equipped with vision sensors, since most estimation schemes of interest will involve some form of triangulation. We provide a theoretical framework for tackling the sensor planning problem, and a practical computational strategy inspired by work on particle filtering for implementing the approach. We then extend our framework by showing how modeled system dynamics and configuration space obstacles can be handled. These ideas have been applied to a target tracking task, and demonstrated both in simulation and with actual robot platforms. The results indicate that the framework is able to solve fairly difficult sensor planning problems online without requiring excessive amounts of computati...", "In this paper, we study the problem of optimal trajectory generation for a team of heterogeneous robots moving in a plane and tracking a moving target by processing relative observations, i.e., distance and/or bearing. Contrary to previous approaches, we explicitly consider limits on the robots' speed and impose constraints on the minimum distance at which the robots are allowed to approach the target. We first address the case of a single tracking sensor and seek the next sensing location in order to minimize the uncertainty about the target's position. We show that although the corresponding optimization problem involves a nonconvex objective function and a nonconvex constraint, its global optimal solution can be determined analytically.
We then extend the approach to the case of multiple sensors and propose an iterative algorithm, i.e., the Gauss-Seidel relaxation (GSR), to determine the next best sensing location for each sensor. Extensive simulation results demonstrate that the GSR algorithm, whose computational complexity is linear in the number of sensors, achieves higher tracking accuracy than gradient descent methods and has performance that is indistinguishable from that of a grid-based exhaustive search, whose cost is exponential in the number of sensors. Finally, through experiments, we demonstrate that the proposed GSR algorithm is robust and applicable to real systems.", "In this paper, we present a new approach to the problem of simultaneously localizing a group of mobile robots capable of sensing one another. Each of the robots collects sensor data regarding its own motion and shares this information with the rest of the team during the update cycles. A single estimator, in the form of a Kalman filter, processes the available positioning information from all the members of the team and produces a pose estimate for every one of them. The equations for this centralized estimator can be written in a decentralized form, therefore allowing this single Kalman filter to be decomposed into a number of smaller communicating filters. Each of these filters processes the sensor data collected by its host robot. Exchange of information between the individual filters is necessary only when two robots detect each other and measure their relative pose. The resulting decentralized estimation schema, which we call collective localization, constitutes a unique means for fusing measurements collected from a variety of sensors with minimal communication and processing requirements. The distributed localization algorithm is applied to a group of three robots and the improvement in localization accuracy is presented. 
Finally, a comparison to the equivalent decentralized information filter is provided.", "Abstract We present an approach for directing next-step movements of robot teams engaged in mapping objects in their environment: Move Value Estimation for Robot Teams (MVERT). Resulting robot paths tend to optimize vantage points for all robots on the team by maximizing information gain. At each step, each robot selects a movement to maximize the utility (in this case, reduction in uncertainty) of its next observation. Trajectories are not guaranteed to be optimal, but team behavior serves to maximize the team's knowledge since each robot considers the observational contributions of team mates. MVERT is evaluated in simulation by measuring the resulting uncertainty about target locations compared to that obtained by robots acting without regard to team mate locations and to that of global optimization over all robots for each single step. Additionally, MVERT is demonstrated on physical teams of robots. The qualitative behavior of the team is appropriate and close to the single-step optimal set of trajectories.", "This paper studies the active target-tracking problem for a team of unmanned aerial vehicles equipped with 3-D range-finding sensors. We propose a gradient-based control strategy that encompasses the three major optimum experimental design criteria, and we use the Kalman filter for estimating the target's position both in a cooperative and in a noncooperative scenario. Our control strategy is active because it moves the vehicles along paths that minimize the uncertainty about the location of the target. In the case that the position of the vehicles is not perfectly known, we introduce a new and more challenging problem, termed active cooperative localization and multitarget tracking (ACLMT). 
In this problem, the aerial vehicles must reconfigure themselves in the 3-D space in order to maximize both the accuracy of their own position estimate and that of multiple moving targets. For ACLMT, we derive analytical lower and upper bounds on the targets' and vehicles' position uncertainty by exploiting the monotonicity property of the Riccati differential equation arising from the Kalman-Bucy filter. These bounds allow us to study the impact of the sensors' accuracy and the targets' dynamics on the performance of our coordination strategy. Extensive simulation experiments illustrate the proposed theoretical results." ] }
1706.01966
2623871108
In this paper, we address the problem of controlling a mobile stereo camera under image quantization noise. Assuming that a pair of images of a set of targets is available, the camera moves through a sequence of Next-Best-Views (NBVs), i.e., a sequence of views that minimize the trace of the targets' cumulative state covariance, constructed using a realistic model of the stereo rig that captures image quantization noise and a Kalman Filter (KF) that fuses the observation history with new information. The proposed algorithm decomposes control into two stages: first the NBV is computed in the camera relative coordinates, and then the camera moves to realize this view in the fixed global coordinate frame. This decomposition allows the camera to drive to a new pose that effectively realizes the NBV in camera coordinates while satisfying Field-of-View constraints in global coordinates, a task that is particularly challenging using complex sensing models. We provide simulations and real experiments that illustrate the ability of the proposed mobile camera system to accurately localize sets of targets. We also propose a novel data-driven technique to characterize unmodeled uncertainty, such as calibration errors, at the pixel level and show that this method ensures stability of the KF.
We note briefly that this paper is based on preliminary results contained in our prior publications @cite_35 @cite_38 . These early works used simplified versions of the noise model and global controller and lacked experimental validation.
{ "cite_N": [ "@cite_35", "@cite_38" ], "mid": [ "1969458704", "2063893967" ], "abstract": [ "In this paper, we consider the problem of precisely localizing a group of stationary targets using a single stereo camera mounted on a mobile robot. In particular, assuming that at least one pair of stereo images of the targets is available, we seek to determine where to move the stereo camera so that the localization uncertainty of the targets is minimized. We call this problem the Next-Best-View problem. The advantage of using a stereo camera is that, using triangulation, the two simultaneous images can yield range and bearing measurements of the targets, as well as their uncertainty. We use a Kalman filter to fuse location and uncertainty estimates as more measurements are acquired. Our solution to the Next-Best-View problem is to iteratively minimize the fused uncertainty of the targets' locations subject to field-of-view constraints. We capture these objectives by appropriate artificial potentials on the camera's relative frame and the global frame, respectively. In particular, with every new observation, the mobile stereo camera computes the new next best view on the relative frame and subsequently realizes this view in the global frame via gradient descent on the space of robot positions and orientations, until a new observation is made. Integration of next best view with motion planning results in a hybrid system, which we illustrate in computer simulations.", "In this paper, we control image collection for a mobile stereo camera that is actively localizing a group of mobile targets. In particular, assuming that at least one pair of stereo images of the targets is available, we propose a novel approach to control the rotation and translation of the stereo camera so that the next observation of the targets will minimize their localization uncertainty. We call this problem the Next-Best-View problem for mobile targets (mNBV). 
The advantage of using a stereo camera is that, using triangulation, the two simultaneous images taken by the robot during a single observation can yield range and bearing measurements of the targets, as well as their uncertainty. A Kalman filter fuses the full state history and covariance estimates, as more measurements are acquired. Our solution to the mNBV problem determines the relative transformations between camera and targets that will minimize the fused uncertainty of the targets' locations. We determine a motion plan that realizes the mNBV while respecting field of view constraints. In particular, with every new observation, we compute a new mNBV in the frame relative to the camera and subsequently realize this view in global coordinates via a gradient descent algorithm that also respects field of view constraints. Integration of mNBV with motion planning results in a hybrid system, which we illustrate in computer simulations." ] }
1706.02189
2624650145
Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground/background masks. Unfortunately, these priors either require pixel-level annotations or bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract accurate masks from networks pre-trained for the task of object recognition, thus forgoing external objectness modules. We first show how foreground/background masks can be obtained from the activations of higher-level convolutional layers of a network. We then show how to obtain multi-class masks by the fusion of foreground/background ones with information extracted from a weakly-supervised localization network. Our experiments evidence that exploiting these masks in conjunction with a weakly-supervised training loss yields state-of-the-art tag-based weakly-supervised semantic segmentation results.
Weakly-supervised semantic segmentation has attracted a lot of attention, because it alleviates the painstaking process of manually generating pixel-level training annotations. Over the years, great progress has been made @cite_28 @cite_49 @cite_42 @cite_11 @cite_3 @cite_30 @cite_45 @cite_17 @cite_0 @cite_13 @cite_12 @cite_23 @cite_38 @cite_48 @cite_21 . In particular, recently, Convolutional Neural Networks (CNNs) have been applied to the task of weakly-supervised segmentation with great success. In this section, we discuss these CNN-based approaches, which are the ones most related to our work.
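A recurring building block in these CNN-based approaches is a pooling step that turns a map of per-pixel class scores into a single image-level prediction, so that a loss over image tags alone can drive training. The NumPy sketch below uses log-sum-exp pooling with a multi-label logistic loss; this is one generic choice, not the exact loss of any cited method.

```python
import numpy as np

def tag_loss(score_map, tags, r=5.0):
    """Image-tag loss from a (C, H, W) map of per-pixel class scores."""
    C = score_map.shape[0]
    flat = score_map.reshape(C, -1)
    # Smooth max over pixels (log-sum-exp): one score per class for the image.
    pooled = np.log(np.mean(np.exp(r * flat), axis=1)) / r
    prob = 1.0 / (1.0 + np.exp(-pooled))          # per-class tag probability
    eps = 1e-9
    return -np.mean(tags * np.log(prob + eps) + (1 - tags) * np.log(1 - prob + eps))

scores = np.zeros((3, 4, 4))
scores[1, 2, 2] = 6.0                 # class 1 fires strongly at one pixel
tags = np.array([0.0, 1.0, 0.0])      # the image is tagged with class 1 only
print(tag_loss(scores, tags) < tag_loss(scores, 1 - tags))  # True
```

Because only the pooled maximum is supervised, training tends to highlight discriminative object parts, which is why the surveyed methods add priors or masks to recover full object extent.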
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_28", "@cite_48", "@cite_42", "@cite_21", "@cite_17", "@cite_3", "@cite_0", "@cite_45", "@cite_49", "@cite_23", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2962841641", "2951358285", "1993433125", "2520746254", "1927251054", "2519610629", "1783315696", "2221898772", "1495267108", "1945608308", "2158427031", "2133515615", "2026581312", "2291422229", "611457968" ], "abstract": [ "", "We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.", "We tackle the problem of weakly labeled semantic segmentation, where the only source of annotation are image tags encoding which classes are present in the scene. This is an extremely difficult problem as no pixel-wise labelings are available, not even at training time. In this paper, we show that this problem can be formalized as an instance of learning in a latent structured prediction framework, where the graphical model encodes the presence and absence of a class as well as the assignments of semantic labels to superpixels. As a consequence, we are able to leverage standard algorithms with good theoretical properties. 
We demonstrate the effectiveness of our approach using the challenging SIFT-flow dataset and show average per-class accuracy improvements of 7 over the state-of-the-art.", "Training neural networks for semantic segmentation is data hungry. Meanwhile annotating a large number of pixel-level segmentation masks needs enormous human effort. In this paper, we propose a framework with only image-level supervision. It unifies semantic segmentation and object localization with important proposal aggregation and selection modules. They greatly reduce the notorious error accumulation problem that commonly arises in weakly supervised learning. Our proposed training algorithm progressively improves segmentation performance with augmented feedback in iterations. Our method achieves decent results on the PASCAL VOC 2012 segmentation data, outperforming previous image-level supervised methods by a large margin.", "Despite the promising performance of conventional fully supervised algorithms, semantic segmentation has remained an important, yet challenging task. Due to the limited availability of complete annotations, it is of great interest to design solutions for semantic segmentation that take into account weakly labeled data, which is readily available at a much larger scale. Contrasting the common theme to develop a different algorithm for each type of weak annotation, in this work, we propose a unified approach that incorporates various forms of weak supervision - image level tags, bounding boxes, and partial labels - to produce a pixel-wise labeling. We conduct a rigorous evaluation on the challenging Siftflow dataset for various weakly labeled settings, and show that our approach outperforms the state-of-the-art by 12 on per-class accuracy, while maintaining comparable per-pixel accuracy.", "In this paper, we deal with a weakly supervised semantic segmentation problem where only training images with image-level labels are available. 
We propose a weakly supervised semantic segmentation method which is based on CNN-based class-specific saliency maps and fully-connected CRF. To obtain distinct class-specific saliency maps which can be used as unary potentials of CRF, we propose a novel method to estimate class saliency maps which improves the method proposed by (2014) significantly by the following improvements: (1) using CNN derivatives with respect to feature maps of the intermediate convolutional layers with up-sampling instead of an input image; (2) subtracting the saliency maps of the other classes from the saliency maps of the target class to differentiate target objects from other objects; (3) aggregating multiple-scale class saliency maps to compensate lower resolution of the feature maps. After obtaining distinct class saliency maps, we apply fully-connected CRF by using the class maps as unary potentials. By the experiments, we show that the proposed method has outperformed state-of-the-art results with the PASCAL VOC 2012 dataset under the weakly-supervised setting.", "We present an approach to learn a dense pixel-wise labeling from image-level tags. Each image-level tag imposes constraints on the output labeling of a Convolutional Neural Network (CNN) classifier. We propose Constrained CNN (CCNN), a method which uses a novel loss function to optimize for any set of linear constraints on the output space (i.e. predicted label distribution) of a CNN. Our loss formulation is easy to optimize and can be incorporated directly into standard stochastic gradient descent optimization. The key idea is to phrase the training objective as a biconvex optimization for linear models, which we then relax to nonlinear deep networks. Extensive experiments demonstrate the generality of our new learning framework. The constrained loss yields state-of-the-art results on weakly supervised semantic image segmentation. 
We further demonstrate that adding slightly more supervision can greatly improve the performance of the learning algorithm.", "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public.", "Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vice versa. Our method, called \"BoxSup\", produces competitive results (e.g., 62.0 mAP for validation) supervised by boxes only, on par with strong baselines (e.g., 63.8 mAP) fully supervised by masks under the same setting. 
By leveraging a large amount of bounding boxes, BoxSup further yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT [26].", "We are interested in inferring object segmentation by leveraging only object class information, and by considering only minimal priors on the object segmentation task. This problem could be viewed as a kind of weakly supervised segmentation task, and naturally fits the Multiple Instance Learning (MIL) framework: every training image is known to have (or not) at least one pixel corresponding to the image class label, and the segmentation task can be rewritten as inferring the pixels belonging to the class of the object (given one image, and its object class). We propose a Convolutional Neural Network-based model, which is constrained during training to put more weight on pixels which are important for classifying the image. We show that at test time, the model has learned to discriminate the right pixels well enough, such that it performs very well on an existing segmentation benchmark, by adding only few smoothing priors. Our system is trained using a subset of the Imagenet dataset and the segmentation experiments are performed on the challenging Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model beats the state of the art results in weakly supervised object segmentation task by a large margin. We also compare the performance of our model with state of the art fully-supervised segmentation approaches.", "We propose a novel method for weakly supervised semantic segmentation. Training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method predicts a class label for every pixel. Our main innovation is a multi-image model (MIM) - a graphical model for recovering the pixel labels of the training images. The model connects superpixels from all training images in a data-driven fashion, based on their appearance similarity. 
For generalizing to new test images we integrate them into MIM using a learned multiple kernel metric, instead of learning conventional classifiers on the recovered pixel labels. We also introduce an “objectness” potential, that helps separating objects (e.g. car, dog, human) from background classes (e.g. grass, sky, road). In experiments on the MSRC 21 dataset and the LabelMe subset of [18], our technique outperforms previous weakly supervised methods and achieves accuracy comparable with fully supervised methods.", "Recently, significant improvement has been made on semantic object segmentation due to the development of deep convolutional neural networks (DCNNs). Training such a DCNN usually relies on a large number of images with pixel-level segmentation masks, and annotating these images is very costly in terms of both finance and human effort. In this paper, we propose a simple to complex (STC) framework in which only image-level annotations are utilized to learn DCNNs for semantic segmentation. Specifically, we first train an initial segmentation network called Initial-DCNN with the saliency maps of simple images (i.e., those with a single category of major object(s) and clean background). These saliency maps can be automatically obtained by existing bottom-up salient object detection techniques, where no supervision information is needed. Then, a better network called Enhanced-DCNN is learned with supervision from the predicted segmentation masks of simple images based on the Initial-DCNN as well as the image-level annotations. Finally, more pixel-level segmentation masks of complex images (two or more categories of objects with cluttered background), which are inferred by using Enhanced-DCNN and image-level annotations, are utilized as the supervision information to learn the Powerful-DCNN for semantic segmentation. Our method utilizes 40K simple images from Flickr.com and 10K complex images from PASCAL VOC for step-wisely boosting the segmentation network. 
Extensive experimental results on PASCAL VOC 2012 segmentation benchmark well demonstrate the superiority of the proposed STC framework compared with other state-of-the-arts.", "We address the problem of weakly supervised semantic segmentation. The training images are labeled only by the classes they contain, not by their location in the image. On test images instead, the method must predict a class label for every pixel. Our goal is to enable segmentation algorithms to use multiple visual cues in this weakly supervised setting, analogous to what is achieved by fully supervised methods. However, it is difficult to assess the relative usefulness of different visual cues from weakly supervised training data. We define a parametric family of structured models, where each model weights visual cues in a different way. We propose a Maximum Expected Agreement model selection principle that evaluates the quality of a model from the family without looking at superpixel labels. Searching for the best model is a hard optimization problem, which has no analytic gradient and multiple local optima. We cast it as a Bayesian optimization problem and propose an algorithm based on Gaussian processes to efficiently solve it. Our second contribution is an Extremely Randomized Hashing Forest that represents diverse superpixel features as a sparse binary vector. It enables using appearance models of visual classes that are fast at training and testing and yet accurate. Experiments on the SIFT-flow dataset show a significant improvement over previous weakly supervised methods and even over some fully supervised methods.", "Recently, deep convolutional neural networks (DCNNs) have significantly promoted the development of semantic image segmentation. However, previous works on learning the segmentation network often rely on a large number of ground-truths with pixel-level annotations, which usually require considerable human effort. 
In this paper, we explore a more challenging problem by learning to segment under image-level annotations. Specifically, our framework consists of two components. First, reliable hypotheses based localization maps are generated by incorporating the hypotheses-aware classification and cross-image contextual refinement. Second, the segmentation network can be trained in a supervised manner by these generated localization maps. We explore two network training strategies for achieving good segmentation performance. For the first strategy, a novel multi-label cross-entropy loss is proposed to train the network by directly using multiple localization maps for all classes, where each pixel contributes to each class with different weights. For the second strategy, the rough segmentation mask can be inferred from the localization maps, and then the network is optimized based on the single-label cross-entropy loss with the produced masks. We evaluate our methods on the PASCAL VOC 2012 segmentation benchmark. Extensive experimental results demonstrate the effectiveness of the proposed methods compared with the state-of-the-arts. Highlights: Localization map generation is proposed by using the hypothesis-based classification. A novel multi-label loss is proposed to train the network based on localization maps. An effective method is proposed to predict the rough mask of the given training image. Our methods achieve new state-of-the-art results on PASCAL VOC 2012 benchmark.", "The semantic image segmentation task presents a trade-off between test time accuracy and training time annotation cost. Detailed per-pixel annotations enable training accurate models but are very time-consuming to obtain; image-level class labels are an order of magnitude cheaper but result in less accurate models. We take a natural step from image-level annotation towards stronger supervision: we ask annotators to point to an object if one exists. 
We incorporate this point supervision along with a novel objectness potential in the training loss function of a CNN model. Experimental results on the PASCAL VOC 2012 benchmark reveal that the combined effect of point-level supervision and objectness potential yields an improvement of 12.9% mIOU over image-level supervision. Further, we demonstrate that models trained with point-level supervision are more accurate than models trained with image-level, squiggle-level or full supervision given a fixed annotation budget." ] }
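The MIL formulation quoted in the record above (an image carries a class tag iff at least one of its pixels belongs to that class) can be sketched with a smooth-max aggregation over per-pixel scores. The helper below is an illustrative Python sketch, not code from any of the cited papers:

```python
import numpy as np

def mil_image_loss(pixel_scores, image_tags):
    """MIL-style weak supervision sketch (hypothetical helper): per-pixel
    class scores are aggregated into one image-level score per class with
    a smooth max (log-sum-exp), so training needs only image-level tags.

    pixel_scores: (H, W, C) raw per-pixel class scores
    image_tags:   (C,) binary image-level labels
    Returns the scalar image-level sigmoid cross-entropy loss."""
    flat = pixel_scores.reshape(-1, pixel_scores.shape[-1])  # (H*W, C)
    m = flat.max(axis=0)
    image_scores = m + np.log(np.exp(flat - m).sum(axis=0))  # smooth max
    p = 1.0 / (1.0 + np.exp(-image_scores))                  # sigmoid
    # Binary cross-entropy against the image tags.
    return float(-(image_tags * np.log(p)
                   + (1 - image_tags) * np.log(1 - p)).mean())
```

The log-sum-exp stands in for a hard max so the aggregation stays differentiable; a single high-scoring pixel is enough to explain a positive tag.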
1706.02189
2624650145
Pixel-level annotations are expensive and time consuming to obtain. Hence, weak supervision using only image tags could have a significant impact in semantic segmentation. Recently, CNN-based methods have proposed to fine-tune pre-trained networks using image tags. Without additional information, this leads to poor localization accuracy. This problem, however, was alleviated by making use of objectness priors to generate foreground/background masks. Unfortunately these priors either require pixel-level annotations/bounding boxes, or still yield inaccurate object boundaries. Here, we propose a novel method to extract accurate masks from networks pre-trained for the task of object recognition, thus forgoing external objectness modules. We first show how foreground/background masks can be obtained from the activations of higher-level convolutional layers of a network. We then show how to obtain multi-class masks by the fusion of foreground/background ones with information extracted from a weakly-supervised localization network. Our experiments evidence that exploiting these masks in conjunction with a weakly-supervised training loss yields state-of-the-art tag-based weakly-supervised semantic segmentation results.
Beyond foreground/background masks, the method of the contemporary work @cite_38 exploits the output of the same localization network @cite_37 as us, but directly in a new composite loss function for weakly-supervised semantic segmentation. While effective, the method suffers from the fact that localization of some classes is inaccurate. By contrast, here, we combine our built-in foreground/background mask with information from the localization network, thus obtaining more accurate multi-class masks. As evidenced by our experiments, these more robust masks yield more accurate semantic segmentation results.
{ "cite_N": [ "@cite_38", "@cite_37" ], "mid": [ "2951358285", "2295107390" ], "abstract": [ "We introduce a new loss function for the weakly-supervised training of semantic image segmentation models based on three guiding principles: to seed with weak localization cues, to expand objects based on the information about which classes can occur in an image, and to constrain the segmentations to coincide with object boundaries. We show experimentally that training a deep convolutional neural network using the proposed loss function leads to substantially better segmentations than previous state-of-the-art methods on the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the working mechanism of our method by a detailed experimental study that illustrates how the segmentation quality is affected by each term of the proposed loss function as well as their combinations.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1." ] }
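The global-average-pooling localization idea behind @cite_37 (class activation maps) rests on a simple linearity: GAP followed by a linear classifier gives the same class scores as the spatial mean of per-position class projections, so the per-position projections can be read off as localization maps. A minimal numpy sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(7, 7, 512))   # last conv-layer activations
w = rng.normal(size=(512, 20))            # classifier weights after GAP

def class_activation_maps(features, w):
    # Project every spatial position through the classifier weights:
    # cam[y, x, c] scores how much position (y, x) supports class c.
    return features @ w

cam = class_activation_maps(features, w)
# GAP and a linear classifier commute, so the image-level class scores
# equal the spatial mean of the per-class activation maps.
image_scores = features.mean(axis=(0, 1)) @ w
assert np.allclose(cam.mean(axis=(0, 1)), image_scores)
```

This is why a network trained only for classification exposes localization "for free": the class-evidence map is already computed before the pooling step.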
1706.02275
2623431351
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
The simplest approach to learning in multi-agent settings is to use independently learning agents. This was attempted with Q-learning in @cite_35 , but does not perform well in practice @cite_22 . As we will show, independently-learning policy gradient methods also perform poorly. One issue is that each agent's policy changes during training, resulting in a non-stationary environment and preventing the naïve application of experience replay. Previous work has attempted to address this by inputting other agents' policy parameters to the Q function @cite_14 , explicitly adding the iteration index to the replay buffer, or using importance sampling @cite_28 . Deep Q-learning approaches have previously been investigated in @cite_19 to train competing Pong agents.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_22", "@cite_28", "@cite_19" ], "mid": [ "1641379095", "2097498347", "2096145798", "2949201811", "2963658727" ], "abstract": [ "Intelligent human agents exist in a cooperative social environment that facilitates learning. They learn not only by trial-and-error, but also through cooperation by sharing instantaneous information, episodic experience, and learned knowledge. The key investigations of this paper are, “Given the same number of reinforcement learning agents, will cooperative agents outperform independent agents who do not communicate during learning?” and “What is the price for such cooperation?” Using independent agents as a benchmark, cooperative agents are studied in following ways: (1) sharing sensation, (2) sharing episodes, and (3) sharing learned policies. This paper shows that (a) additional sensation from another agent is beneficial if it can be used efficiently, (b) sharing learned policies or episodes among agents speeds up learning at the cost of communication, and (c) for joint tasks, agents engaging in partnership can significantly outperform independent agents although they may learn slowly in the beginning. These tradeoff's are not just limited to multi-agent reinforcement learning.", "Recent multi-agent extensions of Q-Learning require knowledge of other agents' payoffs and Q-functions, and assume game-theoretic play at all times by all other agents. This paper proposes a fundamentally different approach, dubbed \"Hyper-Q\" Learning, in which values of mixed strategies rather than base actions are learned, and in which other agents' strategies are estimated from observed actions via Bayesian inference. Hyper-Q may be effective against many different types of adaptive agents, even if they are persistently dynamic. Against certain broad categories of adaptation, it is argued that Hyper-Q may converge to exact optimal time-varying policies. 
In tests using Rock-Paper-Scissors, Hyper-Q learns to significantly exploit an Infinitesimal Gradient Ascent (IGA) player, as well as a Policy Hill Climber (PHC) player. Preliminary analysis of Hyper-Q against itself is also presented.", "In the framework of fully cooperative multi-agent systems, independent (non-communicative) agents that learn by reinforcement must overcome several difficulties to manage to coordinate. This paper identifies several challenges responsible for the non-coordination of independent agents: Pareto-selection, non-stationarity, stochasticity, alter-exploration and shadowed equilibria. A selection of multi-agent domains is classified according to those challenges: matrix games, Boutilier's coordination game, predators pursuit domains and a special multi-state game. Moreover, the performance of a range of algorithms for independent reinforcement learners is evaluated empirically. Those algorithms are Q-learning variants: decentralized Q-learning, distributed Q-learning, hysteretic Q-learning, recursive frequency maximum Q-value and win-or-learn fast policy hill climbing. An overview of the learning algorithms' strengths and weaknesses against each challenge concludes the paper and can serve as a basis for choosing the appropriate algorithm for a new domain. Furthermore, the distilled challenges may assist in the design of new learning algorithms that overcome these problems and achieve higher performance in multi-agent applications.", "Many real-world problems, such as network packet routing and urban traffic control, are naturally modeled as multi-agent reinforcement learning (RL) problems. However, existing multi-agent RL methods typically scale poorly in the problem size. Therefore, a key challenge is to translate the success of deep learning on single-agent RL to the multi-agent setting. 
A major stumbling block is that independent Q-learning, the most popular multi-agent RL method, introduces nonstationarity that makes it incompatible with the experience replay memory on which deep Q-learning relies. This paper proposes two methods that address this problem: 1) using a multi-agent variant of importance sampling to naturally decay obsolete data and 2) conditioning each agent's value function on a fingerprint that disambiguates the age of the data sampled from the replay memory. Results on a challenging decentralised variant of StarCraft unit micromanagement confirm that these methods enable the successful combination of experience replay with multi-agent RL.", "Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments." ] }
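The "explicitly adding the iteration index to the replay buffer" fix mentioned in the related-work paragraph above can be sketched as a replay buffer that tags each transition with a training-iteration fingerprint, so a Q-function conditioned on the tag can disambiguate how stale the other agents' behaviour was when the data was collected. Names and structure below are illustrative, not from any cited codebase:

```python
import random
from collections import deque

class FingerprintReplayBuffer:
    """Replay buffer sketch for non-stationary multi-agent settings:
    every transition carries the iteration index at which it was
    collected, acting as a fingerprint of the other agents' policies."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add(self, obs, action, reward, next_obs, iteration):
        self.buffer.append((obs, action, reward, next_obs, iteration))

    def sample(self, batch_size):
        # Uniform sampling; the fingerprint travels with each transition.
        return random.sample(self.buffer, batch_size)
```

A learner can then feed the fingerprint into its value network as an extra input, or use it to down-weight obsolete transitions.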
1706.02275
2623431351
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
The nature of interaction between agents can either be cooperative, competitive, or both and many algorithms are designed only for a particular nature of interaction. Most studied are cooperative settings, with strategies such as optimistic and hysteretic Q function updates @cite_31 @cite_24 @cite_15 , which assume that the actions of other agents are made to improve collective reward. Another approach is to indirectly arrive at cooperation via sharing of policy parameters @cite_9 , but this requires homogeneous agent capabilities. These algorithms are generally not applicable in competitive or mixed settings. See @cite_13 @cite_8 for surveys of multi-agent learning approaches and applications.
{ "cite_N": [ "@cite_15", "@cite_8", "@cite_9", "@cite_24", "@cite_31", "@cite_13" ], "mid": [ "2951896791", "2099618002", "2768629321", "2108892923", "1560074431", "2107544712" ], "abstract": [ "Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings due to local viewpoints of agents, which perceive the world as non-stationary due to concurrently-exploring teammates. Approaches that learn specialized policies for individual tasks face problems when applied to the real world: not only do agents have to learn and store distinct policies for each task, but in practice identities of tasks are often non-observable, making these approaches inapplicable. This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability. We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity.", "Multiagent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, and economics. The complexity of many tasks arising in these domains makes them difficult to solve with preprogrammed agent behaviors. The agents must, instead, discover a solution on their own, using learning. A significant part of the research on multiagent learning concerns reinforcement learning techniques. This paper provides a comprehensive survey of multiagent reinforcement learning (MARL). A central issue in the field is the formal statement of the multiagent learning goal. 
Different viewpoints on this issue have led to the proposal of many different goals, among which two focal points can be distinguished: stability of the agents' learning dynamics, and adaptation to the changing behavior of the other agents. The MARL algorithms described in the literature aim---either explicitly or implicitly---at one of these two goals or at a combination of both, in a fully cooperative, fully competitive, or more general setting. A representative selection of these algorithms is discussed in detail in this paper, together with the specific issues that arise in each category. Additionally, the benefits and challenges of MARL are described along with some of the problem domains where the MARL techniques have been applied. Finally, an outlook for the field is provided.", "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multi-agent domains.", "Multi-agent systems (MAS) are a field of study of growing interest in a variety of domains such as robotics or distributed controls. 
The article focuses on decentralized reinforcement learning (RL) in cooperative MAS, where a team of independent learning robots (IL) try to coordinate their individual behavior to reach a coherent joint behavior. We assume that each robot has no information about its teammates' actions. To date, RL approaches for such ILs did not guarantee convergence to the optimal joint policy in scenarios where the coordination is difficult. We report an investigation of existing algorithms for the learning of coordination in cooperative MAS, and suggest a Q-learning extension for ILs, called hysteretic Q-learning. This algorithm does not require any additional communication between robots. Its advantages are showing off and compared to other methods on various applications: bi-matrix games, collaborative ball balancing task and pursuit domain.", "", "Cooperative multi-agent systems (MAS) are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multi-agent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to MAS problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multi-agent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning, RL or robotics). In this survey we attempt to draw from multi-agent learning work in a spectrum of areas, including RL, evolutionary computation, game theory, complex systems, agent modeling, and robotics. 
We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multi-agent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multi-agent learning problem domains, and a list of multi-agent learning resources." ] }
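The hysteretic Q-learning update referenced in the paragraph above (@cite_24) is a one-line asymmetry in the learning rate. A minimal sketch, not the authors' code:

```python
import numpy as np

def hysteretic_q_update(Q, s, a, r, s_next, alpha=0.1, beta=0.01, gamma=0.95):
    """One hysteretic Q-learning step (illustrative): an optimistic agent
    applies a large rate `alpha` to positive TD errors and a much smaller
    `beta` to negative ones, so a teammate's occasional exploratory action
    does not wipe out an already-learned joint policy.

    Q: (n_states, n_actions) table, updated in place."""
    delta = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += (alpha if delta >= 0.0 else beta) * delta
    return delta
```

Setting `beta = alpha` recovers ordinary Q-learning; the gap between the two rates controls how "stubborn" the optimism is.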
1706.02275
2623431351
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
Concurrently to our work, @cite_18 proposed a similar idea of using policy gradient methods with a centralized critic, and test their approach on a StarCraft micromanagement task. Their approach differs from ours in the following ways: (1) they learn a single centralized critic for all agents, whereas we learn a centralized critic for each agent, allowing for agents with differing reward functions including competitive scenarios, (2) we consider environments with explicit communication between agents, (3) they combine recurrent policies with feed-forward critics, whereas our experiments use feed-forward policies (although our methods are applicable to recurrent policies), (4) we learn continuous policies whereas they learn discrete policies.
{ "cite_N": [ "@cite_18" ], "mid": [ "2617547828" ], "abstract": [ "Cooperative multi-agent systems can be naturally used to model many real world problems, such as network packet routing and the coordination of autonomous vehicles. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state." ] }
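The centralised-critic arrangement contrasted above (one critic per agent, conditioned on every agent's observation and action, with decentralised actors) can be sketched as follows; a single linear layer stands in for the critic network, and all names and shapes are illustrative rather than taken from either paper:

```python
import numpy as np

class PerAgentCentralisedCritic:
    """Sketch of a per-agent centralised critic: during training the
    critic for agent i sees *all* agents' observations and actions
    (so each agent may have its own reward, enabling competitive
    settings), while each actor still maps only its own observation
    to an action, keeping execution decentralised."""

    def __init__(self, n_agents, obs_dim, act_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight per entry of the concatenated joint input.
        self.w = rng.normal(size=n_agents * (obs_dim + act_dim))

    def q_value(self, all_obs, all_actions):
        # Concatenate every agent's observation and action into one input.
        x = np.concatenate([np.ravel(all_obs), np.ravel(all_actions)])
        return float(self.w @ x)
```

Because the critic conditions on everyone's actions, changing another agent's action changes this agent's value estimate, which is exactly what restores stationarity during training.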
1706.02275
2623431351
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
Recent work has focused on learning grounded cooperative communication protocols between agents to solve various tasks @cite_5 @cite_6 @cite_16 . However, these methods are usually only applicable when the communication between agents is carried out over a dedicated, differentiable communication channel.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_6" ], "mid": [ "2402402867", "2602275733", "2395575420" ], "abstract": [ "Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.", "By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language. This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.", "We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. 
In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains." ] }
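The dedicated, differentiable communication channel described in the paragraph above amounts, in the CommNet case, to each agent receiving the mean of the other agents' hidden states as a continuous message before a shared update. A hedged numpy sketch with illustrative weight names:

```python
import numpy as np

def commnet_step(h, W_h, W_c):
    """One CommNet-style communication step (sketch): each agent
    receives the mean of the *other* agents' hidden states as a
    continuous, differentiable message, then applies a shared linear
    update followed by a nonlinearity.

    h: (n_agents, dim) hidden states; W_h, W_c: (dim, dim) weights."""
    n = h.shape[0]
    # Mean over the other agents' states, excluding agent i itself.
    c = (h.sum(axis=0, keepdims=True) - h) / (n - 1)
    return np.tanh(h @ W_h + c @ W_c)
```

Because the message is a differentiable function of the other agents' states, gradients flow through the channel during training, which is what these methods rely on and what makes them inapplicable when communication is discrete or unavailable.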
1706.02275
2623431351
We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
Our method requires explicitly modeling the decision-making process of other agents. The importance of such modeling has been recognized by both the reinforcement learning @cite_25 @cite_26 and cognitive science @cite_36 communities. @cite_17 stressed the importance of being robust to the decision-making process of other agents, as others have done by building Bayesian models of decision making. We incorporate such robustness considerations by requiring that agents interact successfully with an ensemble of possible policies of other agents, which improves training stability and the robustness of agents after training.
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_25", "@cite_17" ], "mid": [ "1993979041", "2128643385", "1764574858", "1963576113" ], "abstract": [ "One of the most astonishing features of human language is its capacity to convey information efficiently in context. Many theories provide informal accounts of communicative inference, yet there have been few successes in making precise, quantitative predictions about pragmatic reasoning. We examined judgments about simple referential communication games, modeling behavior in these games by assuming that speakers attempt to be informative and that listeners use Bayesian inference to recover speakers’ intended referents. Our model provides a close, parameter-free fit to human judgments, suggesting that the use of information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication.", "Much emphasis in multiagent reinforcement learning (MARL) research is placed on ensuring that MARL algorithms (eventually) converge to desirable equilibria. As in standard reinforcement learning, convergence generally requires sufficient exploration of strategy space. However, exploration often comes at a price in the form of penalties or foregone opportunities. In multiagent settings, the problem is exacerbated by the need for agents to \"coordinate\" their policies on equilibria. We propose a Bayesian model for optimal exploration in MARL problems that allows these exploration costs to be weighed against their expected benefits using the notion of value of information. Unlike standard RL models, this model requires reasoning about how one's actions will influence the behavior of other agents. We develop tractable approximations to optimal Bayesian exploration, and report on experiments illustrating the benefits of this approach in identical interest games.", "Fully cooperative multiagent systems--those in which agents share a joint utility model--is of special interest in AI. 
A key problem is that of ensuring that the actions of individual agents are coordinated, especially in settings where the agents are autonomous decision makers. We investigate approaches to learning coordinated strategies in stochastic domains where an agent's actions are not directly observable by others. Much recent work in game theory has adopted a Bayesian learning perspective to the more general problem of equilibrium selection, but tends to assume that actions can be observed. We discuss the special problems that arise when actions are not observable, including effects on rates of convergence, and the effect of action failure probabilities and asymmetries. We also use likelihood estimates as a means of generalizing fictitious play learning models in our setting. Finally, we propose the use of maximum likelihood as a means of removing strategies from consideration, with the aim of convergence to a conventional equilibrium, at which point learning and deliberation can cease.", "" ] }
1706.02202
2087854645
The problem presented in this paper is a generalization of the usual coupled-tasks scheduling problem in the presence of compatibility constraints. The motivation for this study is the data acquisition problem for a submarine torpedo. We investigate a particular configuration for coupled tasks (any task is divided into two sub-tasks separated by an idle time), in which the idle time of a coupled task is equal to the sum of the durations of its two sub-tasks. We prove the @math -completeness of the minimization of the schedule length, and we show that finding a solution to our problem amounts to solving a graph problem, which in itself is close to the minimum-disjoint-path cover (min-DCP) problem. We design a @math -approximation, where a and b (the processing times of the two sub-tasks) are two input data such that a>b>0, which leads to a ratio between @math and @math . Using a polynomial-time algorithm developed for some class of graph of min-DCP, we show that the ratio decreases to @math .
The problem of coupled-tasks has been studied under different conditions on the values of @math , @math , @math for @math , and under precedence constraints @cite_5 @cite_20 @cite_13 @cite_18 . Note that in these previous works all tasks are compatible, i.e. the compatibility graph is complete @cite_5 @cite_20 @cite_13 @cite_18 . Moreover, in the presence of an arbitrary compatibility graph, several complexity results are known @cite_14 @cite_7 @cite_8 , which are summarized in Table . The notation @math means that for all @math , @math is equal to a constant @math . This notation can be extended to @math and @math with the constants @math and @math .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_8", "@cite_5", "@cite_13", "@cite_20" ], "mid": [ "1966465131", "2285180621", "1981499630", "2606268922", "1985321454", "2174653384", "2091078967" ], "abstract": [ "A coupled-task is a job consisting of two distinct operations. These operations require processing in a predetermined order and at a specified interval apart. This paper considers the problem of sequencing n coupled-task jobs on a single machine with the objective of minimizing the makespan. By making assumptions about processing times, we obtain many special cases and explore the complexity of each case. NP-hardness proofs, or polynomial algorithms, are given to all except one of these special cases. The practical scenario from which this problem originated is also discussed.", "Nous presentons dans cet article un nouveau type de probleme d'ordonnancement sur monoprocesseur avec tâches-couplees en presence d'un graphe de compatibilite. Ce type de probleme est motive par l'etude d'une torpille autonome sous-marine. Dans un premier temps, nous presentons et modelisons la problematique generale, puis nous etudions la complexite de ce type de problemes en presence de contrainte de compatibilite. Des resultats de N P-completude sont obtenus et un algorithme de complexite polynomiale est donne pour resoudre deux problemes en particulier. Enfin apres une synthese de ces resultats, nous presentons une vision globale de la complexite de cette classe de probleme.", "This paper considers a special case of the coupled-tasks scheduling problem on one processor. The general problems were analyzed in depth by Orman ad Potts [1]. In this paper, we cosider that all processing times are equal to 1, the gap has exact length L, we have precedence constraits, compatibility constraits are introduced and the criterion is to minimize the scheduling length. We use this problem to study the problem of data acquisittion and data treatment of a torpedo under the water. 
We show that this problem is NP-complete and we propose an ρ-approximation algorithm where ρ ≤ (L+6) 6.", "This paper introduces a scheduling problem with coupled-tasks in presence of a compatibility graph on a single processor. We investigate a specific configuration, in which the coupled-tasks possess an idle time equal to 2. The complexity of these problems will be studied according to the presence or absence of triangles in the compatibility graph. As an extended matching, we propose a polynomial-time algorithm which consists in minimizing the number of non-covered vertices, by covering vertices with edges or paths of length two in the compatibility graph. This type of covering will be denoted by 2-cover technique. According on the compatibility graph type, the 2-cover technique provides a polynomial-time rho-approximation algorithm with rho=13 12 (resp. rho=10 9) in absence (resp. presence) of triangles.", "The coupled tasks scheduling problem was originally introduced for modeling complex radar devices. It is still used for controlling such devices and applied in similar applications. This paper considers a problem of coupled tasks scheduling on a single processor, under the assumptions that all processing times are equal to 1, the gap has exact integer length L and the precedence constraints are strict. We prove that the general problem, when L is part of the input and the precedence constraints graph is a general graph, is NP-hard in the strong sense. We also show that the special case when L=2 and the precedence constraints graph is an in-tree or an out-tree, can be solved in O(n) time.", "Coupled tasks are two-operation tasks, where the two operations are separated by a time interval of fixed duration. Coupled task scheduling problems refer then to the scheduling of a set of coupled tasks on a single machine. Applications of these problems, reported in the literature, arise in connection with radar systems, robotic cells, and in manufacturing. 
Most of the complexity issues for scheduling coupled tasks have been settled. However, the complexity status has been unknown for the identical coupled task problem, where multiple copies of a single coupled task are to be processed. The purpose of the article is to solve this open problem in the cyclic case, for which we prove the polynomial complexity.", "The coupled task problem is to schedule n jobs on one machine where each job consists of two subtasks with required delay time between them. The objective is to minimize the makespan. This problem was analyzed in depth by Orman and Potts [3]. They investigated the complexity of different cases depending on the lengths a i and b i of the two subtasks and the delay time L i . Open image in new window Open image in new window -hardness proofs or polynomial algorithms were given for all cases except for the one where a i =a, b i =b and L i =L. In this paper we present an exact algorithm for this problem with time complexity O(nr 2L ) where Open image in new window holds. Therefore the algorithm is linear in the number of jobs for fixed L." ] }
1706.01679
2529333329
Process Control Systems (PCSs) are the operating core of Critical Infrastructures (CIs). As such, anomaly detection has been an active research field to ensure CI normal operation. Previous approaches have leveraged network-level data for anomaly detection, or have disregarded the existence of process disturbances, thus opening the possibility of mislabelling disturbances as attacks and vice versa. In this paper we present an anomaly detection and diagnostic system based on Multivariate Statistical Process Control (MSPC) that aims to distinguish between attacks and disturbances. To this end, we expand traditional MSPC to monitor process-level and controller-level data. We evaluate our approach using the Tennessee-Eastman process. Results show that our approach can be used to distinguish disturbances from intrusions to a certain extent, and we conclude that the proposed approach can be extended with other sources of data to improve results.
While most approaches leverage network-level data to detect anomalies in PCSs (see the survey @cite_4 ), other proposals, such as ours, address this task by leveraging process- and sensor-level data.
{ "cite_N": [ "@cite_4" ], "mid": [ "2039427951" ], "abstract": [ "Last year marked a turning point in the history of cybersecurity-the arrival of the first cyber warfare weapon ever, known as Stuxnet. Not only was Stuxnet much more complex than any other piece of malware seen before, it also followed a completely new approach that's no longer aligned with conven tional confidentiality, integrity, and availability thinking. Con trary to initial belief, Stuxnet wasn't about industrial espionage: it didn't steal, manipulate, or erase information. Rather, Stuxnet's goal was to physically destroy a military target-not just meta phorically, but literally. Let's see how this was done." ] }
1706.01679
2529333329
Process Control Systems (PCSs) are the operating core of Critical Infrastructures (CIs). As such, anomaly detection has been an active research field to ensure CI normal operation. Previous approaches have leveraged network-level data for anomaly detection, or have disregarded the existence of process disturbances, thus opening the possibility of mislabelling disturbances as attacks and vice versa. In this paper we present an anomaly detection and diagnostic system based on Multivariate Statistical Process Control (MSPC) that aims to distinguish between attacks and disturbances. To this end, we expand traditional MSPC to monitor process-level and controller-level data. We evaluate our approach using the Tennessee-Eastman process. Results show that our approach can be used to distinguish disturbances from intrusions to a certain extent, and we conclude that the proposed approach can be extended with other sources of data to improve results.
When dealing with process-level data, proposals can be further classified into two subgroups: (1) solutions that require a model of the monitored process to detect anomalies, and (2) approaches where modelling the process is not necessary. Process-model-dependent contributions include the work of McEvoy and Wolthusen @cite_2 and Svendsen and Wolthusen @cite_14 . While effective at detecting anomalies, these approaches require accurate modelling of the physical process. This requirement poses an important obstacle to implementing detection systems of this nature, especially for complex processes. More process-independent approaches, on the other hand, include the work of @cite_5 and @cite_13 .
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_13", "@cite_2" ], "mid": [ "1613804959", "1501307199", "", "56492688" ], "abstract": [ "Modern Process Control Systems (PCS) exhibit an increasing trend towards the pervasive adoption of commodity, off-the-shelf Information and Communication Technologies (ICT). This has brought significant economical and operational benefits, but it also shifted the architecture of PCS from a completely isolated environment to an open, “system of systems” integration with traditional ICT systems, susceptible to traditional computer attacks. In this paper we present a novel approach to detect cyber attacks targeting measurements sent to control hardware, i.e., typically to Programmable Logical Controllers (PLC). The approach builds on the Gaussian mixture model to cluster sensor measurement values and a cluster assessment technique known as silhouette. We experimentally demonstrate that in this particular problem the Gaussian mixture clustering outperforms the k-means clustering algorithm. The effectiveness of the proposed technique is tested in a scenario involving the simulated Tennessee-Eastman chemical process and three different cyber attacks.", "Supervisory control and data acquisition (SCADA) systems are increasingly used to operate critical infrastructure assets. However, the inclusion of advanced information technology and communications components and elaborate control strategies in SCADA systems increase the threat surface for external and subversion-type attacks. The problems are exacerbated by site-specific properties of SCADA environments that make subversion detection impractical; and by sensor noise and feedback characteristics that degrade conventional anomaly detection systems. Moreover, potential attack mechanisms are ill-defined and may include both physical and logical aspects.", "", "Industrial control systems are a vital part of the critical infrastructure. 
The potentially large impact of a failure makes them attractive targets for adversaries. Unfortunately, simplistic approaches to intrusion detection using protocol analysis or naive statistical estimation techniques are inadequate in the face of skilled adversaries who can hide their presence with the appearance of legitimate actions." ] }
1706.01679
2529333329
Process Control Systems (PCSs) are the operating core of Critical Infrastructures (CIs). As such, anomaly detection has been an active research field to ensure CI normal operation. Previous approaches have leveraged network-level data for anomaly detection, or have disregarded the existence of process disturbances, thus opening the possibility of mislabelling disturbances as attacks and vice versa. In this paper we present an anomaly detection and diagnostic system based on Multivariate Statistical Process Control (MSPC) that aims to distinguish between attacks and disturbances. To this end, we expand traditional MSPC to monitor process-level and controller-level data. We evaluate our approach using the Tennessee-Eastman process. Results show that our approach can be used to distinguish disturbances from intrusions to a certain extent, and we conclude that the proposed approach can be extended with other sources of data to improve results.
@cite_5 present an anomaly detection technique based on Gaussian mixture model clustering of sensor-level observations. They then use silhouette examinations to interpret the results. Nevertheless, they only consider attacks as possible causes of abnormal situations in the process, without considering process faults or disturbances. Therefore, process-related anomalies could be mislabelled as attacks and vice versa.
{ "cite_N": [ "@cite_5" ], "mid": [ "1613804959" ], "abstract": [ "Modern Process Control Systems (PCS) exhibit an increasing trend towards the pervasive adoption of commodity, off-the-shelf Information and Communication Technologies (ICT). This has brought significant economical and operational benefits, but it also shifted the architecture of PCS from a completely isolated environment to an open, “system of systems” integration with traditional ICT systems, susceptible to traditional computer attacks. In this paper we present a novel approach to detect cyber attacks targeting measurements sent to control hardware, i.e., typically to Programmable Logical Controllers (PLC). The approach builds on the Gaussian mixture model to cluster sensor measurement values and a cluster assessment technique known as silhouette. We experimentally demonstrate that in this particular problem the Gaussian mixture clustering outperforms the k-means clustering algorithm. The effectiveness of the proposed technique is tested in a scenario involving the simulated Tennessee-Eastman chemical process and three different cyber attacks." ] }
1706.01919
2623026428
We present the results of a user study with novice NMR analysts (N=19) involving a gamified simulation of the NMR analysis process. Participants solved randomly generated spectrum puzzles for up to three hours. We used eye tracking, event logging, and observations to record symptoms of cognitive depletion while participants worked. Analysis of the results indicates that we can detect both signs of learning and signs of cognitive depletion in participants over the course of the three hours. Participants' break strategies did not predict or reflect game scores, but certain symptoms appear predictive of breaks.
Research into attention processes often attempts to measure and predict how long individuals can attend to a specific task. One important concept from attention research is the often-observed vigilance decrement: after periods of sustained effort on a vigilance task, individuals begin to miss cues critical to their task or workflow @cite_4 @cite_1 @cite_23 @cite_14 . Task-unrelated thoughts (TUTs) are another important phenomenon from attentional research. Individuals required to focus on a task for long periods of time often report self-distracting thoughts that are unrelated to the task at hand @cite_6 @cite_18 . Suppressing TUTs is a current topic of research, but for our purposes we view TUTs as a potential symptom of cognitive depletion. Interruption and task resumption research seeks to understand workflows and predict the optimal time to interrupt an individual so that the interruption is as minimally detrimental as possible (see our references for just a few examples). Such understanding is important to the study of cognitive depletion because any coping or mitigation strategies must also be minimally intrusive to workflows.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_1", "@cite_6", "@cite_23" ], "mid": [ "2019490284", "2056699334", "1984018499", "2128974123", "1995994322", "2086210198" ], "abstract": [ "Perceptual load is a key determinant of distraction by task-irrelevant stimuli (e.g., Lavie, N. (2005). Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75–82). Here we establish the role of perceptual load in determining an internal form of distraction by task-unrelated thoughts (TUTs or “mind-wandering”). Four experiments demonstrated reduced frequency of TUTs with high compared to low perceptual load in a visual-search task. Alternative accounts in terms of increased demands on responses, verbal working memory or motivation were ruled out and clear effects of load were found for unintentional TUTs. Individual differences in load effects on internal (TUTs) and external (response-competition) distractors were correlated. These results suggest that exhausting attentional capacity in task-relevant processing under high perceptual load can reduce processing of task-irrelevant information from external and internal sources alike.", "Objective: We describe major discoveries and developments in vigilance research. Background: Vigilance tasks have typically been viewed as undemanding assignments requiring little mental effort. The vigilance decrement function has also been considered to result from a decline in arousal brought about by understimulation. Methods: Recent research in vigilance is reviewed in four areas: studies of task type, perceived mental workload during vigilance, neural measures of resource demand in vigilance, and studies of task-induced stress. Results: Experiments comparing successive and simultaneous vigilance tasks support an attentional resource theory of vigilance. Subjective reports also show that the workload of vigilance is high and sensitive to factors that increase processing demands. 
Neuroimaging studies using transcranial Doppler sonography provide strong, independent evidence for resource changes linked to performance decrement in vigilance tasks. Finally, physiological and subjective reports confirm th...", "Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the \"black box\" of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users' habituation to security warnings.", "We newly propose that the vigilance decrement occurs because the cognitive control system fails to maintain active the goal of the vigilance task over prolonged periods of time (goal habituation). Further, we hypothesized that momentarily deactivating this goal (via a switch in tasks) would prevent the activation level of the vigilance goal from ever habituating. We asked observers to perform a visual vigilance task while maintaining digits in-memory. When observers retrieved the digits at the end of the vigilance task, their vigilance performance steeply declined over time. 
However, when observers were asked to sporadically recollect the digits during the vigilance task, the vigilance decrement was averted. Our results present a direct challenge to the pervasive view that vigilance decrements are due to a depletion of attentional resources and provide a tractable mechanism to prevent this insidious phenomenon in everyday life.", "Human multitasking is often the result of self-initiated interruptions in the performance of an ongoing task. These self-interruptions occur in the absence of external triggers such as electronic alerts or email notifications. Compared to externally induced interruptions, self-interruptions have not received enough research attention. To address this gap, this paper develops a typology of self-interruptions based on the integration of Flow Theory and Self-regulation Theory. In this new typology, the two major categories stem from positive and negative feelings of task progress and prospects of goal attainment. The proposed classification is validated in an experimental multitasking environment with pre-defined tasks. Empirical findings indicate that negative feelings trigger more self-interruptions than positive feelings. In general, more self-interruptions result in lower accuracy in all tasks. The results suggest that negative internal triggers of self-interruptions unleash a downward spiral that may degrade performance.", "The vigilance decrement has been described as a slowing in reaction times or an increase in error rates as an effect of time-on-task during tedious monitoring tasks. This decrement has been alternatively ascribed to either withdrawal of the supervisory attentional system, due to underarousal caused by the insufficient workload, or to a decreased attentional capacity and thus the impossibility to sustain mental effort. Furthermore, it has previously been reported that controlled processing is the locus of the vigilance decrement. 
This study aimed at answering three questions, to better define sustained attention. First, is endogenous attention more vulnerable to time-on-task than exogenous attention? Second, do measures of autonomic arousal provide evidence to support the underload vs overload hypothesis? And third, do these measures show a different effect for endogenous and exogenous attention? We applied a cued (valid vs invalid) conjunction search task, and ECG and respiration recordings were used to compute sympathetic (normalized low frequency power) and parasympathetic tone (respiratory sinus arrhythmia, RSA). Behavioural results showed a dual effect of time-on-task: the usually described vigilance decrement, expressed as increased reaction times (RTs) after 30 min for both conditions; and a higher cost in RTs after invalid cues for the endogenous condition only, appearing after 60 min. Physiological results clearly support the underload hypothesis to subtend the vigilance decrement, since heart period and RSA increased over time-on-task. There was no physiological difference between the endogenous and exogenous conditions. Subjective experience of participants was more compatible with boredom than with high mental effort." ] }
1706.01919
2623026428
We present the results of a user study with novice NMR analysts (N=19) involving a gamified simulation of the NMR analysis process. Participants solved randomly generated spectrum puzzles for up to three hours. We used eye tracking, event logging, and observations to record symptoms of cognitive depletion while participants worked. Analysis of the results indicates that we can detect both signs of learning and signs of cognitive depletion in participants over the course of the three hours. Participants' break strategies did not predict or reflect game scores, but certain symptoms appear predictive of breaks.
The study of Mechanical Turk-style economies has provided a wealth of techniques for assessing the quality of worker contributions and for detecting workers who are abusing task structures for their own gain @cite_21 @cite_3 @cite_8 . These techniques are beneficial to the study of cognitive depletion as they provide potential metrics that can be automatically collected without interrupting workers as well as providing insight into the working patterns of large groups of individuals. It also provides an easily accessible real-world example of an ideal scenario for cognitive depletion research: an economy based on constant completion of micro-tasks where workers are motivated to work beyond the point where their cognitively depleted state begins affecting the quality of their output.
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_8" ], "mid": [ "1999308248", "2124994029", "2030192188" ], "abstract": [ "Crowdsourcing systems lack effective measures of the effort required to complete each task. Without knowing how much time workers need to execute a task well, requesters struggle to accurately structure and price their work. Objective measures of effort could better help workers identify tasks that are worth their time. We propose a data-driven effort metric, ETA (error-time area), that can be used to determine a task's fair price. It empirically models the relationship between time and error rate by manipulating the time that workers have to complete a task. ETA reports the area under the error-time curve as a continuous metric of worker effort. The curve's 10th percentile is also interpretable as the minimum time most workers require to complete the task without error, which can be used to price the task. We validate the ETA metric on ten common crowdsourcing tasks, including tagging, transcription, and search, and find that ETA closely tracks how workers would rank these tasks by effort. We also demonstrate how ETA allows requesters to rapidly iterate on task designs and measure whether the changes improve worker efficiency. Our findings can facilitate the process of designing, pricing, and allocating crowdsourcing tasks.", "Detecting and correcting low quality submissions in crowdsourcing tasks is an important challenge. Prior work has primarily focused on worker outcomes or reputation, using approaches such as agreement across workers or with a gold standard to evaluate quality. We propose an alternative and complementary technique that focuses on the way workers work rather than the products they produce. Our technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such quality, errors, and the likelihood of cheating. 
We evaluate the effectiveness of the approach across three contexts including classification, generation, and comprehension tasks. The results indicate that we can build predictive models of task performance based on behavioral traces alone, and that these models generalize to related tasks. Finally, we discuss limitations and extensions of the approach.", "Multitasking in user behavior can be represented along a continuum in terms of the time spent on one task before switching to another. In this paper, we present a theory of behavior along the multitasking continuum, from concurrent tasks with rapid switching to sequential tasks with longer time between switching. Our theory unifies several theoretical effects - the ACT-R cognitive architecture, the threaded cognition theory of concurrent multitasking, and the memory-for-goals theory of interruption and resumption - to better understand and predict multitasking behavior. We outline the theory and discuss how it accounts for numerous phenomena in the recent empirical literature." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
The main differences between the variety of CSR based methods introduced in the literature lie in the choice of the feature extraction method @math and the regression method @math . For instance, Supervised Descent Method (SDM) @cite_8 uses SIFT @cite_12 features and a simple linear regressor. LBF @cite_31 takes advantage of sparse features generated from binary trees and intensity differences of individual pixels. LBF uses Support Vector Regression @cite_15 for regression which, combined with the sparse features, leads to a very efficient method running at up to 3000 fps.
{ "cite_N": [ "@cite_31", "@cite_15", "@cite_12", "@cite_8" ], "mid": [ "1998294030", "", "2151103935", "2157285372" ], "abstract": [ "This paper presents a highly efficient, very accurate regression approach for face alignment. Our approach has two novel components: a set of local binary features, and a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. Our approach achieves the state-of-the-art results when tested on the current most challenging benchmarks. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3, 000 fps on a desktop or 300 fps on a mobile phone for locating a few dozens of landmarks.", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. 
This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs. cmu.edu intraface." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
Coarse to Fine Shape Searching (CFSS) @cite_41 , similarly to SDM, uses SIFT features extracted at landmark locations. However the regression step of CSR is replaced with a search over the space of possible face shapes which goes from coarse to fine over several iterations. This reduces the probability of falling into a local minimum and thus improves convergence.
{ "cite_N": [ "@cite_41" ], "mid": [ "1960706641" ], "abstract": [ "We present a novel face alignment framework based on coarse-to-fine shape searching. Unlike the conventional cascaded regression approaches that start with an initial shape and refine the shape in a cascaded manner, our approach begins with a coarse search over a shape space that contains diverse shapes, and employs the coarse solution to constrain subsequent finer search of shapes. The unique stage-by-stage progressive and adaptive search i) prevents the final solution from being trapped in local optima due to poor initialisation, a common problem encountered by cascaded regression approaches; and ii) improves the robustness in coping with large pose variations. The framework demonstrates real-time performance and state-of-the-art results on various benchmarks including the challenging 300-W dataset." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
MIX @cite_23 also uses SIFT for feature extraction, while regression is performed using a mixture of experts, where each expert is specialized in a certain part of the space of face shapes. Moreover, MIX warps the input image before each iteration so that the current estimate of the face shape matches a predefined canonical face shape.
{ "cite_N": [ "@cite_23" ], "mid": [ "2241943627" ], "abstract": [ "Face alignment, which is the task of finding the locations of a set of facial landmark points in an image of a face, is useful in widespread application areas. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a mixture of regression experts. Each expert learns a customized regression model that is specialized to a different subset of the joint space of pose and expressions. The system is invariant to a predefined class of transformations (e.g., affine), because the input is transformed to match each expert’s prototype shape before the regression is applied. We also present a method to include deformation constraints within the discriminative alignment framework, which makes our algorithm more robust. Our algorithm significantly outperforms previous methods on publicly available face alignment datasets." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
Mnemonic Descent Method (MDM) @cite_5 fuses the feature extraction and regression steps of CSR into a single Recurrent Neural Network that is trained end-to-end. MDM also introduces memory into the process which allows information to be passed between CSR iterations.
{ "cite_N": [ "@cite_5" ], "mid": [ "2474575620" ], "abstract": [ "Cascaded regression has recently become the method of choice for solving non-linear least squares problems such as deformable image alignment. Given a sizeable training set, cascaded regression learns a set of generic rules that are sequentially applied to minimise the least squares problem. Despite the success of cascaded regression for problems such as face alignment and head pose estimation, there are several shortcomings arising in the strategies proposed thus far. Specifically, (a) the regressors are learnt independently, (b) the descent directions may cancel one another out and (c) handcrafted features (e.g., HoGs, SIFT etc.) are mainly used to drive the cascade, which may be sub-optimal for the task at hand. In this paper, we propose a combined and jointly trained convolutional recurrent neural network architecture that allows the training of an end-to-end to system that attempts to alleviate the aforementioned drawbacks. The recurrent module facilitates the joint optimisation of the regressors by assuming the cascades form a nonlinear dynamical system, in effect fully utilising the information between all cascade levels by introducing a memory unit that shares information across all levels. The convolutional module allows the network to extract features that are specialised for the task at hand and are experimentally shown to outperform hand-crafted features. We show that the application of the proposed architecture for the problem of face alignment results in a strong improvement over the current state-of-the-art." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
While all of the above-mentioned methods perform face alignment based only on local patches, some methods @cite_44 @cite_46 estimate initial landmark positions using the entire face image and then use local patches for refinement. In contrast, DAN localizes the landmarks based on the entire face image at all of its stages.
{ "cite_N": [ "@cite_44", "@cite_46" ], "mid": [ "2129210471", "2219124274" ], "abstract": [ "We present a new approach to localize extensive facial landmarks with a coarse-to-fine convolutional network cascade. Deep convolutional neural networks (DCNN) have been successfully utilized in facial landmark localization for two-fold advantages: 1) geometric constraints among facial points are implicitly utilized, 2) huge amount of training data can be leveraged. However, in the task of extensive facial landmark localization, a large number of facial landmarks (more than 50 points) are required to be located in a unified system, which poses great difficulty in the structure design and training process of traditional convolutional networks. In this paper, we design a four-level convolutional network cascade, which tackles the problem in a coarse-to-fine manner. In our system, each network level is trained to locally refine a subset of facial landmarks generated by previous network levels. In addition, each level predicts explicit geometric constraints (the position and rotation angles of a specific facial component) to rectify the inputs of the current network level. The combination of coarse-to-fine cascade and geometric refinement enables our system to locate extensive facial landmarks (68 points) accurately in the 300-W facial landmark localization challenge.", "In this paper we present our solution to the 300 Faces in the Wild Facial Landmark Localization Challenge. We demonstrate how to achieve very competitive localization performance with a simple deep learning based system. Human study is conducted to show that the accuracy of our system has been very close to human performance. We discuss how this finding would affect our future direction to improve our system. We show how to achieve state-of-the-art facial landmark localization by CNN.The system's performance is improved by deeper network.We show our system's performance is close to human." ] }
1706.01789
2623892189
In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.
The use of heatmaps for face alignment related tasks precedes the proposed method. One method that uses heatmaps is @cite_42 , where a neural network outputs predictions in the form of a heatmap. In contrast, the proposed method uses heatmaps solely as a means for transferring information between stages.
{ "cite_N": [ "@cite_42" ], "mid": [ "2527681779" ], "abstract": [ "This paper describes our submission to the 1st 3D Face Alignment in the Wild (3DFAW) Challenge. Our method builds upon the idea of convolutional part heatmap regression (Bulat and Tzimiropoulos, 2016), extending it for 3D face alignment. Our method decomposes the problem into two parts: (a) X,Y (2D) estimation and (b) Z (depth) estimation. At the first stage, our method estimates the X,Y coordinates of the facial landmarks by producing a set of 2D heatmaps, one for each landmark, using convolutional part heatmap regression. Then, these heatmaps, alongside the input RGB image, are used as input to a very deep subnetwork trained via residual learning for regressing the Z coordinate. Our method ranked 1st in the 3DFAW Challenge, surpassing the second best result by more than 22 . Code can be found at http: www.cs.nott.ac.uk psxab5 ." ] }
1706.01574
2621525845
A significant amount of search queries originate from some real world information need or tasks [13]. In order to improve the search experience of the end users, it is important to have accurate representations of tasks. As a result, significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as providing the end user with better query suggestions [9], for better recommendations [41], for satisfaction prediction [36] and for improved personalization in terms of tasks [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them and a more naturalistic representation of tasks would be in terms of a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method based on real world query log data both through quantitative and crowdsourced experiments and highlight the importance of considering task subtask hierarchies.
There has been a large body of work focused on the problem of segmenting and organizing query logs into semantically coherent structures. Many such methods use the idea of a cutoff between queries, where two consecutive queries are considered as two different sessions or tasks if the time interval between them exceeds a certain threshold @cite_15 @cite_25 @cite_9 . Often a 30-minute timeout is used to segment sessions. However, experimental results of these methods indicate that the timeouts are of limited utility in predicting whether two queries belong to the same task, and unsuitable for identifying session boundaries.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_25" ], "mid": [ "1982889956", "1982896842", "2099513082" ], "abstract": [ "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such.", "Abstract This paper presents the results of a study conducted at Georgia Institute of Technology that captured client-side user events of NCSA's XMosaic. Actual user behavior, as determined from client-side log file analysis, supplemented our understanding of user navigation strategies as well as provided real interface usage data. Log file analysis also yielded design and usability suggestions for WWW pages, sites and browsers. The methodology of the study and findings are discussed along with future research directions.", "Contextual information provides an important basis for identifying and understanding users' information needs. 
Our previous work in traditional information retrieval systems has shown how using contextual information could improve retrieval performance. With the vast quantity and variety of information available on the Web, and the short query lengths within Web searches, it becomes even more crucial that appropriate contextual information is extracted to facilitate personalized services. However, finding users' contextual information is not straightforward, especially in the Web search environment where less is known about the individual users. In this paper, we will present an approach that has significant potential far studying Web users' search contexts. The approach automatically groups a user's consecutive search activities on the same search topic into one session. It uses Dempster-Shafer theory to combine evidence extracted from two sources, each of which is based on the statistical data from Web search logs. The evaluation we have performed demonstrates that our approach has achieved a significant improvement over previous methods of session identification." ] }
1706.01574
2621525845
A significant amount of search queries originate from some real world information need or tasks [13]. In order to improve the search experience of the end users, it is important to have accurate representations of tasks. As a result, significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as providing the end user with better query suggestions [9], for better recommendations [41], for satisfaction prediction [36] and for improved personalization in terms of tasks [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them and a more naturalistic representation of tasks would be in terms of a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method based on real world query log data both through quantitative and crowdsourced experiments and highlight the importance of considering task subtask hierarchies.
More recent studies suggest that users often seek to complete multiple search tasks within a single search session @cite_4 @cite_7 , with over 50% of search sessions containing more than two tasks.
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2315767692", "2113363259" ], "abstract": [ "Multi-tasking within a single online search sessions is an increasingly popular phenomenon. In this work, we quantify multi-tasking behavior of web search users. Using insights from large-scale search logs, we seek to characterize user groups and search sessions with a focus on multi-task sessions. Our findings show that dual-task sessions are more prevalent than single-task sessions in online search, and that over 50 of search sessions have more than 2 tasks. Further, we provide a method to categorize users into focused, multi-taskers or supertaskers depending on their level of task-multiplicity and show that the search effort expended by these users varies across the groups. The findings from this analysis provide useful insights about task-multiplicity in an online search environment and hold potential value for search engines that wish to personalize and support search experiences of users based on their task behavior.", "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75 of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. 
These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users." ] }
1706.01574
2621525845
A significant amount of search queries originate from some real world information need or tasks [13]. In order to improve the search experience of the end users, it is important to have accurate representations of tasks. As a result, significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as providing the end user with better query suggestions [9], for better recommendations [41], for satisfaction prediction [36] and for improved personalization in terms of tasks [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them and a more naturalistic representation of tasks would be in terms of a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method based on real world query log data both through quantitative and crowdsourced experiments and highlight the importance of considering task subtask hierarchies.
@cite_34 and @cite_16 studied the problem of cross-session task extraction via binary same-task classification, and found that different types of tasks demonstrate different life spans. While such task extraction methods are good at linking a new query to an ongoing task, these query links often form long chains, which results in a task cluster containing queries from many potentially different tasks. With the realization that sessions are not enough to represent tasks, recent work has started exploring cross-session task extraction, which often results in complex non-homogeneous clusters of queries solving a number of related yet different tasks. Unfortunately, pairwise predictions alone cannot generate the partition of tasks efficiently, and even with post-processing, the final task partitions obtained are not expressive enough to demarcate subtasks @cite_29 . Finally, the authors of @cite_14 model query temporal patterns using a special class of point processes called Hawkes processes, and combine a topic model with Hawkes processes to simultaneously identify and label search tasks.
{ "cite_N": [ "@cite_14", "@cite_29", "@cite_34", "@cite_16" ], "mid": [ "1972882261", "2150263845", "2104255729", "2140387367" ], "abstract": [ "We consider a search task as a set of queries that serve the same user information need. Analyzing search tasks from user query streams plays an important role in building a set of modern tools to improve search engine performance. In this paper, we propose a probabilistic method for identifying and labeling search tasks based on the following intuitive observations: queries that are issued temporally close by users in many sequences of queries are likely to belong to the same search task, meanwhile, different users having the same information needs tend to submit topically coherent search queries. To capture the above intuitions, we directly model query temporal patterns using a special class of point processes called Hawkes processes, and combine topic models with Hawkes processes for simultaneously identifying and labeling search tasks. Essentially, Hawkes processes utilize their self-exciting properties to identify search tasks if influence exists among a sequence of queries for individual users, while the topic model exploits query co-occurrence across different users to discover the latent information needed for labeling search tasks. More importantly, there is mutual reinforcement between Hawkes processes and the topic model in the unified model that enhances the performance of both. We evaluate our method based on both synthetic data and real-world query log data. In addition, we also apply our model to query clustering and search task identification. By comparing with state-of-the-art methods, the results demonstrate that the improvement in our proposed approach is consistent and promising.", "In this paper, we introduce \"task trail\" as a new concept to understand user search behaviors. We define task to be an atomic user information need. 
Web search logs have been studied mainly at session or query level where users may submit several queries within one task and handle several tasks within one session. Although previous studies have addressed the problem of task identification, little is known about the advantage of using task over session and query for search applications. In this paper, we conduct extensive analyses and comparisons to evaluate the effectiveness of task trails in three search applications: determining user satisfaction, predicting user search interests, and query suggestion. Experiments are conducted on large scale datasets from a commercial search engine. Experimental results show that: (1) Sessions and queries are not as precise as tasks in determining user satisfaction. (2) Task trails provide higher web page utilities to users than other sources. (3) Tasks represent atomic user information needs, and therefore can preserve topic similarity between query pairs. (4) Task-based query suggestion can provide complementary results to other models. The findings in this paper verify the need to extract task trails from web search logs and suggest potential applications in search and recommendation systems.", "The information needs of search engine users vary in complexity, depending on the task they are trying to accomplish. Some simple needs can be satisfied with a single query, whereas others require a series of queries issued over a longer period of time. While search engines effectively satisfy many simple needs, searchers receive little support when their information needs span session boundaries. In this work, we propose methods for modeling and analyzing user search behavior that extends over multiple search sessions. We focus on two problems: (i) given a user query, identify all of the related queries from previous sessions that the same user has issued, and (ii) given a multi-query task for a user, predict whether the user will return to this task in the future. 
We model both problems within a classification framework that uses features of individual queries and long-term user search behavior at different granularity. Experimental evaluation of the proposed models for both tasks indicates that it is possible to effectively model and analyze cross-session search behavior. Our findings have implications for improving search for complex information needs and designing search engine features to support cross-session search tasks.", "Many important search tasks require multiple search sessions to complete. Tasks such as travel planning, large purchases, or job searches can span hours, days, or even weeks. Inevitably, life interferes, requiring the searcher either to recover the \"state\" of the search manually (most common), or plan for interruption in advance (unlikely). The goal of this work is to better understand, characterize, and automatically detect search tasks that will be continued in the near future. To this end, we analyze a query log from the Bing Web search engine to identify the types of intents, topics, and search behavior patterns associated with long-running tasks that are likely to be continued. Using our insights, we develop an effective prediction algorithm that significantly outperforms both the previous state-of-the-art method, and even the ability of human judges, to predict future task continuation. Potential applications of our techniques would allow a search engine to pre-emptively \"save state\" for a searcher (e.g., by caching search results), perform more targeted personalization, and otherwise better support the searcher experience for interrupted search tasks." ] }
1706.01574
2621525845
A significant fraction of search queries originates from real-world information needs or tasks [13]. In order to improve the search experience of end users, it is important to have accurate representations of tasks. As a result, a significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as to provide the end user with better query suggestions [9], recommendations [41], satisfaction prediction [36], and improved task-level personalization [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them, and a more naturalistic representation of tasks would be a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments and highlight the importance of considering task/subtask hierarchies.
@cite_39 was the first work to consider the fact that there may be multiple subtasks associated with a user's information need and that these subtasks could be interleaved across different sessions. However, their method only focuses on the queries submitted by a single user and attempts to segment them based on whether they fall under the same information need. Hence, they only address the task boundary identification and same-task identification problems, and their method cannot be used directly for task extraction. Our work relaxes the same-user assumption and considers queries across different users for task extraction. Finally, in a recent poster @cite_33 , we proposed the idea of extracting task hierarchies and presented a basic tree extraction algorithm. Our current work extends that preliminary model in a number of dimensions, including a novel model of query affinities and a task-coherence-based pruning strategy, which we observe give substantial improvements in results. Unlike past work, we also present a detailed derivation and evaluation of the extracted hierarchy and its application to task extraction.
{ "cite_N": [ "@cite_33", "@cite_39" ], "mid": [ "1921743246", "1975409939" ], "abstract": [ "Current search systems do not provide adequate support for users tackling complex tasks due to which the cognitive burden of keeping track of such tasks is placed on the searcher. As opposed to recent approaches to search task extraction, a more naturalistic viewpoint would involve viewing query logs as hierarchies of tasks with each search task being decomposed into more focussed sub-tasks. In this work, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. The proposed approach makes use of the multi-relational aspect of query associations which are important in identifying query-task associations. We describe a greedy agglomerative model selection algorithm based on the Gamma-Poisson conjugate mixture that takes just one pass through the data to learn a fully probabilistic, hierarchical model of trees that is capable of learning trees with arbitrary branching structures as opposed to the more common binary structured trees. We evaluate our method based on real world query log data based on query term prediction. To the best of our knowledge, this work is the first to consider hierarchies of search tasks and subtasks.", "Most analysis of web search relevance and performance takes a single query as the unit of search engine interaction. When studies attempt to group queries together by task or session, a timeout is typically used to identify the boundary. However, users query search engines in order to accomplish tasks at a variety of granularities, issuing multiple queries as they attempt to accomplish tasks. In this work we study real sessions manually labeled into hierarchical tasks, and show that timeouts, whatever their length, are of limited utility in identifying task boundaries, achieving a maximum precision of only 70%.
We report on properties of this search task hierarchy, as seen in a random sample of user interactions from a major web search engine's log, annotated by human editors, learning that 17% of tasks are interleaved, and 20% are hierarchically organized. No previous work has analyzed or addressed automatic identification of interleaved and hierarchically organized search tasks. We propose and evaluate a method for the automated segmentation of users' query streams into hierarchical units. Our classifiers can improve on timeout segmentation, as well as other previously published approaches, bringing the accuracy up to 92% for identifying fine-grained task boundaries, and 89-97% for identifying pairs of queries from the same task when tasks are interleaved hierarchically. This is the first work to identify, measure and automatically segment sequences of user queries into their hierarchical structure. The ability to perform this kind of segmentation paves the way for evaluating search engines in terms of user task completion." ] }
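The timeout baseline discussed in the abstract above can be sketched in a few lines. The 30-minute cutoff here is only an assumption, chosen because it is a common choice in log studies, not a value taken from the paper:

```python
# Timeout baseline: split a user's time-ordered query stream into
# sessions whenever the gap between consecutive queries exceeds a cutoff.

def segment_by_timeout(queries, timeout_s=30 * 60):
    """queries: list of (timestamp_seconds, query_text), time-ordered.
    Returns a list of sessions, each a list of query texts."""
    sessions = []
    current, last_t = [], None
    for t, q in queries:
        if last_t is not None and t - last_t > timeout_s:
            sessions.append(current)
            current = []
        current.append(q)
        last_t = t
    if current:
        sessions.append(current)
    return sessions

log = [(0, "jaguar"), (60, "jaguar car"), (7200, "weather")]
segment_by_timeout(log)  # → [['jaguar', 'jaguar car'], ['weather']]
```

As the abstract notes, a fixed cutoff conflates interleaved tasks and splits long-running ones, which is why learned classifiers outperform it.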
1706.01574
2621525845
A significant fraction of search queries originates from real-world information needs or tasks [13]. In order to improve the search experience of end users, it is important to have accurate representations of tasks. As a result, a significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as to provide the end user with better query suggestions [9], recommendations [41], satisfaction prediction [36], and improved task-level personalization [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them, and a more naturalistic representation of tasks would be a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments and highlight the importance of considering task/subtask hierarchies.
There has been a significant amount of work on task continuation assistance @cite_31 @cite_16 , building task tours and trails @cite_22 @cite_19 , query suggestion @cite_17 @cite_11 @cite_27 , predicting the next search action @cite_0 and note-taking while accomplishing complex tasks @cite_1 . The quality of most of these methods depends on forming accurate representations of tasks, which is the problem we address in this paper.
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_1", "@cite_0", "@cite_19", "@cite_27", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2170741935", "1901600440", "2138862413", "2127539404", "2147187250", "2153190022", "2137750620", "2140387367", "1564094940" ], "abstract": [ "We introduce the notion of query substitution, that is, generating a new query to replace a user's original search query. Our technique uses modifications based on typical substitutions web searchers make to their queries. In this way the new query is strongly related to the original query, containing terms closely related to all of the original terms. This contrasts with query expansion through pseudo-relevance feedback, which is costly and can lead to query drift. This also contrasts with query relaxation through boolean or TFIDF retrieval, which reduces the specificity of the query. We define a scale for evaluating query substitution, and show that our method performs well at generating new queries related to the original queries. We build a model for selecting between candidates, by using a number of features relating the query-candidate pair, and by fitting the model to human judgments of relevance of query suggestions. This further improves the quality of the candidates generated. Experiments show that our techniques significantly increase coverage and effectiveness in the setting of sponsored search.", "We present TweetMotif, an exploratory search application for Twitter. Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. 
A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com.", "Addressing user's information needs has been one of the main goals of Web search engines since their early days. In some cases, users cannot see their needs immediately answered by search results, simply because these needs are too complex and involve multiple aspects that are not covered by a single Web or search results page. This typically happens when users investigate a certain topic in domains such as education, travel or health, which often require collecting facts and information from many pages. We refer to this type of activities as \"research missions\". These research missions account for 10% of users' sessions and more than 25% of all query volume, as verified by a manual analysis that was conducted by Yahoo! editors. We demonstrate in this paper that such missions can be automatically identified on-the-fly, as the user interacts with the search engine, through careful runtime analysis of query flows and query sessions. The on-the-fly automatic identification of research missions has been implemented in Search Pad, a novel Yahoo! application that was launched in 2009, and that we present in this paper. Search Pad helps users keep track of results they have consulted. Its novelty however is that unlike previous note-taking products, it is automatically triggered only when the system decides, with a fair level of confidence, that the user is undertaking a research mission and thus is in the right context for gathering notes. Beyond the Search Pad specific application, we believe that changing the level of granularity of query modeling, from an isolated query to a list of queries pertaining to the same research missions, so as to better reflect a certain type of information needs, can be beneficial in a number of other Web search applications.
Session-awareness is growing and it is likely to play, in the near future, a fundamental role in many on-line tasks: this paper presents a first step on this path.", "Capturing the context of a user's query from the previous queries and clicks in the same session may help understand the user's information need. A context-aware approach to document re-ranking, query suggestion, and URL recommendation may improve users' search experience substantially. In this paper, we propose a general approach to context-aware search. To capture contexts of queries, we learn a variable length Hidden Markov Model (vlHMM) from search sessions extracted from log data. Although the mathematical model is intuitive, how to learn a large vlHMM with millions of states from hundreds of millions of search sessions poses a grand challenge. We develop a strategy for parameter initialization in vlHMM learning which can greatly reduce the number of parameters to be estimated in practice. We also devise a method for distributed vlHMM learning under the map-reduce model. We test our approach on a real data set consisting of 1.8 billion queries, 2.6 billion clicks, and 840 million search sessions, and evaluate the effectiveness of the vlHMM learned from the real data on three search applications: document re-ranking, query suggestion, and URL recommendation. The experimental results show that our approach is both effective and efficient.", "Search engines return ranked lists of Web pages in response to queries. These pages are starting points for post-query navigation, but may be insufficient for search tasks involving multiple steps. Search trails mined from toolbar logs start with a query and contain pages visited by one user during post-query navigation. Implicit endorsements from many trails can enhance result ranking. Rather than using trails solely to improve ranking, it may also be worth providing trail information directly to users. 
In this paper, we quantify the benefit that users currently obtain from trail-following and compare different methods for finding the best trail for a given query and each top-ranked result. We compare the relevance, topic coverage, topic diversity, and utility of trails selected using different methods, and break out findings by factors such as query type and origin relevance. Our findings demonstrate value in trails, highlight interesting differences in the performance of trail-finding algorithms, and show we can find best trails for a query that outperform the trails most users follow. Findings have implications for enhancing Web information seeking using trails.", "Generating alternative queries, also known as query suggestion, has long been proved useful to help a user explore and express his information need. In many scenarios, such suggestions can be generated from a large scale graph of queries and other accessory information, such as the clickthrough. However, how to generate suggestions while ensuring their semantic consistency with the original query remains a challenging problem. In this work, we propose a novel query suggestion algorithm based on ranking queries with the hitting time on a large scale bipartite graph. Without involvement of twisted heuristics or heavy tuning of parameters, this method clearly captures the semantic consistency between the suggested query and the original query. Empirical experiments on a large scale query log of a commercial search engine and a scientific literature collection show that hitting time is effective to generate semantically consistent query suggestions. The proposed algorithm and its variations can successfully boost long tail queries, accommodating personalized query suggestion, as well as finding related authors in research.", "Current user interfaces for Web search, including browsers and search engine sites, typically treat search as a transient activity.
However, people often conduct complex, multi-query investigations that may span long durations and may be interrupted by other tasks. In this paper, we first present the results of a survey of users' search habits, which show that many search tasks span long periods of time. We then introduce SearchBar, a system for proactively and persistently storing query histories, browsing histories, and users' notes and ratings in an interrelated fashion. SearchBar supports multi-session investigations by assisting with task context resumption and information re-finding. We describe a user study comparing use of SearchBar to status-quo tools such as browser histories, and discuss our findings, which show that users find SearchBar valuable for task reacquisition. Our study also reveals the strategies employed by users of status-quo tools for handling multi-query, multi-session search tasks.", "Many important search tasks require multiple search sessions to complete. Tasks such as travel planning, large purchases, or job searches can span hours, days, or even weeks. Inevitably, life interferes, requiring the searcher either to recover the \"state\" of the search manually (most common), or plan for interruption in advance (unlikely). The goal of this work is to better understand, characterize, and automatically detect search tasks that will be continued in the near future. To this end, we analyze a query log from the Bing Web search engine to identify the types of intents, topics, and search behavior patterns associated with long-running tasks that are likely to be continued. Using our insights, we develop an effective prediction algorithm that significantly outperforms both the previous state-of-the-art method, and even the ability of human judges, to predict future task continuation. 
Potential applications of our techniques would allow a search engine to pre-emptively \"save state\" for a searcher (e.g., by caching search results), perform more targeted personalization, and otherwise better support the searcher experience for interrupted search tasks.", "In this paper we propose a method that, given a query submitted to a search engine, suggests a list of related queries. The related queries are based on previously issued queries, and can be issued by the user to the search engine to tune or redirect the search process. The method proposed is based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical preferences of users registered in the query log of the search engine. The method not only discovers the related queries, but also ranks them according to a relevance criterion. Finally, we show with experiments over the query log of a search engine the effectiveness of the method." ] }
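The hitting-time idea from the query-suggestion abstract above can be illustrated on a toy bipartite query-URL click graph. The sketch below computes a truncated hitting time by value iteration; the graph, transition probabilities, and iteration count are invented for illustration and are not taken from the paper:

```python
def truncated_hitting_time(P, target, T=100):
    """Expected number of steps for a random walk to first reach `target`,
    approximated by iterating h(v) = 1 + sum_u P[v][u] * h(u) for T rounds,
    with h(target) fixed at 0. P maps node -> {neighbor: transition prob}."""
    h = {v: 0.0 for v in P}
    for _ in range(T):
        h = {v: 0.0 if v == target else
             1.0 + sum(p * h[u] for u, p in P[v].items())
             for v in P}
    return h

# Toy bipartite click graph: queries q1, q2 and clicked URLs u1, u2.
P = {
    "q1": {"u1": 1.0},
    "q2": {"u1": 0.5, "u2": 0.5},
    "u1": {"q1": 0.5, "q2": 0.5},
    "u2": {"q2": 1.0},
}
h = truncated_hitting_time(P, target="q1")
# Queries with small hitting time to q1 are good suggestion candidates.
```

Ranking candidate queries by ascending hitting time to the original query is the essence of the approach; the paper's actual algorithm operates on web-scale graphs with further refinements.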
1706.01574
2621525845
A significant fraction of search queries originates from real-world information needs or tasks [13]. In order to improve the search experience of end users, it is important to have accurate representations of tasks. As a result, a significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as to provide the end user with better query suggestions [9], recommendations [41], satisfaction prediction [36], and improved task-level personalization [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them, and a more naturalistic representation of tasks would be a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments and highlight the importance of considering task/subtask hierarchies.
Rich hierarchies are common in data across many domains, hence quite a few hierarchical clustering techniques have been proposed. The traditional methods for hierarchically clustering data are bottom-up agglomerative algorithms. Probabilistic methods of learning hierarchies have also been proposed @cite_37 @cite_18 along with hierarchical clustering based methods @cite_38 @cite_5 . Most algorithms for hierarchical clustering construct binary tree representations of data, where leaf nodes correspond to data points and internal nodes correspond to clusters. There are several limitations to existing hierarchy construction algorithms. They provide no guide to choosing the correct number of clusters or the level at which to prune the tree, and it is often difficult to know which distance metric to choose. More importantly, restricting the hypothesis space to binary trees alone is undesirable in many situations: a task can have any number of subtasks, not necessarily two. Past work has also considered constructing task-specific taxonomies from document collections @cite_20 , building browsing hierarchies @cite_23 , and generating hierarchical summaries @cite_13 . While most of these techniques work in supervised settings on document collections, our work instead focuses on short text queries and offers an unsupervised method of constructing task hierarchies.
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_18", "@cite_23", "@cite_5", "@cite_13", "@cite_20" ], "mid": [ "2128999403", "2096297741", "2100071287", "2005488407", "2123656745", "2038519936", "1813825043" ], "abstract": [ "We present a novel algorithm for agglomerative hierarchical clustering based on evaluating marginal likelihoods of a probabilistic model. This algorithm has several advantages over traditional distance-based agglomerative clustering algorithms. (1) It defines a probabilistic model of the data which can be used to compute the predictive distribution of a test point and the probability of it belonging to any of the existing clusters in the tree. (2) It uses a model-based criterion to decide on merging clusters rather than an ad-hoc distance metric. (3) Bayesian hypothesis testing is used to decide which merges are advantageous and to output the recommended depth of the tree. (4) The algorithm can be interpreted as a novel fast bottom-up approximate inference method for a Dirichlet process (i.e. countably infinite) mixture model (DPM). It provides a new lower bound on the marginal likelihood of a DPM by summing over exponentially many clusterings of the data in polynomial time. We describe procedures for learning the model hyperpa-rameters, computing the predictive distribution, and extensions to the algorithm. Experimental results on synthetic and real-world data sets demonstrate useful properties of the algorithm.", "We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks. Our model is a tree-structured mixture of potentially exponentially many stochastic blockmodels. We describe a family of greedy agglomerative model selection algorithms that take just one pass through the data to learn a fully probabilistic, hierarchical community model. 
In the worst case, our algorithms scale quadratically in the number of vertices of the network, but independently of the number of nested communities. In practice, the run times of our algorithms are two orders of magnitude faster than the Infinite Relational Model, achieving comparable or better accuracy.", "Taxonomies, especially the ones in specific domains, are becoming indispensable to a growing number of applications. State-of-the-art approaches assume there exists a text corpus to accurately characterize the domain of interest, and that a taxonomy can be derived from the text corpus using information extraction techniques. In reality, neither assumption is valid, especially for highly focused or fast-changing domains. In this paper, we study a challenging problem: Deriving a taxonomy from a set of keyword phrases. A solution can benefit many real life applications because i) keywords give users the flexibility and ease to characterize a specific domain; and ii) in many applications, such as online advertisements, the domain of interest is already represented by a set of keywords. However, it is impossible to create a taxonomy out of a keyword set itself. We argue that additional knowledge and contexts are needed. To this end, we first use a general purpose knowledgebase and keyword search to supply the required knowledge and context. Then we develop a Bayesian approach to build a hierarchical taxonomy for a given set of keywords. We reduce the complexity of previous hierarchical clustering approaches from O(n^2 log n) to O(n log n), so that we can derive a domain specific taxonomy from one million keyword phrases in less than an hour. Finally, we conduct comprehensive large scale experiments to show the effectiveness and efficiency of our approach.
A real life example of building an insurance-related query taxonomy illustrates the usefulness of our approach for specific domains.", "Hierarchies serve as browsing tools to access information in document collections. This article explores techniques to derive browsing hierarchies that can be used as an information map for task-based search. It proposes a novel minimum-evolution hierarchy construction framework that directly learns semantic distances from training data and from users to construct hierarchies. The aim is to produce globally optimized hierarchical structures by incorporating user-generated task specifications into the general learning framework. Both an automatic version of the framework and an interactive version are presented. A comparison with state-of-the-art systems and a user study jointly demonstrate that the proposed framework is highly effective.", "Most previous work on automatic query clustering generated a flat, un-nested partition of query terms. In this work, we discuss the organization of query terms into a hierarchical structure and construct a query taxonomy in an automatic way. The proposed approach is designed based on a hierarchical agglomerative clustering algorithm to hierarchically group similar queries and generate cluster hierarchies using a novel cluster partition technique. The search processes of real-world search engines are combined to obtain highly ranked Web documents as the feature source for each query term. Preliminary experiments show that the proposed approach is effective for obtaining thesaurus information for query terms, and is also feasible for constructing a query taxonomy which provides a basis for in-depth analysis of users' search interests and domain-specific vocabulary on a larger scale.", "Hierarchies provide a means of organizing, summarizing and accessing information. 
We describe a method for automatically generating hierarchies from small collections of text, and then apply this technique to summarizing the documents retrieved by a search engine.", "Taxonomies can serve as browsing tools for document collections. However, given an arbitrary collection, pre-constructed taxonomies could not easily adapt to the specific topic task present in the collection. This paper explores techniques to quickly derive task-specific taxonomies supporting browsing in arbitrary document collections. The supervised approach directly learns semantic distances from users to propose meaningful task-specific taxonomies. The approach aims to produce globally optimized taxonomy structures by incorporating path consistency control and user-generated task specification into the general learning framework. A comparison to state-of-the-art systems and a user study jointly demonstrate that our techniques are highly effective." ] }
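The binary-merge restriction criticized in the related-work paragraph above is easy to see in a minimal bottom-up agglomerative clustering sketch. Single linkage and 1-D points are simplifying assumptions chosen only for brevity; note that every step joins exactly two clusters, so the output is necessarily a binary tree:

```python
def agglomerate(points):
    """Single-linkage agglomerative clustering of 1-D points.
    Returns a nested-tuple binary tree over the inputs."""
    clusters = [(p, [p]) for p in points]  # (subtree, member list)
    while len(clusters) > 1:
        best = None  # (distance, i, j) for the closest pair of clusters
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b)
                        for a in clusters[i][1] for b in clusters[j][1])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        (ti, mi), (tj, mj) = clusters[i], clusters[j]
        merged = ((ti, tj), mi + mj)  # always a 2-way merge -> binary tree
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][0]

agglomerate([1, 2, 10])  # → (10, (1, 2))
```

A task with three parallel subtasks cannot be represented faithfully by such a tree, which motivates the arbitrary-branching models discussed next.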
1706.01574
2621525845
A significant fraction of search queries originates from real-world information needs or tasks [13]. In order to improve the search experience of end users, it is important to have accurate representations of tasks. As a result, a significant amount of research has been devoted to extracting proper representations of tasks in order to enable search systems to help users complete their tasks, as well as to provide the end user with better query suggestions [9], recommendations [41], satisfaction prediction [36], and improved task-level personalization [24, 38]. Most existing task extraction methodologies focus on representing tasks as flat structures. However, tasks often tend to have multiple subtasks associated with them, and a more naturalistic representation of tasks would be a hierarchy, where each task can be composed of multiple (sub)tasks. To this end, we propose an efficient Bayesian nonparametric model for extracting hierarchies of such tasks & subtasks. We evaluate our method on real-world query log data through both quantitative and crowdsourced experiments and highlight the importance of considering task/subtask hierarchies.
Finally, Bayesian Rose Trees and their extensions @cite_6 @cite_28 @cite_37 have been proposed to model trees with arbitrary branching. However, these algorithms naively cast relationships between objects as binary (0-1) associations, whereas query-query relationships are generally much richer in content and structure.
{ "cite_N": [ "@cite_28", "@cite_37", "@cite_6" ], "mid": [ "1561114193", "2096297741", "2032394720" ], "abstract": [ "Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.", "We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks. Our model is a tree-structured mixture of potentially exponentially many stochastic blockmodels. We describe a family of greedy agglomerative model selection algorithms that take just one pass through the data to learn a fully probabilistic, hierarchical community model. In the worst case, our algorithms scale quadratically in the number of vertices of the network, but independently of the number of nested communities. In practice, the run times of our algorithms are two orders of magnitude faster than the Infinite Relational Model, achieving comparable or better accuracy.", "Biological data, such as gene expression profiles or protein sequences, is often organized in a hierarchy of classes, where the instances assigned to \"nearby\" classes in the tree are similar.
Most approaches for constructing a hierarchy use simple local operations that are very sensitive to noise or variation in the data. In this paper, we describe probabilistic abstraction hierarchies (PAH) [11], a general probabilistic framework for clustering data into a hierarchy, and show how it can be applied to a wide variety of biological data sets. In a PAH, each class is associated with a probabilistic generative model for the data in the class. The PAH clustering algorithm simultaneously optimizes three things: the assignment of data instances to clusters, the models associated with the clusters, and the structure of the hierarchy. A key property of the PAH approach is that it utilizes global optimization algorithms for the last two steps, substantially reducing the sensitivity to noise and the propensity to local maxima. We show how to apply this framework to gene expression data, protein sequence data, and HIV protease sequence data. We also show how our framework supports hierarchies involving more than one type of data. We demonstrate that our method extracts useful biological knowledge and is substantially more robust than hierarchical agglomerative clustering." ] }
1706.01739
2622911056
Smartphones have become ubiquitously integrated into our home and work environments; however, users normally rely on explicit but inefficient identification processes in a controlled environment. Therefore, when a device is stolen, a thief can gain access to the owner's personal information and services in spite of the stored passwords. Motivated by this potential scenario, this work demonstrates the possibility of legitimate user identification in a semi-controlled environment through the built-in motion dynamics of smartphones captured by two different sensors. This is a two-fold process: sub-activity recognition followed by user/impostor identification. Prior to identification, the Extended Sammon Projection (ESP) method is used to reduce the redundancy among the features. To validate the proposed system, we first collected data from four users walking with their device freely placed in one of their pants pockets. Through extensive experimentation, we demonstrate that time- and frequency-domain features together, optimized by ESP to train a wavelet-kernel-based extreme learning machine classifier, form an effective system that identifies the legitimate user or an impostor with 97% accuracy.
The objective of this work is to provide convenience in using smartphones, differently from explicit identification, by using sensor data. We therefore present some key related works in this field, which can be categorized into two groups: implicit user identification and multi-modality biometrics. Shi et al @cite_9 proposed SenGuard, a method offering a continuous and implicit user identification service for smartphone users. This method leverages the sensors available on smartphones, e.g. voice, multi-touch and location; these sensor inputs are processed together in order to obtain the user's identification features implicitly. Explicit identification is performed only when there is significant evidence of a change in the user, so the approach is not fully real-time.
{ "cite_N": [ "@cite_9" ], "mid": [ "2115973169" ], "abstract": [ "User identification and access control have become a high demand feature on mobile devices because those devices are widely used by employees in corporations and government agencies for business and store an increasing amount of sensitive data. This paper describes SenGuard, a user identification framework that enables continuous and implicit user identification service for smartphone. Different from traditional active user authentication and access control, SenGuard leverages the availability of multiple sensors on today's smartphones and passively uses sensor inputs as sources of user authentication. It extracts sensor modality dependent user identification features from captured sensor data and performs user identification in the background. SenGuard invokes active user authentication when there is mounting evidence that the phone user has changed. In addition, SenGuard uses a novel virtualization based system architecture as a safeguard to prevent subversion of the background user identification mechanism by moving it into a privileged virtual domain. An initial prototype of SenGuard was created using four sensor modalities including voice, location, multitouch, and locomotion. Preliminary empirical studies with a set of users indicate that those four modalities are suited as data sources for implicit mobile user identification." ] }
1706.01739
2622911056
Smartphones have become ubiquitously integrated into our home and work environments; however, users normally rely on explicit but inefficient identification processes in a controlled environment. Therefore, when a device is stolen, a thief can have access to the owner's personal information and services despite the stored passwords. As a result of this potential scenario, this work demonstrates the possibilities of legitimate user identification in a semi-controlled environment through the built-in smartphone motion dynamics captured by two different sensors. This is a two-fold process: sub-activity recognition followed by user/impostor identification. Prior to identification, the Extended Sammon Projection (ESP) method is used to reduce the redundancy among the features. To validate the proposed system, we first collected data from four users walking with their device freely placed in one of their pants pockets. Through extensive experimentation, we demonstrate that time- and frequency-domain features together, optimized by ESP to train the wavelet-kernel-based extreme learning machine classifier, form an effective system to identify the legitimate user or an impostor with (97%) accuracy.
In recent years, several other implicit identification approaches have been proposed leveraging the smartphone's sensors, such as the accelerometer @cite_38 , touch screen @cite_44 , GPS @cite_19 and microphone @cite_23 . T. Feng et al @cite_15 proposed extracting finger motion speed and acceleration of touch patterns as features. Luca et al @cite_18 suggested directly computing the distance between pattern traces using the dynamic time warping algorithm. Sae-Bae et al @cite_36 presented 22 special touch patterns for user identification, most of which involve all five fingers simultaneously; they then computed the dynamic time warping distance and the Fréchet distance between multi-touch traces. Frank et al @cite_2 studied the correlation between 22 analytic features from touch traces and classified them using k-nearest neighbors and support vector machines. Shahzad et al @cite_16 explained the use of touchscreen patterns as a secure unlocking mechanism at the login screen.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_36", "@cite_44", "@cite_19", "@cite_23", "@cite_2", "@cite_15", "@cite_16" ], "mid": [ "2151373013", "2181430729", "2013045163", "", "86197", "2070381776", "", "2535614671", "" ], "abstract": [ "Identifying users of portable devices from gait signals acquired with three-dimensional accelerometers was studied. Three approaches, correlation, frequency domain and data distribution statistics, were used. Test subjects (N=36) walked with fast, normal and slow walking speeds in enrolment and test sessions on separate days wearing the accelerometer device on their belt, at back. It was shown to be possible to identify users with this novel gait recognition method. Best equal error rate (EER=7%) was achieved with the signal correlation method, while the frequency domain method and two variations of the data distribution statistics method produced EERs of 10%, 18% and 19%, respectively.", "", "We propose a new behavioral biometric modality based on multi-touch gestures. We define a canonical set of multi-touch gestures based on the movement characteristics of the palm and fingertips being used to perform the gesture. We developed an algorithm to generate and verify multi-touch gesture templates. We tested our techniques on a set of 22 different gestures. Employing a matching algorithm for a multi-touch verification system with a k-NN classifier we achieved a 1.28% Equal Error Rate (EER). With score-based classifiers where only the first five samples of a genuine subject were considered as templates, we achieved a 4.46% EER. Further, with the combination of three commonly used gestures: pinch, zoom, and rotate, using all five fingers, a 1.58% EER was achieved using a score-based classifier. 
These results are encouraging and point to the possibility of touch based biometric systems in real world applications like user verification and active authentication.", "", "Mobile devices are more and more integrated in workflows, especially when interacting with stationary resources like machines in order to improve productivity or usability, but risk unauthorized access or unwanted unattended operation. Systems for location based access control have been developed to restrict the user to be in specific locations in order to proceed in a workflow. However, these approaches do not consider the movement pattern of a user nor do they distinguish the severity of false-positives that might arise from imperfect location measurements which is crucial in certain workflows. In this paper, focusing on mobile users interacting with stationary machines, an approach for workflow policies is presented using three types of location constraints to enforce movement patterns. The evaluation of these constraints is based on a user's location history which is generated in a tamper-proof environment on his mobile device and describes his geographical trajectory for a given timespan.", "The need for more security on mobile devices is increasing with new functionalities and features made available. To improve the device security we propose gait recognition as a protection mechanism. Unlike previous work on gait recognition, which was based on the use of video sources, floor sensors or dedicated high-grade accelerometers, this paper reports the performance when the data is collected with a commercially available mobile device containing low-grade accelerometers. To be more specific, the used mobile device is the Google G1 phone containing the AK8976A embedded accelerometer sensor. The mobile device was placed at the hip on each volunteer to collect gait data. Preprocessing, cycle detection and recognition-analysis were applied to the acceleration signal. 
The performance of the system was evaluated with 51 volunteers and resulted in an equal error rate (EER) of 20%.", "", "Securing the sensitive data stored and accessed from mobile devices makes user authentication a problem of paramount importance. The tension between security and usability renders, however, user authentication on mobile devices a challenging task. This paper introduces FAST (Fingergestures Authentication System using Touchscreen), a novel touchscreen based authentication approach on mobile devices. Besides extracting touch data from touchscreen equipped smartphones, FAST complements and validates this data using a digital sensor glove that we have built using off-the-shelf components. FAST leverages state-of-the-art classification algorithms to provide transparent and continuous mobile system protection. A notable feature is FAST's continuous, user transparent post-login authentication. We use touch data collected from 40 users to show that FAST achieves a False Accept Rate (FAR) of 4.66% and a False Reject Rate of 0.13% for the continuous post-login user authentication. The low FAR and FRR values indicate that FAST provides excellent post-login access security, without disturbing the honest mobile users.", "" ] }
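The dynamic time warping distance used by the touch-trace works cited in the related-work passage above (@cite_18 , @cite_36 ) can be sketched with the classic dynamic program; this is a generic illustration of the algorithm, not the cited authors' code:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Classic O(len(a)*len(b)) dynamic program: dp[i][j] is the minimal
    cumulative cost of aligning a[:i] with b[:j], where one sample may
    match several samples of the other sequence (time warping).
    """
    n, m = len(a), len(b)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # step both
    return dp[n][m]

# Identical traces have zero distance; a shifted trace a small one.
print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))  # 0.0
print(dtw_distance([0, 1, 2], [1, 2, 3]))        # 2.0
```

In the cited works the sequence elements are multi-dimensional touch samples rather than scalars, so `abs(...)` would be replaced by a Euclidean distance between samples.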
1706.01739
2622911056
Smartphones have become ubiquitously integrated into our home and work environments; however, users normally rely on explicit but inefficient identification processes in a controlled environment. Therefore, when a device is stolen, a thief can have access to the owner's personal information and services despite the stored passwords. As a result of this potential scenario, this work demonstrates the possibilities of legitimate user identification in a semi-controlled environment through the built-in smartphone motion dynamics captured by two different sensors. This is a two-fold process: sub-activity recognition followed by user/impostor identification. Prior to identification, the Extended Sammon Projection (ESP) method is used to reduce the redundancy among the features. To validate the proposed system, we first collected data from four users walking with their device freely placed in one of their pants pockets. Through extensive experimentation, we demonstrate that time- and frequency-domain features together, optimized by ESP to train the wavelet-kernel-based extreme learning machine classifier, form an effective system to identify the legitimate user or an impostor with (97%) accuracy.
Moreover, the idea behind the behavior-based model is that a person's habits are a set of his or her attributes; therefore, each event (activity) exhibits a correlation between two fundamental attributes: space and time. In addition, the architecture proposed in @cite_27 utilizes the resources found in smartphone devices, such as user calls, user schedule, GPS, device battery level, user applications, and sensors. A similar methodology has also been adopted in @cite_45 . Clarke et al @cite_32 studied smartphone users' perception of identification, with results showing the system implicitly and continuously performing user identification in the background. Koreman et al @cite_1 recommended continuous multiple-model-based approaches for user identification. Mantyjarvi et al @cite_38 used an accelerometer in television remote controls to identify individuals. Gafurov et al @cite_54 and Cuntoor et al @cite_49 experimented with user identification using gait analysis and recognition. Jakobsson et al @cite_24 put forward another implicit user identification framework, using recorded phone call history and location for continuous user identification.
{ "cite_N": [ "@cite_38", "@cite_54", "@cite_1", "@cite_32", "@cite_24", "@cite_27", "@cite_45", "@cite_49" ], "mid": [ "2151373013", "2171062881", "2600874563", "2154123601", "1492832934", "2009015035", "", "2142532896" ], "abstract": [ "Identifying users of portable devices from gait signals acquired with three-dimensional accelerometers was studied. Three approaches, correlation, frequency domain and data distribution statistics, were used. Test subjects (N=36) walked with fast, normal and slow walking speeds in enrolment and test sessions on separate days wearing the accelerometer device on their belt, at back. It was shown to be possible to identify users with this novel gait recognition method. Best equal error rate (EER=7%) was achieved with the signal correlation method, while the frequency domain method and two variations of the data distribution statistics method produced EERs of 10%, 18% and 19%, respectively.", "This paper presents a user authentication method based on gait (walking style). Human gait (in terms of acceleration signal) is collected using a wearable accelerometer sensor attached to the ankle of the person. Ankle accelerations from three directions (up-down, forward-backward and sideways) are utilized for identity verification purposes. Applying a cycle matching method on experimental data from 30 subjects, we obtained an encouraging EER (Equal Error Rate) of 1.6% using sideways acceleration signals. In addition, our analysis indicates that performance is better with light shoes than with heavy shoes, and sideways motion appears to provide higher discrimination power compared to the up-down and forward-backward motions. An application area for such a gait-based user authentication approach can be improving the security and usability of user authentication in emerging applications such as pervasive environments.", "", "Mobile handsets have found an important place in modern society, with hundreds of millions currently in use. 
The majority of these devices use inherently weak authentication mechanisms, based upon passwords and PINs. This paper presents a feasibility study into a biometric-based technique, known as keystroke analysis – which authenticates the user based upon their typing characteristic. In particular, this paper identifies two typical handset interactions, entering telephone numbers and typing text messages, and seeks to authenticate the user during their normal handset interaction. It was found that neural network classifiers were able to perform classification with average equal error rates of 12.8%. Based upon these results, the paper concludes by proposing a flexible and robust framework to permit the continuous and transparent authentication of the user, thereby maximising security and minimising user inconvenience, to service the needs of the insecure and evermore functional mobile handset.", "We introduce the notion of implicit authentication - the ability to authenticate mobile users based on actions they would carry out anyway. We develop a model for how to perform implicit authentication, and describe experiments aimed at assessing the benefits of our techniques. Our preliminary findings support that this is a meaningful approach, whether used to increase usability or increase security.", "Mobile devices use traditional authentication processes, which are vulnerable and unsuitable for highly dynamic environments, such as ubiquitous and pervasive environments. Therefore, new approaches are necessary in such environments. These approaches must be (i) context-aware by considering environmental and operational characteristics, restrictions of devices and information provided by sensors within the pervasive space, and (ii) customizable for balancing the system's proactivity with the control that the user wants to have over his systems. 
In this paper, an approach is presented that provides a recommendation system based on the user's behavior and the pervasive space to which he belongs. The user behavior is defined by the actions and events that compose the activity that is being performed. The experimental results indicate (a) a dynamic and autonomic mechanism, which is more suitable for authenticating users in mobile and pervasive environments, and (b) significant efficiency in detecting authentication anomalies by using a similarity model and a spatio-temporal permutation model.", "", "Gait is a spatio-temporal phenomenon that typifies the motion characteristics of an individual. In this paper, we propose a view-based approach to recognize humans through gait. The width of the outer contour of the binarized silhouette of a walking person is chosen as the image feature. A set of stances or key frames that occur during the walk cycle of an individual is chosen. Euclidean distances of a given image from this stance set are computed and a lower-dimensional observation vector is generated. A continuous hidden Markov model (HMM) is trained using several such lower-dimensional vector sequences extracted from the video. This methodology serves to compactly capture structural and transitional features that are unique to an individual. The statistical nature of the HMM renders overall robustness to gait representation and recognition. The human identification performance of the proposed scheme is found to be quite good when tested in natural walking conditions." ] }
1706.01487
2624512376
In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
Dictionary-based scene text recognition. Traditionally, scene text recognition systems use character recognizers in a sequential way, localizing characters with a sliding window @cite_19 @cite_16 @cite_5 and then grouping the responses by arranging the character windows from left to right into words. A variety of techniques have been used to classify character bounding boxes, including random ferns @cite_5 , integer programming @cite_17 and Convolutional Neural Networks (CNNs) @cite_19 . These methods often use the lexical constraints imposed by a fixed lexicon while grouping the character hypotheses into words.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_16", "@cite_17" ], "mid": [ "70975097", "1998042868", "2061802763", "1973772835" ], "abstract": [ "The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.", "This paper focuses on the problem of word detection and recognition in natural images. The problem is significantly more challenging than reading text in scanned documents, and has only recently gained attention from the computer vision community. Sub-components of the problem, such as text detection and cropped image word recognition, have been studied in isolation [7, 4, 20]. However, what is unclear is how these recent approaches contribute to solving the end-to-end problem of word recognition. We fill this gap by constructing and evaluating two systems. 
The first, representing the de facto state-of-the-art, is a two stage pipeline consisting of text detection followed by a leading OCR engine. The second is a system rooted in generic object recognition, an extension of our previous work in [20]. We show that the latter approach achieves superior performance. While scene text recognition has generally been treated with highly domain-specific methods, our results demonstrate the suitability of applying generic computer vision methods. Adopting this approach opens the door for real world scene text recognition to benefit from the rapid advances that have been taking place in object recognition.", "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. 
The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "The recognition of text in everyday scenes is made difficult by viewing conditions, unusual fonts, and lack of linguistic context. Most methods integrate a priori appearance information and some sort of hard or soft constraint on the allowable strings. Weinman and Learned-Miller [14] showed that the similarity among characters, as a supplement to the appearance of the characters with respect to a model, could be used to improve scene text recognition. In this work, we make further improvements to scene text recognition by taking a novel approach to the incorporation of similarity. In particular, we train a similarity expert that learns to classify each pair of characters as equivalent or not. After removing logical inconsistencies in an equivalence graph, we formulate the search for the maximum likelihood interpretation of a sign as an integer program. We incorporate the equivalence information as constraints in the integer program and build an optimization criterion out of appearance features and character bigrams. Finally, we take the optimal solution from the integer program, and compare all “nearby” solutions using a probability model for strings derived from search engine queries. We demonstrate word error reductions of more than 30% relative to previous methods on the same data set." ] }
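The sliding-window pipeline described in the related-work passage above (score each window with a character classifier, keep confident responses, group them left to right into a word) can be sketched on a toy 1-D "image". The classifier here is a hypothetical stand-in for the random ferns / CNN scorers of the cited works:

```python
def sliding_window_spot(classify, width, stride, length, threshold=0.5):
    """Slide a fixed-width window across an image (1-D for brevity),
    score each window with a character classifier, keep confident hits,
    and group them left to right into a word hypothesis.

    classify(x0) -> (char, score) is a stub for the per-window
    character classifier used in the cited works.
    """
    hits = []
    for x0 in range(0, length - width + 1, stride):
        char, score = classify(x0)
        if score >= threshold:
            hits.append((x0, char, score))
    hits.sort(key=lambda h: h[0])  # left-to-right ordering of responses
    return "".join(char for _, char, _ in hits)

# Toy classifier: pretend characters sit at known x positions.
TOY = {0: ("c", 0.9), 8: ("a", 0.8), 16: ("t", 0.95)}
clf = lambda x0: TOY.get(x0, ("?", 0.1))
print(sliding_window_spot(clf, width=8, stride=4, length=24))  # "cat"
```

The lexicon-constrained variants in the cited works would additionally re-rank the grouped hypotheses against a fixed word list instead of emitting the raw left-to-right string.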
1706.01487
2624512376
In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
To model inter-dependencies between characters, the authors of @cite_10 used a recursive CNN and variants of Recurrent Neural Networks on top of CNN features.
{ "cite_N": [ "@cite_10" ], "mid": [ "2294053032" ], "abstract": [ "We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction, (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams, and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k." ] }
1706.01487
2624512376
In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
Visual attention models for recognition. Recently, visual attention models have gained a lot of attention and have been used for machine translation @cite_4 and image captioning @cite_12 . In the latter work, the attention model is combined with an LSTM on top of CNN features. The LSTM outputs one word at every step, focusing on a specific part of the image driven by the attention model. Two models of attention, hard and soft, are proposed. In our work, we mainly follow the soft attention model, adapted to the particular case of text recognition. Attention models appear to have the potential to overcome some of the limitations of existing text recognition methods: they can leverage a fixed-length representation, but at the same time they are able to guide recognition to relevant parts of the image, performing in this way a kind of implicit character segmentation.
{ "cite_N": [ "@cite_4", "@cite_12" ], "mid": [ "2133564696", "2950178297" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO." ] }
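The soft attention mechanism discussed in the passage above (@cite_4 , @cite_12 ) reduces to a softmax over per-region alignment scores followed by a weighted sum of feature vectors. A minimal NumPy sketch, with an illustrative bilinear scoring matrix `W` standing in for the small MLP usually used to score each region:

```python
import numpy as np

def soft_attention(features, query, W):
    """Soft attention over a set of region feature vectors.

    features: (N, D) convolutional feature vectors, one per image region.
    query:    (H,)   decoder hidden state at the current time step.
    W:        (H, D) illustrative bilinear scoring matrix (assumption,
                     standing in for the usual alignment MLP).
    Returns the attention weights and the weighted context vector.
    """
    scores = features @ (W.T @ query)        # (N,) alignment scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ features             # (D,) expected feature vector
    return weights, context

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))  # 6 regions, 4-D features
q = rng.normal(size=3)
W = rng.normal(size=(3, 4))
w, ctx = soft_attention(feats, q, W)
assert abs(w.sum() - 1.0) < 1e-9 and ctx.shape == (4,)
```

Because the weights sum to one, the context vector stays a fixed-length representation while still being steered toward the regions most relevant at the current decoding step.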
1706.01487
2624512376
In this paper we propose an approach to lexicon-free recognition of text in scene images. Our approach relies on a LSTM-based soft visual attention model learned from convolutional features. A set of feature vectors are derived from an intermediate convolutional layer corresponding to different areas of the image. This permits encoding of spatial information into the image representation. In this way, the framework is able to learn how to selectively focus on different parts of the image. At every time step the recognizer emits one character using a weighted combination of the convolutional feature vectors according to the learned attention model. Training can be done end-to-end using only word level annotations. In addition, we show that modifying the beam search algorithm by integrating an explicit language model leads to significantly better recognition results. We validate the performance of our approach on standard SVT and ICDAR'03 scene text datasets, showing state-of-the-art performance in unconstrained text recognition.
Recently, a soft attention model has also been proposed for text recognition in the wild @cite_10 . The main differences with our work are the following. Firstly, the success of @cite_10 can be largely attributed to the use of Recursive Neural Network (RNN) features; they rely on the RNN features to model the dependencies between characters. Instead, we use traditional CNN features, and it is the visual attention model that learns to selectively attend to parts of the image and the dependencies between them. Secondly, Lee et al @cite_10 used features from a fully connected layer, while we use features from an earlier convolutional layer, thus preserving the local spatial characteristics of the image and reducing the model complexity. This also allows the model to focus on a subset of features corresponding to a certain area of the image and learn the underlying inter-dependencies. Thirdly, we use an LSTM instead of a plain RNN, as LSTMs have been shown to learn long-term dependencies better than traditional RNNs.
{ "cite_N": [ "@cite_10" ], "mid": [ "2294053032" ], "abstract": [ "We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction, (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams, and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k." ] }
1706.01120
2623439553
Missing data has a ubiquitous presence in real-life applications of machine learning techniques. Imputation methods are algorithms conceived for restoring missing values in the data, based on other entries in the database. The choice of the imputation method has an influence on the performance of the machine learning technique, e.g., it influences the accuracy of the classification algorithm applied to the data. Therefore, selecting and applying the right imputation method is important and usually requires a substantial amount of human intervention. In this paper we propose the use of genetic programming techniques to search for the right combination of imputation and classification algorithms. We build our work on the recently introduced Python-based TPOT library, and incorporate a heterogeneous set of imputation algorithms as part of the machine learning pipeline search. We show that genetic programming can automatically find increasingly better pipelines that include the most effective combinations of imputation methods, feature pre-processing, and classifiers for a variety of classification problems with missing data.
In @cite_24 , the same authors proposed a similar method with a more explicit multiple-imputation implementation, and this time they used both the classification accuracy and the error between the imputed and the real values as metrics to evaluate the imputation methods. Again, the proposed method regularly beat the other methods.
{ "cite_N": [ "@cite_24" ], "mid": [ "1990086013" ], "abstract": [ "Missing values are a common problem in many real world databases. Inadequate handing of missing data can lead to serious problems in data analysis. A common way to cope with this problem is to use imputation methods to fill missing values with plausible values. This paper proposes GPMI, a multiple imputation method that uses genetic programming as a regression method to estimate missing values. Experiments on eight datasets with six levels of missing values compare GPMI with seven other popular and advanced imputation methods on two measures: the prediction accuracy and the classification accuracy. The results show that, in most cases, GPMI not only achieves better prediction accuracy, but also better classification accuracy than the other imputation methods." ] }
1706.01394
2624555836
We study loss functions that measure the accuracy of a prediction based on multiple data points simultaneously. To our knowledge, such loss functions have not been studied before in the area of property elicitation or in machine learning more broadly. As compared to traditional loss functions that take only a single data point, these multi-observation loss functions can in some cases drastically reduce the dimensionality of the hypothesis required. In elicitation, this corresponds to requiring many fewer reports; in empirical risk minimization, it corresponds to algorithms on a hypothesis space of much smaller dimension. We explore some examples of the tradeoff between dimensionality and number of observations, give some geometric characterizations and intuition for relating loss functions and the properties that they elicit, and discuss some implications for both elicitation and machine-learning contexts.
Our work is inspired in part by @cite_1 which proposes a way to elicit the confidence (inverse of variance) of an agent's estimate of the bias of a coin by simply flipping it twice. In our terminology, this follows from the fact that the variance is @math -elicitable. Multi-observation losses have been previously introduced to learn embeddings , though an explicit property statistic is never discussed.
{ "cite_N": [ "@cite_1" ], "mid": [ "2951275468" ], "abstract": [ "We study the problem of eliciting and aggregating probabilistic information from multiple agents. In order to successfully aggregate the predictions of agents, the principal needs to elicit some notion of confidence from agents, capturing how much experience or knowledge led to their predictions. To formalize this, we consider a principal who wishes to elicit predictions about a random variable from a group of Bayesian agents, each of whom have privately observed some independent samples of the random variable, and hopes to aggregate the predictions as if she had directly observed the samples of all agents. Leveraging techniques from Bayesian statistics, we represent confidence as the number of samples an agent has observed, which is quantified by a hyperparameter from a conjugate family of prior distributions. This then allows us to show that if the principal has access to a few samples, she can achieve her aggregation goal by eliciting predictions from agents using proper scoring rules. In particular, if she has access to one sample, she can successfully aggregate the agents' predictions if and only if every posterior predictive distribution corresponds to a unique value of the hyperparameter. Furthermore, this uniqueness holds for many common distributions of interest. When this uniqueness property does not hold, we construct a novel and intuitive mechanism where a principal with two samples can elicit and optimally aggregate the agents' predictions." ] }
1706.01205
2728796327
Most real-world dynamic networks evolve very fast over time. It is not feasible to collect the entire network at any given time to study its characteristics. This creates the need for local algorithms to study various properties of the network. In the present work, we estimate the degree rank of a node without having the entire network. The proposed methods are based on the power-law degree distribution characteristic or on sampling techniques. The proposed methods are simulated on synthetic networks, as well as on real-world social networks. The efficiency of the proposed methods is evaluated using absolute and weighted error functions. Results show that the degree rank of a node can be estimated with high accuracy using only @math samples of the network size. The accuracy of the estimation decreases from high-ranked to low-ranked nodes. We further extend the proposed methods to random networks and validate their efficiency on synthetic random networks generated using the Erdős–Rényi model. Results show that the proposed methods can be efficiently used for random networks as well.
Node or edge sampling methods are not feasible in real-world networks, as the structure of social networks is not known in advance. Such networks can instead be sampled using graph traversal techniques like breadth first search (BFS) @cite_19 , depth first search (DFS) @cite_19 , forest fire sampling (FFS) @cite_32 , and snowball sampling @cite_10 , or random walk based methods like simple random walk (RW) @cite_54 , Metropolis-Hastings random walk (MHRW) @cite_48 , reweighted random walk (RWRW) @cite_36 , respondent driven sampling (RDS) @cite_14 , supervised random walk @cite_25 , Modified TOpology Sampling (MTO) @cite_69 , walk-estimate @cite_35 , Frontier sampling (m-dimensional random walk) @cite_64 , Rank Degree sampling based on edge selection @cite_46 , preferential random walk @cite_42 , and so on.
{ "cite_N": [ "@cite_35", "@cite_69", "@cite_14", "@cite_64", "@cite_36", "@cite_48", "@cite_54", "@cite_42", "@cite_32", "@cite_19", "@cite_46", "@cite_10", "@cite_25" ], "mid": [ "1823866642", "1992899822", "2117740169", "2103799649", "", "2056760934", "2964180168", "", "", "", "", "", "2107569009" ], "abstract": [ "In this paper, we introduce a novel, general purpose, technique for faster sampling of nodes over an online social network. Specifically, unlike traditional random walks which wait for the convergence of sampling distribution to a predetermined target distribution - a waiting process that incurs a high query cost - we develop WALK-ESTIMATE, which starts with a much shorter random walk, and then proactively estimate the sampling probability for the node taken before using acceptance-rejection sampling to adjust the sampling probability to the predetermined target distribution. We present a novel backward random walk technique which provides provably unbiased estimations for the sampling probability, and demonstrate the superiority of WALK-ESTIMATE over traditional random walks through theoretical analysis and extensive experiments over real world online social networks.", "Many online social networks feature restrictive web interfaces which only allow the query of a user's local neighborhood through the interface. To enable analytics over such an online social network through its restrictive web interface, many recent efforts reuse the existing Markov Chain Monte Carlo methods such as random walks to sample the social network and support analytics based on the samples. The problem with such an approach, however, is the large amount of queries often required (i.e., a long “mixing time”) for a random walk to reach a desired (stationary) sampling distribution. In this paper, we consider a novel problem of enabling a faster random walk over online social networks by “rewiring” the social network on-the-fly. 
Specifically, we develop Modified TOpology (MTO)-Sampler which, by using only information exposed by the restrictive web interface, constructs a “virtual” overlay topology of the social network while performing a random walk, and ensures that the random walk follows the modified overlay topology rather than the original one. We show that MTO-Sampler not only provably enhances the efficiency of sampling, but also achieves significant savings on query cost over real-world online social networks such as Google Plus, Epinion etc.", "Standard statistical methods often provide no way to make accurate estimates about the characteristics of hidden populations such as injection drug users, the homeless, and artists. In this paper, we further develop a sampling and estimation technique called respondent-driven sampling, which allows researchers to make asymptotically unbiased estimates about these hidden populations. The sample is selected with a snowball-type design that can be done more cheaply, quickly, and easily than other methods currently in use. Further, we can show that under certain specified (and quite general) conditions, our estimates for the percentage of the population with a specific trait are asymptotically unbiased. We further show that these estimates are asymptotically unbiased no matter how the seeds are selected. We conclude with a comparison of respondent-driven samples of jazz musicians in New York and San Francisco, with corresponding institutional samples of jazz musicians from these cities. The results show that ...", "Estimating characteristics of large graphs via sampling is a vital part of the study of complex networks. Current sampling methods such as (independent) random vertex and random walks are useful but have drawbacks. Random vertex sampling may require too many resources (time, bandwidth, or money). 
Random walks, which normally require fewer resources per sample, can suffer from large estimation errors in the presence of disconnected or loosely connected graphs. In this work we propose a new m-dimensional random walk that uses m dependent random walkers. We show that the proposed sampling method, which we call Frontier sampling, exhibits all of the nice sampling properties of a regular random walk. At the same time, our simulations over large real world graphs show that, in the presence of disconnected or loosely connected components, Frontier sampling exhibits lower estimation errors than regular random walks. We also show that Frontier sampling is more suitable than random vertex sampling to sample the tail of the degree distribution of the graph.", "", "A general method, suitable for fast computing machines, for investigating such properties as equations of state for substances consisting of interacting individual molecules is described. The method consists of a modified Monte Carlo integration over configuration space. Results for the two‐dimensional rigid‐sphere system have been obtained on the Los Alamos MANIAC and are presented here. These results are compared to the free volume equation of state and to a four‐term virial coefficient expansion.", "Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. 
Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.", "", "", "", "", "", "Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open. We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function. 
Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction." ] }
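One of the crawling methods listed above, the Metropolis-Hastings random walk (MHRW), can be sketched in a few lines: propose a uniform random neighbor v of the current node u and accept with probability min(1, deg(u)/deg(v)), which makes the stationary distribution uniform over nodes. The toy graph and parameters below are illustrative assumptions.

```python
import random
from collections import Counter

# Small undirected graph with skewed degrees (a hub plus a triangle).
graph = {
    0: [1, 2, 3, 4],
    1: [0], 2: [0], 3: [0, 4], 4: [0, 3],
}

def mhrw(graph, start, steps, rng):
    """Metropolis-Hastings random walk targeting the uniform node distribution."""
    u, visits = start, Counter()
    for _ in range(steps):
        v = rng.choice(graph[u])
        # Accept the move with probability min(1, deg(u) / deg(v));
        # on rejection the walk stays at u (a self-loop).
        if rng.random() < min(1.0, len(graph[u]) / len(graph[v])):
            u = v
        visits[u] += 1
    return visits

rng = random.Random(7)
visits = mhrw(graph, start=0, steps=200_000, rng=rng)
freqs = {n: visits[n] / sum(visits.values()) for n in graph}
print(freqs)  # each node should get roughly 1/5 of the visits
```

A plain random walk on this graph would visit the hub (node 0) in proportion to its degree; the degree-ratio acceptance step is what removes that bias.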
1706.01205
2728796327
Most real-world dynamic networks evolve very fast over time. It is not feasible to collect the entire network at any given time to study its characteristics. This creates the need for local algorithms to study various properties of the network. In the present work, we estimate the degree rank of a node without having the entire network. The proposed methods are based on the power-law degree distribution characteristic or on sampling techniques. The proposed methods are simulated on synthetic networks, as well as on real-world social networks. The efficiency of the proposed methods is evaluated using absolute and weighted error functions. Results show that the degree rank of a node can be estimated with high accuracy using only @math samples of the network size. The accuracy of the estimation decreases from high-ranked to low-ranked nodes. We further extend the proposed methods to random networks and validate their efficiency on synthetic random networks generated using the Erdős–Rényi model. Results show that the proposed methods can be efficiently used for random networks as well.
The authors of @cite_63 studied the mean squared error incurred when computing the degree distribution of a network. They further computed the normalized mean squared error for estimating the out-degree and in-degree distributions of directed networks @cite_6 . The proposed method uses a Directed Unbiased Random Walk (DURW) that, while walking, takes a random jump with a probability depending on the degree of the current node. The results show that the out-degree distribution can be estimated efficiently, whereas the in-degree distribution estimates are highly inaccurate unless the graph is highly symmetric.
{ "cite_N": [ "@cite_6", "@cite_63" ], "mid": [ "2081082600", "1994312128" ], "abstract": [ "Despite recent efforts to characterize complex networks such as citation graphs or online social networks (OSNs), little attention has been given to developing tools that can be used to characterize directed graphs in the wild, where no pre-processed data is available. The presence of hidden incoming edges but observable outgoing edges poses a challenge to characterize large directed graphs through crawling, as existing sampling methods cannot cope with hidden incoming links. The driving principle behind our random walk (RW) sampling method is to construct, in real-time, an undirected graph from the directed graph such that the random walk on the directed graph is consistent with one on the undirected graph. We then use the RW on the undirected graph to estimate the outdegree distribution. Our algorithm accurately estimates outdegree distributions of a variety of real world graphs. We also study the hardness of indegree distribution estimation when indegrees are latent (i.e., incoming links are only observed as outgoing edges). We observe that, in the same scenarios, indegree distribution estimates are highly innacurate unless the directed graph is highly symmetrical.", "Estimating characteristics of large graphs via sampling is vital in the study of complex networks. In this work, we study the Mean Squared Error (MSE) associated with different sampling methods for the degree distribution. These sampling methods include independent random vertex (RV) and random edge (RE) sampling, and crawling methods such as random walks (RWs) and the widely used Metropolis-Hastings algorithm for uniformly sampling vertices (MHRWu). We see that the RW MSE is upper bounded by a quantity that is proportional to the RE MSE and inversely proportional to the spectral gap of the RW transition probability matrix. We also determine conditions under which RW is preferable to RV. 
Finally, we present an approximation of the MHRWu MSE. We evaluate the accuracy of our approximations and bounds through simulations on large real world graphs." ] }
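The reweighting idea behind random-walk degree-distribution estimates can be sketched as follows. This is not the DURW algorithm itself (which additionally handles directed edges via random jumps); it only illustrates the core bias correction: a simple random walk samples nodes proportionally to degree, so weighting each sample by 1/degree recovers the node-level degree distribution. The toy graph is an assumption.

```python
import random
from collections import Counter

# Undirected graph: a hub (node 0) connected to 5 leaves, plus one leaf-leaf edge.
graph = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0], 5: [0]}

def rw_degree_dist(graph, start, steps, rng):
    """Estimate the degree distribution from a simple random walk,
    reweighting each sampled node by 1 / degree (Hansen-Hurwitz style)."""
    u = start
    weight = Counter()
    for _ in range(steps):
        u = rng.choice(graph[u])
        # The walk visits u with probability proportional to deg(u);
        # the 1/deg(u) weight cancels that sampling bias.
        weight[len(graph[u])] += 1.0 / len(graph[u])
    total = sum(weight.values())
    return {d: w / total for d, w in weight.items()}

est = rw_degree_dist(graph, 0, 200_000, random.Random(1))
print({d: round(p, 3) for d, p in sorted(est.items())})
```

The true distribution here is 3/6 of nodes with degree 1, 2/6 with degree 2, and 1/6 with degree 5, which the reweighted walk recovers despite the hub dominating the raw visit counts.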
1706.01433
2622672190
From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.
Another class of approaches learns to predict summary physical judgments and to produce simple actions from images. Several efforts @cite_25 @cite_23 used CNN-based models to predict whether a stack of blocks would fall. @cite_6 @cite_16 predicted coarse, image-space motion trajectories of objects in real images. Several other efforts @cite_19 @cite_4 @cite_0 @cite_20 fit the parameters of Newtonian mechanics equations to systems depicted in images and videos, though the dynamic equations themselves were not learned. @cite_24 trained a system that learns to move objects by poking.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_0", "@cite_19", "@cite_24", "@cite_23", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "", "2964269434", "", "2130151115", "2473208550", "2312995908", "", "2951384764", "" ], "abstract": [ "", "In this paper, we study the challenging problem of predicting the dynamics of objects in static images. Given a query object in an image, our goal is to provide a physical understanding of the object in terms of the forces acting upon it and its long term motion as response to those forces. Direct and explicit estimation of the forces and the motion of objects from a single image is extremely challenging. We define intermediate physical abstractions called Newtonian scenarios and introduce Newtonian Neural Network (N3) that learns to map a single image to a state in a Newtonian scenario. Our evaluations show that our method can reliably predict dynamics of a query object from a single image. In addition, our approach can provide physical reasoning that supports the predicted dynamics in terms of velocity and force vectors. To spur research in this direction we compiled Visual Newtonian Dynamics (VIND) dataset that includes more than 6000 videos aligned with Newtonian scenarios represented using game engines, and more than 4500 still images with their ground truth dynamics.", "", "This paper presents an optimization framework for estimating the motion and underlying physical parameters of a rigid body in free flight from video. The algorithm takes a video clip of a tumbling rigid body of known shape and generates a physical simulation of the object observed in the video clip. This solution is found by optimizing the simulation parameters to best match the motion observed in the video sequence. These simulation parameters include initial positions and velocities, environment parameters like gravity direction and parameters of the camera. 
A global objective function computes the sum squared difference between the silhouette of the object in simulation and the silhouette obtained from video at each frame. Applications include creating interesting rigid body animations, tracking complex rigid body motions in video and estimating camera parameters from video.", "We investigate an experiential learning paradigm for acquiring an internal model of intuitive physics. Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. We propose a novel approach based on deep neural networks for modeling the dynamics of robot's interactions directly from images, by jointly estimating forward and inverse models of dynamics. The inverse model objective provides supervision to construct informative visual features, which the forward model can then predict and in turn regularize the feature space for the inverse model. The interplay between these two objectives creates useful, accurate models that can then be used for multi-step decision making. This formulation has the additional benefit that it is possible to learn forward models in an abstract feature space and thus alleviate the need of predicting pixels. Our experiments show that this joint modeling approach outperforms alternative methods.", "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. 
In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.", "", "Wooden blocks are a common toy for infants, allowing them to develop motor skills and gain intuition about the physical behavior of the world. In this paper, we explore the ability of deep feed-forward models to learn such intuitive physics. Using a 3D game engine, we create small towers of wooden blocks whose stability is randomized and render them collapsing (or remaining upright). This data allows us to train large convolutional network models which can accurately predict the outcome, as well as estimating the block trajectories. The models are also able to generalize in two important ways: (i) to new physical scenarios, e.g. towers with an additional block and (ii) to images of real wooden blocks, where it obtains a performance comparable to human subjects.", "" ] }
1706.01433
2622672190
From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments.
A third class of methods @cite_15 @cite_14 @cite_9 @cite_22 have been used to predict future state descriptions from pixels. These models have to be tailored to the particular physical domain of interest, are only effective over a few time steps, and use side information such as object locations and physical constraints at test time.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_14", "@cite_22" ], "mid": [ "2271155703", "2951942526", "2592431710", "2521461771" ], "abstract": [ "The ability to plan and execute goal specific actions in varied, unexpected settings is a central requirement of intelligent agents. In this paper, we explore how an agent can be equipped with an internal model of the dynamics of the external world, and how it can use this model to plan novel actions by running multiple internal simulations (\"visual imagination\"). Our models directly process raw visual input, and use a novel object-centric prediction formulation based on visual glimpses centered on objects (fixations) to enforce translational invariance of the learned physical laws. The agent gathers training data through random interaction with a collection of different environments, and the resulting model can then be used to plan goal-directed actions in novel environments that the agent has not seen before. We demonstrate that our agent can accurately plan actions for playing a simulated billiards game, which requires pushing a ball into a target position or into collision with another ball.", "Boundary estimation in images and videos has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a corner stone of visual perception. While prior work has focused on estimating boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and corresponding motion patterns -- including a notion of \"intuitive physics\". We experiment on natural video sequences along with synthetic sequences with deterministic physics-based and agent-based motions. 
While not being our primary goal, we also show that fusion of RGB and boundary prediction leads to improved RGB predictions.", "Evolution has resulted in highly developed abilities in many natural intelligences to quickly and accurately predict mechanical phenomena. Humans have successfully developed laws of physics to abstract and model such mechanical phenomena. In the context of artificial intelligence, a recent line of work has focused on estimating physical parameters based on sensory data and use them in physical simulators to make long-term predictions. In contrast, we investigate the effectiveness of a single neural network for end-to-end long-term prediction of mechanical phenomena. Based on extensive evaluation, we demonstrate that such networks can outperform alternate approaches having even access to ground-truth physical simulators, especially when some physical parameters are unobserved or not known a-priori. Further, our network outputs a distribution of outcomes to capture the inherent uncertainty in the data. Our approach demonstrates for the first time the possibility of making actionable long-term predictions from sensor data without requiring to explicitly model the underlying physical laws.", "In many machine learning applications, labeled data is scarce and obtaining more labels is expensive. We introduce a new approach to supervising neural networks by specifying constraints that should hold over the output space, rather than direct examples of input-output pairs. These constraints are derived from prior domain knowledge, e.g., from known laws of physics. We demonstrate the effectiveness of this approach on real world and simulated computer vision tasks. We are able to train a convolutional neural network to detect and track objects without any labeled examples. Our approach can significantly reduce the need for labeled training data, but introduces new challenges for encoding prior knowledge into appropriate loss functions." ] }
1706.01331
2621430944
Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a mid-level of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.
Automated Story Generation has been a research problem of interest since nearly the inception of artificial intelligence. Early attempts relied on symbolic planning @cite_21 @cite_10 @cite_23 @cite_15 or case-based reasoning using ontologies @cite_17 . These techniques could only generate stories for predetermined and well-defined domains of characters, places, and actions. The apparent creativity of these systems was thus conflated with the robustness of the manually-engineered knowledge and the suitability of the algorithms.
{ "cite_N": [ "@cite_21", "@cite_23", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "1510205845", "2156264173", "2170516265", "", "2090487795" ], "abstract": [ "Abstract : People draw on many diverse sources of real-world knowledge in order to make up stories, including the following: knowledge of the physical world; rules of social behavior and relationships; techniques for solving everyday problems such as transportation, acquisition of objects, and acquisition of information; knowledge about physical needs such as hunger and thirst; knowledge about stories their organization and contents; knowledge about planning behavior and the relationships between kinds of goals; and knowledge about expressing a story in a natural language. This thesis describes a computer program which uses all information to write stories. The areas of knowledge, called problem domains, are defined by a set of representational primitives, a set of problems expressed in terms of those primitives, and a set of procedures for solving those problems. These may vary from one domain to the next. All this specialized knowledge must be integrated in order to accomplish a task such as storytelling. The program, called TALE-SPIN, produces stories in English, interacting with the user, who specifies characters, personality characteristics, and relationships between characters. Operating in a different mode, the program can make those decisions in order to produce Aesop-like fables. (Author)", "MEXICA is a computer model that produces frameworks for short stories based on the engagement-reflection cognitive account of writing. During engagement MEXICA generates material guided by content and rhetorical constraints, avoiding the use of explicit goals or story-structure information. During reflection the system breaks impasses, evaluates the novelty and interestingness of the story in progress and verifies that coherence requirements are satisfied. 
In this way, MEXICA complements and extends those models of computerised story-telling based on traditional problem-solving techniques where explicit goals drive the generation of stories. This paper describes the engagement-reflection account of writing, the general characteristics of MEXICA and reports an evaluation of the program.", "Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors - logical and aesthetic - that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience's suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem - to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm - the Intent-based Partial Order Causal Link (IPOCL) planner - that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. 
We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.", "", "In this paper we present a system for automatic story generation that reuses existing stories to produce a new story that matches a given user query. The plot structure is obtained by a case-based reasoning (CBR) process over a case base of tales and an ontology of explicitly declared relevant knowledge. The resulting story is generated as a sketch of a plot described in natural language by means of natural language generation (NLG) techniques." ] }
1706.01331
2621430944
Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a mid-level of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.
Recently, machine learning has been used to learn the domain model from which stories can be created, or to identify segments of story content in an existing repository from which stories can be assembled. The Say Anything system @cite_8 uses textual case-based reasoning to identify relevant existing story content in online blogs. The Scheherazade system @cite_7 uses a crowdsourced corpus of example stories to learn a domain model from which to generate novel stories.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "25648700", "2128116572" ], "abstract": [ "Story generation is the problem of automatically selecting a sequence of events that meet a set of criteria and can be told as a story. Story generation is knowledge-intensive; traditional story generators rely on a priori defined domain models about fictional worlds, including characters, places, and actions that can be performed. Manually authoring the domain models is costly and thus not scalable. We present a novel class of story generation system that can generate stories in an unknown domain. Our system (a) automatically learns a domain model by crowdsourcing a corpus of narrative examples and (b) generates stories by sampling from the space defined by the domain model. A large-scale evaluation shows that stories generated by our system for a previously unknown topic are comparable in quality to simple stories authored by untrained humans.", "We describe Say Anything, a new interactive storytelling system that collaboratively writes textual narratives with human users. Unlike previous attempts, this interactive storytelling system places no restrictions on the content or direction of the user’s contribution to the emerging storyline. In response to these contributions, the computer continues the storyline with narration that is both coherent and entertaining. This capacity for open-domain interactive storytelling is enabled by an extremely large repository of nonfiction personal stories, which is used as a knowledge base in a case-based reasoning architecture. In this article, we describe the three main components of our case-based reasoning approach: a million-item corpus of personal stories mined from internet weblogs, a case retrieval strategy that is optimized for narrative coherence, and an adaptation strategy that ensures that repurposed sentences from the case base are appropriate for the user’s emerging fiction. 
We describe a series of evaluations of the system’s ability to produce coherent and entertaining stories, and we compare these narratives with single-author stories posted to internet weblogs." ] }
1706.01417
2951901237
Non-stationary domains, that change in unpredicted ways, are a challenge for agents searching for optimal policies in sequential decision-making problems. This paper presents a combination of Markov Decision Processes (MDP) with Answer Set Programming (ASP), named Online ASP for MDP (oASP(MDP)), which is a method capable of constructing the set of domain states while the agent interacts with a changing environment. oASP(MDP) updates previously obtained policies, learnt by means of Reinforcement Learning (RL), using rules that represent the domain changes observed by the agent. These rules represent a set of domain constraints that are processed as ASP programs reducing the search space. Results show that oASP(MDP) is capable of finding solutions for problems in non-stationary domains without interfering with the action-value function approximation process.
Previous attempts at combining RL with ASP include @cite_3 , which proposes the use of ASP to find a pre-defined plan for an RL agent. This plan is described as a hierarchical MDP, and RL is used to find the optimal policy for this MDP. However, changes in the environment, as considered in the present work, were not addressed in @cite_3 .
{ "cite_N": [ "@cite_3" ], "mid": [ "1616853483" ], "abstract": [ "Deployment of robots in practical domains poses key knowledge representation and reasoning challenges. Robots need to represent and reason with incomplete domain knowledge, acquiring and using sensor inputs based on need and availability. This paper presents an architecture that exploits the complementary strengths of declarative programming and probabilistic graphical models as a step toward addressing these challenges. Answer Set Prolog (ASP), a declarative language, is used to represent, and perform inference with, incomplete domain knowledge, including default information that holds in all but a few exceptional situations. A hierarchy of partially observable Markov decision processes (POMDPs) probabilistically models the uncertainty in sensor input processing and navigation. Nonmonotonic logical inference in ASP is used to generate a multinomial prior for probabilistic state estimation with the hierarchy of POMDPs. It is also used with historical data to construct a beta (meta) density model of priors for metareasoning and early termination of trials when appropriate. Robots equipped with this architecture automatically tailor sensor input processing and navigation to tasks at hand, revising existing knowledge using information extracted from sensor inputs. The architecture is empirically evaluated in simulation and on a mobile robot visually localizing objects in indoor domains." ] }
1706.01417
2951901237
Non-stationary domains, that change in unpredicted ways, are a challenge for agents searching for optimal policies in sequential decision-making problems. This paper presents a combination of Markov Decision Processes (MDP) with Answer Set Programming (ASP), named Online ASP for MDP (oASP(MDP)), which is a method capable of constructing the set of domain states while the agent interacts with a changing environment. oASP(MDP) updates previously obtained policies, learnt by means of Reinforcement Learning (RL), using rules that represent the domain changes observed by the agent. These rules represent a set of domain constraints that are processed as ASP programs reducing the search space. Results show that oASP(MDP) is capable of finding solutions for problems in non-stationary domains without interfering with the action-value function approximation process.
Analogous methods were proposed by @cite_10 @cite_0 , in which an agent interacts with an environment and updates an action cost function. While @cite_10 uses the action language BC , @cite_0 uses ASP to find a description of the environment. Although both methods consider action costs, neither of them uses Reinforcement Learning, and neither deals with changes in the action-value function description during the agent's interaction with the environment.
{ "cite_N": [ "@cite_0", "@cite_10" ], "mid": [ "2524647857", "1959035372" ], "abstract": [ "For mobile robots to perform complex missions, it may be necessary for them to plan with incomplete information and reason about the indirect effects of their actions. Answer Set Programming (ASP) provides an elegant way of formalizing domains which involve indirect effects of an action and recursively defined fluents. In this paper, we present an approach that uses ASP for robotic task planning, and demonstrate how ASP can be used to generate plans that acquire missing information necessary to achieve the goal. Action costs are also incorporated with planning to produce optimal plans, and we show how these costs can be estimated from experience making planning adaptive. We evaluate our approach using a realistic simulation of an indoor environment where a robot learns to complete its objective in the shortest time.", "The action language BC provides an elegant way of formalizing dynamic domains which involve indirect effects of actions and recursively defined fluents. In complex robot task planning domains, it may be necessary for robots to plan with incomplete information, and reason about indirect or recursive action effects. In this paper, we demonstrate how BC can be used for robot task planning to solve these issues. Additionally, action costs are incorporated with planning to produce optimal plans, and we estimate these costs from experience making planning adaptive. This paper presents the first application of BC on a real robot in a realistic domain, which involves human-robot interaction for knowledge acquisition, optimal plan generation to minimize navigation time, and learning for adaptive planning." ] }
1706.01417
2951901237
Non-stationary domains, that change in unpredicted ways, are a challenge for agents searching for optimal policies in sequential decision-making problems. This paper presents a combination of Markov Decision Processes (MDP) with Answer Set Programming (ASP), named Online ASP for MDP (oASP(MDP)), which is a method capable of constructing the set of domain states while the agent interacts with a changing environment. oASP(MDP) updates previously obtained policies, learnt by means of Reinforcement Learning (RL), using rules that represent the domain changes observed by the agent. These rules represent a set of domain constraints that are processed as ASP programs reducing the search space. Results show that oASP(MDP) is capable of finding solutions for problems in non-stationary domains without interfering with the action-value function approximation process.
Works related to non-stationary MDPs such as @cite_4 @cite_15 , which deal only with changes in the reward function, are more closely associated with RL alone than with a hybrid method such as oASP(MDP), since RL methods are already capable of handling changes in the reward and transition functions. The advantage of ASP is that it finds the set of states, so that an optimal solution can be sought regardless of the agent's transition and reward functions.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2156211713", "2074680702" ], "abstract": [ "We consider a learning problem where the decision maker interacts with a standard Markov decision process, with the exception that the reward functions vary arbitrarily over time. We show that, against every possible realization of the reward process, the agent can perform as well---in hindsight---as every stationary policy. This generalizes the classical no-regret result for repeated games. Specifically, we present an efficient online algorithm---in the spirit of reinforcement learning---that ensures that the agent's average performance loss vanishes over time, provided that the environment is oblivious to the agent's actions. Moreover, it is possible to modify the basic algorithm to cope with instances where reward observations are limited to the agent's trajectory. We present further modifications that reduce the computational cost by using function approximation and that track the optimal policy through infrequent changes.", "We consider a Markov decision process (MDP) setting in which the reward function is allowed to change after each time step (possibly in an adversarial manner), yet the dynamics remain fixed. Similar to the experts setting, we address the question of how well an agent can do when compared to the reward achieved under the best stationary policy over time. We provide efficient algorithms, which have regret bounds with no dependence on the size of state space. Instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions." ] }
1706.01417
2951901237
Non-stationary domains, that change in unpredicted ways, are a challenge for agents searching for optimal policies in sequential decision-making problems. This paper presents a combination of Markov Decision Processes (MDP) with Answer Set Programming (ASP), named Online ASP for MDP (oASP(MDP)), which is a method capable of constructing the set of domain states while the agent interacts with a changing environment. oASP(MDP) updates previously obtained policies, learnt by means of Reinforcement Learning (RL), using rules that represent the domain changes observed by the agent. These rules represent a set of domain constraints that are processed as ASP programs reducing the search space. Results show that oASP(MDP) is capable of finding solutions for problems in non-stationary domains without interfering with the action-value function approximation process.
A proposal that closely resembles oASP(MDP) is @cite_9 . This method uses deep learning to find a description of the set of states, which is then expressed as rules in a probabilistic logic program; finally, an RL agent interacts with the environment using these results and learns the optimal policy.
{ "cite_N": [ "@cite_9" ], "mid": [ "2521274174" ], "abstract": [ "Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-of-concept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system -- though just a prototype -- learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game." ] }
1706.01152
2785483209
The rate of a network code is the ratio of the block size of the network's messages to that of its edge codewords. We compare the linear capacities and achievable rate regions of networks using finite field alphabets to the more general cases of arbitrary ring and module alphabets. For non-commutative rings, two-sided linearity is allowed. Specifically, we prove the following for directed acyclic networks: (i) The linear rate region and the linear capacity of any network over a finite field depend only on the characteristic of the field. Furthermore, any two fields with different characteristics yield different linear capacities for at least one network. (ii) Whenever the characteristic of a given finite field divides the size of a given finite ring, each network's linear rate region over the ring is contained in its linear rate region over the field. Thus, any network's linear capacity over a field is at least its linear capacity over any other ring of the same size. An analogous result also holds for linear network codes over module alphabets. (iii) Whenever the characteristic of a given finite field does not divide the size of a given finite ring, there is some network whose linear capacity over the ring is strictly greater than its linear capacity over the field. Thus, for any finite field, there always exist rings over which some networks have higher linear capacities than over the field.
Li, Yeung, and Cai @cite_35 showed that when each of a network's receivers demands all of the messages (i.e., a multicast network), the linear capacity over any finite field is equal to the (nonlinear) capacity. Ho et al. @cite_12 showed that for multicast networks, random fractional linear codes over finite fields achieve the network's capacity with probability approaching one as the block sizes increase. Jaggi et al. @cite_21 developed polynomial-time algorithms for constructing capacity-achieving fractional linear codes over finite fields for multicast networks. Algorithms for constructing fractional linear solutions over finite fields for other classes of networks have also been a subject of considerable interest (e.g., @cite_5 , @cite_39 , @cite_28 , and @cite_15 ).
{ "cite_N": [ "@cite_35", "@cite_28", "@cite_21", "@cite_39", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2106403318", "1984617503", "", "2023242304", "2117872622", "", "2048235391" ], "abstract": [ "Consider a communication network in which certain source nodes multicast information to other nodes on the network in the multihop fashion where every node can pass on any of its received data to others. We are interested in how fast each node can receive the complete information, or equivalently, what the information rate arriving at each node is. Allowing a node to encode its received data before passing it on, the question involves optimization of the multicast mechanisms at the nodes. Among the simplest coding schemes is linear coding, which regards a block of data as a vector over a certain base field and allows a node to apply a linear transformation to a vector before passing it on. We formulate this multicast problem and prove that linear coding suffices to achieve the optimum, which is the max-flow from the source to each receiving node.", "In this paper, we present an achievable rate region for double-unicast networks by assuming that the intermediate nodes perform random linear network coding, and the source and sink nodes optimize their strategies to maximize the achievable region. Such a setup can be modeled as a deterministic interference channel, whose capacity region is known. For the particular class of linear deterministic interference channels of our interest, in which the outputs and interference are linear deterministic functions of the inputs, we show that the known capacity region can be achieved by linear strategies. As a result, for a given set of network coding coefficients chosen by the intermediate nodes, the proposed linear precoding and decoding for the source and sink nodes will give the maximum achievable rate region for double-unicast networks. 
We further derive a suboptimal but easy-to-compute rate region that is independent of the network coding coefficients used at the intermediate nodes, and is instead specified by the min-cuts of the network. It is found that even this suboptimal region is strictly larger than the existing achievable rate regions in the literature.", "", "We consider the multiple-unicast problem with three source–terminal pairs over directed acyclic networks with unit-capacity edges. The three @math pairs wish to communicate at unit-rate via network coding. The connectivity between the @math pairs is quantified by means of a connectivity-level vector, @math such that there exist @math edge-disjoint paths between @math and @math . In this paper, we attempt to classify networks based on the connectivity level. It can be observed that unit-rate transmission can be supported by routing if @math , for all @math . In this paper, we consider connectivity-level vectors such that $ i = 1, , 3 k_i . We present either a constructive linear network coding scheme or an instance of a network that cannot support the desired unit-rate requirement, for all such connectivity-level vectors except the vector [1 2 4] (and its permutations). The benefits of our schemes extend to networks with higher and potentially different edge capacities. Specifically, our experimental results indicate that for networks where the different source–terminal paths have a significant overlap, our constructive unit-rate schemes can be packed along with routing to provide higher throughput as compared to a pure routing approach.", "We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L × L coding matrices that play a similar role as coding coefficients in scalar coding. 
We start our work by extending the algebraic framework developed for multicasting over graphs by Koetter and Medard to include operations over matrices; we build on this generalized framework, to provide a new approach for both scalar and vector code design which attempts to minimize the employed field size and employed vector length, while selecting the coding operations. Our algorithms also lead as a special case to network code designs that employ structured matrices.", "", "We present a distributed random linear network coding approach for transmission and compression of information in general multisource multicast networks. Network nodes independently and randomly select linear mappings from inputs onto output links over some field. We show that this achieves capacity with probability exponentially approaching 1 with the code length. We also demonstrate that random linear coding performs compression when necessary in a network, generalizing error exponents for linear Slepian-Wolf coding in a natural way. Benefits of this approach are decentralized operation and robustness to network changes or link failures. We show that this approach can take advantage of redundant network capacity for improved success probability and robustness. We illustrate some potential advantages of random linear network coding over routing in two examples of practical scenarios: distributed network operation and networks with dynamically varying connections. Our derivation of these results also yields a new bound on required field size for centralized network coding on general multicast networks" ] }
1706.01152
2785483209
The rate of a network code is the ratio of the block size of the network's messages to that of its edge codewords. We compare the linear capacities and achievable rate regions of networks using finite field alphabets to the more general cases of arbitrary ring and module alphabets. For non-commutative rings, two-sided linearity is allowed. Specifically, we prove the following for directed acyclic networks: (i) The linear rate region and the linear capacity of any network over a finite field depend only on the characteristic of the field. Furthermore, any two fields with different characteristics yield different linear capacities for at least one network. (ii) Whenever the characteristic of a given finite field divides the size of a given finite ring, each network's linear rate region over the ring is contained in its linear rate region over the field. Thus, any network's linear capacity over a field is at least its linear capacity over any other ring of the same size. An analogous result also holds for linear network codes over module alphabets. (iii) Whenever the characteristic of a given finite field does not divide the size of a given finite ring, there is some network whose linear capacity over the ring is strictly greater than its linear capacity over the field. Thus, for any finite field, there always exist rings over which some networks have higher linear capacities than over the field.
It was shown in @cite_32 that the capacity of a network is independent of the coding alphabet. However, there are multiple examples in the literature (e.g., @cite_2 , @cite_10 , @cite_40 ) of networks whose linear capacity over a finite field can depend on the field alphabet, specifically by way of the characteristic of the field. Muralidharan and Rajan @cite_20 demonstrated that a fractional linear solution over a finite field @math exists for a network if and only if the network is associated with a discrete polymatroid representable over @math . Linear rank inequalities of vector subspaces and linear information inequalities (e.g., @cite_38 ) are known to be closely related and have been shown to be useful in determining or bounding networks' linear capacities over finite fields (e.g., @cite_29 , @cite_40 , and @cite_31 ).
{ "cite_N": [ "@cite_38", "@cite_29", "@cite_32", "@cite_40", "@cite_2", "@cite_31", "@cite_10", "@cite_20" ], "mid": [ "2114325434", "", "2567286734", "", "1483151078", "", "1488435683", "1757164770" ], "abstract": [ "We present a framework for information inequalities, namely, inequalities involving only Shannon's information measures, for discrete random variables. A region in IR(2 sup n -1), denoted by spl Gamma *, is identified to be the origin of all information inequalities involving n random variables in the sense that all such inequalities are partial characterizations of spl Gamma *. A product from this framework is a simple calculus for verifying all unconstrained and constrained linear information identities and inequalities which can be proved by conventional techniques. These include all information identities and inequalities of such types in the literature. As a consequence of this work, most identities and inequalities involving a definite number of random variables can now be verified by a software called ITIP which is available on the World Wide Web. Our work suggests the possibility of the existence of information inequalities which cannot be proved by conventional techniques. We also point out the relation between spl Gamma * and some important problems in probability theory and information theory.", "", "We define the routing capacity of a network to be the supremum of all possible fractional message throughputs achievable by routing. We prove that the routing capacity of every network is achievable and rational, we present an algorithm for its computation, and we prove that every non-negative rational number is the routing capacity of some network. We also determine the routing capacity for various example networks. 
Finally, we discuss the extension of routing capacity to fractional coding solutions and show that the coding capacity of a network is independent of the alphabet used", "", "Vector linear network coding (LNC) is a generalization of the conventional scalar LNC, such that the data unit transmitted on every edge is an L-dimensional vector of data symbols over a base field GF(q). Vector LNC enriches the choices of coding operations at intermediate nodes, and there is a popular conjecture on the benefit of vector LNC over scalar LNC in terms of alphabet size of data units: there exist (singlesource) multicast networks that are vector linearly solvable of dimension L over GF(q) but not scalar linearly solvable over any field of size q' qL. This paper introduces a systematic way to construct such multicast networks, and subsequently establish explicit instances to affirm the positive answer of this conjecture for infinitely many alphabet sizes pL with respect to an arbitrary prime p. On the other hand, this paper also presents explicit instances with the special property that they do not have a vector linear solution of dimension L over GF(2) but have scalar linear solutions over GF(q') for someq' <; 2 L , where q' can be odd or even. This discovery also unveils that over a given base field, a multicast network that has a vector linear solution of dimension L does not necessarily have a vector linear solution of dimension L' > L.", "", "1. The field of values 2. Stable matrices and inertia 3. Singular value inequalities 4. Matrix equations and Kronecker products 5. Hadamard products 6. Matrices and functions.", "Discrete polymatroids are the multi-set analogue of matroids. In this paper, we explore the connections among linear network coding, linear index coding, and representable discrete polymatroids. 
We consider the vector linear solutions of networks over a field @math , with possibly different message and edge vector dimensions, which are referred to as linear fractional solutions. It is well known that a scalar linear solution over @math exists for a network if and only if the network is matroidal with respect to a matroid representable over @math . We define a discrete polymatroidal network and show that a linear fractional solution over a field @math exists for a network if and only if the network is discrete polymatroidal with respect to a discrete polymatroid representable over @math . An algorithm to construct the networks starting from certain class of discrete polymatroids is provided. Every representation over @math for the discrete polymatroid, results in a linear fractional solution over @math for the constructed network. Next, we consider the index coding problem, which involves a sender, which generates a set of messages @math , and a set of receivers @math , which demand messages. A receiver @math is specified by the tuple @math , where @math is the message demanded by @math and @math is the side information possessed by @math . We first show that a linear solution to an index coding problem exists if and only if there exists a representable discrete polymatroid, satisfying certain conditions, which are determined by the index coding problem considered. showed that the problem of finding a multi-linear representation for a matroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that matroid. The multi-linear representation of a matroid can be viewed as a special case of representation of an appropriate discrete polymatroid. We generalize the result of , by showing that the problem of finding a representation for a discrete polymatroid can be reduced to finding a perfect linear index coding solution for an index coding problem obtained from that discrete polymatroid." ] }
1706.01077
2622014086
In many robotic applications, some aspects of the system dynamics can be modeled accurately while others are difficult to obtain or model. We present a novel reinforcement learning (RL) method for continuous state and action spaces that learns with partial knowledge of the system and without active exploration. It solves linearly-solvable Markov decision processes (L-MDPs), which are well suited for continuous state and action spaces, based on an actor-critic architecture. Compared to previous RL methods for L-MDPs and path integral methods which are model based, the actor-critic learning does not need a model of the uncontrolled dynamics and, importantly, transition noise levels; however, it requires knowing the control dynamics for the problem. We evaluate our method on two synthetic test problems, and one real-world problem in simulation and using real traffic data. Our experiments demonstrate improved learning and policy performance.
Previous approaches for solving L-MDPs are predominantly model based @cite_13 @cite_15 @cite_16 . These efficiently optimize control policies by solving the linearized Bellman equation in discrete- or continuous-state L-MDPs when the system dynamics are fully known. Our method relaxes this requirement by using samples of the passive dynamics, while knowing the control dynamics. We also introduce multi-layer neural networks for approximating the value functions in L-MDPs, in addition to the previously used radial basis functions.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2128152413", "2015383196", "" ], "abstract": [ "We have identified a general class of nonlinear stochastic optimal control problems which can be reduced to computing the principal eigenfunction of a linear operator. Here we develop function approximation methods exploiting this inherent linearity. First we discretize the time axis in a novel way, yielding an integral operator that approximates not only our control problems but also more general elliptic PDEs. The eigenfunction problem is then approximated with a finite-dimensional eigenvector problem - by discretizing the state space, or by projecting on a set of adaptive bases evaluated at a set of collocation states. Solving the resulting eigenvector problem is faster than applying policy or value iteration. The bases are adapted via Levenberg-Marquardt minimization with guaranteed convergence. The collocation set can also be adapted so as to focus the approximation on a region of interest. Numerical results on test problems are provided.", "Abstract A general class of stochastic optimal control problems has recently been reduced to computing the principle eigenfunction of a linear operator. Here we present an approximation framework for solving such problems by using soft state aggregation over a continuous space. This approach enables us to avoid matrix factorization and take advantage of sparsity by using efficient iterative solvers. Adaptive schemes for basis placement are developed so as to provide higher resolution at the regions of state space that are visited more often. Numerical results on test problems are provided.", "" ] }
1706.01077
2622014086
In many robotic applications, some aspects of the system dynamics can be modeled accurately while others are difficult to obtain or model. We present a novel reinforcement learning (RL) method for continuous state and action spaces that learns with partial knowledge of the system and without active exploration. It solves linearly-solvable Markov decision processes (L-MDPs), which are well suited for continuous state and action spaces, based on an actor-critic architecture. Compared to previous RL methods for L-MDPs and path integral methods which are model based, the actor-critic learning does not need a model of the uncontrolled dynamics and, importantly, transition noise levels; however, it requires knowing the control dynamics for the problem. We evaluate our method on two synthetic test problems, and one real-world problem in simulation and using real traffic data. Our experiments demonstrate improved learning and policy performance.
As pAC can learn from data containing samples of passive dynamics, it bears resemblance to batch RL methods @cite_0 . A popular and model-free batch RL method is fitted Q-iteration @cite_14 @cite_12 , which finds a policy from collected data without a model of the system dynamics. It searches for actions that minimize the Q-value, which requires that either the action space be discrete or the Q-function have structure, such as being quadratic, due to computational cost. In contrast, pAC uses a policy that is analytically derived from the estimated Z-value, the transition-noise parameter, and the known control dynamics.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_12" ], "mid": [ "192920577", "", "2416434549" ], "abstract": [ "Batch reinforcement learning is a subfield of dynamic programming-based reinforcement learning. Originally defined as the task of learning the best possible policy from a fixed set of a priori-known transition samples, the (batch) algorithms developed in this field can be easily adapted to the classical online case, where the agent interacts with the environment while learning. Due to the efficient use of collected data and the stability of the learning process, this research area has attracted a lot of attention recently. In this chapter, we introduce the basic principles and the theory behind batch reinforcement learning, describe the most important algorithms, exemplarily discuss ongoing research within this field, and briefly survey real-world applications of batch reinforcement learning.", "", "Abstract In this paper a new offline model-free approximate Q-iteration is proposed. Following the idea of Fitted Q-iteration, we use a computational scheme based on Functional Networks, which have been proved to be a powerful alternative to Neural Networks, because they do not require a large number of training samples. We state a condition for the convergence of the proposed technique and we apply it to three classical control problems, namely, a DC motor, a pendulum swing up, a robotic arm. We present a comparative study to show the approximation capabilities of our method with a relatively small number of training samples." ] }
1706.01077
2622014086
In many robotic applications, some aspects of the system dynamics can be modeled accurately while others are difficult to obtain or model. We present a novel reinforcement learning (RL) method for continuous state and action spaces that learns with partial knowledge of the system and without active exploration. It solves linearly-solvable Markov decision processes (L-MDPs), which are well suited for continuous state and action spaces, based on an actor-critic architecture. Compared to previous RL methods for L-MDPs and path integral methods which are model based, the actor-critic learning does not need a model of the uncontrolled dynamics and, importantly, transition noise levels; however, it requires knowing the control dynamics for the problem. We evaluate our method on two synthetic test problems, and one real-world problem in simulation and using real traffic data. Our experiments demonstrate improved learning and policy performance.
Path integral control also learns a policy based on the linearized Bellman equation @cite_7 @cite_2 . Unlike approaches for L-MDPs, path integral control can directly optimize the policy. However, the approach has to sample many trajectories under a training policy from a certain initial state. As we mentioned previously, we seek to avoid such active and potentially unsafe exploration in the real world.
{ "cite_N": [ "@cite_7", "@cite_2" ], "mid": [ "1925816294", "91905023" ], "abstract": [ "With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. 
We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.", "Path integral (PI) control defines a general class of control problems for which the optimal control computation is equivalent to an inference problem that can be solved by evaluation of a path integral over state trajectories. However, this potential is mostly unused in real-world problems because of two main limitations: first, current approaches can typically only be applied to learn open-loop controllers and second, current sampling procedures are inefficient and not scalable to high dimensional systems. We introduce the efficient Path Integral Relative-Entropy Policy Search (PI-REPS) algorithm for learning feedback policies with PI control. Our algorithm is inspired by information theoretic policy updates that are often used in policy search. We use these updates to approximate the state trajectory distribution that is known to be optimal from the PI control theory. Our approach allows for a principled treatment of different sampling distributions and can be used to estimate many types of parametric or non-parametric feedback controllers. We show that PI-REPS significantly outperforms current methods and is able to solve tasks that are out of reach for current methods." ] }
1706.00935
2623128150
The accurate assessment of White matter hyperintensities (WMH) burden is of crucial importance for epidemiological studies to determine association between WMHs, cognitive and clinical data. The manual delineation of WMHs is tedious, costly and time consuming. This is further complicated by the fact that other pathological features (i.e. stroke lesions) often also appear as hyperintense. Several automated methods aiming to tackle the challenges of WMH segmentation have been proposed, however cannot differentiate between WMH and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. In this work we propose to use a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish between WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. As far as we know, this is the first time such differentiation task has explicitly been proposed. The proposed fully convolutional CNN architecture, is comprised of an analysis path, that gradually learns low and high level features, followed by a synthesis path, that gradually combines and up-samples the low and high level features into a class likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score. Additionally, a comparison of the associations found between clinical risk-factors and the WMH volumes generated by the proposed method, were found to be in line with the associations found with the expert-annotated volumes.
In the following we review existing methods and challenges that are related to our work, especially on Multiple sclerosis (MS), WMH and stroke lesion segmentation in MR imaging. Additionally, some more general CNN segmentation approaches that share architectural similarities with the method we propose here are also reviewed in this section. Over the last few years, there has been an increasing amount of research in these areas @cite_70 @cite_47 @cite_24 @cite_18 . Although some of the methods mentioned here were proposed for segmenting pathologies other than the ones we explore in this work, they can in fact be applied to different tasks. As mentioned before, these methods can be broadly classified into , , and , depending on the amount of expertly annotated data available.
{ "cite_N": [ "@cite_70", "@cite_47", "@cite_18", "@cite_24" ], "mid": [ "2021204548", "2049791419", "2097805840", "2484736472" ], "abstract": [ "Abstract Magnetic resonance (MR) imaging is often used to characterize and quantify multiple sclerosis (MS) lesions in the brain and spinal cord. The number and volume of lesions have been used to evaluate MS disease burden, to track the progression of the disease and to evaluate the effect of new pharmaceuticals in clinical trials. Accurate identification of MS lesions in MR images is extremely difficult due to variability in lesion location, size and shape in addition to anatomical variability between subjects. Since manual segmentation requires expert knowledge, is time consuming and is subject to intra- and inter-expert variability, many methods have been proposed to automatically segment lesions. The objective of this study was to carry out a systematic review of the literature to evaluate the state of the art in automated multiple sclerosis lesion segmentation. From 1240 hits found initially with PubMed and Google scholar, our selection criteria identified 80 papers that described an automatic lesion segmentation procedure applied to MS. Only 47 of these included quantitative validation with at least one realistic image. In this paper, we describe the complexity of lesion segmentation, classify the automatic MS lesion segmentation methods found, and review the validation methods applied in each of the papers reviewed. Although many segmentation solutions have been proposed, including some with promising results using MRI data obtained on small groups of patients, no single method is widely employed due to performance issues related to the high variability of MS lesion appearance and differences in image acquisition. 
The challenge remains to provide segmentation techniques that work in all cases regardless of the type of MS, duration of the disease, or MRI protocol, and this within a comprehensive, standardized validation framework. MS lesion segmentation remains an open problem.", "White matter hyperintensities (WMH) are commonly seen in the brain of healthy elderly subjects and patients with several neurological and vascular disorders. A truly reliable and fully automated method for quantitative assessment of WMH on magnetic resonance imaging (MRI) has not yet been identified. In this paper, we review and compare the large number of automated approaches proposed for segmentation of WMH in the elderly and in patients with vascular risk factors. We conclude that, in order to avoid artifacts and exclude the several sources of bias that may influence the analysis, an optimal method should comprise a careful preprocessing of the images, be based on multimodal, complementary data, take into account spatial information about the lesions and correct for false positives. All these features should not exclude computational leanness and adaptability to available data.", "Over the last 15 years, basic thresholding techniques in combination with standard statistical correlation-based data analysis tools have been widely used to investigate different aspects of evolution of acute or subacute to late stage ischemic stroke in both human and animal data. Yet, a wave of biology-dependent and imaging-dependent issues is still untackled pointing towards the key question: “how does an ischemic stroke evolve?” Paving the way for potential answers to this question, both magnetic resonance (MRI) and CT (computed tomography) images have been used to visualize the lesion extent, either with or without spatial distinction between dead and salvageable tissue. Combining diffusion and perfusion imaging modalities may provide the possibility of predicting further tissue recovery or eventual necrosis. 
Going beyond these basic thresholding techniques, in this critical appraisal, we explore different semi-automatic or fully automatic 2D 3D medical image analysis methods and mathematical models applied to human, animal (rats rodents) and or synthetic ischemic stroke to tackle one of the following three problems: (1) segmentation of infarcted and or salvageable (also called penumbral) tissue, (2) prediction of final ischemic tissue fate (death or recovery) and (3) dynamic simulation of the lesion core and or penumbra evolution. To highlight the key features in the reviewed segmentation and prediction methods, we propose a common categorization pattern. We also emphasize some key aspects of the methods such as the imaging modalities required to build and test the presented approach, the number of patients animals or synthetic samples, the use of external user interaction and the methods of assessment (clinical or imaging-based). Furthermore, we investigate how any key difficulties, posed by the evolution of stroke such as swelling or reperfusion, were detected (or not) by each method. In the absence of any imaging-based macroscopic dynamic model applied to ischemic stroke, we have insights into relevant microscopic dynamic models simulating the evolution of brain ischemia in the hope to further promising and challenging 4D imaging-based dynamic models. By depicting the major pitfalls and the advanced aspects of the different reviewed methods, we present an overall critique of their performances and concluded our discussion by suggesting some recommendations for future research work focusing on one or more of the three addressed problems.", "Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study relies on non-invasive imaging. 
Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to perform superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org)." ] }
1706.00935
2623128150
The accurate assessment of White matter hyperintensities (WMH) burden is of crucial importance for epidemiological studies to determine association between WMHs, cognitive and clinical data. The manual delineation of WMHs is tedious, costly and time consuming. This is further complicated by the fact that other pathological features (i.e. stroke lesions) often also appear as hyperintense. Several automated methods aiming to tackle the challenges of WMH segmentation have been proposed, however cannot differentiate between WMH and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. In this work we propose to use a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish between WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. As far as we know, this is the first time such differentiation task has explicitly been proposed. The proposed fully convolutional CNN architecture, is comprised of an analysis path, that gradually learns low and high level features, followed by a synthesis path, that gradually combines and up-samples the low and high level features into a class likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score. Additionally, a comparison of the associations found between clinical risk-factors and the WMH volumes generated by the proposed method, were found to be in line with the associations found with the expert-annotated volumes.
Using multi-resolution inputs @cite_3 @cite_22 @cite_49 can increase the field of view with smaller feature maps, while also allowing more non-linearities (more layers) to be used at higher resolution, both of which are desired properties. However, down-sampling patches has the drawback that valuable information is being discarded before any processing is done, and since filters learned by the first few layers of CNNs tend to be basic feature detectors, e.g. lines or curves, different paths risk capturing redundant information. Furthermore, although convolutions performed in 3D as in @cite_3 intuitively make sense for 3D volumetric images, FLAIR image acquisitions are actually often acquired as 2D images with large slice thickness and then stacked into a 3D volume. Further to this, gold standard annotations, such as those generated by trained radiologists (e.g. WMH delineation or Fazekas scores) are usually derived by assessing images slice by slice. Thus, as pointed out by @cite_22 , 3D convolutions for FLAIR MR image segmentation are in fact less intuitive.
{ "cite_N": [ "@cite_49", "@cite_22", "@cite_3" ], "mid": [ "2422852360", "2532750509", "2301358467" ], "abstract": [ "Convolutional neural networks (CNN) have been widely used for visual recognition tasks including semantic segmentation of images. While the existing methods consider uniformly sampled single-or multi-scale patches from the neighborhood of each voxel, this approach might be sub-optimal as it captures and processes unnecessary details far away from the center of the patch. We instead propose to train CNNs with non-uniformly sampled patches that allow a wider extent for the sampled patches. This results in more captured contextual information, which is in particular of interest for biomedical image analysis, where the anatomical location of imaging features are often crucial. We evaluate and compare this strategy for white matter hyperintensity segmentation on a test set of 46 MRI scans. We show that the proposed method not only outperforms identical CNNs with uniform patches of the same size (0.780 Dice coefficient compared to 0.736), but also gets very close to the performance of an independent human expert (0.796 Dice coefficient).", "The anatomical location of imaging features is of crucial importance for accurate diagnosis in many medical tasks. Convolutional neural networks (CNN) have had huge successes in computer vision, but they lack the natural ability to incorporate the anatomical location in their decision making process, hindering success in some medical image analysis tasks. In this paper, to integrate the anatomical location information into the network, we propose several deep CNN architectures that consider multi-scale patches or take explicit location features while training. We apply and compare the proposed architectures for segmentation of white matter hyperintensities in brain MR images on a large dataset. 
As a result, we observe that the CNNs that incorporate location information substantially outperform a conventional segmentation method with handcrafted features as well as CNNs that do not integrate location information. On a test set of 50 scans, the best configuration of our networks obtained a Dice score of 0.792, compared to 0.805 for an independent human observer. Performance levels of the machine and the independent human observer were not statistically significantly different (p-value = 0.06).", "This work is supported by the EPSRC First Grant scheme (grant ref no. EP N023668 1) and partially funded under the 7th Framework Programme by the European Commission (TBIcare: http: www.tbicare.eu ; CENTER-TBI: https: www.center-tbi.eu ). This work was further supported by a Medical Research Council (UK) Program Grant (Acute brain injury: heterogeneity of mechanisms, therapeutic targets and outcome effects [G9439390 ID 65883]), the UK National Institute of Health Research Biomedical Research Centre at Cambridge and Technology Platform funding provided by the UK Department of Health. KK is supported by the Imperial College London PhD Scholarship Programme. VFJN is supported by a Health Foundation Academy of Medical Sciences Clinician Scientist Fellowship. DKM is supported by an NIHR Senior Investigator Award. We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan X GPUs for our research." ] }
1706.00923
2953043284
Inferring trust relations between social media users is critical for a number of applications wherein users seek credible information. The fact that available trust relations are scarce and skewed makes trust prediction a challenging task. To the best of our knowledge, this is the first work on exploring representation learning for trust prediction. We propose an approach that uses only a small amount of binary user-user trust relations to simultaneously learn user embeddings and a model to predict trust between user pairs. We empirically demonstrate that for trust prediction, our approach outperforms classifier-based approaches which use state-of-the-art representation learning methods like DeepWalk and LINE as features. We also conduct experiments which use embeddings pre-trained with DeepWalk and LINE each as an input to our model, resulting in further performance improvement. Experiments with a dataset of @math 356K user pairs show that the proposed method can obtain a high F-score of 92.65 .
: The binary trust prediction problem can be posed as a classification problem. Typically, a binary classifier is trained using the available trust information between a small number of user pairs as labels. @cite_10 present a trust-inducing framework composed of factors pertaining to knowledge, similarity, propensity, reputation and relationship. Features derived from it, using both structural (network) information and contextual data (users' product ratings), are used for the trust versus no-trust classification. @cite_15 provide a taxonomy to obtain a set of relevant features derived from user attributes and user interactions for predicting trust. In @cite_11 , quantitative trust prediction models are proposed on the basis of the Trust Antecedent Framework from organizational behavior research. @cite_7 use side information, namely user ratings for online product reviews, for trust prediction by defining and computing features.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_7", "@cite_11" ], "mid": [ "1982944175", "", "2017533400", "2151859395" ], "abstract": [ "Trust between a pair of users is an important piece of information for users in an online community (such as electronic commerce websites and product review websites) where users may rely on trust information to make decisions. In this paper, we address the problem of predicting whether a user trusts another user. Most prior work infers unknown trust ratings from known trust ratings. The effectiveness of this approach depends on the connectivity of the known web of trust and can be quite poor when the connectivity is very sparse which is often the case in an online community. In this paper, we therefore propose a classification approach to address the trust prediction problem. We develop a taxonomy to obtain an extensive set of relevant features derived from user attributes and user interactions in an online community. As a test case, we apply the approach to data collected from Epinions, a large product review community that supports various types of interactions as well as a web of trust that can be used for training and evaluation. Empirical results show that the trust among users can be effectively predicted using pre-trained classifiers.", "", "Trust relationships between users in various online communities are notoriously hard to model for computer scientists. It can be easily verified that trying to infer trust based on the social network alone is often inefficient. Therefore, the avenue we explore is applying Data Mining algorithms to unearth latent relationships and patterns from background data. In this paper, we focus on a case where the background data are user ratings for online product reviews. We consider as a testing ground a large dataset provided by Epinions.com that contains a trust network as well as user ratings for reviews on products from a wide range of categories. 
In order to predict trust we define and compute a critical set of features, which we show to be highly effective in providing the basis for trust predictions. Then, we show that state-of-the-art classifiers can do an impressive job in predicting trust based on our extracted features. For this, we employ a variety of measures to evaluate the classification based on these features. We show that by carefully collecting and synthesizing readily available background information, such as ratings for online reviews, one can accurately predict social links based on trust.", "This paper analyzes the trustor and trustee factors that lead to inter-personal trust using a well studied Trust Antecedent framework in management science mayer . To apply these factors to trust ranking problem in online rating systems, we derive features that correspond to each factor and develop different trust ranking models. The advantage of this approach is that features relevant to trust can be systematically derived so as to achieve good prediction accuracy. Through a series of experiments on real data from Epinions, we show that even a simple model using the derived features yields good accuracy and outperforms MoleTrust, a trust propagation based model. SVM classifiers using these features also show improvements." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
This category of research tries to infer the edges of a network by using cascade information, which in most cases is the infection times of nodes in different cascades @cite_24 @cite_53 @cite_37 @cite_1 @cite_13 @cite_57 . @cite_45 presents a comprehensive survey of previous works in this area. In the following, we describe some of the most important works in this category.
{ "cite_N": [ "@cite_37", "@cite_53", "@cite_1", "@cite_24", "@cite_57", "@cite_45", "@cite_13" ], "mid": [ "2547263067", "2949064044", "2952347589", "", "2949499549", "2054476043", "" ], "abstract": [ "The spread of information cascades over social networks forms the diffusion networks. The latent structure of diffusion networks makes the problem of extracting diffusion links difficult. As observing the sources of information is not usually possible, the only available prior knowledge is the infection times of individuals. We confront these challenges by proposing a new method called to extract the diffusion networks by using the time-series data. We model the diffusion process on information networks as a Markov random walk process and develop an algorithm to discover the most probable diffusion links. We validate our model on both synthetic and real data and show the low dependency of our method to the number of transmitting cascades over the underlying networks. Moreover, The proposed model can speed up the extraction process up to 300 times with respect to the existing state of the art method.", "In many real-world scenarios, it is nearly impossible to collect explicit social network data. In such cases, whole networks must be inferred from underlying observations. Here, we formulate the problem of inferring latent social networks based on network diffusion or disease propagation data. We consider contagions propagating over the edges of an unobserved social network, where we only observe the times when nodes became infected, but not who infected them. Given such node infection times, we then identify the optimal network that best explains the observed data. We present a maximum likelihood approach based on convex programming with a l1-like penalty term that encourages sparsity. Experiments on real and synthetic data reveal that our method near-perfectly recovers the underlying network structure as well as the parameters of the contagion propagation model. 
Moreover, our approach scales well as it can infer optimal networks of thousands of nodes in a matter of minutes.", "Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.", "", "Diffusion and propagation of information, influence and diseases take place over increasingly larger networks. We observe when a node copies information, makes a decision or becomes infected but networks are often hidden or unobserved. Since networks are highly dynamic, changing and growing rapidly, we only observe a relatively small set of cascades before a network changes significantly. Scalable network inference based on a small cascade set is then necessary for understanding the rapidly evolving dynamics that govern diffusion. 
In this article, we develop a scalable approximation algorithm with provable near-optimal performance based on submodular maximization which achieves a high accuracy in such scenario, solving an open problem first introduced by Gomez- (2010). Experiments on synthetic and real diffusion data show that our algorithm in practice achieves an optimal trade-off between accuracy and running time.", "Online social networks play a major role in the spread of information at very large scale. A lot of effort have been made in order to understand this phenomenon, ranging from popular topic detection to information diffusion modeling, including influential spreaders identification. In this article, we present a survey of representative methods dealing with these issues and propose a taxonomy that summarizes the state-of-the-art. The objective is to provide a comprehensive analysis and guide of existing efforts around information diffusion in social networks. This survey is intended to help researchers in quickly understanding existing works and possible improvements to bring.", "" ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
NETRATE is an improvement over NETINF. It assumes that cascades occur at different rates and temporally infers heterogeneous interactions with different transmission rates, which is closer to reality @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "2952347589" ], "abstract": [ "Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
CoNNIE improves NETINF by adding an optimal and robust approach which uses prior probabilistic knowledge about the relation between infection times @cite_53 .
{ "cite_N": [ "@cite_53" ], "mid": [ "2949064044" ], "abstract": [ "In many real-world scenarios, it is nearly impossible to collect explicit social network data. In such cases, whole networks must be inferred from underlying observations. Here, we formulate the problem of inferring latent social networks based on network diffusion or disease propagation data. We consider contagions propagating over the edges of an unobserved social network, where we only observe the times when nodes became infected, but not who infected them. Given such node infection times, we then identify the optimal network that best explains the observed data. We present a maximum likelihood approach based on convex programming with a l1-like penalty term that encourages sparsity. Experiments on real and synthetic data reveal that our method near-perfectly recovers the underlying network structure as well as the parameters of the contagion propagation model. Moreover, our approach scales well as it can infer optimal networks of thousands of nodes in a matter of minutes." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
Similar to NETINF, MultiTree models cascades with trees, but considers all the possible trees. With this approach, its accuracy is higher than that of NETINF, NETRATE and CoNNIE when a low number of cascades is available. Although its running time is several orders of magnitude lower than that of other tree-based algorithms, it is still not scalable @cite_57 .
{ "cite_N": [ "@cite_57" ], "mid": [ "2949499549" ], "abstract": [ "Diffusion and propagation of information, influence and diseases take place over increasingly larger networks. We observe when a node copies information, makes a decision or becomes infected but networks are often hidden or unobserved. Since networks are highly dynamic, changing and growing rapidly, we only observe a relatively small set of cascades before a network changes significantly. Scalable network inference based on a small cascade set is then necessary for understanding the rapidly evolving dynamics that govern diffusion. In this article, we develop a scalable approximation algorithm with provable near-optimal performance based on submodular maximization which achieves a high accuracy in such scenario, solving an open problem first introduced by Gomez- (2010). Experiments on synthetic and real diffusion data show that our algorithm in practice achieves an optimal trade-off between accuracy and running time." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
: These studies create artificial diffusion over the network and offer a community detection algorithm that uses this diffusion as part of its procedure. In multi-agent approaches, information such as color is exchanged between nodes via the edges according to different diffusion models. In the end, nodes with the same colors are detected as a community @cite_9 . Some works use label propagation in order to define communities: nodes adopt the maximum label of their neighbors in an iterative process, and finally nodes with identical labels form communities @cite_17 @cite_47 @cite_28 . Another line of research uses the number of common neighbors between nodes to define the weight of edges as a similarity measure @cite_31 . Based on this assumption, another work introduces a new concept named diffusion, which combines weighted edges in a hierarchical method to detect communities @cite_7 .
{ "cite_N": [ "@cite_31", "@cite_7", "@cite_28", "@cite_9", "@cite_47", "@cite_17" ], "mid": [ "2111656864", "2039842461", "2117526408", "2127245701", "", "2132202037" ], "abstract": [ "Discovering underlying communities in networks is an important task in network analysis. In the last decade, a large variety of algorithms have been proposed. However, most of them require global information or a centralized control. Those algorithms are infeasible in large-scale real networks due to computation and accessibility. In this paper, we propose a novel decentralized community detection algorithm based on information diffusion. We believe information diffusion in human society can allow us to understand the emergence of community structure. Being able to find out some critical nodes which play an important role in the formation of a community is an important byproduct for our algorithm. Experiments on various networks, including benchmark networks and synthetic networks, show that it is comparable to three decentralized algorithms and two representative centralized algorithms, in terms of stability and accuracy.", "Community discovery is one of the most important steps to understand the social networks. We propose a hierarchical diffusion method to detect the community structure. Our algorithm is based on the idea that people in different communities usually share less common friends. We also make use of the fact that people usually make decisions based others’choices, especially their friends’. Our algorithm can distinguish between pseudo-communities and meaningful ones. 
Tests on both classical and synthetic benchmarks show that our algorithm is comparable to state-of-the-art community detection algorithms in both computational complexity and accuracy measured by the so-called normalized mutual information.", "The recent boom of large-scale online social networks (OSNs) both enables and necessitates the use of parallelizable and scalable computational techniques for their analysis. We examine the problem of real-time community detection and a recently proposed linear time--- @math on a network with @math edges---label propagation, or epidemic'' community detection algorithm. We identify characteristics and drawbacks of the algorithm and extend it by incorporating different heuristics to facilitate reliable and multifunctional real-time community detection. With limited computational resources, we employ the algorithm on OSN data with @math nodes and about @math directed edges. Experiments and benchmarks reveal that the extended algorithm is not only faster but its community detection accuracy compares favorably over popular modularity-gain optimization algorithms known to suffer from their resolution limits.", "Research has shown that many social networks come into being hierarchically based on some basic building blocks called communities, within which the social interactions are very intensive, but between which they are very weak. Network community mining algorithms aim at efficiently and effectively discovering all such communities from a given network. Many related methods have been proposed and applied to different areas including social network analysis, gene network analysis and web clustering engine. Most of the existing methods for mining communities are centralized. In this paper, we present a multi-agent based decentralized algorithm, in which a group of autonomous agents work together to mine a network through a proposed self-aggregation and self-organization mechanism. 
Thanks to its decentralized feature, our method is potentially suitable for dealing with distributed networks, whose global structures are hard to obtain due to their geographical distributions, decentralized controls or huge sizes. The effectiveness of our method has been tested against different benchmark networks.", "", "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
: The only major work in this category proposes a stochastic generative model named CCN that uses both the complete network topology and diffusion information to extract communities @cite_8 . This model does not make any special assumption about link formation, the arrival probability of a contagion at each node, or cascade models over the network, and instead tries to model them with a random process. Despite good features such as overlapping community detection, this method suffers from high running time complexity. In @cite_6 , the authors extend CCN to the setting where the underlying network is not available. This method adopts a mathematical model similar to CCN, and uses an independent cascade assumption to introduce a model called C-IC, and also uses NETRATE to introduce a model called C-Rate. The output of this method is the network communities, without producing the links of the network.
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "1542058102", "2117289809" ], "abstract": [ "This article presents a hub-based approach to community finding in complex networks. After identifying the network nodes with highest degree (the so-called hubs), the network is flooded with wavefronts of labels emanating from the hubs, accounting for the identification of the involved communities. The simplicity and potential of this method, which is presented for direct undirected and weighted unweighted networks, is illustrated with respect to the Zachary karate club data, image segmentation, and concept association. Attention is also given to the identification of the boundaries between communities.", "Given a directed social graph and a set of past informa- tion cascades observed over the graph, we study the novel problem of detecting modules of the graph (communities of nodes), that also explain the cascades. Our key observation is that both information propagation and social ties forma- tion in a social network can be explained according to the same latent factor, which ultimately guide a user behavior within the network. Based on this observation, we propose the Community-Cascade Network (CCN) model, a stochas- tic mixture membership generative model that can fit, at the same time, the social graph and the observed set of cas- cades. Our model produces overlapping communities and for each node, its level of authority and passive interest in each community it belongs. For learning the parameters of the CCN model, we devise a Generalized Expectation Maximization procedure. We then apply our model to real-world social networks and in- formation cascades: the results witness the validity of the proposed CCN model, providing useful insights on its signif- icance for analyzing social behavior." ] }
1706.00941
2624346620
The fast growth of social networks and their privacy requirements in recent years has led to increasing difficulty in obtaining the complete topology of these networks. However, diffusion information over these networks is available, and many algorithms have been proposed to infer the underlying networks by using this information. The previously proposed algorithms only focus on inferring more links and do not pay attention to the important characteristics of the underlying social networks. In this paper, we propose a novel algorithm, called DANI, to infer the underlying network structure while preserving its properties by using the diffusion information. Moreover, the running time of the proposed method is considerably lower than that of previous methods. We applied the proposed method to both real and synthetic networks. The experimental results showed that DANI has higher accuracy and lower run time compared to well-known network inference methods.
Network inference is the main goal of this article, which is most related to the works explained in the category above. Community detection algorithms that are based on diffusion concepts are not relevant to the method proposed in this paper, because they assume that the complete network topology is accessible. On the other hand, @cite_6 , which is the only community detection work that does not make the aforementioned assumption, utilizes a network inference approach ( @cite_1 ).
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2952347589", "1542058102" ], "abstract": [ "Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected -- but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data -- observed infection times of nodes -- we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.", "This article presents a hub-based approach to community finding in complex networks. After identifying the network nodes with highest degree (the so-called hubs), the network is flooded with wavefronts of labels emanating from the hubs, accounting for the identification of the involved communities. The simplicity and potential of this method, which is presented for direct undirected and weighted unweighted networks, is illustrated with respect to the Zachary karate club data, image segmentation, and concept association. Attention is also given to the identification of the boundaries between communities." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
Other methods add an auto-encoder part to an existing network with the goal of enforcing efficient representations ( @cite_41 @cite_40 @cite_1 ).
{ "cite_N": [ "@cite_41", "@cite_40", "@cite_1" ], "mid": [ "2162262658", "2159291644", "" ], "abstract": [ "Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training.", "We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.", "" ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
Recently, @cite_20 introduced a regularization term that uses unlabeled data to push decision boundaries of neural networks to less dense areas of decision space and enforces mutual exclusivity of classes in a classification task. When combined with a cost function that enforces invariance to random transformations as in @cite_0 , state-of-the-art results on various classification tasks can be obtained.
{ "cite_N": [ "@cite_0", "@cite_20" ], "mid": [ "2431080869", "2412035906" ], "abstract": [ "Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.", "In this paper we consider the problem of semi-supervised learning with deep Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated on the observation that unlabeled data is cheap and can be used to improve the accuracy of classifiers. In this paper we propose an unsupervised regularization term that explicitly forces the classifier's prediction for multiple classes to be mutually-exclusive and effectively guides the decision boundary to lie on the low density space between the manifolds corresponding to different classes of data. Our proposed approach is general and can be used with any backpropagation-based learning method. We show through different experiments that our method can improve the object recognition performance of ConvNets using unlabeled data." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
@cite_39 propose to use Restricted Boltzmann Machines ( @cite_25 ) to pre-train a network layer-wise with unlabeled data in an auto-encoder fashion.
{ "cite_N": [ "@cite_25", "@cite_39" ], "mid": [ "1813659000", "44815768" ], "abstract": [ "Abstract : At this early stage in the development of cognitive science, methodological issues are both open and central. There may have been times when developments in neuroscience, artificial intelligence, or cognitive psychology seduced researchers into believing that their discipline was on the verge of discovering the secret of intelligence. But a humbling history of hopes disappointed has produced the realization that understanding the mind will challenge the power of all these methodologies combined. The work reported in this chapter rests on the conviction that a methodology that has a crucial role to play in the development of cognitive science is mathematical analysis. The success of cognitive science, like that of many other sciences, will, I believe, depend upon the construction of a solid body of theoretical results: results that express in a mathematical language the conceptual insights of the field; results that squeeze all possible implications out of those insights by exploiting powerful mathematical techniques. This body of results, which I will call the theory of information processing, exists because information is a concept that lends itself to mathematical formalization. One part of the theory of information processing is already well-developed. The classical theory of computation provides powerful and elegant results about the notion of effective procedure, including languages for precisely expressing them and theoretical machines for realizing them.", "Restricted Boltzmann machines (RBMs) have been used as generative models of many different types of data. RBMs are usually trained using the contrastive divergence learning procedure. This requires a certain amount of practical experience to decide how to set the values of numerical meta-parameters. Over the last few years, the machine learning group at the University of Toronto has acquired considerable expertise at training RBMs and this guide is an attempt to share this expertise with other machine learning researchers." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
@cite_6 @cite_34 @cite_1 build a neural network upon an auto-encoder that acts as a regularizer and encourages representations that capture the essence of the input.
{ "cite_N": [ "@cite_34", "@cite_1", "@cite_6" ], "mid": [ "2950789693", "", "2439880944" ], "abstract": [ "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8 accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative improvement over the previous state-of-the-art.", "", "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of \"objectness\"." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
A whole new category of unsupervised training methods generates surrogate labels from the data. @cite_11 employ clustering methods that produce weak labels.
{ "cite_N": [ "@cite_11" ], "mid": [ "2519998487" ], "abstract": [ "Attributes offer useful mid-level features to interpret visual data. While most attribute learning methods are supervised by costly human-generated labels, we introduce a simple yet powerful unsupervised approach to learn and predict visual attributes directly from data. Given a large unlabeled image collection as input, we train deep Convolutional Neural Networks (CNNs) to output a set of discriminative, binary attributes often with semantic meanings. Specifically, we first train a CNN coupled with unsupervised discriminative clustering, and then use the cluster membership as a soft supervision to discover shared attributes from the clusters while maximizing their separability. The learned attributes are shown to be capable of encoding rich imagery properties from both natural images and contour patches. The visual representations learned in this way are also transferrable to other tasks such as object detection. We show other convincing results on the related tasks of image retrieval and classification, and contour detection." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
@cite_30 generate surrogate classes from transformed samples from the data set. These transformations have hand-tuned parameters, making it non-trivial to ensure they are capable of representing the variations in an arbitrary data set.
{ "cite_N": [ "@cite_30" ], "mid": [ "2148349024" ], "abstract": [ "Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
In the work of @cite_32 , context prediction is used as a surrogate task. The objective for the network is to predict the relative position of two randomly sampled patches of an image. The size of the patches needs to be manually tuned such that parts of objects in the image are not over- or undersampled.
{ "cite_N": [ "@cite_32" ], "mid": [ "2950187998" ], "abstract": [ "This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
@cite_23 employ a multi-layer LSTM for unsupervised image sequence prediction and reconstruction, leveraging the temporal dimension of videos as the context for individual frames.
{ "cite_N": [ "@cite_23" ], "mid": [ "2952453038" ], "abstract": [ "We use multilayer Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations (\"percepts\") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We try to visualize and interpret the learned features. We stress test the model by running it on longer time scales and on out-of-domain data. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only a few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
The introduction of generative adversarial nets (GANs) @cite_10 enabled a new discipline in unsupervised training. A generator network ( @math ) and a discriminator network ( @math ) are trained jointly, where @math tries to generate images that look as if drawn from an unlabeled data set, whereas @math is supposed to identify the difference between real samples and generated ones. Apart from providing compelling visual results, these networks have been shown to learn useful hierarchical representations @cite_31 .
{ "cite_N": [ "@cite_31", "@cite_10" ], "mid": [ "2173520492", "2099471712" ], "abstract": [ "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1706.00909
2949092679
In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. "Associations" are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.
@cite_5 presents improvements in designing and training GANs; in particular, the authors achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN.
{ "cite_N": [ "@cite_5" ], "mid": [ "2432004435" ], "abstract": [ "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes." ] }
1706.00740
2621245257
Initial value problem involving Atangana-Baleanu derivative is considered. An explicit solution of the given problem is obtained by reducing the differential equation to Volterra integral equation of second kind and by using Laplace transform. To find the solution of the Volterra equation, the successive approximation method is used and a lemma simplifying the resolvent kernel has been presented. The use of the given initial value problem is illustrated by considering a boundary value problem in which the solution is expressed in the form of series expansion using orthogonal basis obtained by separation of variables.
Recently, two new definitions of fractional derivative without singular kernel were suggested, namely, the Caputo-Fabrizio fractional derivative @cite_5 and the Atangana-Baleanu fractional derivative @cite_2 . These new derivatives have been applied to real-life problems, for example, in the fields of thermal science, material science, groundwater modelling and mass-spring systems @cite_12 , @cite_10 , @cite_4 , @cite_11 , @cite_2 , and have been considered in a number of other recent works, see for example, @cite_13 , @cite_9 , @cite_8 , @cite_0 , @cite_3 , @cite_6 . The main difference between these two definitions is that the Caputo-Fabrizio derivative is based on an exponential kernel while the Atangana-Baleanu definition uses the Mittag-Leffler function as a non-local kernel. The non-locality of the kernel gives a better description of memory within structures at different scales. These two new derivatives are defined as follows, and the Atangana-Baleanu fractional derivative is given by where @math denotes a normalization function such that @math and @math is the Mittag-Leffler function of one parameter @cite_7 .
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2313824314", "", "", "2740460687", "2301173684", "2187932833", "", "", "2963641381", "", "2317923533", "", "" ], "abstract": [ "In this work, we consider a number of boundary-value problems for time-fractional heat equation with the recently introduced Caputo-Fabrizio derivative. Using the method of separation of variables, we prove a unique solvability of the stated problems. Moreover, we have found an explicit solution to certain initial value problem for Caputo-Fabrizio fractional order differential equation by reducing the problem to a Volterra integral equation. Different forms of solution were presented depending on the values of the parameter appeared in the problem.", "", "", "In the paper, we present some applications and features related with the new notions of fractional derivatives with a time exponential kernel and with spatial Gauss kernel for gradient and Laplacian operators. Specifically, for these new models we have proved the coherence with the thermodynamic laws. Hence, we have revised the standard linear solid of Zener within continuum mechanics and the model of Cole and Cole inside electromagnetism by these new fractional operators. Moreover, by the Gaussian fractional gradient and through numerical simulations, we have studied the bell shaped filtering effects comparing the results with exponential and Caputo kernel.", "Abstract Recently, Atangana and Baleanu proposed a derivative with fractional order to answer some outstanding questions that were posed by many researchers within the field of fractional calculus. Their derivative has a non-singular and nonlocal kernel. In this paper, we presented further relationship of their derivatives with some integral transform operators. New results are presented. We applied this derivative to a simple nonlinear system. We show in detail the existence and uniqueness of the system solutions of the fractional system. We obtain a chaotic behavior which was not obtained by local derivative.", "We introduce the fractional integral corresponding to the new concept of fractional derivative recently introduced by Caputo and Fabrizio and we study some related fractional differential equations.", "", "", "In this manuscript we proposed a new fractional derivative with non-local and no-singular kernel. We presented some useful properties of the new derivative and applied it to solve the fractional heat transfer model.", "", "Abstract In order to control the movement of waves on the area of shallow water, the newly derivative with fractional order proposed by Caputo and Fabrizio was used. To achieve this, we first proposed a transition from ordinary to fractional differential equation. We proved the existence and uniqueness of the coupled solutions of the modified system using the fixed-point theorem. We derive the special solution of the modified system using an iterative method. We proved the stability of the used method and also the uniqueness of the special solution. We presented the numerical simulations for different values of alpha.", "", "" ] }
1706.00687
2621324842
Exploiting the great expressive power of Deep Neural Network architectures, relies on the ability to train them. While current theoretical work provides, mostly, results showing the hardness of this task, empirical evidence usually differs from this line, with success stories in abundance. A strong position among empirically successful architectures is captured by networks where extensive weight sharing is used, either by Convolutional or Recurrent layers. Additionally, characterizing specific aspects of different tasks, making them "harder" or "easier", is an interesting direction explored both theoretically and empirically. We consider a family of ConvNet architectures, and prove that weight sharing can be crucial, from an optimization point of view. We explore different notions of the frequency, of the target function, proving necessity of the target function having some low frequency components. This necessity is not sufficient - only with weight sharing can it be exploited, thus theoretically separating architectures using it, from others which do not. Our theoretical results are aligned with empirical experiments in an even more general setting, suggesting viability of examination of the role played by interleaving those aspects in broader families of tasks.
Recently, several works have attempted to study the optimization performance of gradient-based methods for neural networks. To mention just a few pertinent examples, @cite_12 @cite_1 @cite_20 @cite_9 @cite_8 consider the optimization landscape of various networks, showing that it has favorable properties under various assumptions, but do not consider the behavior of a specific algorithm. Other works, such as @cite_18 @cite_0 @cite_10 @cite_19 , show how certain neural networks can be learned under (generally strong) assumptions, but not with standard gradient-based methods. Closer to our work, @cite_14 @cite_5 @cite_16 provide positive learning results using gradient-based algorithms, but do not show the benefit of a convolutional architecture for optimization performance, compared to a fully-connected architecture. The hardness of learning in the case of Boolean functions, using the degree of the target function, was discussed in the statistical queries literature, for instance in @cite_21 . In terms of techniques, our construction is inspired by target functions proposed in @cite_13 @cite_17 , and based on ideas from the statistical queries literature (e.g. @cite_4 ) for studying the difficulty of learning with gradient-based methods.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_0", "@cite_19", "@cite_5", "@cite_16", "@cite_10", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2623127191", "", "2113517874", "2064680241", "2952574409", "813605148", "1924023668", "", "2951635495", "2952110295", "2952318479", "2951207584", "1839868949", "", "2399994860", "2509435534" ], "abstract": [ "In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four types of simple problems, for which the gradient-based algorithms commonly used in deep learning either fail or suffer from significant difficulties. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.", "", "We study the effectiveness of learning low degree polynomials using neural networks by the gradient descent method. While neural networks have been shown to have great expressive power, and gradient descent has been widely used in practice for learning neural networks, few theoretical guarantees are known for such methods. In particular, it is well known that gradient descent can get stuck at local minima, even for simple classes of target functions. In this paper, we present several positive theoretical results to support the effectiveness of neural networks. We focus on twolayer neural networks where the bottom layer is a set of non-linear hidden nodes, and the top layer node is a linear function, similar to Barron (1993). 
First we show that for a randomly initialized neural network with sufficiently many hidden units, the generic gradient descent algorithm learns any low degree polynomial, assuming we initialize the weights randomly. Secondly, we show that if we use complex-valued weights (the target function can still be real), then under suitable conditions, there are no \"robust local minima\": the neural network can always escape a local minimum by performing a random perturbation. This property does not hold for real-valued weights. Thirdly, we discuss whether sparse polynomials can be learned with small neural networks, with the size dependent on the sparsity of the target function.", "", "An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as , but was also key to the immense success of . In this work, we put the principle of on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for linear feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLu activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLu activations, but no batch normalization, dropout, or max pool. 
Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.", "Techniques involving factorization are found in a wide range of applications and have enjoyed significant empirical success in many fields. However, common to a vast majority of these problems is the significant disadvantage that the associated optimization problems are typically non-convex due to a multilinear form or other convexity destroying transformation. Here we build on ideas from convex relaxations of matrix factorizations and present a very general framework which allows for the analysis of a wide range of non-convex factorization problems - including matrix factorization, tensor factorization, and deep neural network training formulations. We derive sufficient conditions to guarantee that a local minimum of the non-convex optimization problem is a global minimum and show that if the size of the factorized variables is large enough then from any initialization it is possible to find a global minimizer using a purely local descent algorithm. Our framework also provides a partial theoretical justification for the increasingly common use of Rectified Linear Units (ReLUs) in deep neural networks and offers guidance on deep network architectures and regularization strategies to facilitate efficient optimization.", "A function @math is @math -resilient if all its Fourier coefficients of degree at most @math are zero, i.e., @math is uncorrelated with all low-degree parities. We study the notion of @math @math of Boolean functions, where we say that @math is @math -approximately @math -resilient if @math is @math -close to a @math -valued @math -resilient function in @math distance. We show that approximate resilience essentially characterizes the complexity of agnostic learning of a concept class @math over the uniform distribution. 
Roughly speaking, if all functions in a class @math are far from being @math -resilient then @math can be learned agnostically in time @math and conversely, if @math contains a function close to being @math -resilient then agnostic learning of @math in the statistical query (SQ) framework of Kearns has complexity of at least @math . This characterization is based on the duality between @math approximation by degree- @math polynomials and approximate @math -resilience that we establish. In particular, it implies that @math approximation by low-degree polynomials, known to be sufficient for agnostic learning over product distributions, is in fact necessary. Focusing on monotone Boolean functions, we exhibit the existence of near-optimal @math -approximately @math -resilient monotone functions for all @math . Prior to our work, it was conceivable even that every monotone function is @math -far from any @math -resilient function. Furthermore, we construct simple, explicit monotone functions based on @math and @math that are close to highly resilient functions. Our constructions are based on a fairly general resilience analysis and amplification. These structural results, together with the characterization, imply nearly optimal lower bounds for agnostic learning of monotone juntas.", "", "We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an @math node multilayer neural net that has degree at most @math for some @math and each edge has a random edge weight in @math . Our algorithm learns almost all networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. 
The analysis of the algorithm reveals interesting structure of neural networks with random edge weights.", "We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has @math hidden layers and that the @math -norm of the incoming weights of any neuron is bounded by @math . We present a kernel-based method, such that with probability at least @math , it learns a predictor whose generalization error is at most @math worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in @math , where @math is a function depending on @math and on the activation function, independent of the number of neurons. The algorithm applies to both sigmoid-like activation functions and ReLU-like activation functions. It implies that any sufficiently sparse neural network is learnable in polynomial time.", "Deep learning models are often successfully trained using gradient descent, despite the worst case hardness of the underlying non-convex optimization problem. The key question is then under what conditions can one prove that optimization will succeed. Here we provide a strong result of this kind. We consider a neural net with one hidden layer and a convolutional structure with no overlap and a ReLU activation function. For this architecture we show that learning is NP-complete in the general case, but that when the input distribution is Gaussian, gradient descent converges to the global optimum in polynomial time. To the best of our knowledge, this is the first global optimality guarantee of gradient descent on a convolutional neural network with ReLU activations.", "We show that the standard stochastic gradient decent (SGD) algorithm is guaranteed to learn, in polynomial time, a function that is competitive with the best function in the conjugate kernel space of the network, as defined in Daniely, Frostig and Singer. 
The result holds for log-depth networks from a rich family of architectures. To the best of our knowledge, it is the first polynomial-time guarantee for the standard neural network learning algorithm for networks of depth more that two. As corollaries, it follows that for neural networks of any depth between @math and @math , SGD is guaranteed to learn, in polynomial time, constant degree polynomials with polynomially bounded coefficients. Likewise, it follows that SGD on large enough networks can learn any continuous function (not in polynomial time), complementing classical expressivity results.", "Author(s): Janzamin, M; Sedghi, H; Anandkumar, A | Abstract: Training neural networks is a challenging non-convex optimization problem, and backpropagation or gradient descent can get stuck in spurious local optima. We propose a novel algorithm based on tensor decomposition for guaranteed training of two-layer neural networks. We provide risk bounds for our proposed method, with a polynomial sample complexity in the relevant parameters, such as input dimension and number of neurons. While learning arbitrary target functions is NP-hard, we provide transparent conditions on the function and the input for learnability. Our training method is based on tensor decomposition, which provably converges to the global optimum, under a set of mild non-degeneracy conditions. It consists of simple embarrassingly parallel linear and multi-linear operations, and is competitive with standard stochastic gradient descent (SGD), in terms of computational complexity. Thus, we propose a computationally efficient method with guaranteed risk bounds for training neural networks with one hidden layer.", "", "We use smoothed analysis techniques to provide guarantees on the training loss of Multilayer Neural Networks (MNNs) at differentiable local minima. Specifically, we examine MNNs with piecewise linear activation functions, quadratic loss and a single output, under mild over-parametrization. 
We prove that for a MNN with one hidden layer, the training error is zero at every differentiable local minimum, for almost every dataset and dropout-like noise realization. We then extend these results to the case of more than one hidden layer. Our theoretical guarantees assume essentially nothing on the training data, and are verified numerically. These results suggest why the highly non-convex loss of such MNNs can be easily optimized using local updates (e.g., stochastic gradient descent), as observed empirically.", "Although neural networks are routinely and successfully trained in practice using simple gradient-based methods, most existing theoretical results are negative, showing that learning such networks is difficult, in a worst-case sense over all data distributions. In this paper, we take a more nuanced view, and consider whether specific assumptions on the \"niceness\" of the input distribution, or \"niceness\" of the target function (e.g. in terms of smoothness, non-degeneracy, incoherence, random choice of parameters etc.), are sufficient to guarantee learnability using gradient-based methods. We provide evidence that neither class of assumptions alone is sufficient: On the one hand, for any member of a class of \"nice\" target functions, there are difficult input distributions. On the other hand, we identify a family of simple target functions, which are difficult to learn even if the input distribution is \"nice\". To prove our results, we develop some tools which may be of independent interest, such as extending Fourier-based hardness techniques developed in the context of statistical queries blum1994weakly , from the Boolean cube to Euclidean space and to more general classes of functions." ] }
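The related-work passage in the record above repeatedly appeals to the Fourier degree of a Boolean target function (e.g. parities) as the source of hardness for gradient-based and statistical-query learners. As an illustrative aside — not part of the dataset record and not code from any cited paper — here is a minimal pure-Python sketch of the Fourier coefficient f̂(T) = E_x[f(x)·χ_T(x)] over the hypercube {-1,+1}^n, showing that a parity over a set S puts all of its Fourier mass on the single coefficient indexed by S:

```python
from itertools import product

def chi(T, x):
    # character chi_T(x) = product of x_i over i in T, for x in {-1, +1}^n
    s = 1
    for i in T:
        s *= x[i]
    return s

def fourier_coeff(f, T, n):
    # hat f(T) = E_x[f(x) * chi_T(x)], averaged over all 2^n points
    total = 0.0
    for x in product((-1, 1), repeat=n):
        total += f(x) * chi(T, x)
    return total / 2 ** n

n, S = 4, (0, 2)
parity = lambda x: chi(S, x)  # a degree-|S| parity target

# All Fourier mass sits on the coefficient indexed by S itself:
print(fourier_coeff(parity, S, n))     # 1.0
print(fourier_coeff(parity, (1,), n))  # 0.0
```

Since the target has no low-frequency (low-degree) components apart from level |S|, this is the kind of function the discussion above flags as hard for gradient-based learners.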
1706.00631
2770875074
Face retrieval has received much attention over the past few decades, and many efforts have been made in retrieving face images against pose, illumination, and expression variations. However, the conventional works fail to meet the requirements of a potential and novel task --- retrieving a person's face image at a specific age, especially when the specific 'age' is not given as a numeral, i.e. 'retrieving someone's image at a similar age period to that shown by another person's image'. To tackle this problem, we propose a dual reference face retrieval framework in this paper, where the system takes two inputs: an identity reference image, which indicates the target identity, and an age reference image, which reflects the target age. In our framework, the raw images are first projected onto a joint manifold, which preserves both age and identity locality. Then two similarity metrics, for age and identity, are exploited and optimized using our proposed quartet-based model. The experiments show promising results, outperforming hierarchical methods.
A broad array of research @cite_14 @cite_32 has addressed facial feature representation. As facial feature extraction is not the core part of our framework, we give only a brief review here; for a comprehensive survey, we refer readers to @cite_8 . Early works mainly rely on hand-crafted features such as Gabor @cite_29 , HOG @cite_21 , LBP @cite_5 , or their extensions. However, designing hand-crafted features is a trial-and-error process, which is less than adequate for our purpose. Another branch of research on facial features builds on deep learning. For example, @cite_31 employed a nine-layer deep neural network to extract facial features for face verification, and @cite_0 proposed a carefully designed deep convolutional network for joint face identification-verification.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_29", "@cite_21", "@cite_32", "@cite_0", "@cite_5", "@cite_31" ], "mid": [ "", "2165806354", "", "2161969291", "2054846461", "", "2163808566", "2145287260" ], "abstract": [ "", "Over the last decade facial feature extraction has been actively researched for face recognition. This paper provides an up-to-date review of major human facia recognition research. Earlier sections we presented an overview of face recognition and its applications. In later sections, a literature review of the most recent face recognition technique is presented. The most prominent feature extraction and the techniques are also given. Finally, we summarized all research results discussed.", "", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Sparse representation based classification (SRC) has recently been proposed for robust face recognition. To deal with occlusion, SRC introduces an identity matrix as an occlusion dictionary on the assumption that the occlusion has sparse representation in this dictionary. 
However, the results show that SRC's use of this occlusion dictionary is not nearly as robust to large occlusion as it is to random pixel corruption. In addition, the identity matrix renders the expanded dictionary large, which results in expensive computation. In this paper, we present a novel method, namely structured sparse representation based classification (SSRC), for face recognition with occlusion. A novel structured dictionary learning method is proposed to learn an occlusion dictionary from the data instead of an identity matrix. Specifically, a mutual incoherence of dictionaries regularization term is incorporated into the dictionary learning objective function which encourages the occlusion dictionary to be as independent as possible of the training sample dictionary. So that the occlusion can then be sparsely represented by the linear combination of the atoms from the learned occlusion dictionary and effectively separated from the occluded face image. The classification can thus be efficiently carried out on the recovered non-occluded face images and the size of the expanded dictionary is also much smaller than that used in SRC. The extensive experiments demonstrate that the proposed method achieves better results than the existing sparse representation based face recognition methods, especially in dealing with large region contiguous occlusion and severe illumination variation, while the computational cost is much lower.", "", "This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. 
Other applications and several extensions are also discussed", "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance." ] }
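The related-work text in the record above lists LBP among the hand-crafted facial descriptors. As an illustrative aside — not drawn from the dataset or from any cited paper's implementation — here is a minimal pure-Python sketch of the basic 8-neighbor LBP code at a single pixel: each neighbor, read clockwise from the top-left, contributes one bit of an 8-bit code indicating whether it is at least as bright as the center.

```python
def lbp_code(img, r, c):
    """Basic 3x3 LBP code at pixel (r, c) of a 2D grayscale list.

    Neighbors are read clockwise from the top-left; bit i is 1 iff the
    i-th neighbor is >= the center value, giving an 8-bit texture code.
    """
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise order
    code = 0
    for dr, dc in offsets:
        code = (code << 1) | (1 if img[r + dr][c + dc] >= center else 0)
    return code

img = [
    [5, 1, 9],
    [3, 4, 2],
    [7, 8, 6],
]
print(lbp_code(img, 1, 1))  # 174  (bit pattern 10101110)
```

A full LBP descriptor, as used in the cited face-recognition work, histograms these per-pixel codes over local regions and concatenates the histograms.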
1706.00631
2770875074
Face retrieval has received much attention over the past few decades, and many efforts have been made in retrieving face images against pose, illumination, and expression variations. However, the conventional works fail to meet the requirements of a potential and novel task --- retrieving a person's face image at a specific age, especially when the specific 'age' is not given as a numeral, i.e. 'retrieving someone's image at a similar age period to that shown by another person's image'. To tackle this problem, we propose a dual reference face retrieval framework in this paper, where the system takes two inputs: an identity reference image, which indicates the target identity, and an age reference image, which reflects the target age. In our framework, the raw images are first projected onto a joint manifold, which preserves both age and identity locality. Then two similarity metrics, for age and identity, are exploited and optimized using our proposed quartet-based model. The experiments show promising results, outperforming hierarchical methods.
Once a proper facial image representation is selected, retrieval is conducted based on similarity measurements. Many works @cite_3 @cite_34 @cite_7 @cite_10 focus on similarity metric learning. @cite_26 introduced a contrastive loss to ensure that neighbors are pulled together while non-neighbors are pushed apart under the learned metric. Unlike the contrastive loss, which considers only pairwise examples at a time, @cite_27 and @cite_17 proposed the triplet loss, which minimizes the @math -distance between an anchor and a positive sample, both belonging to the same instance, and maximizes the distance between the anchor and a negative sample. However, the traditional triplet loss may lead to a large intra-class variation during testing. @cite_28 added a fourth sample to the triplet to enlarge the inter-class variation and thus reduce the intra-class variation.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_28", "@cite_3", "@cite_27", "@cite_34", "@cite_10", "@cite_17" ], "mid": [ "2138621090", "2574678123", "2952976870", "2621210799", "1975517671", "2766596763", "2609698233", "2096733369" ], "abstract": [ "Dimensionality reduction involves mapping a set of high dimensional input points onto a low dimensional manifold so that 'similar\" points in input space are mapped to nearby points on the manifold. We present a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold. The learning relies solely on neighborhood relationships and does not require any distancemeasure in the input space. The method can learn mappings that are invariant to certain transformations of the inputs, as is demonstrated with a number of experiments. Comparisons are made to other techniques, in particular LLE.", "In this paper, we propose to learn cross-view binary identities (CBI) for fast person re-identification. To achieve this, two sets of discriminative hash functions for two different views are learned by simultaneously minimising their distance in the Hamming space, and maximising the cross-covariance and margin. Thus, similar binary codes can be found for images of a same person captured at different views by embedding the images into the Hamming space. Therefore, person re-identification can be solved by efficiently computing and ranking the Hamming distances between the images. Extensive experiments are conducted on two public datasets and CBI produces comparable results as state-of-the-art re-identification approaches but is at least 2200 times faster.", "Person re-identification (ReID) is an important task in wide area video surveillance which focuses on identifying people across different cameras. Recently, deep learning networks with a triplet loss become a common framework for person ReID. 
However, the triplet loss pays main attentions on obtaining correct orders on the training set. It still suffers from a weaker generalization capability from the training set to the testing set, thus resulting in inferior performance. In this paper, we design a quadruplet loss, which can lead to the model output with a larger inter-class variation and a smaller intra-class variation compared to the triplet loss. As a result, our model has a better generalization ability and can achieve a higher performance on the testing set. In particular, a quadruplet deep network using a margin-based online hard negative mining is proposed based on the quadruplet loss for the person ReID. In extensive experiments, the proposed network outperforms most of the state-of-the-art algorithms on representative datasets which clearly demonstrates the effectiveness of our proposed method.", "Recently considerable efforts have been dedicated to unconstrained face recognition, which requires to identify faces “in the wild” for a set of images and or video frames captured without human intervention. Unlike traditional face recognition that compares one-to-one media (either a single image or a video frame) only, we encounter a problem of matching sets with heterogeneous contents containing both images and videos. In this paper, we propose a novel set-to-set (S2S) distance measure to calculate the similarity between two sets with the aim to improve the recognition accuracy for faces with real-world challenges, such as extreme poses or severe illumination conditions. Our S2S distance adopts the kNN -average pooling for the similarity scores computed on all the media in two sets, making the identification far less susceptible to the poor representations (outliers) than traditional feature-average pooling and score-average pooling. Furthermore, we show that various metrics can be embedded into our S2S distance framework, including both predefined and learned ones. 
This allows to choose the appropriate metric depending on the recognition task in order to achieve the best results. To evaluate the proposed S2S distance, we conduct extensive experiments on the challenging set-based IJB-A face data set, which demonstrate that our algorithm achieves the state-of-the-art results and is clearly superior to the baselines, including several deep learning-based face recognition algorithms.", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "The recent years have witnessed the emerging of vector quantization (VQ) techniques for efficient similarity search. VQ partitions the feature space into a set of codewords and encodes data points as integer indices using the codewords. Then the distance between data points can be efficiently approximated by simple memory lookup operations. By the compact quantization, the storage cost, and searching complexity are significantly reduced, thereby facilitating efficient large-scale similarity search. However, the performance of several celebrated VQ approaches degrades significantly when dealing with noisy data. In addition, it can barely facilitate a wide range of applications as the distortion measurement only limits to @math norm . 
To address the shortcomings of the squared Euclidean ( @math norm) loss function employed by the VQ approaches, in this paper, we propose a novel robust and general VQ framework, named RGVQ, to enhance both robustness and generalization of VQ approaches. Specifically, a @math -norm loss function is proposed to conduct the @math -norm similarity search, rather than the @math norm search, and the @math -th order loss is used to enhance the robustness. Despite the fact that changing the loss function to @math norm makes VQ approaches more robust and generic, it brings us a challenge that a non-smooth and non-convex orthogonality constrained @math - norm function has to be minimized. To solve this problem, we propose a novel and efficient optimization scheme and specify it to VQ approaches and theoretically prove its convergence. Extensive experiments on benchmark data sets demonstrate that the proposed RGVQ is better than the original VQ for several approaches, especially when searching similarity in noisy data.", "By transferring knowledge from the abundant labeled samples of known source classes, zero-shot learning (ZSL) makes it possible to train recognition models for novel target classes that have no labeled samples. Conventional ZSL approaches usually adopt a two-step recognition strategy, in which the test sample is projected into an intermediary space in the first step, and then the recognition is carried out by considering the similarity between the sample and target classes in the intermediary space. Due to this redundant intermediate transformation, information loss is unavoidable, thus degrading the performance of overall system. Rather than adopting this two-step strategy, in this paper, we propose a novel one-step recognition framework that is able to perform recognition in the original feature space by using directly trained classifiers. 
To address the lack of labeled samples for training supervised classifiers for the target classes, we propose to transfer samples from source classes with pseudo labels assigned, in which the transferred samples are selected based on their transferability and diversity. Moreover, to account for the unreliability of pseudo labels of transferred samples, we modify the standard support vector machine formulation such that the unreliable positive samples can be recognized and suppressed in the training phase. The entire framework is fairly general with the possibility of further extensions to several common ZSL settings. Extensive experiments on four benchmark data sets demonstrate the superiority of the proposed framework, compared with the state-of-the-art approaches, in various settings.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors." ] }
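The related-work text in the record above contrasts the contrastive, triplet, and quadruplet losses. As an illustrative aside — a sketch of the standard formulations, not code from any cited paper, with the margin values chosen arbitrarily — the triplet loss penalizes an anchor-positive distance that is not at least a margin smaller than the anchor-negative distance, and the quadruplet variant adds a second hinge involving a fourth sample from a different negative class:

```python
def euclid(u, v):
    # plain Euclidean distance between two equal-length embeddings
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, pos, neg, margin=1.0):
    # hinge on d(a,p) - d(a,n) + margin; zero once the margin is satisfied
    return max(0.0, euclid(anchor, pos) - euclid(anchor, neg) + margin)

def quadruplet_loss(anchor, pos, neg1, neg2, m1=1.0, m2=0.5):
    # second hinge compares d(a,p) against the distance between two
    # negatives from *different* classes, shrinking intra-class variation
    return (triplet_loss(anchor, pos, neg1, m1)
            + max(0.0, euclid(anchor, pos) - euclid(neg1, neg2) + m2))

a, p = (0.0, 0.0), (0.0, 1.0)
print(triplet_loss(a, p, (0.0, 3.0)))  # 0.0  (margin already satisfied)
print(triplet_loss(a, p, (0.0, 1.5)))  # 0.5  (violates the margin by 0.5)
```

In practice these losses are applied to learned embeddings and minimized jointly with the network parameters; the sketch only shows the loss arithmetic itself.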
1706.00672
2963835147
We propose a new framework that extends the standard Probability Hypothesis Density (PHD) filter for multiple targets having N ⩾ 2 different types based on Random Finite Set theory, taking into account not only background clutter, but also confusions among detections of different target types, which are in general different in character from background clutter. Under Gaussianity and linearity assumptions, our framework extends the existing Gaussian mixture (GM) implementation of the standard PHD filter to create an N-type GM-PHD filter. The methodology is applied to real video sequences by integrating object detectors’ information into this filter for two scenarios. For both cases, Munkres’s variant of the Hungarian assignment algorithm is used to associate tracked target identities between frames. This approach is evaluated and compared to both raw detection and independent GM-PHD filters using the Optimal Sub-pattern Assignment metric and discrimination rate. This shows the improved performance of our strategy on real video sequences.
Traditionally, multi-target trackers including GNN @cite_33 , JPDAF @cite_32 , and MHT @cite_19 are based on the concept of finding associations between targets and measurements. However, these approaches have faced challenges not only in the uncertainty caused by data association but also in algorithmic complexity that increases exponentially with the number of targets and measurements. For instance, the total number of possible hypotheses in MHT increases exponentially with time, and heuristic pruning/merging of hypotheses is performed to reduce computational cost.
{ "cite_N": [ "@cite_19", "@cite_32", "@cite_33" ], "mid": [ "2143073828", "2107401725", "2167462877" ], "abstract": [ "This paper describes a probabilistic multiple-hypothesis framework for tracking highly articulated objects. In this framework, the probability density of the tracker state is represented as a set of modes with piecewise Gaussians characterizing the neighborhood around these modes. The temporal evolution of the probability density is achieved through sampling from the prior distribution, followed by local optimization of the sample positions to obtain updated modes. This method of generating hypotheses from state-space search does not require the use of discrete features unlike classical multiple-hypothesis tracking. The parametric form of the model is suited for high dimensional state-spaces which cannot be efficiently modeled using non-parametric approaches. Results are shown for tracking Fred Astaire in a movie dance sequence.", "We describe a framework that explicitly reasons about data association to improve tracking performance in many difficult visual environments. A hierarchy of tracking strategies results from ascribing ambiguous or missing data to: 1) noise-like visual occurrences, 2) persistent, known scene elements (i.e., other tracked objects), or 3) persistent, unknown scene elements. First, we introduce a randomized tracking algorithm adapted from an existing probabilistic data association filter (PDAF) that is resistant to clutter and follows agile motion. The algorithm is applied to three different tracking modalities-homogeneous regions, textured regions, and snakes-and extensibly defined for straightforward inclusion of other methods. Second, we add the capacity to track multiple objects by adapting to vision a joint PDAF which oversees correspondence choices between same-modality trackers and image features. We then derive a related technique that allows mixed tracker modalities and handles object overlaps robustly. 
Finally, we represent complex objects as conjunctions of cues that are diverse both geometrically (e.g., parts) and qualitatively (e.g., attributes). Rigid and hinge constraints between part trackers and multiple descriptive attributes for individual parts render the whole object more distinctive, reducing susceptibility to mistracking. Results are given for diverse objects such as people, microscopic cells, and chess pieces.", "We address the problem of robust multi-target tracking within the application of hockey player tracking. The particle filter technique is adopted and modified to fit into the multi-target tracking framework. A rectification technique is employed to find the correspondence between the video frame coordinates and the standard hockey rink coordinates so that the system can compensate for camera motion and improve the dynamics of the players. A global nearest neighbor data association algorithm is introduced to assign boosting detections to the existing tracks for the proposal distribution in particle filters. The mean-shift algorithm is embedded into the particle filter framework to stabilize the trajectories of the targets for robust tracking during mutual occlusion. Experimental results show that our system is able to automatically and robustly track a variable number of targets and correctly maintain their identities regardless of background clutter, camera motion and frequent mutual occlusion between targets." ] }