aid: string (9–15 chars)
mid: string (7–10 chars)
abstract: string (78–2.56k chars)
related_work: string (92–1.77k chars)
ref_abstract: dict
1905.06648
2945683581
Correlation filters (CFs) have continuously advanced state-of-the-art tracking performance and have been extensively studied in recent years. Most existing CF trackers adopt a cosine window to spatially reweight the base image to alleviate boundary discontinuity. However, the cosine window emphasizes the central region of the base image and risks contaminating negative training samples during model learning. On the other hand, the spatial regularization deployed in many recent CF trackers plays a similar role to the cosine window by enforcing a spatial penalty on CF coefficients. In this paper, we therefore investigate the feasibility of removing the cosine window from CF trackers with spatial regularization. When the cosine window is simply removed, a CF with spatial regularization still suffers from a small degree of boundary discontinuity. To tackle this issue, binary and Gaussian-shaped mask functions are further introduced to eliminate boundary discontinuity while reweighting the estimation error of each training sample, and they can be incorporated into multiple CF trackers with spatial regularization. In comparison with their counterparts using a cosine window, our methods are effective in handling boundary discontinuity and sample contamination, thereby benefiting tracking performance. Extensive experiments on three benchmarks show that our methods perform favorably against state-of-the-art trackers using either handcrafted or deep CNN features. The code is publicly available at this https URL.
The core problem of CF trackers is to learn a discriminative filter for the next frame from the current frame and historical information. Early methods, e.g., MOSSE @cite_20 and KCF @cite_43, formulate the CF framework with a single base image from the current frame and update the CFs using a linear interpolation strategy. Denote by @math the sample pair in frame @math, where each sample @math consists of @math feature maps with @math, and @math represents the Gaussian-shaped label. The correlation filter @math is then obtained by minimizing the following objective, where @math and @math stand for circular convolution and the Hadamard product, respectively, @math denotes the cosine window, and @math denotes the tradeoff parameter of the regularization term @math.
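As a concrete illustration of this single-base-image formulation, the ridge-regression objective admits a closed-form solution in the Fourier domain thanks to the circulant structure of shifted samples. The sketch below is a minimal single-channel MOSSE/KCF-style solver; all function and variable names are illustrative assumptions, not taken from the paper, and the cosine window and multi-channel features are omitted.

```python
import numpy as np

def learn_cf(x, y, lam=1e-2):
    """Closed-form single-channel correlation filter (illustrative sketch).

    x: base image patch, y: Gaussian-shaped label, lam: regularization weight.
    Solves min_w ||w * x - y||^2 + lam ||w||^2 (circular convolution),
    which diagonalizes under the 2D DFT into an elementwise division.
    """
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    # Normal equations per frequency bin: W = conj(X) Y / (conj(X) X + lam)
    W = (np.conj(X) * Y) / (np.conj(X) * X + lam)
    return W

def respond(W, z):
    """Correlation response of filter W on a new patch z (peak = target)."""
    return np.real(np.fft.ifft2(W * np.fft.fft2(z)))
```

With a small `lam`, evaluating the learned filter on its own training patch reproduces the label almost exactly, so the response peak coincides with the label peak.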
{ "cite_N": [ "@cite_43", "@cite_20" ], "mid": [ "2154889144", "1964846093" ], "abstract": [ "The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies—any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.", "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. 
Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears." ] }
1905.06648
2945683581
Since the pioneering work of MOSSE @cite_20, many improvements have been made to CF trackers. On the one hand, CF models have been consistently improved with the introduction of non-linear kernels @cite_43, scale adaptivity @cite_50 @cite_9 @cite_19, long-term tracking @cite_47, part-based CFs @cite_28, particle filters @cite_16, spatial regularization @cite_8 @cite_49, continuous convolution @cite_26 @cite_24, and formulations with multiple base images @cite_14 @cite_24 @cite_46. On the other hand, progress in feature engineering, e.g., HOG @cite_4, ColorName @cite_48, and deep CNN features @cite_1 @cite_37 @cite_33, has also greatly benefited the performance of CF trackers.
{ "cite_N": [ "@cite_43", "@cite_20", "@cite_4", "@cite_8", "@cite_48", "@cite_49", "@cite_46", "@cite_37", "@cite_26", "@cite_28", "@cite_19", "@cite_50", "@cite_16", "@cite_14", "@cite_33", "@cite_9", "@cite_1", "@cite_24", "@cite_47" ], "mid": [ "2154889144", "1964846093", "2161969291", "1892578678", "2044986361", "", "2963074722", "2473868734", "2518013266", "2560610478", "2963571423", "2520477759", "2771877920", "1955741794", "2576085163", "818325216", "", "2557641257", "" ], "abstract": [ "The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies—any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). 
To encourage further developments, our tracking framework was made open-source.", "Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. 
The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Correlation filters take advantage of specific properties in the Fourier domain allowing them to be estimated efficiently: O(N D log D) in the frequency domain, versus O(D3 + N D2) spatially where D is signal length, and N is the number of signals. Recent extensions to correlation filters, such as MOSSE, have reignited interest of their use in the vision community due to their robustness and attractive computational properties. In this paper we demonstrate, however, that this computational efficiency comes at a cost. Specifically, we demonstrate that only 1 D proportion of shifted examples are unaffected by boundary effects which has a dramatic effect on detection tracking performance. In this paper, we propose a novel approach to correlation filter estimation that: (i) takes advantage of inherent computational redundancies in the frequency domain, (ii) dramatically reduces boundary effects, and (iii) is able to implicitly exploit all possible patches densely extracted from training examples during learning process. Impressive object tracking and detection results are presented in terms of both accuracy and computational efficiency.", "Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features when combined with luminance have shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. 
This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provides superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24 in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.", "", "In the field of generic object tracking numerous attempts have been made to exploit deep features. Despite all expectations, deep trackers are yet to reach an outstanding level of performance compared to methods solely based on handcrafted features. In this paper, we investigate this key issue and propose an approach to unlock the true potential of deep features for tracking. We systematically study the characteristics of both deep and shallow features, and their relation to tracking accuracy and robustness. We identify the limited data and low spatial resolution as the main challenges, and propose strategies to counter these issues when integrating deep features for tracking. Furthermore, we propose a novel adaptive fusion approach that leverages the complementary properties of deep and shallow features to improve both robustness and accuracy. Extensive experiments are performed on four challenging datasets. On VOT2017, our approach significantly outperforms the top performing tracker from the challenge with a relative gain of (17 ) in EAO.", "In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. 
However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-theart trackers.", "Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data b ...", "In order to better deal with the partial occlusion issue, part-based trackers are widely used in visual object tracking recently. However, it is still difficult to realize fast and robust tracking, due to complicated online training and updating process. Correlation filters have been used in visual object tracking tasks recently because of their high efficiency. However, the traditional correlation filter based tracking methods do not deal with occlusion well. In this paper, we propose a novel tracking method which tracks objects based on parts with multiple correlation filters. The Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Additionally, a discriminative part selection scheme is adopted to further improve performance and accelerate our method. Experimental results demonstrate that our multiple part tracker can significantly improve tracking performance on benchmark datasets.", "The aspect ratio variation frequently appears in visual tracking and has a severe influence on performance. 
Although many correlation filter (CF)-based trackers have also been suggested for scale adaptive tracking, few studies have been given to handle the aspect ratio variation for CF trackers. In this paper, we make the first attempt to address this issue by introducing a family of 1D boundary CFs to localize the left, right, top, and bottom boundaries in videos. This allows us cope with the aspect ratio variation flexibly during tracking. Specifically, we present a novel tracking model to integrate 1D Boundary and 2D Center CFs (IBCCF) where boundary and center filters are enforced by a near-orthogonality regularization term. To optimize our IBCCF model, we develop an alternating direction method of multipliers. Experiments on several datasets show that IBCCF can effectively handle aspect ratio variation, and achieves state-of-the-art performance in terms of accuracy and robustness.", "Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles when encountered with large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. 
Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5 percent in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50 percent higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.", "In this paper, we propose a novel correlation particle filter (CPF) for robust visual tracking. Instead of a simple combination of a correlation filter and a particle filter, we exploit and complement the strength of each one. Compared with existing tracking methods based on correlation filters and particle filters, the proposed tracker has four major advantages: 1) it is robust to partial and total occlusions, and can recover from lost tracks by maintaining multiple hypotheses; 2) it can effectively handle large-scale variation via a particle sampling strategy; 3) it can efficiently maintain multiple modes in the posterior density using fewer particles than conventional particle filters, resulting in low computational cost; and 4) it can shepherd the sampled particles toward the modes of the target state distribution using a mixture of correlation filters, resulting in robust tracking performance. Extensive experimental results on challenging benchmark data sets demonstrate that the proposed CPF tracking algorithm performs favorably against the state-of-the-art methods.", "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. 
These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0 and 8.2 respectively, in mean overlap precision, compared to the best existing trackers.", "Robust visual tracking is a challenging computer vision problem, with many real-world applications. Most existing approaches employ hand-crafted appearance features, such as HOG or Color Names. Recently, deep RGB features extracted from convolutional neural networks have been successfully applied for tracking. Despite their success, these features only capture appearance information. On the other hand, motion cues provide discriminative and complementary information that can improve tracking performance. Contrary to visual tracking, deep motion features have been successfully applied for action recognition and video classification tasks. Typically, the motion features are learned by training a CNN on optical flow images extracted from large amounts of labeled videos. 
This paper presents an investigation of the impact of deep motion features in a tracking-by-detection framework. We further show that hand-crafted, deep RGB, and deep motion features contain complementary information. To the best of our knowledge, we are the first to propose fusing appearance information with deep motion features for visual tracking. Comprehensive experiments clearly suggest that our fusion approach with deep motion features outperforms standard methods relying on appearance information alone.", "Although the correlation filter-based trackers achieve the competitive results both on accuracy and robustness, there is still a need to improve the overall tracking capability. In this paper, we presented a very appealing tracker based on the correlation filter framework. To tackle the problem of the fixed template size in kernel correlation filter tracker, we suggest an effective scale adaptive scheme. Moreover, the powerful features including HoG and color-naming are integrated together to further boost the overall tracking performance. The extensive empirical evaluations on the benchmark videos and VOT 2014 dataset demonstrate that the proposed tracker is very promising for the various challenging scenarios. Our method successfully tracked the targets in about 72 videos and outperformed the state-of-the-art trackers on the benchmark dataset with 51 sequences.", "", "In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. 
We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0 relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0 AUC on OTB-2015.", "" ] }
1905.06648
2945683581
Among these improvements, we specifically highlight the category of CF formulations with multiple base images @cite_14 @cite_24 @cite_46. Given a set of @math base images @math, the CF with multiple base images can be expressed as follows, where @math represents the weight of the @math -th base image @math. For example, SRDCF @cite_14 and CCOT @cite_26 simply adopt the latest @math frames as base images. In SRDCFdecon @cite_10, an adaptive decontamination model is presented to down-weight corrupted samples while up-weighting faithful ones. ECO @cite_24 and UPDT @cite_46 apply a Gaussian mixture model (GMM) to determine both the weights and the base images. In general, CF trackers with multiple base images perform much better than those with a single base image and have achieved state-of-the-art tracking performance.
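The weighted multi-sample objective also has a closed-form Fourier-domain solution in the single-channel ridge case: each sample contributes to the normal equations in proportion to its weight. The sketch below is an illustrative assumption (plain ridge regression, no spatial regularization or feature channels), showing only how the per-sample weights enter; the names are hypothetical.

```python
import numpy as np

def learn_cf_multi(xs, weights, y, lam=1e-2):
    """Correlation filter from multiple weighted base images (sketch).

    xs: list of base image patches, weights: per-sample weights,
    y: Gaussian-shaped label, lam: regularization weight.
    Solves min_w sum_k a_k ||w * x_k - y||^2 + lam ||w||^2 per frequency bin.
    """
    Y = np.fft.fft2(y)
    num = np.zeros_like(Y)
    den = np.full(Y.shape, lam, dtype=complex)
    for x, a in zip(xs, weights):
        X = np.fft.fft2(x)
        num += a * np.conj(X) * Y   # weighted cross-power accumulator
        den += a * np.conj(X) * X   # weighted auto-power accumulator
    return num / den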
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_24", "@cite_46", "@cite_10" ], "mid": [ "1955741794", "2518013266", "2557641257", "2963074722", "2469582947" ], "abstract": [ "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0 and 8.2 respectively, in mean overlap precision, compared to the best existing trackers.", "Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data b ...", "In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. 
However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0 relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0 AUC on OTB-2015.", "In the field of generic object tracking numerous attempts have been made to exploit deep features. Despite all expectations, deep trackers are yet to reach an outstanding level of performance compared to methods solely based on handcrafted features. In this paper, we investigate this key issue and propose an approach to unlock the true potential of deep features for tracking. We systematically study the characteristics of both deep and shallow features, and their relation to tracking accuracy and robustness. We identify the limited data and low spatial resolution as the main challenges, and propose strategies to counter these issues when integrating deep features for tracking. 
Furthermore, we propose a novel adaptive fusion approach that leverages the complementary properties of deep and shallow features to improve both robustness and accuracy. Extensive experiments are performed on four challenging datasets. On VOT2017, our approach significantly outperforms the top performing tracker from the challenge with a relative gain of (17 ) in EAO.", "Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limite ..." ] }
1905.06596
2946462349
The dominant neural machine translation models are based on the encoder-decoder structure, and many of them rely on an unconstrained receptive field over source and target sequences. In this paper we study a new architecture that breaks with both conventions. Our simplified architecture consists of the decoder part of a transformer model, based on self-attention, but with locality constraints applied on the attention receptive field. As input for training, both source and target sentences are fed to the network, which is trained as a language model. At inference time, the target tokens are predicted autoregressively starting with the source sequence as previous tokens. The proposed model achieves a new state of the art of 35.7 BLEU on IWSLT'14 German-English and matches the best reported results in the literature on the WMT'14 English-German and WMT'14 English-French translation benchmarks.
A similar architecture was also proposed by @cite_3 for Cross-lingual language model pretraining using pairs of parallel sentences. However, the proposed Translation Language Model (TLM) is only used for cross-lingual classification.
{ "cite_N": [ "@cite_3" ], "mid": [ "2914120296" ], "abstract": [ "Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available." ] }
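The locality-constrained, decoder-only receptive field described in the abstract above can be illustrated with a minimal NumPy sketch. This is not code from either cited paper; the function and parameter names (`local_causal_mask`, `window`) are illustrative assumptions. Each position may attend only causally (no future tokens) and only within a local window of preceding positions, so the concatenated source prefix and target tokens share a single decoder stack.

```python
import numpy as np

def local_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean attention mask: position i may attend to position j
    iff j <= i (causality) and i - j < window (locality constraint)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

# With window=3, position 5 sees positions 3, 4, 5 but not 2 or earlier.
mask = local_causal_mask(6, window=3)
```

In a full model, positions where the mask is False would receive -inf before the softmax over attention scores.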
1905.06618
2946699248
Influence maximization has found applications in a wide range of real-world problems, for instance, viral marketing of products in an online social network, and information propagation of valuable information such as job vacancy advertisements and health-related information. While existing algorithmic techniques usually aim at maximizing the total number of people influenced, the population often comprises several socially salient groups, e.g., based on gender or race. As a result, these techniques could lead to disparity across different groups in receiving important information. Furthermore, in many of these applications, the spread of influence is time-critical, i.e., it is only beneficial to be influenced before a time deadline. As we show in this paper, the time-criticality of the information could further exacerbate the disparity of influence across groups. This disparity, introduced by algorithms aimed at maximizing total influence, could have far-reaching consequences, impacting people's prosperity and putting minority groups at a big disadvantage. In this work, we propose a notion of group fairness in time-critical influence maximization. We introduce surrogate objective functions to solve the influence maximization problem under fairness considerations. By exploiting the submodularity structure of our objectives, we provide computationally efficient algorithms with guarantees that are effective in enforcing fairness during the propagation process. We demonstrate the effectiveness of our approach through synthetic and real-world experiments.
Typically, identifying the most influential nodes is studied in two ways: (i) using network structural properties to find the set of most central nodes @cite_2 @cite_23 , and (ii) formulating the problem as discrete optimization @cite_36 @cite_33 @cite_4 . The seminal work of @cite_36 studied influence maximization under different social contagion models and showed that submodularity of the influence function can be used to obtain provable approximation guarantees. Since then, there has been a large body of work studying various extensions @cite_36 @cite_13 @cite_3 @cite_25 . However, the notion of fairness in the influence maximization problem has not been studied by this line of previous works.
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_36", "@cite_3", "@cite_23", "@cite_2", "@cite_13", "@cite_25" ], "mid": [ "", "", "2136486572", "2139297408", "2061820396", "1981000476", "1512602432", "2105509646" ], "abstract": [ "", "", "Online social networks are now a popular way for users to connect, express themselves, and share content. Users in today's online social networks often post a profile, consisting of attributes like geographic location, interests, and schools attended. Such profile information is used on the sites as a basis for grouping users, for sharing content, and for suggesting users who may benefit from interaction. However, in practice, not all users provide these attributes. In this paper, we ask the question: given attributes for some fraction of the users in an online social network, can we infer the attributes of the remaining users? In other words, can the attributes of users, in combination with the social network graph, be used to predict the attributes of another user in the network? To answer this question, we gather fine-grained data from two social networks and try to infer user profile attributes. We find that users with common attributes are more likely to be friends and often form dense communities, and we propose a method of inferring user attributes that is inspired by previous approaches to detecting communities in social networks. Our results show that certain user attributes can be inferred with high accuracy when given information on as little as 20 of the users.", "In this work, we study the notion of competing campaigns in a social network and address the problem of influence limitation where a \"bad\" campaign starts propagating from a certain node in the network and use the notion of limiting campaigns to counteract the effect of misinformation. 
The problem can be summarized as identifying a subset of individuals that need to be convinced to adopt the competing (or \"good\") campaign so as to minimize the number of people that adopt the \"bad\" campaign at the end of both propagation processes. We show that this optimization problem is NP-hard and provide approximation guarantees for a greedy solution for various definitions of this problem by proving that they are submodular. We experimentally compare the performance of the greedy method to various heuristics. The experiments reveal that in most cases inexpensive heuristics such as degree centrality compare well with the greedy approach. We also study the influence limitation problem in the presence of missing data where the current states of nodes in the network are only known with a certain probability and show that prediction in this setting is a supermodular problem. We propose a prediction algorithm that is based on generating random spanning trees and evaluate the performance of this approach. The experiments reveal that using the prediction algorithm, we are able to tolerate about 90% missing data before the performance of the algorithm starts degrading and even with large amounts of missing data the performance degrades only to 75% of the performance that would be achieved with complete data.", "Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products.
Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target? We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63% of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks. We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly outperform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.", "This paper proposes an alternative way to identify nodes with high betweenness centrality. It introduces a new metric, κ-path centrality, and a randomized algorithm for estimating it, and shows empirically that nodes with high κ-path centrality have high node betweenness centrality. The randomized algorithm runs in time O(κ^3 n^(2−2α) log n) and outputs, for each vertex v, an estimate of its κ-path centrality up to additive error of ±n^(1/2+α) with probability 1 − 1/n^2.
Experimental evaluations on real and synthetic social networks show improved accuracy in detecting high betweenness centrality nodes and significantly reduced execution time when compared with existing randomized algorithms.", "Social networks often serve as a medium for the diffusion of ideas or innovations. An individual's decision whether to adopt a product or innovation will be highly dependent on the choices made by the individual's peers or neighbors in the social network. In this work, we study the game of innovation diffusion with multiple competing innovations such as when multiple companies market competing products using viral marketing. Our first contribution is a natural and mathematically tractable model for the diffusion of multiple innovations in a network. We give a (1 − 1/e) approximation algorithm for computing the best response to an opponent's strategy, and prove that the \"price of competition\" of this game is at most 2. We also discuss \"first mover\" strategies which try to maximize the expected diffusion against perfect competition. Finally, we give an FPTAS for the problem of maximizing the influence of a single player when the underlying graph is a tree.", "We consider the problem faced by a company that wants to use viral marketing to introduce a new product into a market where a competing product is already being introduced. We assume that consumers will use only one of the two products and will influence their friends in their decision of which product to use. We propose two models for the spread of influence of competing technologies through a social network and consider the influence maximization problem from the follower's perspective. In particular we assume the follower has a fixed budget available that can be used to target a subset of consumers and show that, although it is NP-hard to select the most influential subset to target, it is possible to give an efficient algorithm that is within 63% of optimal.
Our computational experiments show that by using knowledge of the social network and the set of consumers targeted by the competitor, the follower may in fact capture a majority of the market by targeting a relatively small set of the right consumers." ] }
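The greedy submodular approach described in the @cite_36 abstract above can be sketched in a few lines of Python. This is an illustrative toy, not code from any cited work: the independent cascade model is simulated by Monte Carlo, and the marginal-gain-greedy loop inherits the (1 − 1/e) guarantee for the expected spread only up to Monte Carlo estimation error. All names (`simulate_ic`, `greedy_im`, `runs`) are assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run on an adjacency-list graph;
    each newly active node activates each neighbor with prob. p.
    Returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, runs=200, p=0.1):
    """Greedy seed selection: repeatedly add the node with the largest
    estimated marginal spread. Submodularity of the expected spread
    yields the (1 - 1/e) approximation guarantee."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        def spread(s):
            return sum(simulate_ic(graph, seeds + [s], p) for _ in range(runs)) / runs
        best = max(nodes - set(seeds), key=spread)
        seeds.append(best)
    return seeds
```

For example, on a star graph with p=1.0 the center is always picked first, since its spread dominates every leaf's.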
1905.06618
2946699248
Influence maximization has found applications in a wide range of real-world problems, for instance, viral marketing of products in an online social network, and information propagation of valuable information such as job vacancy advertisements and health-related information. While existing algorithmic techniques usually aim at maximizing the total number of people influenced, the population often comprises several socially salient groups, e.g., based on gender or race. As a result, these techniques could lead to disparity across different groups in receiving important information. Furthermore, in many of these applications, the spread of influence is time-critical, i.e., it is only beneficial to be influenced before a time deadline. As we show in this paper, the time-criticality of the information could further exacerbate the disparity of influence across groups. This disparity, introduced by algorithms aimed at maximizing total influence, could have far-reaching consequences, impacting people's prosperity and putting minority groups at a big disadvantage. In this work, we propose a notion of group fairness in time-critical influence maximization. We introduce surrogate objective functions to solve the influence maximization problem under fairness considerations. By exploiting the submodularity structure of our objectives, we provide computationally efficient algorithms with guarantees that are effective in enforcing fairness during the propagation process. We demonstrate the effectiveness of our approach through synthetic and real-world experiments.
Fairness in Algorithmic Decision Making Recently, a growing body of work has focused on bias and unfairness in algorithmic decision-making systems @cite_10 @cite_14 @cite_34 . The aim here is to examine and mitigate unfair decisions that may lead to discrimination. Fairness has been divided into two broad areas: individual and group-level fairness. The notion of individual fairness , first proposed by @cite_0 , requires that individuals with similar attributes be treated similarly. The concept of Demographic Parity , in contrast, falls into a larger category called group fairness @cite_9 , which requires that the outcomes of an algorithm should equally benefit different groups with different sensitive attributes (e.g., groups based on race, gender or age) @cite_24 @cite_16 . Although fairness along different dimensions of political science, moral philosophy, economics, and law @cite_18 @cite_1 @cite_7 @cite_27 has been extensively studied, only a few contemporary works have investigated fairness in influence maximization, as described next.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_9", "@cite_1", "@cite_16", "@cite_0", "@cite_24", "@cite_27", "@cite_34", "@cite_10" ], "mid": [ "2964436851", "2604166239", "2971548201", "2507358938", "", "", "2100960835", "2014352947", "2731796896", "2901662176", "2811494126" ], "abstract": [ "We consider the problem of fairly dividing a collection of indivisible goods among a set of players. Much of the existing literature on fair division focuses on notions of individual fairness. For instance, envy-freeness requires that no player prefer the set of goods allocated to another player to her own allocation. We observe that an algorithm satisfying such individual fairness notions can still treat groups of players unfairly, with one group desiring the goods allocated to another. Our main contribution is a notion of group fairness, which implies most existing notions of individual fairness. Group fairness (like individual fairness) cannot be satisfied exactly with indivisible goods. Thus, we introduce two “up to one good” style relaxations. We show that, somewhat surprisingly, certain local optima of the Nash welfare function satisfy both relaxations and can be computed in pseudo-polynomial time by local search. Our experiments reveal faster computation and stronger fairness guarantees in practice.", "Users of social media sites like Facebook and Twitter rely on crowdsourced content recommendation systems (e.g., Trending Topics) to retrieve important and useful information. Contents selected for recommendation indirectly give the initial users who promoted (by liking or posting) the content an opportunity to propagate their messages to a wider audience. Hence, it is important to understand the demographics of people who make a content worthy of recommendation, and explore whether they are representative of the media site's overall population. 
In this work, using extensive data collected from Twitter, we make the first attempt to quantify and explore the demographic biases in the crowdsourced recommendations. Our analysis, focusing on the selection of trending topics, finds that a large fraction of trends are promoted by crowds whose demographics are significantly different from the overall Twitter population. More worryingly, we find that certain demographic groups are systematically under-represented among the promoters of the trending topics. To make the demographic biases in Twitter trends more transparent, we developed and deployed a Web-based service 'Who-Makes-Trends' at twitter-app.mpi-sws.org/who-makes-trends.", "We study the problem of fairly allocating indivisible goods to groups of agents. Agents in the same group share the same set of goods even though they may have different preferences. Previous work has focused on unanimous fairness, in which all agents in each group must agree that their group's share is fair. Under this strict requirement, fair allocations exist only for small groups. We introduce the concept of democratic fairness, which aims to satisfy a certain fraction of the agents in each group. This concept is better suited to large groups such as cities or countries. We present protocols for democratic fair allocation among two or more arbitrarily large groups of agents with monotonic, additive, or binary valuations. For two groups with arbitrary monotonic valuations, we give an efficient protocol that guarantees envy-freeness up to one good for at least 1/2 of the agents in each group, and prove that the 1/2 fraction is optimal. We also present other protocols that make weaker fairness guarantees to more agents in each group, or to more groups.
Our protocols combine techniques from different fields, including combinatorial game theory, cake cutting, and voting.", "Algorithms and decision making based on Big Data have become pervasive in all aspects of our daily lives (offline and online), as they have become essential tools in personal finance, health care, hiring, housing, education, and policies. It is therefore of societal and ethical importance to ask whether these algorithms can be discriminative on grounds such as gender, ethnicity, or health status. It turns out that the answer is positive: for instance, recent studies in the context of online advertising show that ads for high-income jobs are presented to men much more often than to women [, 2015]; and ads for arrest records are significantly more likely to show up on searches for distinctively black names [Sweeney, 2013]. This algorithmic bias exists even when there is no discrimination intention in the developer of the algorithm. Sometimes it may be inherent to the data sources used (software making decisions based on data can reflect, or even amplify, the results of historical discrimination), but even when the sensitive attributes have been suppressed from the input, a well trained machine learning algorithm may still discriminate on the basis of such sensitive attributes because of correlations existing in the data. These considerations call for the development of data mining systems which are discrimination-conscious by-design. This is a novel and challenging research area for the data mining community. The aim of this tutorial is to survey algorithmic bias, presenting its most common variants, with an emphasis on the algorithmic techniques and key ideas developed to derive efficient solutions. The tutorial covers two main complementary approaches: algorithms for discrimination discovery and discrimination prevention by means of fairness-aware data mining.
We conclude by summarizing promising paths for future research.", "", "", "We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of \"fair affirmative action,\" which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.", "What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender) and an explicit description of the process. When computers are involved, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the process, we propose making inferences based on the data it uses. 
We present four contributions. First, we link disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on how well the protected class can be predicted from the other attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.", "Abstract We investigate the problem of fairly allocating indivisible goods among interested agents using the concept of maximin share. Procaccia and Wang showed that while an allocation that gives every agent at least her maximin share does not necessarily exist, one that gives every agent at least 2 ∕ 3 of her share always does. In this paper, we consider the more general setting where we allocate the goods to groups of agents. The agents in each group share the same set of goods even though they may have conflicting preferences. For two groups, we characterize the cardinality of the groups for which a positive approximation of the maximin share is possible regardless of the number of goods. We also show settings where an approximation is possible or impossible when there are several groups.", "To help their users to discover important items at a particular time, major websites like Twitter, Yelp, TripAdvisor or NYTimes provide Top-K recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris or 10 Most Viewed News Stories), which rely on crowdsourced popularity signals to select the items. However, different sections of a crowd may have different preferences, and there is a large silent majority who do not explicitly express their opinion. 
Also, the crowd often consists of actors like bots, spammers, or people running orchestrated campaigns. Recommendation algorithms today largely do not consider such nuances, hence are vulnerable to strategic manipulation by small but hyper-active user groups. To fairly aggregate the preferences of all users while recommending top-K items, we borrow ideas from prior research on social choice theory, and identify a voting mechanism called Single Transferable Vote (STV) as having many of the fairness properties we desire in top-K item (s)elections. We develop an innovative mechanism to attribute preferences of silent majority which also make STV completely operational. We show the generalizability of our approach by implementing it on two different real-world datasets. Through extensive experimentation and comparison with state-of-the-art techniques, we show that our proposed approach provides maximum user satisfaction, and cuts down drastically on items disliked by most but hyper-actively promoted by a few users.", "Discrimination via algorithmic decision making has received considerable attention. Prior work largely focuses on defining conditions for fairness, but does not define satisfactory measures of algorithmic unfairness. In this paper, we focus on the following question: Given two unfair algorithms, how should we determine which of the two is more unfair? Our core idea is to use existing inequality indices from economics to measure how unequally the outcomes of an algorithm benefit different individuals or groups in a population. Our work offers a justified and general framework to compare and contrast the (un)fairness of algorithmic predictors. This unifying approach enables us to quantify unfairness both at the individual and the group level. 
Further, our work reveals overlooked tradeoffs between different fairness notions: using our proposed measures, the overall individual-level unfairness of an algorithm can be decomposed into a between-group and a within-group component. Earlier methods are typically designed to tackle only between-group unfairness, which may be justified for legal or other reasons. However, we demonstrate that minimizing exclusively the between-group component may, in fact, increase the within-group, and hence the overall unfairness. We characterize and illustrate the tradeoffs between our measures of (un)fairness and the prediction accuracy." ] }
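The Demographic Parity notion discussed in the related-work text above can be made concrete with a small, hypothetical Python check (not from any cited paper; the function name is an assumption): it reports the largest gap in positive-outcome rates across groups, so a value of 0 means exact demographic parity.

```python
def demographic_parity_gap(outcomes, groups):
    """Max difference in positive-outcome rate across groups.
    outcomes: iterable of 0/1 decisions; groups: the sensitive
    attribute (e.g., race or gender) of each individual."""
    rates = {}
    for y, g in zip(outcomes, groups):
        n, s = rates.get(g, (0, 0))
        rates[g] = (n + 1, s + int(bool(y)))
    positive = [s / n for n, s in rates.values()]
    return max(positive) - min(positive)
```

For instance, if group 'a' receives a positive outcome 2/3 of the time and group 'b' only 1/3 of the time, the gap is 1/3.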
1905.06618
2946699248
Influence maximization has found applications in a wide range of real-world problems, for instance, viral marketing of products in an online social network, and information propagation of valuable information such as job vacancy advertisements and health-related information. While existing algorithmic techniques usually aim at maximizing the total number of people influenced, the population often comprises several socially salient groups, e.g., based on gender or race. As a result, these techniques could lead to disparity across different groups in receiving important information. Furthermore, in many of these applications, the spread of influence is time-critical, i.e., it is only beneficial to be influenced before a time deadline. As we show in this paper, the time-criticality of the information could further exacerbate the disparity of influence across groups. This disparity, introduced by algorithms aimed at maximizing total influence, could have far-reaching consequences, impacting people's prosperity and putting minority groups at a big disadvantage. In this work, we propose a notion of group fairness in time-critical influence maximization. We introduce surrogate objective functions to solve the influence maximization problem under fairness considerations. By exploiting the submodularity structure of our objectives, we provide computationally efficient algorithms with guarantees that are effective in enforcing fairness during the propagation process. We demonstrate the effectiveness of our approach through synthetic and real-world experiments.
Contemporary Works Very recently, a notion of individual fairness in information access was proposed, but the group fairness aspects were not considered. In addition, some prior works have proposed constrained optimization problems to encourage diversity in selecting the most influential nodes @cite_30 @cite_35 @cite_20 @cite_17 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_20", "@cite_17" ], "mid": [ "2768254850", "2972175238", "2963893636", "2949871360" ], "abstract": [ "We develop a model of multiwinner elections that combines performance-based measures of the quality of the committee (such as, e.g., Borda scores of the committee members) with diversity constraints. Specifically, we assume that the candidates have certain attributes (such as being a male or a female, being junior or senior, etc.) and the goal is to elect a committee that, on the one hand, has as high a score regarding a given performance measure, but that, on the other hand, meets certain requirements (e.g., of the form \"at least @math of the committee members are junior candidates and at least @math are females\"). We analyze the computational complexity of computing winning committees in this model, obtaining polynomial-time algorithms (exact and approximate) and NP-hardness results. We focus on several natural classes of voting rules and diversity constraints.", "", "The state of Singapore operates a national public housing program, accounting for over @math of its residential real estate. Singapore uses its housing allocation program to promote ethnic diversity in its neighborhoods; it does so by imposing ethnic quotas: every ethnic group must not own more than a certain percentage in a housing project, thus ensuring that every neighborhood contains members from each ethnic group. However, imposing diversity constraints naturally results in some welfare loss. Our work studies the tradeoff between diversity and (utilitarian) social welfare from the perspective of computational economics. We model the problem as an extension of the classic assignment problem, with additional diversity constraints. 
While the classic assignment problem is poly-time computable, we show that adding diversity constraints makes the problem computationally intractable; however, we identify a 1/2-approximation algorithm, as well as reasonable agent utility models which admit poly-time algorithms. In addition, we study the price of diversity: this is the loss in welfare incurred by imposing diversity constraints; we provide upper bounds on the price of diversity as functions of natural problem parameters. Finally, we use recent, public demographic and real-estate data from Singapore to create a simulated framework testing the welfare loss due to diversity constraints in realistic large-scale scenarios.", "In recent years, automated data-driven decision-making systems have enjoyed a tremendous success in a variety of fields (e.g., to make product recommendations, or to guide the production of entertainment). More recently, these algorithms are increasingly being used to assist socially sensitive decision-making (e.g., to decide who to admit into a degree program or to prioritize individuals for public housing). Yet, these automated tools may result in discriminative decision-making in the sense that they may treat individuals unfairly or unequally based on membership to a category or a minority, resulting in disparate treatment or disparate impact and violating both moral and ethical standards. This may happen when the training dataset is itself biased (e.g., if individuals belonging to a particular group have historically been discriminated upon). However, it may also happen when the training dataset is unbiased, if the errors made by the system affect individuals belonging to a category or minority differently (e.g., if misclassification rates for Blacks are higher than for Whites). In this paper, we unify the definitions of unfairness across classification and regression.
We propose a versatile mixed-integer optimization framework for learning optimal and fair decision trees and variants thereof to prevent disparate treatment and or disparate impact as appropriate. This translates to a flexible schema for designing fair and interpretable policies suitable for socially sensitive decision-making. We conduct extensive computational studies that show that our framework improves the state-of-the-art in the field (which typically relies on heuristics) to yield non-discriminative decisions at lower cost to overall accuracy." ] }
1905.06291
2945838944
Autonomous optimization refers to the design of feedback controllers that steer a physical system to a steady state that solves a predefined, possibly constrained, optimization problem. As such, no exogenous control inputs such as setpoints or trajectories are required. Instead, these controllers are modeled after optimization algorithms that take the form of dynamical systems. The interconnection of this type of optimization dynamics with a physical system is however not guaranteed to be stable unless both dynamics act on sufficiently different timescales. In this paper, we quantify the required timescale separation and give prescriptions that can be directly used in the design of this type of feedback controllers. Using ideas from singular perturbation analysis we derive stability bounds for different feedback optimization schemes that are based on common continuous-time optimization schemes. In particular, we consider gradient descent and its variations, including projected gradient, and Newton gradient. We further give stability bounds for momentum methods and saddle-point flows interconnected with dynamical systems. Finally, we discuss how optimization algorithms like subgradient and accelerated gradient descent, while well-behaved in offline settings, are unsuitable for autonomous optimization due to their general lack of robustness.
Further, the concept of extremum seeking @cite_27 @cite_11 @cite_4 aims to learn a gradient direction without recourse to any model information, by means of a probing signal and the exploitation of non-commutativity; however, significant limitations arise when considering high-dimensional systems or constraints.
{ "cite_N": [ "@cite_27", "@cite_4", "@cite_11" ], "mid": [ "", "2166003201", "2963117493" ], "abstract": [ "", "Extremum seeking feedback is a powerful method to steer a dynamical system to an extremum of a partially or completely unknown map. It often requires advanced system-theoretic tools to understand the qualitative behavior of extremum seeking systems. In this paper, a novel interpretation of extremum seeking is introduced. We show that the trajectories of an extremum seeking system can be approximated by the trajectories of a system which involves certain Lie brackets of the vector fields of the extremum seeking system. It turns out that the Lie bracket system directly reveals the optimizing behavior of the extremum seeking system. Furthermore, we establish a theoretical foundation and prove that uniform asymptotic stability of the Lie bracket system implies practical uniform asymptotic stability of the corresponding extremum seeking system. We use the established results in order to prove local and semi-global practical uniform asymptotic stability of the extrema of a certain map for multi-agent extremum seeking systems.", "A novel class of derivative-free optimization algorithms is developed. The main idea is to utilize certain non-commutative maps in order to approximate the gradient of the objective function. Convergence properties of the novel algorithms are established and simulation examples are presented." ] }
1905.06291
2945838944
Autonomous optimization refers to the design of feedback controllers that steer a physical system to a steady state that solves a predefined, possibly constrained, optimization problem. As such, no exogenous control inputs such as setpoints or trajectories are required. Instead, these controllers are modeled after optimization algorithms that take the form of dynamical systems. The interconnection of this type of optimization dynamics with a physical system is however not guaranteed to be stable unless both dynamics act on sufficiently different timescales. In this paper, we quantify the required timescale separation and give prescriptions that can be directly used in the design of this type of feedback controllers. Using ideas from singular perturbation analysis we derive stability bounds for different feedback optimization schemes that are based on common continuous-time optimization schemes. In particular, we consider gradient descent and its variations, including projected gradient, and Newton gradient. We further give stability bounds for momentum methods and saddle-point flows interconnected with dynamical systems. Finally, we discuss how optimization algorithms like subgradient and accelerated gradient descent, while well-behaved in offline settings, are unsuitable for autonomous optimization due to their general lack of robustness.
The historical roots of the approach pursued in this paper can be traced back to the study of communication networks, where congestion control algorithms have been analyzed from an optimization perspective @cite_22 @cite_34 @cite_15 . Similar ideas have recently attracted considerable interest in power systems, where feedback-based optimization schemes have been proposed for voltage control @cite_8 @cite_32 , frequency control @cite_21 @cite_35 @cite_23 , and general power flow optimization @cite_28 @cite_7 @cite_16 @cite_10 . For a survey, see @cite_5 .
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_32", "@cite_16", "@cite_23", "@cite_5", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2520314131", "2159715570", "2963763190", "1969576074", "2294416312", "2027270344", "2733716725", "2738184248", "2122773676", "2737628516", "2078595922", "2037710455", "2963463042" ], "abstract": [ "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.", "This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. 
The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing.", "This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of ac optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable onto low-cost microcontrollers that accompany power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the ac power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method allows to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.", "We consider the problem of exploiting the microgenerators dispersed in the power distribution network in order to provide distributed reactive power compensation for power losses minimization and voltage regulation. In the proposed strategy, microgenerators are smart agents that can measure their phasorial voltage, share these data with the other agents on a cyber layer, and adjust the amount of reactive power injected into the grid, according to a feedback control law that descends from duality-based methods applied to the optimal reactive power flow problem. 
Convergence to the configuration of minimum losses and feasible voltages is proved analytically for both a synchronous and an asynchronous version of the algorithm, where agents update their state independently one from the other. Simulations are provided in order to illustrate the performance and the robustness of the algorithm, and the innovative feedback nature of such strategy is discussed.", "We propose an online algorithm for solving optimal power flow (OPF) problems on radial networks where the controllable devices continuously interact with the network that implicitly computes a power flow solution given a control action. Collectively the controllable devices and the network implement a gradient projection algorithm for the OPF problem in real time. The key design feature that enables this approach is that the intermediate iterates of our algorithm always satisfy power flow equations and operational constraints. This is achieved by explicitly exploiting the network to implicitly solve power flow equations for us in real time at scale. We prove that the proposed algorithm converges to the set of local optima and provide sufficient conditions under which it converges to a global optimum. We derive an upper bound on the suboptimality gap of any local optimum. This bound suggests that any local minimum is almost as good as any strictly feasible point. We explain how to greatly reduce the gradient computation in each iteration by using approximate gradient derived from linearized power flow equations. 
Numerical results on test networks, ranging from 42-bus to 1990-bus, show a great speedup over a second-order cone relaxation method with negligible difference in objective values.", "We present a systematic method to design ubiquitous continuous fast-acting distributed load control for primary frequency regulation in power networks, by formulating an optimal load control (OLC) problem where the objective is to minimize the aggregate cost of tracking an operating point subject to power balance over the network. We prove that the swing dynamics and the branch power flows, coupled with frequency-based load control, serve as a distributed primal-dual algorithm to solve OLC. We establish the global asymptotic stability of a multimachine network under such type of load-side primary frequency control. These results imply that the local frequency deviations on each bus convey exactly the right information about the global power imbalance for the loads to make individual decisions that turn out to be globally optimal. Simulations confirm that the proposed algorithm can rebalance power and resynchronize bus frequencies after a disturbance with significantly improved transient performance.", "A standard operational requirement in power systems is that the voltage magnitudes lie within prespecified bounds. Conventional engineering wisdom suggests that having a tightly regulated voltage profile should also guarantee that the system operates far from static bifurcation instabilities, such as voltage collapse. In general, however, these two objectives are distinct and must be separately enforced. We formulate an optimization problem that maximizes the distance to voltage collapse through injections of reactive power, subject to power flow and operational voltage constraints. By exploiting a linear approximation of the power flow equations, we arrive at a convex reformulation, which can be efficiently solved for the optimal injections. 
We then propose a distributed feedback controller, based on a dual-ascent algorithm, to solve for the prescribed optimization problem in real time. This is possible, thanks to a further manipulation of the problem into a form that is amenable for distributed implementation. We also address the planning problem of allocating control resources by recasting our problem in a sparsity-promoting framework. This allows us to choose a desired tradeoff between optimality of injections and the number of required actuators. We illustrate the performance of our results with the IEEE 30-bus network.", "The focus of this paper is the online load flow optimization of power systems in closed loop. In contrast to the conventional approach where an AC OPF solution is computed before being applied to the system, our objective is to design an adaptive feedback controller that steers the system in real time to the optimal operating point without explicitly solving an AC OPF problem. Our approach can be used for example to simultaneously regulate voltages, mitigate line congestion, and optimize operating costs under time-varying conditions. In contrast to related work which is mostly focused on distribution grids, we introduce a modeling approach in terms of manifold optimization that is applicable in general scenarios. For this, we treat the power flow equations as implicit constraints that are naturally enforced and hence give rise to the power flow manifold (PFM). Based on our theoretical results for this type of optimization problems, we propose a discrete-time projected gradient descent scheme on the PFM. In this work, we confirm through a detailed simulation study that the algorithm performs well in a more realistic power system setup and reliably tracks the time-varying optimum of the underlying AC OPF problem.", "This article presents a novel control scheme for achieving optimal power balancing and congestion management in electrical power systems via nodal prices. 
We develop a dynamic controller that guarantees economically optimal steady-state operation while respecting all line flow constraints in steady-state. A benchmark example illustrates the effectiveness of the proposed control scheme.", "Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. This paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.", "This article reviews the current transmission control protocol (TCP) congestion control protocols and overviews recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular, the utility being optimized by the protocol's equilibrium structure. We also look at the dynamics of TCP and employ linear models to exhibit stability limitations in the predominant TCP versions, despite certain built-in compensations for delay. Finally, we present a new protocol that overcomes these limitations and provides stability in a way that is scalable to arbitrary networks, link capacities, and delays.", "We propose an optimization approach to flow control where the objective is to maximize the aggregate source utility over their transmission rates. We view network links and sources as processors of a distributed computation system to solve the dual problem using a gradient projection algorithm. 
In this system, sources select transmission rates that maximize their own benefits, utility minus bandwidth cost, and network links adjust bandwidth prices to coordinate the sources' decisions. We allow feedback delays to be different, substantial, and time varying, and links and sources to update at different times and with different frequencies. We provide asynchronous distributed algorithms and prove their convergence in a static environment. We present measurements obtained from a preliminary prototype to illustrate the convergence of the algorithm in a slowly time-varying environment. We discuss its fairness property.", "Consider a polynomial optimisation problem, whose instances vary continuously over time. We propose to use a coordinate-descent algorithm for solving such time-varying optimisation problems. In particular, we focus on relaxations of transmission-constrained problems in power systems. On the example of the alternating-current optimal power flows (ACOPF), we bound the difference between the current approximate optimal cost generated by our algorithm and the optimal cost for a relaxation using the most recent data from above by a function of the properties of the instance and the rate of change to the instance over time. We also bound the number of floating-point operations that need to be performed between two updates in order to guarantee the error is bounded from above by a given constant." ] }
1905.06292
2945795706
In this paper, we propose a novel generative adversarial network (GAN) for 3D point clouds generation, which is called tree-GAN. To achieve state-of-the-art performance for multi-class 3D point cloud generation, a tree-structured graph convolution network (TreeGCN) is introduced as a generator for tree-GAN. Because TreeGCN performs graph convolutions within a tree, it can use ancestor information to boost the representation power for features. To evaluate GANs for 3D point clouds accurately, we develop a novel evaluation metric called Frechet point cloud distance (FPD). Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and FPD, and can generate point clouds for different semantic parts without prior knowledge.
Over the past few years, a number of works have focused on generalizing deep neural networks to graph problems @cite_1 @cite_13 @cite_47 @cite_46 . Defferrard @cite_9 proposed fast-learning convolutional filters for graph classification problems. Using these filters, they significantly accelerated the spectral decomposition process, which was one of the main computational bottlenecks in traditional graph convolution problems with large datasets. Kipf and Welling @cite_12 introduced scalable GCNs based on first-order approximations of spectral graph convolutions for semi-supervised classification, in which convolution filters use only the information from neighboring vertices instead of the information from the entire network.
{ "cite_N": [ "@cite_46", "@cite_9", "@cite_1", "@cite_47", "@cite_13", "@cite_12" ], "mid": [ "2950898568", "2964321699", "2964311892", "637153065", "2964113829", "2964015378" ], "abstract": [ "Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.", "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. 
Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "Abstract: Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "We introduce a convolutional neural network that operates directly on graphs. 
These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable, and have better predictive performance on a variety of tasks.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
1905.06292
2945795706
In this paper, we propose a novel generative adversarial network (GAN) for 3D point clouds generation, which is called tree-GAN. To achieve state-of-the-art performance for multi-class 3D point cloud generation, a tree-structured graph convolution network (TreeGCN) is introduced as a generator for tree-GAN. Because TreeGCN performs graph convolutions within a tree, it can use ancestor information to boost the representation power for features. To evaluate GANs for 3D point clouds accurately, we develop a novel evaluation metric called Frechet point cloud distance (FPD). Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and FPD, and can generate point clouds for different semantic parts without prior knowledge.
GANs @cite_16 for 2D image generation tasks have been widely studied with great success @cite_51 @cite_45 @cite_33 @cite_40 @cite_2 @cite_34 @cite_8 @cite_32 @cite_14 @cite_7 , but GANs for 3D point cloud generation have rarely been studied in the computer vision field. Recently, Achlioptas @cite_6 proposed a GAN for 3D point clouds called r-GAN, whose generator is based on fully connected layers. Because fully connected layers cannot preserve structural information, r-GAN has difficulty generating realistic and diverse shapes. Valsesia @cite_49 used graph convolutions in the generators of GANs. At each graph convolution layer during training, adjacency matrices were dynamically constructed from the feature vectors of each vertex. Unlike traditional graph convolutions, the connectivity of the graph was not assumed to be given as prior knowledge. However, extracting this connectivity by computing the adjacency matrix at a single layer incurs quadratic computational complexity @math , where @math denotes the number of vertices. This approach is therefore intractable for multi-batch and multi-layer networks.
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_7", "@cite_8", "@cite_32", "@cite_34", "@cite_6", "@cite_40", "@cite_45", "@cite_49", "@cite_2", "@cite_16", "@cite_51" ], "mid": [ "2963540914", "2963470893", "2788771790", "2298992465", "2607037079", "2964242925", "2930206194", "2964287360", "", "2910792243", "2949999304", "2963373786", "2604176797" ], "abstract": [ "Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feedforward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. 
Code, demo and models are available at: https: github.com JiahuiYu generative_inpainting.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Synthesized medical images have several important applications, e.g., as an intermedium in cross-modality image registration and as supplementary training samples to boost the generalization capability of a classifier. Especially, synthesized computed tomography (CT) data can provide X-ray attenuation map for radiation therapy planning. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic looking 3D images using unpaired training data, 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis and 3) improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 3D convolutional neural network (CNN) composed of mutually-beneficial generators and segmentors for image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and also a shape-consistency loss, which is supervised by segmentors, to reduce the geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from generators in an online manner. Generators and segmentors prompt each other alternatively in an end-to-end training fashion. With extensive experiments on a dataset including a total of 4,496 CT and magnetic resonance imaging (MRI) cardiovascular volumes, we show both tasks are beneficial to each other and coupling these two tasks results in better performance than solving them exclusively.", "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. 
However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "How do we learn an object detector that is invariant to occlusions and deformations? Our current solution is to use a data-driven strategy – collect large-scale datasets which have object instances under different conditions. The hope is that the final classifier can use these examples to learn invariances. But is it really possible to see all the occlusions in a dataset? We argue that like categories, occlusions and object deformations also follow a long-tail. Some occlusions and deformations are so rare that they hardly happen, yet we want to learn a model invariant to such occurrences. In this paper, we propose an alternative solution. We propose to learn an adversarial network that generates examples with occlusions and deformations. The goal of the adversary is to generate examples that are difficult for the object detector to classify. In our framework both the original detector and adversary are learned in a joint manner. 
Our experimental results indicate a 2.3% mAP boost on VOC07 and a 2.6% mAP boost on the VOC2012 object detection challenge compared to the Fast-RCNN pipeline.", "The tracking-by-detection framework consists of two stages, i.e., drawing samples around the target object in the first stage and classifying each sample as the target object or as background in the second stage. The performance of existing trackers using deep classification networks is limited by two aspects. First, the positive samples in each frame are highly spatially overlapped, and they fail to capture rich appearance variations. Second, there exists extreme class imbalance between positive and negative samples. This paper presents the VITAL algorithm to address these two problems via adversarial learning. To augment positive samples, we use a generative network to randomly generate masks, which are applied to adaptively dropout input features to capture a variety of appearance changes. With the use of adversarial learning, our network identifies the mask that maintains the most robust features of the target objects over a long temporal span. In addition, to handle the issue of class imbalance, we propose a high-order cost sensitive loss to decrease the effect of easy negative samples to facilitate training the classification network. Extensive experiments on benchmark datasets demonstrate that the proposed tracker performs favorably against state-of-the-art approaches.", "Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with excellent reconstruction quality and generalization ability. The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation.
We also perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMMs). Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity. To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimal matching between sets of point clouds.", "Image-to-image translation tasks have been widely investigated with Generative Adversarial Networks (GANs) and dual learning. However, existing models lack the ability to control the translated results in the target domain and their results usually lack diversity in the sense that a fixed image usually leads to (almost) deterministic translation results. In this paper, we study a new problem, conditional image-to-image translation, which is to translate an image from the source domain to the target domain conditioned on a given image in the target domain. It requires that the generated image should inherit some domain-specific features of the conditional image from the target domain. Therefore, changing the conditional image in the target domain will lead to diverse translation results for a fixed input image from the source domain, and therefore the conditional input image helps to control the translation results. We tackle this problem with unpaired data based on GANs and dual learning. We twist two conditional translation models (one translation from A domain to B domain, and the other one from B domain to A domain) together for inputs combination and reconstruction while preserving domain independent features. We carry out experiments on men's faces from-to women's faces translation and edges to shoes&bags translations.
The results demonstrate the effectiveness of our proposed method.", "", "", "Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.", "We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.", "Objects often occlude each other in scenes; inferring their appearance beyond their visible parts plays an important role in scene understanding, depth estimation, object interaction and manipulation. In this paper, we study the challenging problem of completing the appearance of occluded objects. Doing so requires knowing which pixels to paint (segmenting the invisible parts of objects) and what color to paint them (generating the invisible parts).
Our proposed novel solution, SeGAN, jointly optimizes for both segmentation and generation of the invisible parts of objects. Our experimental results show that: (a) SeGAN can learn to generate the appearance of the occluded parts of objects; (b) SeGAN outperforms state-of-the-art segmentation baselines for the invisible parts of objects; (c) trained on synthetic photo realistic images, SeGAN can reliably segment natural images; (d) by reasoning about occluder-occludee relations, our method can infer depth layering." ] }
1905.06292
2945795706
In this paper, we propose a novel generative adversarial network (GAN) for 3D point clouds generation, which is called tree-GAN. To achieve state-of-the-art performance for multi-class 3D point cloud generation, a tree-structured graph convolution network (TreeGCN) is introduced as a generator for tree-GAN. Because TreeGCN performs graph convolutions within a tree, it can use ancestor information to boost the representation power for features. To evaluate GANs for 3D point clouds accurately, we develop a novel evaluation metric called Frechet point cloud distance (FPD). Experimental results demonstrate that the proposed tree-GAN outperforms state-of-the-art GANs in terms of both conventional metrics and FPD, and can generate point clouds for different semantic parts without prior knowledge.
There have been several attempts to represent convolutional neural networks or long short-term memory using tree structures @cite_43 @cite_27 @cite_31 @cite_30 @cite_37 . However, to the best of our knowledge, no previous methods have used tree structures for either graph convolutions or GANs. For example, Gadelha et al. @cite_17 used tree-structured networks to generate 3D point clouds via a variational autoencoder (VAE). However, this method needed the assumption that inputs are 1D-ordered lists of points obtained by space-partitioning algorithms such as the K-dimensional tree and random projection tree @cite_28 . Thus, it required additional preprocessing steps for valid implementation. Because its network only comprised 1D convolution layers, the method could not extract meaningful information from unordered 3D point clouds. In contrast, the proposed tree-GAN can not only deal with unordered points, but also extract semantic parts of objects.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_28", "@cite_43", "@cite_27", "@cite_31", "@cite_17" ], "mid": [ "2513005088", "2787358858", "2118123209", "2808534336", "2741815235", "2218408410", "2850910281" ], "abstract": [ "We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques in challenging datasets such as online tracking benchmark and visual object tracking challenge.", "In recent years, Convolutional Neural Networks (CNNs) have shown remarkable performance in many computer vision tasks such as object recognition and detection. However, complex training issues, such as \"catastrophic forgetting\" and hyper-parameter tuning, make incremental learning in CNNs a difficult challenge. In this paper, we propose a hierarchical deep neural network, with CNNs at multiple levels, and a corresponding training method for lifelong learning. The network grows in a tree-like manner to accommodate the new classes of data without losing the ability to identify the previously trained classes. 
The proposed network was tested on the CIFAR-10 and CIFAR-100 datasets, and compared against the method of fine-tuning specific layers of a conventional CNN. We obtained comparable accuracies and achieved 40% and 20% reductions in training effort on CIFAR-10 and CIFAR-100, respectively. The network was able to organize the incoming classes of data into feature-driven super-classes. Our model improves upon existing hierarchical CNN models by adding the capability of self-growth and also yields important observations on feature selective classification.", "We present a simple variant of the k-d tree which automatically adapts to intrinsic low dimensional structure in data without having to explicitly learn this structure.", "", "", "Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.", "We present multiresolution tree-structured networks to process point clouds for 3D shape understanding and generation tasks.
Our network represents a 3D shape as a set of locality-preserving 1D ordered list of points at multiple resolutions. This allows efficient feed-forward processing through 1D convolutions, coarse-to-fine analysis through a multi-grid architecture, and it leads to faster convergence and small memory footprint during training. The proposed tree-structured encoders can be used to classify shapes and outperform existing point-based architectures on shape classification benchmarks, while tree-structured decoders can be used for generating point clouds directly and they outperform existing approaches for image-to-shape inference tasks learned using the ShapeNet dataset. Our model also allows unsupervised learning of point-cloud based shapes by using a variational autoencoder, leading to higher-quality generated shapes." ] }
1905.06517
2950367101
Authentication is a task aiming to confirm the truth between data instances and personal identities. Typical authentication applications include face recognition, person re-identification, authentication based on mobile devices and so on. The recently-emerging data-driven authentication process may encounter undesired biases, i.e., the models are often trained in one domain (e.g., for people wearing spring outfits) while required to apply in other domains (e.g., they change the clothes to summer outfits). To address this issue, we propose a novel two-stage method that disentangles the class identity from domain-differences, and we consider multiple types of domain-difference. In the first stage, we learn disentangled representations by a one-versus-rest disentangle learning (OVRDL) mechanism. In the second stage, we improve the disentanglement by an additive adversarial learning (AAL) mechanism. Moreover, we discuss the necessity to avoid a learning dilemma due to disentangling causally related types of domain-difference. Comprehensive evaluation results demonstrate the effectiveness and superiority of the proposed method.
The first family eliminates marginal-distribution differences between domains. This family of methods includes Transfer Component Analysis (TCA) @cite_38 , Deep Adaptation Network (DAN) @cite_6 , Reversing Gradient (RevGrad) @cite_19 , Adversarial Discriminative Domain Adaptation (ADDA) @cite_10 , among others. The FML methods proposed by Goel et al. @cite_36 and Zhang et al. @cite_1 also fall into this category. Many FML methods adopt RevGrad, such as those proposed by Wadsworth et al. @cite_40 and Beutel et al. @cite_34 .
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_1", "@cite_6", "@cite_19", "@cite_40", "@cite_34", "@cite_10" ], "mid": [ "2115403315", "2788284633", "2784397426", "2951670162", "1882958252", "2810290439", "2725155646", "2593768305" ], "abstract": [ "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally lead to out-of-sample generalization.
The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "We introduce a novel technique to achieve non-discrimination in machine learning without sacrificing convexity and probabilistic interpretation. We also propose a new notion of fairness for machine learning called the weighted proportional fairness and show that our technique satisfies this subjective fairness criterion.", "Machine learning is a tool for building models that accurately represent input training data. When undesired biases concerning demographic groups are in the training data, well-trained models will reflect those biases. We present a framework for mitigating such biases by including a variable for the group of interest and simultaneously learning a predictor and an adversary. The input to the network X, here text or census data, produces a prediction Y, such as an analogy completion or income bracket, while the adversary tries to model a protected variable Z, here gender or zip code. The objective is to maximize the predictor's ability to predict Y while minimizing the adversary's ability to predict Z. Applied to analogy completion, this method results in accurate predictions that exhibit less evidence of stereotyping Z. When applied to a classification task using the UCI Adult (Census) Dataset, it results in a predictive model that does not lose much accuracy while achieving very close to equality of odds (Hardt, et al, 2016). The method is flexible and applicable to multiple definitions of fairness as well as a wide range of gradient-based learning models, including both regression and classification tasks.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. 
However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. 
We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "Recidivism prediction scores are used across the USA to determine sentencing and supervision for hundreds of thousands of inmates. One such generator of recidivism prediction scores is Northpointe's Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) score, used in states like California and Florida, which past research has shown to be biased against black inmates according to certain measures of fairness. To counteract this racial bias, we present an adversarially-trained neural network that predicts recidivism and is trained to remove racial bias. When comparing the results of our model to COMPAS, we gain predictive accuracy and get closer to achieving two out of three measures of fairness: parity and equality of odds. Our model can be generalized to any prediction and demographic. This piece of research contributes an example of scientific replication and simplification in a high-stakes real-world application like recidivism prediction.", "How can we learn a classifier that is \"fair\" for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. 
For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training effects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary's notion of fairness.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. 
We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task." ] }
1905.06517
2950367101
Authentication is a task aiming to confirm the truth between data instances and personal identities. Typical authentication applications include face recognition, person re-identification, authentication based on mobile devices and so on. The recently-emerging data-driven authentication process may encounter undesired biases, i.e., the models are often trained in one domain (e.g., for people wearing spring outfits) while required to apply in other domains (e.g., they change the clothes to summer outfits). To address this issue, we propose a novel two-stage method that disentangles the class identity from domain-differences, and we consider multiple types of domain-difference. In the first stage, we learn disentangled representations by a one-versus-rest disentangle learning (OVRDL) mechanism. In the second stage, we improve the disentanglement by an additive adversarial learning (AAL) mechanism. Moreover, we discuss the necessity to avoid a learning dilemma due to disentangling causally related types of domain-difference. Comprehensive evaluation results demonstrate the effectiveness and superiority of the proposed method.
The second family generates data samples associated with unseen @math class, domain @math combinations, such as ELEGANT @cite_30 , DNA-GAN @cite_25 , Multi-Level Variational Autoencoder (ML-VAE) @cite_11 , CausalGAN @cite_17 , ResGAN @cite_20 , SaGAN @cite_31 , among others. The FML methods Fairness GAN @cite_13 and FairGAN @cite_22 also fall into this category. These methods generate synthetic data, and ordinary models can then be trained on both the real and the generated data.
{ "cite_N": [ "@cite_30", "@cite_11", "@cite_22", "@cite_31", "@cite_13", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2964118024", "2620364083", "2805173664", "2895743108", "2803475667", "2963008857", "2579578355", "2752819974" ], "abstract": [ "Recent studies on face attribute transfer have achieved great success. A lot of models are able to transfer face attributes with an input image. However, they suffer from three limitations: (1) incapability of generating image by exemplars; (2) being unable to transfer multiple face attributes simultaneously; (3) low quality of generated images, such as low-resolution or artifacts. To address these limitations, we propose a novel model which receives two images of opposite attributes as inputs. Our model can transfer exactly the same type of attributes from one image to another by exchanging certain part of their encodings. All the attributes are encoded in a disentangled manner in the latent space, which enables us to manipulate several attributes simultaneously. Besides, our model learns the residual images so as to facilitate training on higher resolution images. With the help of multi-scale discriminators for adversarial training, it can even generate high-quality images with finer details and less artifacts. We demonstrate the effectiveness of our model on overcoming the above three limitations by comparing with other methods on the CelebA face database. A pytorch implementation is available at https: github.com Prinsphield ELEGANT.", "We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control. Specifically, we want to use minimal supervision to learn a latent representation that reflects the semantics behind a specific grouping of the data, where within a group the samples share a common factor of variation. For example, consider a collection of face images grouped by identity. 
We wish to anchor the semantics of the grouping into a relevant and disentangled representation that we can easily exploit. However, existing deep probabilistic models often assume that the observations are independent and identically distributed. We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations. The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference. Quantitative and qualitative evaluations show that the ML-VAE model (i) learns a semantically meaningful disentanglement of grouped data, (ii) enables manipulation of the latent representation, and (iii) generalises to unseen groups.", "Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data and also preserving good data utility. Compared with the naive fair data generation models, FairGAN further ensures the classifiers which are trained on generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN.", "Face attribute editing aims at editing the face image with the given attribute. Most existing works employ Generative Adversarial Network (GAN) to operate face attribute editing. However, these methods inevitably change the attribute-irrelevant regions, as shown in Fig. 1. 
Therefore, we introduce the spatial attention mechanism into GAN framework (referred to as SaGAN), to only alter the attribute-specific region and keep the rest unchanged. Our approach SaGAN consists of a generator and a discriminator. The generator contains an attribute manipulation network (AMN) to edit the face image, and a spatial attention network (SAN) to localize the attribute-specific region which restricts the alternation of AMN within this region. The discriminator endeavors to distinguish the generated images from the real ones, and classify the face attribute. Experiments demonstrate that our approach can achieve promising visual results, and keep those attribute-irrelevant regions unchanged. Besides, our approach can benefit the face recognition by data augmentation.", "In this paper, we introduce the Fairness GAN, an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in allocative decision making. We propose a novel auxiliary classifier GAN that strives for demographic parity or equality of opportunity and show empirical results on several datasets, including the CelebFaces Attributes (CelebA) dataset, the Quick, Draw! dataset, and a dataset of soccer player images and the offenses they were called for. The proposed formulation is well-suited to absorbing unlabeled data; we leverage this to augment the soccer dataset with the much larger CelebA dataset. The methodology tends to improve demographic parity and equality of opportunity while generating plausible images.", "Disentangling factors of variation has always been a challenging problem in representation learning. Existing algorithms suffer from many limitations, such as unpredictable disentangling factors, bad quality of generated images from encodings, lack of identity information, etc. In this paper, we proposed a supervised algorithm called DNA-GAN trying to disentangle different attributes of images. 
The latent representations of images are DNA-like, in which each individual piece represents an independent factor of variation. By annihilating the recessive piece and swapping a certain piece of two latent representations, we obtain another two different representations which could be decoded into images. In order to obtain realistic images and also disentangled representations, we introduced the discriminator for adversarial training. Experiments on Multi-PIE and CelebA datasets demonstrate the effectiveness of our method and the advantage of overcoming limitations existing in other methods.", "Face attributes are interesting due to their detailed description of human faces. Unlike prior researches working on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation which aims at modifying a face image according to a given attribute value. Instead of manipulating the whole image, we propose to learn the corresponding residual image defined as the difference between images before and after the manipulation. In this way, the manipulation can be operated efficiently with modest pixel modification. The framework of our approach is based on the Generative Adversarial Network. It consists of two image transformation networks and a discriminative network. The transformation networks are responsible for the attribute manipulation and its dual operation and the discriminative network is used to distinguish the generated images from real images. We also apply dual learning to allow transformation networks to learn from each other. Experiments show that residual images can be effectively learned and used for attribute manipulations. The generated images remain most of the details in attribute-irrelevant areas.", "We propose an adversarial training procedure for learning a causal implicit generative model for a given causal graph. 
We show that adversarial training can be used to learn a generative model with true observational and interventional distributions if the generator architecture is consistent with the given causal graph. We consider the application of generating faces based on given binary labels where the dependency structure between the labels is preserved with a causal graph. This problem can be seen as learning a causal implicit generative model for the image and labels. We devise a two-stage procedure for this problem. First we train a causal implicit generative model over binary labels using a neural network consistent with a causal graph as the generator. We empirically show that WassersteinGAN can be used to output discrete labels. Later, we propose two new conditional GAN architectures, which we call CausalGAN and CausalBEGAN. We show that the optimal generator of the CausalGAN, given the labels, samples from the image distributions conditioned on these labels. The conditional GAN combined with a trained causal implicit generative model for the labels is then a causal implicit generative model over the labels and the generated image. We show that the proposed architectures can be used to sample from observational and interventional image distributions, even for interventions which do not naturally occur in the dataset." ] }
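The SaGAN abstract above restricts attribute edits to a region selected by a spatial attention network, leaving the rest of the face unchanged. A minimal sketch of that masked-compositing idea, using random arrays as stand-ins for images (the shapes, the rectangular mask, and the function name are hypothetical, chosen only for illustration):

```python
import numpy as np

# Core compositing rule described in the SaGAN abstract:
#   output = mask * edited + (1 - mask) * original
# so alterations are confined to where the attention mask is active.

def composite(original, edited, mask):
    """Blend an edited image into the original only inside the masked region."""
    return mask * edited + (1.0 - mask) * original

rng = np.random.default_rng(0)
original = rng.random((8, 8))   # stand-in for the input face
edited = rng.random((8, 8))     # stand-in for the manipulated face

mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0            # hypothetical attribute-specific region

out = composite(original, edited, mask)
# Pixels outside the mask are exactly the original ones:
assert np.allclose(out[mask == 0], original[mask == 0])
```

In the actual model the mask is produced by the spatial attention network rather than fixed by hand; the point here is only the compositing step that keeps attribute-irrelevant regions untouched.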
1905.06517
2950367101
Authentication is a task aiming to confirm the truth between data instances and personal identities. Typical authentication applications include face recognition, person re-identification, authentication based on mobile devices and so on. The recently-emerging data-driven authentication process may encounter undesired biases, i.e., the models are often trained in one domain (e.g., for people wearing spring outfits) while required to apply in other domains (e.g., they change the clothes to summer outfits). To address this issue, we propose a novel two-stage method that disentangles the class identity from domain-differences, and we consider multiple types of domain-difference. In the first stage, we learn disentangled representations by a one-versus-rest disentangle learning (OVRDL) mechanism. In the second stage, we improve the disentanglement by an additive adversarial learning (AAL) mechanism. Moreover, we discuss the necessity to avoid a learning dilemma due to disentangling causally related types of domain-difference. Comprehensive evaluation results demonstrate the effectiveness and superiority of the proposed method.
The third family performs both marginal-distribution-difference elimination and synthetic-data generation, such as Cross-Domain Representation Disentangler (CDRD) @cite_2 , Synthesized Examples for Generalized Zero-Shot Learning (SE-GZSL) @cite_26 , Disentangled Synthesis for Domain Adaptation (DiDA) @cite_7 , and Attribute-Based Synthetic Network (ABS-Net) @cite_14 , among others. Madras et al. @cite_8 proposed such a FML framework.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_7", "@cite_8", "@cite_2" ], "mid": [ "2603389765", "2963545832", "2804532943", "2788416960", "2951380757" ], "abstract": [ "Abstract In large-scale visual recognition tasks, researchers are usually faced with some challenging problems, such as the extreme imbalance in the number of training data between classes or the lack of annotated data for some classes. In this paper, we propose a novel neural network architecture that automatically synthesizes pseudo feature representations for the classes in lack of annotated images. With the supply of semantic attributes for classes, the proposed Attribute-Based Synthetic Network (ABS-Net) can be applied to zero-shot learning (ZSL) scenario and conventional supervised learning (CSL) scenario as well. For ZSL tasks, the pseudo feature representations can be viewed as annotated feature-level instances for novel concepts, which facilitates the construction of unseen class predictor. For CSL tasks, the pseudo feature representations can be viewed as products of data augmentation on training set, which enriches the interpretation capacity of CSL systems. We demonstrate the effectiveness of the proposed ABS-Net in ZSL and CSL settings on a synthetic colored MNIST dataset (C-MNIST). For several popular ZSL benchmark datasets, our architecture also shows competitive results on zero-shot recognition task, especially leading to tremendous improvement to state-of-the-art mAP on zero-shot retrieval task.", "We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen unseen classes, given their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification model. 
One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of experiments, we show that our model outperforms several state-of-the-art methods, on several benchmark datasets, for both standard as well as generalized zero-shot learning.", "Unsupervised domain adaptation aims at learning a shared model for two related, but not identical, domains by leveraging supervision from a source domain to an unsupervised target domain. A number of effective domain adaptation approaches rely on the ability to extract discriminative, yet domain-invariant, latent factors which are common to both domains. Extracting latent commonality is also useful for disentanglement analysis, enabling separation between the common and the domain-specific features of both domains. In this paper, we present a method for boosting domain adaptation performance by leveraging disentanglement analysis. The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize more target domain data with supervision, thereby boosting the domain adaptation performance. Better common feature extraction, in turn, helps further improve the disentanglement analysis and disentangled synthesis. We show that iterating between domain adaptation and disentanglement analysis can consistently improve each other on several unsupervised domain adaptation tasks, for various domain adaptation backbone models.", "In this work, we advocate for representation learning as the key to mitigating unfair prediction outcomes downstream. 
We envision a scenario where learned representations may be handed off to other entities with unknown objectives. We propose and explore adversarial representation learning as a natural method of ensuring those entities will act fairly, and connect group fairness (demographic parity, equalized odds, and equal opportunity) to different adversarial objectives. Through worst-case theoretical guarantees and experimental validation, we show that the choice of this objective is crucial to fair prediction. Furthermore, we present the first in-depth experimental demonstration of fair transfer learning, by showing that our learned representations admit fair predictions on new tasks while maintaining utility, an essential goal of fair representation learning.", "While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further results in such features so that particular image attributes can be identified and manipulated. However, one cannot easily address this task without observing ground truth annotation for the training data. To address this problem, we propose a novel deep learning model of Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly. Thus, cross-domain joint feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we further confirm that our model can be applied for solving classification tasks of unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods." ] }
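Several of the works above (the Fairness GAN, and the adversarial representation learning of Madras et al.) optimize group-fairness criteria such as demographic parity. As a minimal illustration of what that criterion measures, the sketch below computes the demographic-parity gap on hypothetical binary predictions; the function name and the data are made up for this example:

```python
# Demographic parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|, i.e. the
# difference in positive-prediction rates between the two protected groups.

def demographic_parity_gap(y_pred, protected):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    g0 = [y for y, a in zip(y_pred, protected) if a == 0]
    g1 = [y for y, a in zip(y_pred, protected) if a == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]   # hypothetical classifier outputs
protected = [0, 0, 0, 0, 1, 1, 1, 1]   # hypothetical group membership a in {0, 1}

print(demographic_parity_gap(y_pred, protected))  # 0.5: rates are 3/4 vs 1/4
```

A perfectly "demographically fair" predictor in this sense has a gap of 0; the adversarial objectives in the cited works push learned representations toward that regime.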
1905.06517
2950367101
Authentication is a task aiming to confirm the truth between data instances and personal identities. Typical authentication applications include face recognition, person re-identification, authentication based on mobile devices and so on. The recently-emerging data-driven authentication process may encounter undesired biases, i.e., the models are often trained in one domain (e.g., for people wearing spring outfits) while required to apply in other domains (e.g., they change the clothes to summer outfits). To address this issue, we propose a novel two-stage method that disentangles the class identity from domain-differences, and we consider multiple types of domain-difference. In the first stage, we learn disentangled representations by a one-versus-rest disentangle learning (OVRDL) mechanism. In the second stage, we improve the disentanglement by an additive adversarial learning (AAL) mechanism. Moreover, we discuss the necessity to avoid a learning dilemma due to disentangling causally related types of domain-difference. Comprehensive evaluation results demonstrate the effectiveness and superiority of the proposed method.
Such a phenomenon of grouped classes was also discussed by Bouchacourt et al. @cite_11 and Zhao @cite_35 . However, they did not provide learning methods to eliminate domain-differences. It was also discussed by Heinze-Deml and Meinshausen @cite_4 . However, they assumed classes with various domains are already included in the training data. Yu et al. @cite_27 also discussed the setting where classes were not necessarily shared by multiple source domains. However, their method assumes that all the @math class, domain @math combinations are included in the training data set.
{ "cite_N": [ "@cite_35", "@cite_27", "@cite_4", "@cite_11" ], "mid": [ "2888744714", "2889828381", "2771374750", "2620364083" ], "abstract": [ "In social network analysis, the observed data is usually some social behavior, such as the formation of groups, rather than an explicit network structure. Zhao and Weko (2017) propose a model-based approach called the hub model to infer implicit networks from grouped observations. The hub model assumes independence between groups, which sometimes is not valid in practice. In this article, we generalize the idea of the hub model into the case of grouped observations with temporal dependence. As in the hub model, we assume that the group at each time point is gathered by one leader. Unlike in the hub model, the group leaders are not sampled independently but follow a Markov chain, and other members in adjacent groups can also be correlated. An expectation-maximization (EM) algorithm is developed for this model and a polynomial-time algorithm is proposed for the E-step. The performance of the new model is evaluated under different simulation settings. We apply this model to a data set of the Kibale Chimpanzee Project.", "Unsupervised domain adaptation (UDA) aims to learn the unlabeled target domain by transferring the knowledge of the labeled source domain. To date, most of the existing works focus on the scenario of one source domain and one target domain (1S1T), and just a few works concern the scenario of multiple source domains and one target domain (mS1T). While, to the best of our knowledge, almost no work concerns the scenario of one source domain and multiple target domains (1SmT), in which these unlabeled target domains may not necessarily share the same categories, therefore, contrasting to mS1T, 1SmT is more challenging. Accordingly, for such a new UDA scenario, we propose a UDA framework through the model parameter adaptation (PA-1SmT). 
A key ingredient of PA-1SmT is to transfer knowledge through adaptive learning of a common model parameter dictionary, which is completely different from existing popular methods for UDA, such as subspace alignment, distribution matching etc., and can also be directly used for DA of privacy protection due to the fact that the knowledge is transferred just via the model parameters rather than data itself. Finally, our experimental results on three domain adaptation benchmark datasets demonstrate the superiority of our framework.", "When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of (2016), we can divide features broadly into the classes of (i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and (ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains. These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons. We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable. We might know, for example, that two images show the same person, with ID referring to the identity of the person. In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image. 
The method requires only a small fraction of images to have an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of (2016). However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable. If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features. Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture. We show links to questions of interpretability, fairness and transfer learning.", "We would like to learn a representation of the data which decomposes an observation into factors of variation which we can independently control. Specifically, we want to use minimal supervision to learn a latent representation that reflects the semantics behind a specific grouping of the data, where within a group the samples share a common factor of variation. For example, consider a collection of face images grouped by identity. We wish to anchor the semantics of the grouping into a relevant and disentangled representation that we can easily exploit. However, existing deep probabilistic models often assume that the observations are independent and identically distributed. We present the Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations. 
The ML-VAE separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference. Quantitative and qualitative evaluations show that the ML-VAE model (i) learns a semantically meaningful disentanglement of grouped data, (ii) enables manipulation of the latent representation, and (iii) generalises to unseen groups." ] }
1905.06435
2944859147
Existing methods for reducing the computational burden of neural networks at run-time, such as parameter pruning or dynamic computational path selection, focus solely on improving computational efficiency during inference. On the other hand, in this work, we propose a novel method which reduces the memory footprint and number of computing operations required for training and inference. Our framework efficiently integrates pruning as part of the training procedure by exploring and tracking the relative importance of convolutional channels. At each training step, we select only a subset of highly salient channels to execute according to the combinatorial upper confidence bound algorithm, and run a forward and backward pass only on these activated channels, hence learning their parameters. Consequently, we enable the efficient discovery of compact models. We validate our approach empirically on state-of-the-art CNNs - VGGNet, ResNet and DenseNet, and on several image classification datasets. Results demonstrate our framework for dynamic channel execution reduces computational cost up to 4x and parameter count up to 9x, thus reducing the memory and computational demands for discovering and training compact neural network models.
Pruning is applied after training an initial network in order to eliminate model parameters, therefore making inference less computationally intensive and decreasing model size and memory footprint. One approach is non-structured pruning, which dates back to Optimal Brain Damage @cite_27 . More recently, Han et al. @cite_20 propose to prune individual weights with small magnitude, and Srinivas and Babu @cite_7 propose to remove redundant neurons iteratively. The issue with non-structured pruning is that it requires specialized hardware @cite_24 . Structured pruning @cite_32 @cite_5 @cite_26 @cite_30 @cite_16 @cite_23 , on the other hand, does not require dedicated libraries or hardware, as it prunes whole filters, channels of convolutional kernels or even layers, based on some importance criteria. Other post-processing techniques to achieve compact network models include knowledge distillation @cite_4 @cite_28 , weight quantization @cite_3 @cite_0 , and low-rank approximation of weights @cite_33 @cite_22 .
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_33", "@cite_7", "@cite_22", "@cite_28", "@cite_32", "@cite_3", "@cite_24", "@cite_0", "@cite_27", "@cite_23", "@cite_5", "@cite_16", "@cite_20" ], "mid": [ "", "", "1821462560", "2963048316", "2964217848", "", "", "2963094099", "2300242332", "2285660444", "", "2114766824", "", "", "", "2963674932" ], "abstract": [ "", "", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Abstract: We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. 
At the second step, this decomposition is used to replace the original convolutional layer with a sequence of four convolutional layers with small kernels. After such replacement, the entire network is fine-tuned on the training data using standard backpropagation process. We evaluate this approach on two CNNs and show that it is competitive with previous approaches, leading to higher obtained CPU speedups at the cost of lower accuracy drops for the smaller of the two networks. Thus, for the 36-class character classification CNN, our approach obtains a 8.5x CPU speedup of the whole network with only minor accuracy drop (1 from 91 to 90 ). For the standard ImageNet architecture (AlexNet), the approach speeds up the second convolution layer by a factor of 4x at the cost of @math increase of the overall top-5 classification error.", "Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Unlike previous works, our pruning method does not require access to any training validation data.", "", "", "Channel pruning is one of the predominant approaches for deep model compression. Existing pruning methods either (i) train from scratch with sparsity constraints on channels, or (ii) minimize the reconstruction error between the pre-trained feature maps and the compressed ones. Both strategies suffer from limitations: the former kind is computationally expensive and difficult to converge, whilst the latter kind optimizes the reconstruction error but ignores the discriminative power of channels. 
To overcome these drawbacks, we investigate a simple-yet-effective method, named discrimination-aware channel pruning (DCP), which seeks to select those channels that really contribute to discriminative power. To this end, we introduce additional losses into the network to increase the discriminative power of intermediate layers. We then propose to select the most discriminative channels for each layer, where both an additional loss and the reconstruction error are considered. Last, we propose a greedy algorithm to make channel selection and parameter optimization in an iterative way. Extensive experiments demonstrate the effectiveness of our method. For example, on ILSVRC-12, our pruned ResNet-50 with 30 reduction of channels even outperforms the original model by 0.39 in top-1 accuracy.", "We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32 ( ) memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58 ( ) faster convolutional operations (in terms of number of the high precision operations) and 32 ( ) memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is the same as the full-precision AlexNet. We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than (16 , ) in top-1 accuracy. 
Our code is available at: http: allenai.org plato xnornet.", "State-of-the-art deep neural networks (DNNs) have hundreds of millions of connections and are both computationally and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources and power budgets. While custom hardware helps the computation, fetching weights from DRAM is two orders of magnitude more expensive than ALU operations, and dominates the required power. Previously proposed 'Deep Compression' makes it possible to fit large DNNs (AlexNet and VGGNet) fully in on-chip SRAM. This compression is achieved by pruning the redundant connections and having multiple connections share the same weight. We propose an energy efficient inference engine (EIE) that performs inference on this compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing. Going from DRAM to SRAM gives EIE 120× energy saving; Exploiting sparsity saves 10×; Weight sharing gives 8×; Skipping zero activations from ReLU saves another 3×. Evaluated on nine DNN benchmarks, EIE is 189× and 13× faster when compared to CPU and GPU implementations of the same DNN without compression. EIE has a processing power of 102 GOPS working directly on a compressed network, corresponding to 3 TOPS on an uncompressed network, and processes FC layers of AlexNet at 1.88×104 frames sec with a power dissipation of only 600mW. It is 24,000× and 3,400× more energy efficient than a CPU and GPU respectively. Compared with DaDianNao, EIE has 2.9×, 19× and 3× better throughput, energy efficiency and area efficiency.", "", "We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and or classification. 
The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.", "", "", "", "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy." ] }
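The prune-small-weights step attributed to Han et al. above can be sketched as magnitude-based pruning of a single weight matrix. This is an illustration only (their full three-step method also retrains the surviving weights); the function name and the sparsity level are chosen for the example:

```python
import numpy as np

# Magnitude-based pruning: zero out the fraction `sparsity` of entries
# with the smallest absolute value, keeping a boolean mask of survivors.

def magnitude_prune(weights, sparsity):
    """Return (pruned weights, survivor mask) after removing small-|w| entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest |w|
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W_pruned, mask = magnitude_prune(W, sparsity=0.75)
print(mask.sum())  # 4 of 16 weights survive
```

In the full pipeline this step alternates with retraining the remaining connections, which is what recovers the original accuracy at high sparsity.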
1905.06307
2945866386
The olfactory system is constantly solving pattern-recognition problems by the creation of a large space to codify odour representations and optimizing their distribution within it. A model of the Olfactory Bulb was developed by Z. Li and J. J. Hopfield Li and Hopfield (1989) based on anatomy and electrophysiology. They used nonlinear simulations observing that the collective behavior produce an oscillatory frequency. Here, we show that the Subthreshold hopf bifurcation is a good candidate for modeling the bulb and the Subthreshold subcritical hopf bifurcation is a good candidate for modeling the olfactory cortex. Network topology analysis of the subcritical regime is presented as a proof of the importance of synapse plasticity for memory functions in the olfactory cortex.
Afterwards, a model of the Olfactory Bulb was developed by Z. Li and J. J. Hopfield @cite_2 based on anatomy and electrophysiology. They used nonlinear simulations, observing that the collective behavior produces an oscillation of 35-60 Hz across the bulb. Almost in parallel with this research, Yong Yao and Walter J. Freeman @cite_1 modeled the dynamics of the olfactory system as coupled nonlinear differential equations in order to understand the role of chaos in its pattern-recognition function. They found that the system maintains a low-dimensional global chaotic attractor that creates a stable state ready to be accessed for pattern recognition.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2066381067", "2050416021" ], "abstract": [ "Abstract This article describes computer simulation of the dynamics of a distributed model of the olfactory system that is aimed at understanding the role of chaos in biological pattern recognition. The model is governed by coupled nonlinear differential equations with many variables and parameters, which allow multiple high-dimensional chaotic states. An appropriate set of the parameters is identified by computer experiments with the guidance of biological measurements, through which this model of the olfactory system maintains a low dimensional global chaotic attractor with multiple “wings.” The central part of the attractor is its basal chaotic activity, which simulates the electroencephalographic (EEG) activity of the olfactory system under zero signal input (exhalation). It provides the system with a ready state so that it is unnecessary for the system to “wake up” from or return to a “dormant” equilibrium state every time that an input is given (by inhalation). Each of the wings may be either a near-limit cycle (a narrow band chaos) or a broad band chaos. The reproducible spatial pattern of each near-limit cycle is determined by a template made in the system. A novel input with no template activates the system to either a nonreproducible near-limit cycle wing or a broad band chaotic wing. Pattern recognition in the system may be considered as the transition from one wing to another, as demonstrated by the computer simulation. The time series of the manifestations of the attractor are EEG-like waveforms with fractal dimensions that reflect which wing the system is placed in by input or lack of input. The computer simulation also shows that the adaptive behavior of the system is scaling invariant, and it is independent of the initial conditions at the transition from one wing to another. 
These properties enable the system to classify an uninterrupted sequence of stimuli.", "The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35---60 Hz modulated activity which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators are responsible for the oscillatory activities. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transform between odor input and the bulbar output to olfactory cortex. There is significant correspondence between the model behavior and observed electrophysiology." ] }
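The oscillatory collective behavior discussed in this record is the signature of a Hopf bifurcation. As a minimal numerical illustration (not a model of the bulb itself), the sketch below integrates the radial part of the supercritical Hopf normal form, dr/dt = mu*r - r^3: above the bifurcation (mu > 0) the radius settles on a limit cycle of amplitude sqrt(mu), while below it the oscillation dies out. All parameters here are illustrative:

```python
# Radial dynamics of the Hopf normal form, dr/dt = mu*r - r**3,
# integrated with forward Euler. For mu > 0 the radius converges to
# sqrt(mu) (a stable limit cycle); for mu < 0 it decays to the fixed point.

def hopf_radius(mu, r0=0.1, dt=1e-3, steps=50_000):
    r = r0
    for _ in range(steps):
        r += dt * (mu * r - r ** 3)
    return r

r_osc = hopf_radius(mu=0.25)    # above the bifurcation: limit cycle
r_rest = hopf_radius(mu=-0.25)  # below it: activity decays away
print(round(r_osc, 3), round(r_rest, 3))  # 0.5 0.0
```

The coupled-oscillator bulb model plays out the same scenario in many dimensions: odor input pushes the system across the bifurcation, switching on a coherent oscillation.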
1905.06365
2944989199
Online knowledge libraries refer to the online data warehouses that systematically organize and categorize the knowledge-based information about different kinds of concepts and entities. In the era of big data, the setup of online knowledge libraries is an extremely challenging and laborious task, in terms of efforts, time and expense required in the completion of knowledge entities. Especially nowadays, a large number of new knowledge entities, like movies, are keeping on being produced and coming out at a continuously accelerating speed, which renders the knowledge library setup and completion problem more difficult to resolve manually. In this paper, we will take the online movie knowledge libraries as an example, and study the "Multiple aligned ISomeric Online Knowledge LIbraries Completion problem" (Miso-Klic) problem across multiple online knowledge libraries. Miso-Klic aims at identifying the missing entities for multiple knowledge libraries synergistically and ranking them for editing based on certain ranking criteria. To solve the problem, a thorough investigation of two isomeric online knowledge libraries, Douban and IMDB, have been carried out in this paper. Based on analyses results, a novel deep online knowledge library completion framework "Integrated Deep alignEd Auto-encoder" (IDEA) is introduced to solve the problem. By projecting the entities from multiple isomeric knowledge libraries to a shared feature space, IDEA solves the Miso-Klic problem via three steps: (1) entity feature space unification via embedding, (2) knowledge library fusion based missing entity identification, and (3) missing entity ranking. Extensive experiments done on the real-world online knowledge library dataset have demonstrated the effectiveness of IDEA in addressing the problem.
From the language processing perspective, some works have been done to group mentions of the same entities across languages. @cite_16 propose to cluster text mentions across different languages that may refer to the same concept. They test their model on an Arabic-English corpus and it yields good performance. @cite_26 study the problem of entity linking, which associates references to entities found in unstructured natural language content with an authoritative inventory of known entities. @cite_7 introduce an efficient way to create a test collection for evaluating the accuracy of cross-language entity linking, and apply the technique to produce the first publicly available multilingual cross-language entity linking collection, which includes approximately 55,000 queries, comprising between 875 and 4,329 queries for each of twenty-one non-English languages.
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_26" ], "mid": [ "2167146316", "1823018422", "" ], "abstract": [ "Standard entity clustering systems commonly rely on mention (string) matching, syntactic features, and linguistic resources like English WordNet. When co-referent text mentions appear in different languages, these techniques cannot be easily applied. Consequently, we develop new methods for clustering text mentions across documents and languages simultaneously, producing cross-lingual entity clusters. Our approach extends standard clustering algorithms with cross-lingual mention and context similarity measures. Crucially, we do not assume a pre-existing entity list (knowledge base), so entity characteristics are unknown. On an Arabic-English corpus that contains seven different text genres, our best model yields a 24.3 F1 gain over the baseline.", "We describe an efficient way to create a test collection for evaluating the accuracy of cross-language entity linking. Queries are created by semiautomatically identifying person names on the English side of a parallel corpus, using judgments obtained through crowdsourcing to identify the entity corresponding to the name, and projecting the English name onto the non-English document using word alignments. We applied the technique to produce the first publicly available multilingual cross-language entity linking collection. The collection includes approximately 55,000 queries, comprising between 875 and 4,329 queries for each of twenty-one non-English languages.", "" ] }
1905.06365
2944989199
Online knowledge libraries refer to the online data warehouses that systematically organize and categorize the knowledge-based information about different kinds of concepts and entities. In the era of big data, the setup of online knowledge libraries is an extremely challenging and laborious task, in terms of efforts, time and expense required in the completion of knowledge entities. Especially nowadays, a large number of new knowledge entities, like movies, are keeping on being produced and coming out at a continuously accelerating speed, which renders the knowledge library setup and completion problem more difficult to resolve manually. In this paper, we will take the online movie knowledge libraries as an example, and study the "Multiple aligned ISomeric Online Knowledge LIbraries Completion problem" (Miso-Klic) problem across multiple online knowledge libraries. Miso-Klic aims at identifying the missing entities for multiple knowledge libraries synergistically and ranking them for editing based on certain ranking criteria. To solve the problem, a thorough investigation of two isomeric online knowledge libraries, Douban and IMDB, have been carried out in this paper. Based on analyses results, a novel deep online knowledge library completion framework "Integrated Deep alignEd Auto-encoder" (IDEA) is introduced to solve the problem. By projecting the entities from multiple isomeric knowledge libraries to a shared feature space, IDEA solves the Miso-Klic problem via three steps: (1) entity feature space unification via embedding, (2) knowledge library fusion based missing entity identification, and (3) missing entity ranking. Extensive experiments done on the real-world online knowledge library dataset have demonstrated the effectiveness of IDEA in addressing the problem.
Other problems closely related to the task studied in this paper include network alignment and network matching. The network alignment problem has been studied by many works, mostly applied to applications in bioinformatics, e.g., protein-protein interaction (PPI) network alignment @cite_2 @cite_10 @cite_27 @cite_20 @cite_3 @cite_4 . Most network alignment approaches focus on finding an approximate isomorphism between two graphs under unsupervised settings, e.g., @cite_8 @cite_20 @cite_3 . Because of the intractability of the problem, existing methods usually rely on practical heuristics to solve the alignment problem @cite_14 @cite_4 . Different from these works, @cite_15 propose to use belief propagation to solve sparse network alignment problems, and Todor proposes a probabilistic biological network alignment algorithm in @cite_22 . Biological networks can be very large, and many scalable alignment methods have been proposed in @cite_23 @cite_17 @cite_5 @cite_12 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_8", "@cite_3", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2107427454", "", "2056903220", "1518994209", "", "", "2077237875", "", "", "2104690989", "", "", "", "" ], "abstract": [ "Background: In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NPhard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results: We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion: Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library.", "", "Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. 
In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method showing that, without sacrificing the running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.", "We describe an algorithm, IsoRank, for global alignment of two protein-protein interaction (PPI) networks. IsoRank aims to maximize the overall match between the two networks; in contrast, much of previous work has focused on the local alignment problem-- identifying many possible alignments, each corresponding to a local region of similarity. IsoRank is guided by the intuition that a protein should be matched with a protein in the other network if and only if the neighbors of the two proteins can also be well matched. We encode this intuition as an eigenvalue problem, in a manner analogous to Google's PageRank method. We use IsoRank to compute the first known global alignment between the S. cerevisiae and D. melanogaster PPI networks. The common subgraph has 1420 edges and describes conserved functional components between the two species. 
Comparisons of our results with those of a well-known algorithm for local network alignment indicate that the globally optimized alignment resolves ambiguity introduced by multiple local alignments. Finally, we interpret the results of global alignment to identify functional orthologs between yeast and fly; our functional ortholog prediction method is much simpler than a recently proposed approach and yet provides results that are more comprehensive.", "", "", "Advances in high-throughput technology has led to an increased amount of available data on protein-protein interaction (PPI) data. Detecting and extracting functional modules that are common across multiple networks is an important step towards understanding the role of functional modules and how they have evolved across species. A global protein-protein interaction network alignment algorithm attempts to find such functional orthologs across multiple networks. In this article, we propose a scalable global network alignment algorithm based on clustering methods and graph matching techniques in order to detect conserved interactions while simultaneously attempting to maximize the sequence similarity of nodes involved in the alignment. We present an algorithm for multiple alignments, in which several protein-protein interaction networks are aligned. We empirically evaluated our algorithm on several real biological datasets. We find that our approach offers a significant benefit both in terms of quality as well as speed over the state-of-the-art.", "", "", "We propose a new distributed algorithm for sparse variants of the network alignment problem, which occurs in a variety of data mining areas including systems biology, database matching, and computer vision. Our algorithm uses a belief propagation heuristic and provides near optimal solutions for this NP-hard combinatorial optimization problem. 
We show that our algorithm is faster and outperforms or ties existing algorithms on synthetic problems, a problem in bioinformatics, and a problem in ontology matching. We also provide a unified framework for studying and comparing all network alignment solvers.", "", "", "", "" ] }
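The IsoRank-style global alignment described in the record above (match two nodes when their neighbors can also be matched, via a PageRank-like eigenvalue iteration) can be sketched as follows. This is a simplified, topology-only illustration: the function name, uniform initialization, and fixed iteration count are assumptions, and the sequence-similarity prior of the original algorithm is omitted.

```python
import numpy as np

def isorank(A1, A2, iters=50):
    """Topology-only IsoRank sketch: power iteration on node-pair similarities.

    A1, A2: adjacency matrices (numpy arrays) of the two networks.
    Returns an (n1, n2) similarity matrix; high entries suggest good matches.
    """
    n1, n2 = len(A1), len(A2)
    # Column-normalize by degree so each node spreads its score over neighbors
    P1 = A1 / np.maximum(A1.sum(axis=0), 1)
    P2 = A2 / np.maximum(A2.sum(axis=0), 1)
    # Start from a uniform similarity distribution over all node pairs
    S = np.full((n1, n2), 1.0 / (n1 * n2))
    for _ in range(iters):
        # S[i, j] <- sum over neighbor pairs (u, v) of S[u, v] / (deg(u) deg(v))
        S = P1 @ S @ P2.T
        S /= S.sum()  # renormalize to keep a probability-like scale
    return S
```

A greedy or Hungarian matching on the resulting similarity matrix would then yield the node alignment; two identical graphs produce a symmetric similarity pattern, as expected.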
1905.06365
2944989199
Online knowledge libraries refer to the online data warehouses that systematically organize and categorize the knowledge-based information about different kinds of concepts and entities. In the era of big data, the setup of online knowledge libraries is an extremely challenging and laborious task, in terms of efforts, time and expense required in the completion of knowledge entities. Especially nowadays, a large number of new knowledge entities, like movies, are keeping on being produced and coming out at a continuously accelerating speed, which renders the knowledge library setup and completion problem more difficult to resolve manually. In this paper, we will take the online movie knowledge libraries as an example, and study the "Multiple aligned ISomeric Online Knowledge LIbraries Completion problem" (Miso-Klic) problem across multiple online knowledge libraries. Miso-Klic aims at identifying the missing entities for multiple knowledge libraries synergistically and ranking them for editing based on certain ranking criteria. To solve the problem, a thorough investigation of two isomeric online knowledge libraries, Douban and IMDB, have been carried out in this paper. Based on analyses results, a novel deep online knowledge library completion framework "Integrated Deep alignEd Auto-encoder" (IDEA) is introduced to solve the problem. By projecting the entities from multiple isomeric knowledge libraries to a shared feature space, IDEA solves the Miso-Klic problem via three steps: (1) entity feature space unification via embedding, (2) knowledge library fusion based missing entity identification, and (3) missing entity ranking. Extensive experiments done on the real-world online knowledge library dataset have demonstrated the effectiveness of IDEA in addressing the problem.
Meanwhile, in recent years, several works have studied aligning social networks @cite_0 @cite_30 @cite_9 . To prune the predicted anchor links in network alignment problems, stable matching was first applied by the PI in @cite_0 , under the assumption that if a person has multiple accounts in a social network, these accounts can be identified and consolidated into a single account through a preprocessing step @cite_24 . The college admission problem @cite_11 and the stable marriage problem @cite_18 have been studied for many years, with much of the work done in the last century. More recently, @cite_25 analyze the stability of the equilibrium outcomes in the admission games induced by stable matching rules. Meanwhile, in college admission problems, only the small core in Nash equilibrium plays an important role, as analyzed in @cite_29 . @cite_6 study almost stable matchings obtained by truncating the Gale-Shapley algorithm. In this proposal, we will address the problem by extending traditional stable matching to a generalized setting that identifies non-anchor users and prunes the redundant anchor links connected to them.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_9", "@cite_29", "@cite_6", "@cite_0", "@cite_24", "@cite_25", "@cite_11" ], "mid": [ "", "2091887239", "", "1502787658", "2004854811", "2047532797", "1982471600", "1971380014", "1985899049" ], "abstract": [ "", "This book probes the stable marriage problem and its variants as a rich source of problems and ideas that illustrate both the design and analysis of efficient algorithms. It covers the most recent structural and algorithmic work on stable matching problems, simplifies and unifies many earlier proofs, strengthens several earlier results, and presents new results and more efficient algorithms.The authors develop the structure of the set of stable matchings in the stable marriage problem in a more general and algebraic context than has been done previously; they discuss the problem's structure in terms of rings of sets, which allows many of the most useful features to be seen as features of a more general set of problems. The relationship between the structure of the stable marriage problem and the more general stable roommates problem is demonstrated, revealing many commonalities.The results the authors obtain provide an algorithmic response to the practical, and political, problems created by the asymmetry inherent in the Gale Shapley solutions, leading to alternative methods and better compromises than are provided by the Gale Shapley method. And, in contrast to Donald Knuth's earlier work which primarily focused on the application of mathematics to the analysis of algorithms, this book illustrates the productive and almost inseparable relationship between mathematical insight and the design of efficient algorithms.Dan Gusfield is Associate Professor of Computer Science at the University of California, Davis. Robert W. Irving is Senior Lecturer in Computing Science at the University of Glasgow. 
The Stable Marriage Problem is included in the Foundations of Computing Series, edited by Michael Garey and Albert Meyer.", "", "Both rematching proof and strong equilibrium outcomes are stable with respect to the true preferences in the marriage problem. We show that not all rematching proof or strong equilibrium outcomes are stable in the college admissions problem. But we show that both rematching proof and strong equilibrium outcomes in truncations at the match point are all stable in the college admissions problem. Further, all true stable matchings can be achieved in both rematching proof and strong equilibrium in truncations at the match point. We show that any Nash equilibrium in truncations admits one and only one matching, stable or not. Therefore, the core at a Nash equilibrium in truncations must be small. But examples exist such that the set of stable matchings with respect to a Nash equilibrium may contain more than one matching. Nevertheless, each Nash equilibrium can only admit at most one true stable matching. If, indeed, there is a true stable matching at a Nash equilibrium, then the only possible equilibrium outcome will be the true stable matching, no matter how different are players' equilibrium strategies from the true preferences and how many other unstable matchings are there at that Nash equilibrium. Thus, we show that a necessary and sufficient condition for the stable matching rule to be implemented in a subset of Nash equilibria by the direct revelation game induced by a stable mechanism is that every Nash equilibrium profile in that subset admits one and only one true stable matching.", "We show that the ratio of matched individuals to blocking pairs grows linearly with the number of propose–accept rounds executed by the Gale–Shapley algorithm for the stable marriage problem. 
Consequently, the participants can arrive at an almost stable matching even without full information about the problem instance; for each participant, knowing only its local neighbourhood is enough. In distributed-systems parlance, this means that if each person has only a constant number of acceptable partners, an almost stable matching emerges after a constant number of synchronous communication rounds. We apply our results to give a distributed (2+e)-approximation algorithm for maximum-weight matching in bicoloured graphs and a centralised randomised constant-time approximation scheme for estimating the size of a stable matching.", "Online social networks can often be represented as heterogeneous information networks containing abundant information about: who, where, when and what. Nowadays, people are usually involved in multiple social networks simultaneously. The multiple accounts of the same user in different networks are mostly isolated from each other without any connection between them. Discovering the correspondence of these accounts across multiple social networks is a crucial prerequisite for many interesting inter-network applications, such as link recommendation and community analysis using information from multiple networks. In this paper, we study the problem of anchor link prediction across multiple heterogeneous social networks, i.e., discovering the correspondence among different accounts of the same user. Unlike most prior work on link prediction and network alignment, we assume that the anchor links are one-to-one relationships (i.e., no two edges share a common endpoint) between the accounts in two social networks, and a small number of anchor links are known beforehand. We propose to extract heterogeneous features from multiple heterogeneous networks for anchor link prediction, including user's social, spatial, temporal and text information. 
Then we formulate the inference problem for anchor links as a stable matching problem between the two sets of user accounts in two different networks. An effective solution, MNA (Multi-Network Anchoring), is derived to infer anchor links w.r.t. the one-to-one constraint. Extensive experiments on two real-world heterogeneous social networks show that our MNA model consistently outperform other commonly-used baselines on anchor link prediction.", "Users' locations are important for many applications such as personalized search and localized content delivery. In this paper, we study the problem of profiling Twitter users' locations with their following network and tweets. We propose a multiple location profiling model (MLP), which has three key features: 1) it formally models how likely a user follows another user given their locations and how likely a user tweets a venue given his location, 2) it fundamentally captures that a user has multiple locations and his following relationships and tweeted venues can be related to any of his locations, and some of them are even noisy, and 3) it novelly utilizes the home locations of some users as partial supervision. As a result, MLP not only discovers users' locations accurately and completely, but also \"explains\" each following relationship by revealing users' true locations in the relationship. Experiments on a large-scale data set demonstrate those advantages. Particularly, 1) for predicting users' home locations, MLP successfully places 62 users and out-performs two state-of-the-art methods by 10 in accuracy, 2) for discovering users' multiple locations, MLP improves the baseline methods by 14 in recall, and 3) for explaining following relationships, MLP achieves 57 accuracy.", "A stable matching rule is used as the outcome function for the Admission game where colleges behave straightforwardly and the students’ strategies are given by their preferences over the colleges. 
We show that the college-optimal stable matching rule implements the set of stable matchings via the Nash equilibrium (NE) concept. For any other stable matching rule the strategic behavior of the students may lead to outcomes that are not stable under the true preferences. We then introduce uncertainty about the matching selected and prove that the natural solution concept is that of NE in the strong sense. A general result shows that the random stable matching rule, as well as any stable matching rule, implements the set of stable matchings via NE in the strong sense. Precise answers are given to the strategic questions raised.", "Abstract Two-sided matching markets of the kind known as the “college admissions problem” have been widely thought to be virtually equivalent to the simpler “marriage problem” for which some striking results concerning agents' preferences and incentives have been recently obtained. It is shown here that some of these results do not generalize to the college admissions problem, contrary to a number of assertions in the recent literature. No stable matching procedure exists that makes it a dominant strategy for colleges to reveal their true preferences, and some outcomes may be preferred by all colleges to the college-optimal stable outcome." ] }
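The Gale-Shapley deferred-acceptance procedure underlying the stable marriage and college admission work cited above can be sketched as follows. This is a minimal one-to-one illustration with complete preference lists; the function name and the tiny example are illustrative, not taken from the cited papers.

```python
from collections import deque

def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: proposers propose in preference order;
    each reviewer tentatively holds the best offer received so far."""
    # rank[r][p] = position of proposer p in reviewer r's list (lower = preferred)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index each p proposes to
    engaged = {}                                  # reviewer -> current proposer
    free = deque(proposer_prefs)
    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                        # r accepts the first offer
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])               # displaced proposer is free again
            engaged[r] = p
        else:
            free.append(p)                        # rejected; p tries next choice
    return {p: r for r, p in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # {'a': 'x', 'b': 'y'}
```

Truncating the while-loop after a fixed number of rounds gives the "almost stable" variant studied in @cite_6 , at the cost of leaving some blocking pairs unresolved.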
1905.06463
2946341686
The purpose of this paper is to determine whether a particular context factor among the variables that a researcher is interested in causally affects the route choice behavior of drivers. To our knowledge, there is limited literature that consider the effects of various factors on route choice based on causal inference.Yet, collecting data sets that are sensitive to the aforementioned factors are challenging and the existing approaches usually take into account only the general factors motivating drivers route choice behavior. To fill these gaps, we carried out a study using Immersive Virtual Environment (IVE) tools to elicit drivers' route choice behavioral data, covering drivers' network familiarity, educationlevel, financial concern, etc, apart from conventional measurement variables. Having context-aware, high-fidelity properties, IVE data affords the opportunity to incorporate the impacts of human related factors into the route choice causal analysis and advance a more customizable research tool for investigating causal factors on path selection in network routing. This causal analysis provides quantitative evidence to support drivers' diversion decision.
A review of the existing research shows that transportation researchers have employed two types of empirical data collection in studying route choice behavior: first, collecting route choice data from observed actual choices, and second, collecting route choice data in hypothetical experiments. In most cases, researchers have used utility-maximizing theory, rooted in econometrics, to explain route choice behavior @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "1979258712" ], "abstract": [ "This book, which is intended as a graduate level text and a general professional reference, presents the methods of discrete choice analysis and their applications in the modeling of transportation systems. The first seven chapters provide a basic introduction to discrete choice analysis that covers the material needed to apply basic binary and multiple choice models. The chapters are as follows: introduction; review of the statistics of model estimation; theories of individual choice behavior; binary choice models; multinomial choice; aggregate forecasting techniques; and tests and practical issues in developing discrete choice models. The rest of the chapters cover more advanced material and culminate in the development of a complete travel demand model system presented in chapter 11. The advanced chapters are as follows: theory of sampling; aggregation and sampling of alternatives; models of multidimensional choice and the nested logit model; and systems of models. The last chapter (12) presents an overview of current research frontiers." ] }
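The utility-maximizing framework cited here is typically operationalized as a multinomial logit discrete choice model, where route i is chosen with probability P(i) = exp(V_i) / sum_j exp(V_j). A minimal sketch, with illustrative (assumed) utility values:

```python
import math

def logit_probs(utilities):
    """Multinomial logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Two routes with systematic utilities V = [-1.0, -1.5] (first route preferred)
print(logit_probs([-1.0, -1.5]))  # probabilities favoring the higher-utility route
```

In practice the systematic utilities V would be estimated from the collected choice data as linear functions of route attributes (travel time, cost, familiarity, etc.).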
1905.06287
2944952555
Bayesian neural network (BNN) priors are defined in parameter space, making it hard to encode prior knowledge expressed in function space. We formulate a prior that incorporates functional constraints about what the output can or cannot be in regions of the input space. Output-Constrained BNNs (OC-BNN) represent an interpretable approach of enforcing a range of constraints, fully consistent with the Bayesian framework and amenable to black-box inference. We demonstrate how OC-BNNs improve model robustness and prevent the prediction of infeasible outputs in two real-world applications of healthcare and robotics.
Most closely related to our work, @cite_5 considered function-space equality and inequality constraints for deep probabilistic models. However, they focused on deep Gaussian processes (DGPs) rather than BNNs, and on low-dimensional data from simulated ODE systems, whereas we consider high-dimensional real-world settings. They also did not consider classification settings.
{ "cite_N": [ "@cite_5" ], "mid": [ "2787411620" ], "abstract": [ "We introduce a novel generative formulation of deep probabilistic models implementing \"soft\" constraints on the dynamics of the functions they can model. In particular we develop a flexible methodological framework where the modeled functions and derivatives of a given order are subject to inequality or equality constraints. We characterize the posterior distribution over model and constraint parameters through stochastic variational inference techniques. As a result, the proposed approach allows for accurate and scalable uncertainty quantification of predictions and parameters. We demonstrate the application of equality constraints in the challenging problem of parameter inference in ordinary differential equation models, while we showcase the application of inequality constraints on monotonic regression on count data. The proposed approach is extensively tested in several experimental settings, leading to highly competitive results in challenging modeling applications, while offering high expressiveness, flexibility and scalability." ] }
1905.06287
2944952555
Bayesian neural network (BNN) priors are defined in parameter space, making it hard to encode prior knowledge expressed in function space. We formulate a prior that incorporates functional constraints about what the output can or cannot be in regions of the input space. Output-Constrained BNNs (OC-BNN) represent an interpretable approach of enforcing a range of constraints, fully consistent with the Bayesian framework and amenable to black-box inference. We demonstrate how OC-BNNs improve model robustness and prevent the prediction of infeasible outputs in two real-world applications of healthcare and robotics.
@cite_9 specify a Gaussian function prior with the goal of preventing overconfident BNN predictions out-of-distribution. In contrast, we use "positive constraints" to guide the function where it should be. Also related are functional BNNs by @cite_0 , where variational inference is performed in function-space using a stochastic process model. Their view is more general---and accordingly, more complex to optimize---while we focus on constraints in specific regions of the input-output space.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2949496227", "2884298516" ], "abstract": [ "Variational Bayesian neural networks (BNNs) perform variational inference over weights, but it is difficult to specify meaningful priors and approximate posteriors in a high-dimensional weight space. We introduce functional variational Bayesian neural networks (fBNNs), which maximize an Evidence Lower BOund (ELBO) defined directly on stochastic processes, i.e. distributions over functions. We prove that the KL divergence between stochastic processes equals the supremum of marginal KL divergences over all finite sets of inputs. Based on this, we introduce a practical training objective which approximates the functional ELBO using finite measurement sets and the spectral Stein gradient estimator. With fBNNs, we can specify priors entailing rich structures, including Gaussian processes and implicit stochastic processes. Empirically, we find fBNNs extrapolate well using various structured priors, provide reliable uncertainty estimates, and scale to large datasets.", "Obtaining reliable uncertainty estimates of neural network predictions is a long standing challenge. Bayesian neural networks have been proposed as a solution, but it remains open how to specify the prior. In particular, the common practice of a standard normal prior in weight space imposes only weak regularities, causing the function posterior to possibly generalize in unforeseen ways on out-of-distribution inputs. We propose noise contrastive priors (NCPs). The key idea is to train the model to output high uncertainty for data points outside of the training distribution. NCPs do so using an input prior, which adds noise to the inputs of the current mini batch, and an output prior, which is a wide distribution given these inputs. NCPs are compatible with any model that represents predictive uncertainty, are easy to scale, and yield reliable uncertainty estimates throughout training. Empirically, we show that NCPs offer clear improvements as an addition to existing baselines. We demonstrate the scalability on the flight delays data set, where we significantly improve upon previously published results." ] }
1905.06407
2945401621
One key task of fine-grained sentiment analysis on reviews is to extract aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using a modified CNN called controlled CNN (Ctrl). The modified CNN has two types of control modules. Through asynchronous parameter updating, it prevents over-fitting and boosts CNN's performance significantly. This model achieves state-of-the-art results on standard aspect extraction datasets. To the best of our knowledge, this is the first paper to apply control modules to aspect extraction.
CNN @cite_30 @cite_15 @cite_32 has recently been adopted for machine translation @cite_42 , named entity recognition @cite_18 @cite_7 @cite_24 @cite_20 , sentiment analysis @cite_9 @cite_31 and aspect extraction @cite_26 . We do not use a plain CNN; instead, we propose control modules to boost the performance of CNN.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_31", "@cite_26", "@cite_7", "@cite_9", "@cite_42", "@cite_32", "@cite_24", "@cite_15", "@cite_20" ], "mid": [ "1536929369", "2120615054", "", "2798431206", "", "2427312199", "2613904329", "2781548983", "2295030615", "2949541494", "2963140597" ], "abstract": [ "From the Publisher: Dramatically updating and extending the first edition, published in 1995, the second edition of The Handbook of Brain Theory and Neural Networks presents the enormous progress made in recent years in the many subfields related to the two great questions: How does the brain work? and, How can we build intelligent machines? Once again, the heart of the book is a set of almost 300 articles covering the whole spectrum of topics in brain theory and neural networks. The first two parts of the book, prepared by Michael Arbib, are designed to help readers orient themselves in this wealth of material. Part I provides general background on brain modeling and on both biological and artificial neural networks. Part II consists of \"Road Maps\" to help readers steer through articles in part III on specific topics of interest. The articles in part III are written so as to be accessible to readers of diverse backgrounds. They are cross-referenced and provide lists of pointers to Road Maps, background material, and related reading. The second edition greatly increases the coverage of models of fundamental neurobiology, cognitive neuroscience, and neural network approaches to language. It contains 287 articles, compared to the 266 in the first edition. Articles on topics from the first edition have been updated by the original authors or written anew by new authors, and there are 106 articles on new topics.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.", "", "One key task of fine-grained sentiment analysis of product reviews is to extract product aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using deep learning. Unlike other highly sophisticated supervised deep learning models, this paper proposes a novel and yet simple CNN model employing two types of pre-trained embeddings for aspect extraction: general-purpose embeddings and domain-specific embeddings. Without using any additional supervision, this model achieves surprisingly good results, outperforming state-of-the-art sophisticated existing methods. To our knowledge, this paper is the first to report such double embeddings based CNN model for aspect extraction and achieve very good results.", "", "In this paper, we present the first deep learning approach to aspect extraction in opinion mining. Aspect extraction is a subtask of sentiment analysis that consists in identifying opinion targets in opinionated text, i.e., in detecting the specific aspects of a product or service the opinion holder is either praising or complaining about. We used a 7-layer deep convolutional neural network to tag each word in opinionated sentences as either aspect or non-aspect word. We also developed a set of linguistic patterns for the same purpose and combined them with the neural network. The resulting ensemble classifier, coupled with a word-embedding model for sentiment analysis, allowed our approach to obtain significantly better accuracy than state-of-the-art methods.", "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "Neural network models with attention mechanism have shown their efficiencies on various tasks. However, there is little research work on attention mechanism for text classification and existing attention model for text classification lacks of cognitive intuition and mathematical explanation. In this paper, we propose a new architecture of neural network based on the attention model for text classification. In particular, we show that the convolutional neural network (CNN) is a reasonable model for extracting attentions from text sequences in mathematics. We then propose a novel attention model base on CNN and introduce a new network architecture which combines recurrent neural network with our CNN-based attention model. Experimental results on five datasets show that our proposed models can accurately capture the salient parts of sentences to improve the performance of text classification.", "State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.", "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "" ] }
1905.06407
2945401621
One key task of fine-grained sentiment analysis on reviews is to extract aspects or features that users have expressed opinions on. This paper focuses on supervised aspect extraction using a modified CNN called controlled CNN (Ctrl). The modified CNN has two types of control modules. Through asynchronous parameter updating, it prevents over-fitting and boosts CNN's performance significantly. This model achieves state-of-the-art results on standard aspect extraction datasets. To the best of our knowledge, this is the first paper to apply control modules to aspect extraction.
DAN @cite_0 solves the incremental learning problem by (1) training a base CNN on the initial task and (2), when a new task is encountered, training square linear transformations of the base CNN layers, so that the base network can be reused for the new task while its performance on the initial task is maintained. Residual networks @cite_6 solve the gradient vanishing problem in very deep neural networks by providing highway connections between CNN layers. We solve neither the incremental transfer learning problem nor the gradient vanishing problem; rather, we perform asynchronous parameter updates to prevent over-fitting and to improve the single task at hand.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2613498939", "2194775991" ], "abstract": [ "Given an existing trained neural network, it is often desirable to learn new capabilities without hindering performance of those already learned. Existing approaches either learn sub-optimal solutions, require joint training, or incur a substantial increment in the number of parameters for each added domain, typically as many as the original network. We propose a method called (DAN) that constrains newly learned filters to be linear combinations of existing ones. DANs precisely preserve performance on the original domain, require a fraction (typically 13%, dependent on network architecture) of the number of parameters compared to standard fine-tuning procedures and converge in less cycles of training to a comparable or better level of performance. When coupled with standard network quantization techniques, we further reduce the parameter cost to around 3% of the original with negligible or no loss in accuracy. The learned architecture can be controlled to switch between various learned representations, enabling a single network to solve a task from multiple different domains. We conduct extensive experiments showing the effectiveness of our method on a range of image classification tasks and explore different aspects of its behavior.", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation." ] }
1905.06361
2945115802
Local Differential Privacy (LDP) is popularly used in practice for privacy-preserving data collection. Although existing LDP protocols offer high data utility for large user populations (100,000 or more users), they perform poorly in scenarios with small user populations (such as those in the cybersecurity domain) and lack perturbation mechanisms that are effective for both ordinal and non-ordinal item sequences while protecting sequence length and content simultaneously. In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy. Our protocols support different types of client data, ranging from ordinal data types in finite metric spaces (numeric malware infection statistics), to non-ordinal items (OS versions, transaction categories), and to sequences of ordinal and non-ordinal items. Extensive experiments are conducted on multiple datasets, including datasets that are an order of magnitude smaller than those used in existing approaches, which show that proposed CLDP protocols yield higher utility compared to existing LDP protocols. Furthermore, case studies with Symantec datasets demonstrate that our protocols outperform existing protocols in key cybersecurity-focused tasks of detecting ransomware outbreaks, identifying targeted and vulnerable OSs, and inspecting suspicious activities on infected machines.
Differential privacy was initially proposed in the centralized setting in which a trusted central data collector possesses a database containing clients' true values, and noise is applied on the database or queries executed on the database instead of each client's individual value @cite_20 @cite_32 . In contrast, in LDP, each client locally perturbs their data on their device before sending the perturbed version to the data collector @cite_10 . The local setting has seen practical real-world deployment, including Google's RAPPOR as a Chrome extension @cite_3 @cite_4 , Apple's use of LDP for spelling prediction and emoji frequency detection @cite_5 @cite_35 , and Microsoft's collection of application telemetry @cite_12 .
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_32", "@cite_3", "@cite_5", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "2596313079", "2964225135", "", "1981029888", "2734576275", "2053801139", "2963559079", "" ], "abstract": [ "Systems and methods are disclosed for a server learning new words generated by user client devices in a crowdsourced manner while maintaining local differential privacy of client devices. A client device can determine that a word typed on the client device is a new word that is not contained in a dictionary or asset catalog on the client device. New words can be grouped in classifications such as entertainment, health, finance, etc. A differential privacy system on the client device can comprise a privacy budget for each classification of new words. If there is privacy budget available for the classification, then one or more new terms in a classification can be sent to new term learning server, and the privacy budget for the classification reduced. The privacy budget can be periodically replenished.", "", "", "Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees. In short, RAPPORs allow the forest of client data to be studied, without permitting the possibility of looking at individual trees. By applying randomized response in a novel manner, RAPPOR provides the mechanisms for such collection as well as for efficient, high-utility analysis of the collected data. In particular, RAPPOR permits statistics to be collected on the population of client-side strings with strong privacy guarantees for each client, and without linkability of their reports. This paper describes and motivates RAPPOR, details its differential-privacy and utility guarantees, discusses its practical deployment and properties in the face of different attack models, and, finally, gives results of its application to both synthetic and real-world data.", "Systems and methods are disclosed for generating term frequencies of known terms based on crowdsourced differentially private sketches of the known terms. An asset catalog can be updated with new frequency counts for known terms based on the crowdsourced differentially private sketches. Known terms can have a classification. A client device can maintain a privacy budget for each classification of known terms. Classifications can include emojis, deep links, locations, finance terms, and health terms, etc. A privacy budget ensures that a client does not transmit too much information to a term frequency server, thereby compromising the privacy of the client device.", "Working under local differential privacy-a model of privacy in which data remains private even from the statistician or learner-we study the tradeoff between privacy guarantees and the utility of the resulting statistical estimators. We prove bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that influence estimation rates as a function of the amount of privacy preserved. When combined with minimax techniques such as Le Cam's and Fano's methods, these inequalities allow for a precise characterization of statistical rates under local privacy constraints. In this paper, we provide a treatment of two canonical problem families: mean estimation in location family models and convex risk minimization. For these families, we provide lower and upper bounds for estimation of population quantities that match up to constant factors, giving privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds.", "The collection and analysis of telemetry data from user's devices is routinely performed by many software companies. Telemetry collection leads to improved user experience but poses significant risks to users' privacy. Locally differentially private (LDP) algorithms have recently emerged as the main tool that allows data collectors to estimate various population statistics, while preserving privacy. The guarantees provided by such algorithms are typically very strong for a single round of telemetry collection, but degrade rapidly when telemetry is collected regularly. In particular, existing LDP algorithms are not suitable for repeated collection of counter data such as daily app usage statistics. In this paper, we develop new LDP mechanisms geared towards repeated collection of counter data, with formal privacy guarantees even after being executed for an arbitrarily long period of time. For two basic analytical tasks, mean estimation and histogram estimation, our LDP mechanisms for repeated data collection provide estimates with comparable or even the same accuracy as existing single-round LDP collection mechanisms. We conduct empirical evaluation on real-world counter datasets to verify our theoretical results. Our mechanisms have been deployed by Microsoft to collect telemetry across millions of devices.", "" ] }
1905.06361
2945115802
Local Differential Privacy (LDP) is popularly used in practice for privacy-preserving data collection. Although existing LDP protocols offer high data utility for large user populations (100,000 or more users), they perform poorly in scenarios with small user populations (such as those in the cybersecurity domain) and lack perturbation mechanisms that are effective for both ordinal and non-ordinal item sequences while protecting sequence length and content simultaneously. In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy. Our protocols support different types of client data, ranging from ordinal data types in finite metric spaces (numeric malware infection statistics), to non-ordinal items (OS versions, transaction categories), and to sequences of ordinal and non-ordinal items. Extensive experiments are conducted on multiple datasets, including datasets that are an order of magnitude smaller than those used in existing approaches, which show that proposed CLDP protocols yield higher utility compared to existing LDP protocols. Furthermore, case studies with Symantec datasets demonstrate that our protocols outperform existing protocols in key cybersecurity-focused tasks of detecting ransomware outbreaks, identifying targeted and vulnerable OSs, and inspecting suspicious activities on infected machines.
Local differential privacy has also sparked interest from the academic community. There have been several theoretical treatments for finding upper and lower bounds on the accuracy and utility of LDP @cite_10 @cite_37 @cite_27 @cite_19 @cite_23 . From a more practical perspective, @cite_38 showed the optimality of OLH for singleton item frequency estimation. @cite_22 and @cite_13 studied frequent item and itemset mining from set-valued client data. @cite_30 and @cite_31 studied the problem of obtaining marginal tables from high-dimensional data. Recently, LDP was considered in the contexts of geolocations @cite_16 , decentralized social graphs @cite_33 , and discovering emerging terms from text @cite_24 .
{ "cite_N": [ "@cite_13", "@cite_38", "@cite_37", "@cite_30", "@cite_22", "@cite_33", "@cite_24", "@cite_19", "@cite_27", "@cite_23", "@cite_31", "@cite_16", "@cite_10" ], "mid": [ "2794674331", "2742225091", "", "2964117144", "2532967691", "2766587611", "2794566778", "", "", "", "2890880325", "2440056311", "2053801139" ], "abstract": [ "The notion of Local Differential Privacy (LDP) enables users to respond to sensitive questions while preserving their privacy. The basic LDP frequent oracle (FO) protocol enables an aggregator to estimate the frequency of any value. But when each user has a set of values, one needs an additional padding and sampling step to find the frequent values and estimate their frequencies. In this paper, we formally define such padding and sample based frequency oracles (PSFO). We further identify the privacy amplification property in PSFO. As a result, we propose SVIM, a protocol for finding frequent items in the set-valued LDP setting. Experiments show that under the same privacy guarantee and computational cost, SVIM significantly improves over existing methods. With SVIM to find frequent items, we propose SVSM to effectively find frequent itemsets, which to our knowledge has not been done before in the LDP setting.", "", "", "Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals.", "In local differential privacy (LDP), each user perturbs her data locally before sending the noisy data to a data collector. The latter then analyzes the data to obtain useful statistics. Unlike the setting of centralized differential privacy, in LDP the data collector never gains access to the exact values of sensitive data, which protects not only the privacy of data contributors but also the collector itself against the risk of potential data leakage. Existing LDP solutions in the literature are mostly limited to the case that each user possesses a tuple of numeric or categorical values, and the data collector computes basic statistics such as counts or mean values. To the best of our knowledge, no existing work tackles more complex data mining tasks such as heavy hitter discovery over set-valued data. In this paper, we present a systematic study of heavy hitter mining under LDP. We first review existing solutions, extend them to the heavy hitter estimation, and explain why their effectiveness is limited. We then propose LDPMiner, a two-phase mechanism for obtaining accurate heavy hitters with LDP. The main idea is to first gather a candidate set of heavy hitters using a portion of the privacy budget, and focus the remaining budget on refining the candidate set in a second phase, which is much more efficient budget-wise than obtaining the heavy hitters directly from the whole dataset. We provide both in-depth theoretical analysis and extensive experiments to compare LDPMiner against adaptations of previous solutions. The results show that LDPMiner significantly improves over existing methods. More importantly, LDPMiner successfully identifies the majority true heavy hitters in practical settings.", "A large amount of valuable information resides in decentralized social graphs, where no entity has access to the complete graph structure. Instead, each user maintains locally a limited view of the graph. For example, in a phone network, each user keeps a contact list locally in her phone, and does not have access to other users' contacts. The contact lists of all users form an implicit social graph that could be very useful to study the interaction patterns among different populations. However, due to privacy concerns, one could not simply collect the unfettered local views from users and reconstruct a decentralized social network. In this paper, we investigate techniques to ensure local differential privacy of individuals while collecting structural information and generating representative synthetic social graphs. We show that existing local differential privacy and synthetic graph generation techniques are insufficient for preserving important graph properties, due to excessive noise injection, inability to retain important graph structure, or both. Motivated by this, we propose LDPGen, a novel multi-phase technique that incrementally clusters users based on their connections to different partitions of the whole population. Every time a user reports information, LDPGen carefully injects noise to ensure local differential privacy. We derive optimal parameters in this process to cluster structurally-similar users together. Once a good clustering of users is obtained, LDPGen adapts existing social graph generation models to construct a synthetic social graph. We conduct comprehensive experiments over four real datasets to evaluate the quality of the obtained synthetic graphs, using a variety of metrics, including (i) important graph structural measures; (ii) quality of community discovery; and (iii) applicability in social recommendation. Our experiments show that the proposed technique produces high-quality synthetic graphs that well represent the original decentralized social graphs, and significantly outperform those from baseline approaches.", "A mobile operating system often needs to collect frequent new terms from users in order to build and maintain a comprehensive dictionary. Collecting keyboard usage data, however, raises privacy concerns. Local differential privacy (LDP) has been established as a strong privacy standard for collecting sensitive information from users. Currently, the best known solution for LDP-compliant frequent term discovery transforms the problem into collecting n-grams under LDP, and subsequently reconstructs terms from the collected n-grams by modelling the latter into a graph, and identifying cliques on this graph. Because the transformed problem (i.e., collecting n-grams) is very different from the original one (discovering frequent terms), the end result has poor utility. Further, this method is also rather expensive due to clique computation on a large graph. In this paper we tackle the problem head on: our proposal, PrivTrie, directly collects frequent terms from users by iteratively constructing a trie under LDP. While the methodology of building a trie is an obvious choice, obtaining an accurate trie under LDP is highly challenging. PrivTrie achieves this with a novel adaptive approach that conserves privacy budget by building internal nodes of the trie with the lowest level of accuracy necessary. Experiments using real datasets confirm that PrivTrie achieves high accuracy on common privacy levels, and consistently outperforms all previous methods.", "", "", "", "Marginal tables are the workhorse of capturing the correlations among a set of attributes. We consider the problem of constructing marginal tables given a set of user's multi-dimensional data while satisfying Local Differential Privacy (LDP), a privacy notion that protects individual user's privacy without relying on a trusted third party. Existing works on this problem perform poorly in the high-dimensional setting; even worse, some incur very expensive computational overhead. In this paper, we propose CALM, Consistent Adaptive Local Marginal, that takes advantage of the careful challenge analysis and performs consistently better than existing methods. More importantly, CALM can scale well with large data dimensions and marginal sizes. We conduct extensive experiments on several real world datasets. Experimental results demonstrate the effectiveness and efficiency of CALM over existing methods.", "With the deep penetration of the Internet and mobile devices, privacy preservation in the local setting has become increasingly relevant. The local setting refers to the scenario where a user is willing to share his/her information only if it has been properly sanitized before leaving his/her own device. Moreover, a user may hold only a single data element to share, instead of a database. Despite its ubiquitousness, the above constraints make the local setting substantially more challenging than the traditional centralized or distributed settings. In this paper, we initiate the study of private spatial data aggregation in the local setting, which finds its way in many real-world applications, such as Waze and Google Maps. In response to users' varied privacy requirements that are natural in the local setting, we propose a new privacy model called personalized local differential privacy (PLDP) that allows to achieve desirable utility while still providing rigorous privacy guarantees. We design an efficient personalized count estimation protocol as a building block for achieving PLDP and give theoretical analysis of its utility, privacy and complexity. We then present a novel framework that allows an untrusted server to accurately learn the user distribution over a spatial domain while satisfying PLDP for each user. This is mainly achieved by designing a novel user group clustering algorithm tailored to our problem. We confirm the effectiveness and efficiency of our framework through extensive experiments on multiple real benchmark datasets.", "Working under local differential privacy-a model of privacy in which data remains private even from the statistician or learner-we study the tradeoff between privacy guarantees and the utility of the resulting statistical estimators. We prove bounds on information-theoretic quantities, including mutual information and Kullback-Leibler divergence, that influence estimation rates as a function of the amount of privacy preserved. When combined with minimax techniques such as Le Cam's and Fano's methods, these inequalities allow for a precise characterization of statistical rates under local privacy constraints. In this paper, we provide a treatment of two canonical problem families: mean estimation in location family models and convex risk minimization. For these families, we provide lower and upper bounds for estimation of population quantities that match up to constant factors, giving privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds." ] }
1905.06361
2945115802
Local Differential Privacy (LDP) is popularly used in practice for privacy-preserving data collection. Although existing LDP protocols offer high data utility for large user populations (100,000 or more users), they perform poorly in scenarios with small user populations (such as those in the cybersecurity domain) and lack perturbation mechanisms that are effective for both ordinal and non-ordinal item sequences while protecting sequence length and content simultaneously. In this paper, we address the small user population problem by introducing the concept of Condensed Local Differential Privacy (CLDP) as a specialization of LDP, and develop a suite of CLDP protocols that offer desirable statistical utility while preserving privacy. Our protocols support different types of client data, ranging from ordinal data types in finite metric spaces (numeric malware infection statistics), to non-ordinal items (OS versions, transaction categories), and to sequences of ordinal and non-ordinal items. Extensive experiments are conducted on multiple datasets, including datasets that are an order of magnitude smaller than those used in existing approaches, which show that proposed CLDP protocols yield higher utility compared to existing LDP protocols. Furthermore, case studies with Symantec datasets demonstrate that our protocols outperform existing protocols in key cybersecurity-focused tasks of detecting ransomware outbreaks, identifying targeted and vulnerable OSs, and inspecting suspicious activities on infected machines.
However, there have also been criticisms and concerns regarding the utility of LDP, which motivated recent works proposing relaxations of or alternatives to LDP. BLENDER @cite_7 proposed a hybrid privacy model in which only a subset of users enjoy LDP, whereas the remaining users act as opt-in beta testers who receive the guarantees of centralized DP. In contrast, our work stays purely in the local privacy model without requiring a trusted data collector (necessary in centralized or hybrid DP) or opt-in clients. Personalized LDP, a weaker form of LDP, was proposed for spatial data aggregation in @cite_16 ; whereas the Restricted LDP scheme proposed in @cite_17 treats certain client data as more sensitive than others and suggests restricted perturbation schemes to specifically address the more sensitive data. In contrast, our CLDP approach treats all users' data as sensitive (in line with LDP's assumptions), remains agnostic and extensible with respect to data types, and provides protection as strong as LDP under LDP's threat model.
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_17" ], "mid": [ "2440056311", "2963490108", "2913864138" ], "abstract": [ "With the deep penetration of the Internet and mobile devices, privacy preservation in the local setting has become increasingly relevant. The local setting refers to the scenario where a user is willing to share his her information only if it has been properly sanitized before leaving his her own device. Moreover, a user may hold only a single data element to share, instead of a database. Despite its ubiquitousness, the above constraints make the local setting substantially more challenging than the traditional centralized or distributed settings. In this paper, we initiate the study of private spatial data aggregation in the local setting, which finds its way in many real-world applications, such as Waze and Google Maps. In response to users' varied privacy requirements that are natural in the local setting, we propose a new privacy model called personalized local differential privacy (PLDP) that allows to achieve desirable utility while still providing rigorous privacy guarantees. We design an efficient personalized count estimation protocol as a building block for achieving PLDP and give theoretical analysis of its utility, privacy and complexity. We then present a novel framework that allows an untrusted server to accurately learn the user distribution over a spatial domain while satisfying PLDP for each user. This is mainly achieved by designing a novel user group clustering algorithm tailored to our problem. We confirm the effectiveness and efficiency of our framework through extensive experiments on multiple real benchmark datasets.", "We propose a hybrid model of differential privacy that considers a combination of regular and opt-in users who desire the differential privacy guarantees of the local privacy model and the trusted curator model, respectively. 
We demonstrate that within this model, it is possible to design a new type of blended algorithm for the task of privately computing the most popular records of a web search log. This blended approach provides significant improvements in the utility of obtained data compared to related work while providing users with their desired privacy guarantees. Specifically, on two large search click data sets comprising 4.8 million and 13.2 million unique queries respectively, our approach attains NDCG values exceeding 95 across a range of commonly used privacy budget values.", "LDP (Local Differential Privacy) has been widely studied to estimate statistics of personal data (e.g., distribution underlying the data) while protecting users' privacy. Although LDP does not require a trusted third party, it regards all personal data equally sensitive, which causes excessive obfuscation hence the loss of utility. In this paper, we introduce the notion of ULDP (Utility-optimized LDP), which provides a privacy guarantee equivalent to LDP only for sensitive data. We first consider the setting where all users use the same obfuscation mechanism, and propose two mechanisms providing ULDP: utility-optimized randomized response and utility-optimized RAPPOR. We then consider the setting where the distinction between sensitive and non-sensitive data can be different from user to user. For this setting, we propose a personalized ULDP mechanism with semantic tags to estimate the distribution of personal data with high utility while keeping secret what is sensitive for each user. We show theoretically and experimentally that our mechanisms provide much higher utility than the existing LDP mechanisms when there are a lot of non-sensitive data. We also show that when most of the data are non-sensitive, our mechanisms even provide almost the same utility as non-private mechanisms in the low privacy regime." ] }
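The CLDP abstract above describes perturbation mechanisms for ordinal data in finite metric spaces. A common way to realize this is exponential-mechanism-style perturbation, where the probability of reporting a value decays with its metric distance from the true value. The sketch below is illustrative only (the parameter `alpha` and all names are ours, not the cited paper's notation), assuming a user-supplied distance function:

```python
import math
import random

def metric_perturb(value, domain, alpha, dist):
    """Exponential-mechanism-style perturbation for ordinal data:
    Pr[report v'] is proportional to exp(-alpha * dist(value, v') / 2),
    so nearby values are reported more often than distant ones."""
    weights = [math.exp(-alpha * dist(value, v) / 2) for v in domain]
    total = sum(weights)
    # Sample a value proportionally to its weight.
    r = random.random() * total
    acc = 0.0
    for v, w in zip(domain, weights):
        acc += w
        if r < acc:
            return v
    return domain[-1]
```

Unlike generalized randomized response, which treats all wrong answers as interchangeable, this mechanism preserves ordinal structure: a malware count of 10 is far more likely to be reported as 11 than as 1000, which is what makes the aggregate statistics usable at small population sizes.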
1905.06004
2957643644
Thanks to the digitization of industrial assets in fleets, the ambitious goal of transferring fault diagnosis models from one machine to another has raised great interest. Solving these domain-adaptive transfer learning tasks has the potential to save large manual efforts on labeling data and modifying models for new machines in the same fleet. Although data-driven methods have shown great potential in fault diagnosis applications, their ability to generalize to new machines and new working conditions is limited because of their tendency to overfit the training set. One promising solution to this problem is to use domain adaptation techniques, which aim to improve model performance on the new target machine. Inspired by its successful implementation in computer vision, we introduce Domain-Adversarial Neural Networks (DANN) to our context, along with two other popular methods from previous fault diagnosis research. We then carefully justify the applicability of these methods in realistic fault diagnosis settings, and offer a unified experimental protocol for a fair comparison between domain adaptation methods for fault diagnosis problems.
For fault diagnosis applications, existing papers usually focus on the case where unlabeled data in the target domain are fully provided, and directly apply the above domain adaptation techniques to solve the problem. For example, @cite_7 proposes to use AdaBN to learn a model with good anti-noise and domain adaptation ability on raw vibration signals. Similarly, @cite_13 propose to align the distributions of intermediate layers between the source and target feature extractors by adversarial training. @cite_19 consider the problem of fault detection within a fleet using unsupervised feature alignment. Recently, @cite_23 uses MMD minimization to align the full source and target distributions for rotating machines.
{ "cite_N": [ "@cite_19", "@cite_23", "@cite_13", "@cite_7" ], "mid": [ "2971202925", "2904218127", "2798673311", "2584994008" ], "abstract": [ "Training data-driven approaches for complex industrial system health monitoring is challenging. When data on faulty conditions are rare or not available, the training has to be performed in a unsupervised manner. In addition, when the observation period, used for training, is kept short, to be able to monitor the system in its early life, the training data might not be representative of all the system normal operating conditions. In this paper, we propose five approaches to perform fault detection in such context. Two approaches rely on the data from the unit to be monitored only: the baseline is trained on the early life of the unit. An incremental learning procedure tries to learn new operating conditions as they arise. Three other approaches take advantage of data from other similar units within a fleet. In two cases, units are directly compared to each other with similarity measures, and the data from similar units are combined in the training set. We propose, in the third case, a new deep-learning methodology to perform, first, a feature alignment of different units with an Unsupervised Feature Alignment Network (UFAN). Then, features of both units are combined in the training set of the fault detection neural network.The approaches are tested on a fleet comprising 112 units, observed over one year of data. All approaches proposed here are an improvement to the baseline, trained with two months of data only. As units in the fleet are found to be very dissimilar, the new architecture UFAN, that aligns units in the feature space, is outperforming others.", "Abstract In the past years, data-driven approaches such as deep learning have been widely applied on machinery signal processing to develop intelligent fault diagnosis systems. 
In real-world applications, domain shift problem usually occurs where the distribution of the labeled training data, denoted as source domain, is different from that of the unlabeled testing data, known as target domain. That results in serious diagnosis performance degradation. This paper proposes a novel domain adaptation method for rolling bearing fault diagnosis based on deep learning techniques. A deep convolutional neural network is used as the main architecture. The multi-kernel maximum mean discrepancies (MMD) between the two domains in multiple layers are minimized to adapt the learned representations from supervised learning in the source domain to be applied in the target domain. The domain-invariant features can be efficiently extracted in this way, and the cross-domain testing performance can be significantly improved. Experiments on two rolling bearing datasets are carried out to validate the effectiveness of the domain adaptation approach. Comparisons with other approaches and related works demonstrate the superiority of the proposed method. The experimental results of this study suggest the proposed domain adaptation method offers a new and promising tool for intelligent fault diagnosis.", "Traditional intelligent fault diagnosis of rolling bearings work well only under a common assumption that the labeled training data (source domain) and unlabeled testing data (target domain) are drawn from the same distribution. However, in many real-world applications, this assumption does not hold, especially when the working condition varies. In this paper, a new adversarial adaptive 1-D CNN called A2CNN is proposed to address this problem. A2CNN consists of four parts, namely, a source feature extractor, a target feature extractor, a label classifier and a domain discriminator. The layers between the source and target feature extractor are partially untied during the training stage to take both training efficiency and domain adaptation into consideration. 
Experiments show that A2CNN has strong fault-discriminative and domain-invariant capacity, and therefore can achieve high accuracy under different working conditions. We also visualize the learned features and the networks to explore the reasons behind the high performance of our proposed model.", "Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100 classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions." ] }
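The MMD minimization used by @cite_23 above measures the discrepancy between source and target feature distributions; during training this quantity is added to the classification loss and minimized. As a minimal numpy sketch (not the cited paper's implementation), here is the standard biased squared-MMD estimator with a single RBF kernel; the multi-kernel variant in the paper averages this over several bandwidths:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimator of the squared Maximum Mean Discrepancy between
    samples X and Y (rows are feature vectors) under an RBF kernel."""
    def kernel(A, B):
        # Pairwise squared Euclidean distances via broadcasting.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
```

The estimator is zero when the two samples are identical and grows as the distributions separate, so driving it toward zero pulls the learned source and target representations together.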
1905.05925
2945553106
Bullet-screen is a technique that enables website users to send real-time comments ('bullets') across the screen. Compared with traditional video reviews, bullet-screen adds new means of expressing feelings while watching and more interaction between video viewers. However, since all the comments from viewers are shown on the screen publicly and simultaneously, low-quality bullets can reduce the watching enjoyment of users. Although bullet-screen video websites provide filter functions based on regular expressions, bad bullets can still easily pass the filter by making small modifications. In this paper, we present SmartBullets, a user-centered bullet-screen filter based on deep learning techniques. A convolutional neural network is trained as the classifier to determine whether a bullet needs to be removed according to its quality. Moreover, to increase the scalability of the filter, we employ a cloud-assisted framework by developing a back-end cloud server and a front-end browser extension. An evaluation with 40 volunteers shows that SmartBullets can effectively remove low-quality bullets and improve the overall watching experience of viewers.
Inspired by the rapidly growing social media impact of bullet-screen videos, more and more studies related to danmaku have been proposed. @cite_8 analyzed the comment distribution of bullets over natural time and discovered the burst patterns of the danmaku system. @cite_23 designed a new application that extracts time-sync tags for video shots by automatically exploiting the bullet comments of a video. After that, Lv et al. proposed T-DSSM, a temporal deep structured semantic model that represents bullet-screen comments as semantic vectors @cite_26 ; T-DSSM is further used to label highlight shots in videos. Chen et al. took advantage of the real-time property of bullets and proposed a personalized keyframe recommendation system @cite_1 . In 2016, He et al. made use of danmaku to predict the popularity of videos @cite_11 . On the other hand, Chen et al. employed a deep learning model trained on a bullet-screen comment dataset to predict the attractiveness of fine-grained videos @cite_6 . Other recent research related to danmaku can be found in @cite_15 @cite_14 @cite_12 ping2017video .
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_8", "@cite_1", "@cite_6", "@cite_23", "@cite_15", "@cite_12", "@cite_11" ], "mid": [ "2562476021", "2566760506", "2749929993", "2740409734", "", "2963834699", "2735561960", "2295356354", "2506342487" ], "abstract": [ "Recent years have witnessed the boom of online sharing media contents, which raise significant challenges in effective management and retrieval. Though a large amount of efforts have been made, precise retrieval on video shots with certain topics has been largely ignored. At the same time, due to the popularity of novel time-sync comments, or so-called \"bullet-screen comments\", video semantics could be now combined with timestamps to support further research on temporal video labeling. In this paper, we propose a novel video understanding framework to assign temporal labels on highlighted video shots. To be specific, due to the informal expression of bullet-screen comments, we first propose a temporal deep structured semantic model (T-DSSM) to represent comments into semantic vectors by taking advantage of their temporal correlation. Then, video highlights are recognized and labeled via semantic vectors in a supervised way. Extensive experiments on a real-world dataset prove that our framework could effectively label video highlights with a significant margin compared with baselines, which clearly validates the potential of our framework on video understanding, as well as bullet-screen comments interpretation.", "In this paper, we propose a method that estimates new tags from user comments on videos on the Nico Nico Douga website. On Nico Nico Douga, users can post tags and comments on videos. However, users cannot post more than 12 tags on a video, therefore, there are some important tags that could be posted but are sometimes missed. 
We present a technique to acquire some of these missing tags by choosing new tags that score well in a scoring method developed by us.", "DanMu, an emerging type of user-generated comment, has become increasingly popular in recent years. Many online video platforms such as Tudou.com have provided the DanMu function. Unlike traditional online reviews such as reviews at Youtube.com that are outside the videos, DanMu is a scrolling marquee comment, which is overlaid directly on top of the video and synchronized to a specific playback time. Such comments are displayed as streams of moving subtitles overlaid on the video screen. Viewers could easily write DanMus while watching videos, and the written DanMus will be immediately overlaid onto the video and displayed to writers themselves and other viewers as well. Such DanMu systems have greatly enabled users to communicate with each other in a much more direct way, creating a real-time sharing experience. Although there are several unique features of DanMu and has had a great impact on online video systems, to the best of our knowledge, there is no work that has provided a comprehensive study on DanMu. In this article, as a pilot study, we analyze the unique characteristics of DanMu from various perspectives. Specifically, we first illustrate some unique distributions of DanMus by comparing with traditional reviews (TReviews) that we collected from a real DanMu-enabled online video system. Second, we discover two interesting patterns in DanMu data: a herding effect and multiple-burst phenomena that are significantly different from those in TRviews and reveal important insights about the growth of DanMus on a video. Towards exploring antecedents of both th herding effect and multiple-burst phenomena, we propose to further detect leading DanMus within bursts, because those leading DanMus make the most contribution to both patterns. 
A framework is proposed to detect leading DanMus that effectively combines multiple factors contributing to leading DanMus. Based on the identified characteristics of DanMu, finally we propose to predict the distribution of future DanMus (i.e., the growth of DanMus), which is important for many DanMu-enabled online video systems, for example, the predicted DanMu distribution could be an indicator of video popularity. This prediction task includes two aspects: One is to predict which videos future DanMus will be posted for, and the other one is to predict which segments of a video future DanMus will be posted on. We develop two sophisticated models to solve both problems. Finally, intensive experiments are conducted with a real-world dataset to validate all methods developed in this article.", "Key frames are playing a very important role for many video applications, such as on-line movie preview and video information retrieval. Although a number of key frame selection methods have been proposed in the past, existing technologies mainly focus on how to precisely summarize the video content, but seldom take the user preferences into consideration. However, in real scenarios, people may cast diverse interests on the contents even for the same video, and thus they may be attracted by quite different key frames, which makes the selection of key frames an inherently personalized process. In this paper, we propose and investigate the problem of personalized key frame recommendation to bridge the above gap. To do so, we make use of video images and user time-synchronized comments to design a novel key frame recommender that can simultaneously model visual and textual features in a unified framework. 
By user personalization based on her his previously reviewed frames and posted comments, we are able to encode different user interests in a unified multi-modal space, and can thus select key frames in a personalized manner, which, to the best of our knowledge, is the first time in the research field of video content analysis. Experimental results show that our method performs better than its competitors on various measures.", "", "", "In recent years, more and more people are like to watch videos online because of its convenience and social features. Due to the limit of entertainment time, there is a new requirement that people prefer to watch some hot video segments rather than an entire video. However, it is a quite time-consuming work to extract the highlight segments in videos manually because the number of videos uploaded to the internet is huge. In this paper, we propose a model of event detection on videos using Time-Sync comments provided by online users. In the model, three features of Time-Sync comments are extracted firstly. Then, user behavior relevance in time series are analyzed to find the video shots that people are interested in most. Metric and its optimization to score video shots for event detection are introduced lastly. Experiments on several movies shows that the events detected by our method coincide with the highlights in the movies. Experiments on movies show that the events detected by our method coincide with the highlights in the movies.", "Online videos have become indispensable to people's daily lives. Everyday, millions of people watch online videos at video websites such as Nico Nico Douga and they comment on these websites. In Nico Nico Douga, comments are useful for online advertisements and video retrieval methods. Therefore, it is important to annotate comments in detail for them. 
In this paper, we annotate comments based on referring contents automatically.", "Recent years have witnessed the prosperity of a new type of real-time user-generated comment, or so-called DanMu, in many recent online video platforms. These DanMu-enabled video platforms present scrolling marquee comments overlaid directly on top of the videos by synchronizing these comments to specific playback times. In this paper, we study the prediction of video popularity in these platforms, which may benefit a lot of applications ranging from online advertising for website holders to popular video recommendation for audiences. Different from traditional online video platforms where only traditional reviews are available, these DanMus make viewers easily see other viewers' opinions and communicate with each other in a much more direct way. Consequently, viewers are easily influenced by others' behaviors over time, which is considered as the herding effect in social science. However, how to address the unique characteristics i.e., the herding effect of DanMu-enabled online videos for more accurate popularity prediction is still under-explored. To that end, in this paper, we first explore and measure the herding effect of DanMu-enabled video popularity from multiple aspects, including the popular videos, the popular DanMus and the newly updated videos. Also, we recognize that the uploaders' influence and video quality affect the video popularity as well. Along this line, we propose a model that incorporates the herding effect, uploaders' influence and video quality for predicting the video popularity. An effective estimation method is also proposed. Finally, experimental results on real-world data show that our proposed prediction model improves the prediction accuracy by 47.19% compared to the baselines." ] }
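SmartBullets and several of the works above classify bullet comments with a convolutional neural network over token embeddings. The numpy sketch below shows only the core convolution plus max-over-time pooling step of a generic text-CNN (it is not SmartBullets' actual network; a full classifier would add an embedding lookup, multiple filter widths, and a trained output layer):

```python
import numpy as np

def conv1d_text_features(embeddings, filters):
    """Slide 1-D convolution filters over a sequence of token embeddings
    and max-pool each filter's responses over time.

    embeddings: (n_tokens, emb_dim) matrix, one row per token.
    filters:    (n_filters, window, emb_dim) tensor of learned filters.
    Returns a (n_filters,) pooled feature vector for the whole comment."""
    n, d = embeddings.shape
    f, w, _ = filters.shape
    feats = np.full(f, -np.inf)
    for i in range(n - w + 1):
        window = embeddings[i:i + w]                      # (w, d)
        responses = (filters * window).sum(axis=(1, 2))   # (f,)
        feats = np.maximum(feats, responses)              # max over time
    return feats
```

The pooled vector is length-independent, which matters for bullets: comments range from a single emoji to a full sentence, yet the classifier's input size stays fixed.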
1905.06081
2946063405
It is common practice nowadays to use multiple social networks for different social roles. However, these networks differ in content type, communication patterns, and style of speech. If we intend to understand human behaviour as a key feature for recommender systems, banking risk assessment, or sociological research, this is better achieved using a combination of data from different social media. In this paper, we propose a new approach to matching user profiles across social media based on embeddings of publicly available users' face photos, and conduct an experimental study of its efficiency. Our approach is robust to changes in content and style for certain social media.
Most previous work in this field focuses on easily accessible information about the user: self-description, biography, name, nickname @cite_6 @cite_7 ; or on the dynamics of user behaviour: dates of posts and profile updates @cite_11 @cite_10 . As noted in @cite_1 and @cite_5 , this kind of information (username, location, followers/followings, meta paths) is very noisy, easily faked, and not required by the platforms; both works provide extensive surveys of existing profile matching methods. The latter suggests that methods based on the analysis of behaviour dynamics show potential for further work, but they have some major disadvantages: they require collecting information over some period of user activity, and they require a separate data representation for each social medium, since media can vary in their features.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_6", "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "2077738931", "2598689838", "844742131", "2802482737", "2759351030", "1916698127" ], "abstract": [ "With the growing popularity and usage of online social media services, people now have accounts (some times several) on multiple and diverse services like Facebook, Linked In, Twitter and You Tube. Publicly available information can be used to create a digital footprint of any user using these social media services. Generating such digital footprints can be very useful for personalization, profile management, detecting malicious behavior of users. A very important application of analyzing users' online digital footprints is to protect users from potential privacy and security risks arising from the huge publicly available user information. We extracted information about user identities on different social networks through Social Graph API, Friend Feed, and Profilactic, we collated our own dataset to create the digital footprints of the users. We used username, display name, description, location, profile image, and number of connections to generate the digital footprints of the user. We applied context specific techniques (e.g. Jaro Winkler similarity, Word net based ontologies) to measure the similarity of the user profiles on different social networks. We specifically focused on Twitter and Linked In. In this paper, we present the analysis and results from applying automated classifiers for disambiguating profiles belonging to the same user from different social networks. User ID and Name were found to be the most discriminative features for disambiguating user profiles. 
Using the most promising set of features and similarity metrics, we achieved accuracy, precision and recall of 98 , 99 , and 96 , respectively.", "The increasing popularity and diversity of social media sites has encouraged more and more people to participate on multiple online social networks to enjoy their services. Each user may create a user identity, which can includes profile, content, or network information, to represent his or her unique public figure in every social network. Thus, a fundamental question arises -- can we link user identities across online social networks? User identity linkage across online social networks is an emerging task in social media and has attracted increasing attention in recent years. Advancements in user identity linkage could potentially impact various domains such as recommendation and link prediction. Due to the unique characteristics of social network data, this problem faces tremendous challenges. To tackle these challenges, recent approaches generally consist of (1) extracting features and (2) constructing predictive models from a variety of perspectives. In this paper, we review key achievements of user identity linkage across online social networks including stateof- the-art algorithms, evaluation metrics, and representative datasets. We also discuss related research areas, open problems, and future research directions for user identity linkage across online social networks.", "The proliferation of social networks and all the personal data that people share brings many opportunities for developing exciting new applications. At the same time, however, the availability of vast amounts of personal data raises privacy and security concerns.In this thesis, we develop methods to identify the social networks accounts of a given user. We first study how we can exploit the public profiles users maintain in different social networks to match their accounts. 
We identify four important properties – Availability, Consistency, non- Impersonability, and Discriminability (ACID) – to evaluate the quality of different profile attributes to match accounts. Exploiting public profiles has a good potential to match accounts because a large number of users have the same names and other personal infor- mation across different social networks. Yet, it remains challenging to achieve practically useful accuracy of matching due to the scale of real social networks. To demonstrate that matching accounts in real social networks is feasible and reliable enough to be used in practice, we focus on designing matching schemes that achieve low error rates even when applied in large-scale networks with hundreds of millions of users. Then, we show that we can still match accounts across social networks even if we only exploit what users post, i.e., their activity on a social networks. This demonstrates that, even if users are privacy conscious and maintain distinct profiles on different social networks, we can still potentially match their accounts. Finally, we show that, by identifying accounts that correspond to the same person inside a social network, we can detect impersonators.", "The existence of user profiles belonging to a single user across different social networking sites poses several challenges to the research community. The main technical issue in that context is to detect the same user profiles across several different social networks by leveraging a set of mechanisms that identify the similarity among the user profiles. This problem is commonly referred to as entity matching or identity linkage on social networks. In this review, we describe and compare the 27 most important (to the best of our knowledge) research papers in this area. 
The main contributions of this article are to provide a systematic and integrated review of papers in this area, to provide comparative points that simplify the understanding of such systems, and finally to discuss future research avenues.", "", "To enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. The shared users between different networks are called anchor users, while the remaining unshared users are named as non-anchor users. Connections between accounts of anchor users in different networks are defined as anchor links and networks partially aligned by anchor links can be represented as partially aligned networks. In this paper, we want to predict anchor links between partially aligned social networks, which is formally defined as the partial network alignment problem. The partial network alignment problem is very difficult to solve because of the following two challenges: (1) the lack of general features for anchor links, and (2) the one to at most one constraint on anchor links. To address these two challenges, a new method PNA (Partial Network Aligner) is proposed in this paper. PNA (1) extracts various adjacency scores among users across networks based on a set of inter-network anchor meta paths, and (2) utilizes the generic stable matching to identify the non-anchor users to prune the redundant anchor links attached to them. Extensive experiments conducted on two real-world partially aligned social networks demonstrate that PNA can solve the partial network alignment problem very well and outperform all the other comparison methods with significant advantages." ] }
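The attribute-based matching idea discussed above (evaluating profile fields for availability, consistency, non-impersonability, and discriminability) can be sketched as a weighted similarity over public profile attributes. The sketch below is illustrative only: the profiles, the weights, and the `difflib`-based similarity are assumptions, not the scheme used in any of the cited works.

```python
from difflib import SequenceMatcher

def attribute_similarity(a: str, b: str) -> float:
    """Edit-based similarity in [0, 1]; 0.0 if the attribute is unavailable."""
    if not a or not b:
        return 0.0
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(profile_a: dict, profile_b: dict, weights: dict) -> float:
    """Weighted average similarity over the compared profile attributes."""
    total = sum(weights.values())
    return sum(w * attribute_similarity(profile_a.get(k, ""), profile_b.get(k, ""))
               for k, w in weights.items()) / total

# Hypothetical profiles of the same person on two networks
p1 = {"name": "Jane A. Doe", "location": "Paris, France"}
p2 = {"name": "jane doe", "location": "Paris"}
weights = {"name": 0.7, "location": 0.3}   # e.g. names discriminate more
print(match_score(p1, p2, weights))        # a score in [0, 1]
```

A real system would add many more attributes, calibrate the weights, and threshold the score; the point here is only that more discriminative and more consistently filled attributes earn larger weights.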
1905.06209
2945364777
Large knowledge bases (KBs) are useful for many AI tasks, but are difficult to integrate into modern gradient-based learning systems. Here we describe a framework for accessing a soft symbolic database using only differentiable operators. For example, this framework makes it convenient to write neural models that adjust confidences associated with facts in a soft KB; incorporate prior knowledge in the form of hand-coded KB access rules; or learn to instantiate query templates using information extracted from text. NQL can work well with KBs with millions of tuples and hundreds of thousands of entities on a single GPU.
NQL is closely related to TensorLog @cite_6 , a deductive database formalism which can also be compiled to Tensorflow. In fact, NQL was designed so that every expression in the target sublanguage used by TensorLog can be concisely and readably written in NQL. TensorLog, in turn, has semantics derived from other ``proof-counting'' logics such as stochastic logic programs (SLPs) @cite_0 . TensorLog is also closely related to other differentiable first-order logics such as the differentiable theorem prover (DTP) @cite_4 , in which a proof for an example is unrolled into a network. DTP includes representation learning as a component, as well as a template-instantiation approach similar to the one used in NQL. TensorLog and NQL are more restricted than DTP but also more scalable: the current version of NQL can work well with KBs with millions of tuples and hundreds of thousands of entities, even on a single GPU.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_6" ], "mid": [ "1530235063", "2511805592", "2738790068" ], "abstract": [ "Stochastic logic programs (SLPs) are logic programs with parameterised clauses which define a log-linear distribution over refutations of goals. The log-linear distribution provides, by marginalisation, a distribution over variable bindings, allowing SLPs to compactly represent quite complex distributions. We analyse the fundamental statistical properties of SLPs addressing issues concerning infinite derivations, ‘unnormalised’ SLPs and impure SLPs. After detailing existing approaches to parameter estimation for log-linear models and their application to SLPs, we present a new algorithm called failure-adjusted maximisation (FAM). FAM is an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed-form for computing parameter updates within an iterative maximisation approach. We empirically show that FAM works on some small examples and discuss methods for applying it to bigger problems.", "In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to representations of symbols in a knowledge base and can thus learn representations of predicates, constants, as well as rules of predefined structure. Furthermore, they still allow us to incorporate domain knowledge provided as rules. The NTP presented here is realized via a differentiable version of the backward chaining algorithm.
It operates on substitution representations and is able to learn complex logical dependencies from training facts of small knowledge bases.", "We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as Tensorflow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples." ] }
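The core idea that makes TensorLog/NQL-style KB access differentiable can be sketched in a few lines: a "soft" set of entities is a weighted vector, a relation is a sparse matrix of fact confidences, and following a relation is a sparse matrix-vector product, which is differentiable in the weights. The dict-based encoding and the toy KB below are assumptions for illustration, not NQL's actual API.

```python
def follow(entity_weights, relation):
    """Follow a relation from a weighted ('soft') set of entities.

    entity_weights: dict entity -> confidence (a soft set of entities).
    relation: dict (head, tail) -> confidence, i.e. a sparse matrix of facts.
    The result is the soft set of tails reachable via the relation; since this
    is a sparse matrix-vector product, gradients flow through the confidences.
    """
    out = {}
    for (head, tail), w in relation.items():
        if head in entity_weights:
            out[tail] = out.get(tail, 0.0) + entity_weights[head] * w
    return out

# Hypothetical toy KB: 'parent' facts with confidences
parent = {("ann", "bob"): 1.0, ("bob", "carol"): 0.9, ("ann", "dan"): 0.8}

x = {"ann": 1.0}                  # one-hot start at the entity 'ann'
children = follow(x, parent)      # soft set of ann's children
grandkids = follow(children, parent)  # composing relations = chained products
print(children, grandkids)
```

In an actual Tensorflow implementation the dicts would be dense or sparse tensors, so the same composition of "follow" steps can be trained end to end by gradient descent.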
1905.06209
2945364777
Large knowledge bases (KBs) are useful for many AI tasks, but are difficult to integrate into modern gradient-based learning systems. Here we describe a framework for accessing soft symbolic database using only differentiable operators. For example, this framework makes it easy to conveniently write neural models that adjust confidences associated with facts in a soft KB; incorporate prior knowledge in the form of hand-coded KB access rules; or learn to instantiate query templates using information extracted from text. NQL can work well with KBs with millions of tuples and hundreds of thousands of entities on a single GPU.
NQL, however, is not a logic like TensorLog, but a dataflow language, similar in spirit to Pig @cite_5 or Spark @cite_1 . NQL also includes a number of features not found in TensorLog, notably the ability to have variables that refer to relations, and it makes it much easier for Tensorflow models to include pieces of NQL, or for NQL queries to call out to Tensorflow models.
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2142031898", "2189465200" ], "abstract": [ "Increasingly, organizations capture, transform and analyze enormous data sets. Prominent examples include internet companies and e-science. The Map-Reduce scalable dataflow paradigm has become popular for these applications. Its simple, explicit dataflow programming model is favored by some over the traditional high-level declarative approach: SQL. On the other hand, the extreme simplicity of Map-Reduce leads to much low-level hacking to deal with the many-step, branching dataflows that arise in practice. Moreover, users must repeatedly code standard operations such as join by hand. These practices waste time, introduce bugs, harm readability, and impede optimizations. Pig is a high-level dataflow system that aims at a sweet spot between SQL and Map-Reduce. Pig offers SQL-style high-level data manipulation constructs, which can be assembled in an explicit dataflow and interleaved with custom Map- and Reduce-style functions or executables. Pig programs are compiled into sequences of Map-Reduce jobs, and executed in the Hadoop Map-Reduce environment. Both Pig and Hadoop are open-source projects administered by the Apache Software Foundation. This paper describes the challenges we faced in developing Pig, and reports performance comparisons between Pig execution and raw Map-Reduce execution.", "MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. 
We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time." ] }
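The RDD abstraction described in the Spark abstract above can be sketched as a read-only collection defined by its lineage (the recipe for recomputing it) rather than by materialized data, so a lost partition can be rebuilt on demand. This is a toy single-process sketch of that idea, not Spark's API.

```python
class RDD:
    """Minimal sketch of a resilient distributed dataset: data is defined by
    its lineage (a closure that recomputes it), optionally cached for reuse."""
    def __init__(self, compute):
        self.compute = compute    # lineage: how to rebuild this dataset
        self._cache = None

    def map(self, f):
        return RDD(lambda: [f(x) for x in self.collect()])

    def filter(self, pred):
        return RDD(lambda: [x for x in self.collect() if pred(x)])

    def cache(self):              # pin the working set for iterative reuse
        self._cache = self.compute()
        return self

    def collect(self):
        return self._cache if self._cache is not None else self.compute()

nums = RDD(lambda: list(range(10))).cache()
evens = nums.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens.collect())            # [0, 4, 16, 36, 64]

# If the cached data is "lost", lineage transparently rebuilds it:
nums._cache = None
print(evens.collect())            # [0, 4, 16, 36, 64] again, recomputed
```

Caching the working set while keeping lineage for fault recovery is exactly the trade-off the abstract describes for iterative machine-learning jobs.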
1905.06209
2945364777
Large knowledge bases (KBs) are useful for many AI tasks, but are difficult to integrate into modern gradient-based learning systems. Here we describe a framework for accessing soft symbolic database using only differentiable operators. For example, this framework makes it easy to conveniently write neural models that adjust confidences associated with facts in a soft KB; incorporate prior knowledge in the form of hand-coded KB access rules; or learn to instantiate query templates using information extracted from text. NQL can work well with KBs with millions of tuples and hundreds of thousands of entities on a single GPU.
NQL is one of many systems that have been built on top of Tensorflow or some other deep-learning platform. Perhaps the most similar of these in spirit is Edward @cite_3 , which, like NQL, attempts to add a higher-level modeling language based on a rather different programming paradigm: most other packages are aimed at providing additional support for training, or at combining existing Tensorflow operators into reusable fragments. In the case of Edward, the alternative paradigm being supported is probabilistic programming (e.g., variational autoencoder models), while in NQL, the alternative paradigm supported is dataflow operations on KGs.
{ "cite_N": [ "@cite_3" ], "mid": [ "2539792571" ], "abstract": [ "Probabilistic modeling is a powerful approach for analyzing empirical information. We describe Edward, a library for probabilistic modeling. Edward's design reflects an iterative process pioneered by George Box: build a model of a phenomenon, make inferences about the model given data, and criticize the model's fit to the data. Edward supports a broad class of probabilistic models, efficient algorithms for inference, and many techniques for model criticism. The library builds on top of TensorFlow to support distributed training and hardware such as GPUs. Edward enables the development of complex probabilistic models and their algorithms at a massive scale." ] }
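Edward's build/infer/criticize loop (the "Box loop" named in the abstract above) can be illustrated with the smallest possible probabilistic model, a conjugate Beta-Bernoulli. This stdlib-only sketch mirrors the workflow only; it does not use Edward's API, and the data are invented.

```python
# Build: a Bernoulli likelihood with a Beta(1, 1) prior on the success rate.
# Infer: exact conjugate posterior update (no sampling needed here).
# Criticize: compare the posterior mean against the observed frequency.
def beta_bernoulli_posterior(data, a=1.0, b=1.0):
    """Return posterior Beta parameters after Bernoulli observations."""
    successes = sum(data)
    return a + successes, b + len(data) - successes

data = [1, 1, 0, 1, 0, 1, 1, 1]           # hypothetical 0/1 observations
a_post, b_post = beta_bernoulli_posterior(data)
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, posterior_mean)     # criticize: close to mean(data)?
```

In Edward the same loop scales to deep models and approximate inference; the conjugate case just makes each of the three steps visible in a few lines.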
1905.06124
2953116023
In times of Industry 4.0 and cyber-physical systems (CPS), providing security is one of the biggest challenges. A cyber attack launched at a CPS poses a huge threat, since a security incident may affect both the cyber and the physical world. Since CPS are very flexible systems, which are capable of adapting to environmental changes, it is important to keep an overview of the resulting costs of providing security. However, research regarding CPS currently focuses more on engineering secure systems and does not satisfactorily provide approaches for evaluating the resulting costs. This paper presents an interaction-based model for evaluating security costs in a CPS. Furthermore, the paper demonstrates, in a use-case-driven study, how this approach could be used to model the resulting costs for guaranteeing security.
This paper is a continuation of @cite_12 where we introduced a high-level process flow based on Six Sigma for identifying, categorizing, analysing and eliminating security risks and measuring the resulting costs. This initial investigation included the evaluation of (i) how security risks of a smart business use case can be eliminated by implementing security controls, and (ii) how the resulting costs could be measured using a monetary cost metric (Euro). Even though the two use cases used IoT devices and cloud computing, the evaluation did not include the costs of security for a CPS. To extend this work, the key new contribution of this paper is to present a mathematical expression which can be used to describe and evaluate the security costs of a CPS and which allows the use of more than one cost metric.
{ "cite_N": [ "@cite_12" ], "mid": [ "2772765373" ], "abstract": [ "In a world, as complex and constantly changing as ours cloud computing is a driving force for shaping the IT landscape and changing the way we do business. Current trends show a world of people, things and services all digitally interconnected via the Internet of Things (IoT). This applies in particular to an industrial environment where smart devices and intelligent services pave the way for smart factories and smart businesses. This paper investigates in a use case driven study the potential of making use of smart devices to enable direct, automated and voice-controlled smart businesses. Furthermore, the paper presents an initial investigation on methodologies for measuring costs of cyber security controls for cloud services." ] }
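As a purely hypothetical illustration of what an interaction-based cost expression supporting more than one metric could look like, one might sum the costs of the controls applied to each secured interaction, keeping the totals separate per metric. The interactions, metrics, and numbers below are invented for the sketch and are not the model from the paper.

```python
def total_security_cost(interactions):
    """Aggregate per-interaction control costs, separately for each metric.

    interactions: list of dicts, one per secured interaction in the CPS,
    each mapping a cost metric (e.g. 'EUR', 'hours') to the cost incurred.
    """
    totals = {}
    for costs in interactions:
        for metric, value in costs.items():
            totals[metric] = totals.get(metric, 0.0) + value
    return totals

# Hypothetical secured interactions in a small CPS
interactions = [
    {"EUR": 1200.0, "hours": 16.0},  # e.g. TLS between sensor and gateway
    {"EUR": 300.0},                  # e.g. access-control rules on a controller
]
print(total_security_cost(interactions))
```

Keeping the metrics separate, rather than collapsing everything into money, is the property the related-work paragraph highlights as the extension over the earlier Euro-only metric.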
1905.06246
2945523701
Product reviews and ratings on e-commerce websites provide customers with detailed insights about various aspects of the product such as quality, usefulness, etc. Since they influence customers' buying decisions, product reviews have become a fertile ground for abuse by sellers (colluding with reviewers) to promote their own products or tarnish the reputation of competitor's products. In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) via a tensor decomposition on the product reviews data. While tensor decomposition is mostly unsupervised, we formulate our problem as a semi-supervised binary multi-target tensor decomposition, to take advantage of currently known abusive entities. We empirically show that our multi-target semi-supervised model achieves higher precision and recall in detecting abusive entities as compared with unsupervised techniques. Finally, we show that our proposed stochastic partial natural gradient inference for our model empirically achieves faster convergence than stochastic gradient and Online-EM with sufficient statistics.
There has been a lot of attention recently to the issue of finding fake reviewers on online e-commerce platforms. @cite_9 were among the first to show that review spam exists and proposed simple text-based features to classify fake reviews.
{ "cite_N": [ "@cite_9" ], "mid": [ "2154058875" ], "abstract": [ "Mining of opinions from product reviews, forum posts and blogs is an important research topic with many applications. However, existing research has been focused on extraction, classification and summarization of opinions from these sources. An important issue that has not been studied so far is the opinion spam or the trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews. To our knowledge, there is still no published study on this topic, although Web page spam and email spam have been investigated extensively. We will see that review spam is quite different from Web page spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that review spam is widespread. In this paper, we first present a categorization of spam reviews and then propose several techniques to detect them." ] }
1905.06246
2945523701
Product reviews and ratings on e-commerce websites provide customers with detailed insights about various aspects of the product such as quality, usefulness, etc. Since they influence customers' buying decisions, product reviews have become a fertile ground for abuse by sellers (colluding with reviewers) to promote their own products or tarnish the reputation of competitor's products. In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) via a tensor decomposition on the product reviews data. While tensor decomposition is mostly unsupervised, we formulate our problem as a semi-supervised binary multi-target tensor decomposition, to take advantage of currently known abusive entities. We empirically show that our multi-target semi-supervised model achieves higher precision and recall in detecting abusive entities as compared with unsupervised techniques. Finally, we show that our proposed stochastic partial natural gradient inference for our model empirically achieves faster convergence than stochastic gradient and Online-EM with sufficient statistics.
Identifying Abusive Reviews: Abusive reviewers have grown in sophistication ever since the initial efforts @cite_9 @cite_29 , employing professional writing skills to avoid detection via text-based techniques. In @cite_25 , the authors propose stylistic features derived from Probabilistic Context Free Grammar parse trees to detect review spam. To detect more complex fake review patterns, researchers have proposed 1) graph-based approaches such as approximate bipartite cores and lockstep behavior detection among reviewers @cite_27 @cite_2 @cite_21 @cite_18 , 2) techniques to identify network footprints of reviewers in the reviewer-product graph @cite_28 , and 3) using anomalies in rating distributions @cite_5 . Some recent research has pointed at the importance of time in identifying fake reviews, since it is critical to produce as many reviews as possible in a short period of time to be economically viable. Methods exploiting temporal and spatial features related to reviewers' reviews @cite_20 @cite_15 , as well as the sequence of reviews @cite_30 , have been proposed. While it is not possible to capture all the work on review spam detection, @cite_0 provides a broad coverage of efforts in this area.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_28", "@cite_9", "@cite_29", "@cite_21", "@cite_0", "@cite_27", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "1976467781", "", "1975223096", "2154058875", "2047756776", "", "1851422430", "2221494087", "", "2962881240", "2964127939", "2124637344", "2192783609" ], "abstract": [ "Detecting review spam is important for current e-commerce applications. However, the posted order of review has been neglected by the former work. In this paper, we explore the issue on fake review detection in review sequence, which is crucial for implementing online anti-opinion spam. We analyze the characteristics of fake reviews firstly. Based on review contents and reviewer behaviors, six time sensitive features are proposed to highlight the fake reviews. And then, we devise supervised solutions and a threshold-based solution to spot the fake reviews as early as possible. The experimental results show that our methods can identify the fake reviews orderly with high precision and recall.", "", "Online reviews are an important source for consumers to evaluate products services on the Internet (e.g. Amazon, Yelp, etc.). However, more and more fraudulent reviewers write fake reviews to mislead users. To maximize their impact and share effort, many spam attacks are organized as campaigns, by a group of spammers. In this paper, we propose a new two-step method to discover spammer groups and their targeted products. First, we introduce NFS (Network Footprint Score), a new measure that quantifies the likelihood of products being spam campaign targets. Second, we carefully devise GroupStrainer to cluster spammers on a 2-hop subgraph induced by top ranking products. 
Our approach has four key advantages: (i) unsupervised detection; both steps require no labeled data, (ii) adversarial robustness; we quantify statistical distortions in the review network, of which spammers have only a partial view, and avoid any side information that spammers can easily evade, (iii) sensemaking; the output facilitates the exploration of the nested hierarchy (i.e., organization) among the spammers, and finally (iv) scalability; both steps have complexity linear in network size, moreover, GroupStrainer operates on a carefully induced subnetwork. We demonstrate the efficiency and effectiveness of our approach on both synthetic and real-world datasets from two different domains with millions of products and reviewers. Moreover, we discover interesting strategies that spammers employ through case studies of our detected groups.", "Mining of opinions from product reviews, forum posts and blogs is an important research topic with many applications. However, existing research has been focused on extraction, classification and summarization of opinions from these sources. An important issue that has not been studied so far is the opinion spam or the trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews. To our knowledge, there is still no published study on this topic, although Web page spam and email spam have been investigated extensively. We will see that review spam is quite different from Web page spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that review spam is widespread. In this paper, we first present a categorization of spam reviews and then propose several techniques to detect them.", "Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. 
Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them", "", "Online reviews are often the primary factor in a customer’s decision to purchase a product or service, and are a valuable source of information that can be used to determine public opinion on these products or services. Because of their impact, manufacturers and retailers are highly concerned with customer feedback and reviews. Reliance on online reviews gives rise to the potential concern that wrongdoers may create false reviews to artificially promote or devalue products and services. This practice is known as Opinion (Review) Spam, where spammers manipulate and poison reviews (i.e., making fake, untruthful, or deceptive reviews) for profit or gain. Since not all online reviews are truthful and trustworthy, it is important to develop techniques for detecting review spam. 
By extracting meaningful features from the text using Natural Language Processing (NLP), it is possible to conduct review spam detection using various machine learning techniques. Additionally, reviewer information, apart from the text itself, can be used to aid in this process. In this paper, we survey the prominent machine learning techniques that have been proposed to solve the problem of review spam detection and the performance of different approaches for classification and detection of review spam. The majority of current research has focused on supervised learning methods, which require labeled data, a scarcity when it comes to online review spam. Research on methods for Big Data are of interest, since there are millions of online reviews, with many more being generated daily. To date, we have not found any papers that study the effects of Big Data analytics for review spam detection. The primary goal of this paper is to provide a strong and comprehensive comparative study of current research on detecting review spam using various machine learning techniques and to devise methodology for conducting further investigation.", "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner --- with the objective of searching for individuals that have similar pattern of behavior as the known seeds --- based on a graph diffusion process via local spectral subspace. 
We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on the YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube's engagement graph spanning over a billion users.", "", "Review fraud is a pervasive problem in online commerce, in which fraudulent sellers write or purchase fake reviews to manipulate perception of their products and services. Fake reviews are often detected based on several signs, including 1) they occur in short bursts of time; 2) fraudulent user accounts have skewed rating distributions. However, these may both be true in any given dataset. Hence, in this paper, we propose an approach for detecting fraudulent reviews which combines these 2 approaches in a principled manner, allowing successful detection even when one of these signs is not present. To combine these 2 approaches, we formulate our Bayesian Inference for Rating Data (BIRD) model, a flexible Bayesian model of user rating behavior. Based on our model we formulate a likelihood-based suspiciousness metric, Normalized Expected Surprise Total (NEST). We propose a linear-time algorithm for performing Bayesian inference using our model and computing the metric. Experiments on real data show that BIRDNEST successfully spots review fraud in large, real-world graphs: the 50 most suspicious users of the Flipkart platform flagged by our algorithm were investigated and all identified as fraudulent by domain experts at Flipkart.", "Online consumer reviews reflect the testimonials of real people, unlike e.g., ads. As such, they have critical impact on potential consumers, and indirectly on businesses.
Problematically, such financial incentives have created a market for spammers to fabricate reviews to unjustly promote or demote businesses, activities known as opinion spam (Jindal and Liu 2008). Most existing work on this problem has formulations based on static review data, with respective techniques operating in an offline fashion. Spam campaigns, however, are intended to make most impact during their course. Abnormal events triggered by spammers’ activities could be masked in the load of future events, which static analysis would fail to identify. In this work, we approach the opinion spam problem with a temporal formulation. Specifically, we monitor a list of carefully selected indicative signals of opinion spam over time and design efficient techniques to both detect and characterize abnormal events in real-time. Experiments on two different datasets show that our approach is fast, effective, and practical to be deployed in real-world systems.", "Most previous studies in computerized deception detection have relied only on shallow lexico-syntactic patterns. This paper investigates syntactic stylometry for deception detection, adding a somewhat unconventional angle to prior literature. Over four different datasets spanning from the product review to the essay domain, we demonstrate that features driven from Context Free Grammar (CFG) parse trees consistently improve the detection performance over several baselines that are based only on shallow lexico-syntactic features. Our results improve the best published result on the hotel review data (, 2011) reaching 91.2% accuracy with 14% error reduction.", "Although opinion spam (or fake review) detection has attracted significant research attention in recent years, the problem is far from solved. One key reason is that there is no large-scale ground truth labeled dataset available for model building.
Some review hosting sites such as Yelp.com and Dianping.com have built fake review filtering systems to ensure the quality of their reviews, but their algorithms are trade secrets. Working with Dianping, we present the first large-scale analysis of restaurant reviews filtered by Dianping's fake review filtering system. Along with the analysis, we also propose some novel temporal and spatial features for supervised opinion spam detection. Our results show that these features significantly outperform existing state-of-the-art features." ] }
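One of the rating-distribution signs surveyed above (e.g., the skewed-rating sign used by BIRDNEST) can be illustrated by scoring how far a reviewer's rating histogram deviates from the site-wide one. The KL-divergence scoring, the smoothing, and the toy ratings below are assumptions for illustration, not the cited algorithms.

```python
import math

def rating_histogram(ratings, levels=5, smooth=1.0):
    """Smoothed distribution over 1..levels star ratings."""
    counts = [smooth] * levels
    for r in ratings:
        counts[r - 1] += 1
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q):
    """KL(p || q); q is strictly positive thanks to smoothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def anomaly_score(user_ratings, global_ratings):
    """How far a user's rating distribution deviates from the global one."""
    return kl_divergence(rating_histogram(user_ratings),
                         rating_histogram(global_ratings))

everyone = [1, 2, 3, 3, 4, 4, 4, 5, 5, 2, 3, 4]  # hypothetical site-wide ratings
shill    = [5, 5, 5, 5, 5, 5, 5, 5]              # suspiciously skewed reviewer
typical  = [3, 4, 4, 5, 2]
print(anomaly_score(shill, everyone) > anomaly_score(typical, everyone))
```

A skewed distribution alone is not proof of fraud, which is exactly why the surveyed work combines it with other signs such as temporal bursts.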
1905.06246
2945523701
Product reviews and ratings on e-commerce websites provide customers with detailed insights about various aspects of the product such as quality, usefulness, etc. Since they influence customers' buying decisions, product reviews have become a fertile ground for abuse by sellers (colluding with reviewers) to promote their own products or tarnish the reputation of competitor's products. In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) via a tensor decomposition on the product reviews data. While tensor decomposition is mostly unsupervised, we formulate our problem as a semi-supervised binary multi-target tensor decomposition, to take advantage of currently known abusive entities. We empirically show that our multi-target semi-supervised model achieves higher precision and recall in detecting abusive entities as compared with unsupervised techniques. Finally, we show that our proposed stochastic partial natural gradient inference for our model empirically achieves faster convergence than stochastic gradient and Online-EM with sufficient statistics.
Tensor-based methods: Techniques such as CrossSpot @cite_32 , M-Zoom @cite_12 , and MultiAspectForensics @cite_3 propose identifying dense blocks in tensors or dense sub-graphs in heterogeneous networks, which can also be applied to the problem of identifying fake reviewers. M-Zoom is an improved version of CrossSpot that computes dense blocks in tensors which indicate anomalous or fraudulent behavior. The number of dense blocks (i.e., sub-tensors) returned by M-Zoom is configured a priori. Note that the dense blocks identified may be overlapping, i.e., a tuple could be included in two or more blocks. M-Zoom is used as one of the baseline unsupervised methods in our experiments. MultiAspectForensics, on the other hand, automatically detects and visualizes novel patterns that include bipartite cores in heterogeneous networks.
{ "cite_N": [ "@cite_3", "@cite_32", "@cite_12" ], "mid": [ "2132938399", "2248736178", "2513234781" ], "abstract": [ "Modern applications such as web knowledge base, network traffic monitoring and online social networks have made available an unprecedented amount of network data with rich types of interactions carrying multiple attributes, for instance, port number and time tick in the case of network traffic. The design of algorithms to leverage this structured relationship with the power of computing to assist researchers and practitioners for better understanding, exploration and navigation of this space of information has become a challenging, albeit rewarding, topic in social network analysis and data mining. The constantly growing scale and enriching genres of network data always demand higher levels of efficiency, robustness and generalizability where existing approaches with successes on small, homogeneous network data are likely to fall short. We introduce MultiAspectForensics, a handy tool to automatically detect and visualize novel subgraph patterns within a local community of nodes in a heterogeneous network, such as a set of vertices that form a dense bipartite graph whose edges share exactly the same set of attributes. We apply the proposed method on three data sets from distinct application domains, present empirical results and discuss insights derived from these patterns discovered. Our algorithm, built on scalable tensor analysis procedures, captures spectral properties of network data and reveals informative signals for subsequent domain-specific study and investigation, such as suspicious port-scanning activities in the scenario of cyber-security monitoring.", "Which seems more suspicious: 5,000 tweets from 200 users on 5 IP addresses, or 10,000 tweets from 500 users on 500 IP addresses but all with the same trending topic and all in 10 minutes? The literature has many methods that try to find dense blocks in matrices, and, recently, tensors, but no method gives a principled way to score the suspiciousness of dense blocks with different numbers of modes and rank them to draw human attention accordingly. Dense blocks are worth inspecting, typically indicating fraud, emerging trends, or some other noteworthy deviation from the usual. Our main contribution is that we show how to unify these methods and how to give a principled answer to questions like the above. Specifically, (a) we give a list of axioms that any metric of suspiciousness should satisfy, (b) we propose an intuitive, principled metric that satisfies the axioms, and is fast to compute, (c) we propose CROSSSPOT, an algorithm to spot dense regions, and sort them in importance (\"suspiciousness\") order. Finally, we apply CROSSSPOT to real data, where it improves the F1 score over previous techniques by 68% and finds retweet-boosting in a real social dataset spanning 0.3 billion posts.", "Given a large-scale and high-order tensor, how can we find dense blocks in it? Can we find them in near-linear time but with a quality guarantee? Extensive previous work has shown that dense blocks in tensors as well as graphs indicate anomalous or fraudulent behavior e.g., lockstep behavior in social networks. However, available methods for detecting such dense blocks are not satisfactory in terms of speed, accuracy, or flexibility. In this work, we propose M-Zoom, a flexible framework for finding dense blocks in tensors, which works with a broad class of density measures. M-Zoom has the following properties: (1) Scalable: M-Zoom scales linearly with all aspects of tensors and is up to 114 @math faster than state-of-the-art methods with similar accuracy. (2) Provably accurate: M-Zoom provides a guarantee on the lowest density of the blocks it finds. (3) Flexible: M-Zoom supports multi-block detection and size bounds as well as diverse density measures. (4) Effective: M-Zoom successfully detected edit wars and bot activities in Wikipedia, and spotted network attacks from a TCP dump with near-perfect accuracy (AUC = 0.98). The data and software related to this paper are available at http://www.cs.cmu.edu/~kijungs/codes/mzoom." ] }
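The greedy dense-block search that M-Zoom performs can be illustrated with a toy implementation. The sketch below is a hypothetical simplification, not the actual M-Zoom algorithm (which uses heap-based deletions for near-linear time and comes with density guarantees): the tensor is a list of (coordinate-tuple, count) entries, density is mass divided by the sum of mode sizes, and at each step the attribute value contributing the least mass is deleted while the densest intermediate block is remembered.

```python
from collections import defaultdict

def density(block, k):
    """Arithmetic density: total mass over the sum of the k mode sizes."""
    mass = sum(c for _, c in block)
    size = sum(len({co[m] for co, _ in block}) for m in range(k))
    return mass / size

def greedy_dense_block(tuples):
    """Greedy, M-Zoom-flavoured search (illustrative): repeatedly delete
    the (mode, value) pair with the smallest mass contribution and keep
    the densest block seen along the way.
    `tuples` is a list of ((v_0, ..., v_{k-1}), count) entries."""
    k = len(tuples[0][0])
    current = list(tuples)
    best_block, best_density = list(current), density(current, k)
    while current:
        contrib = defaultdict(float)            # mass per (mode, value)
        for coords, c in current:
            for m, v in enumerate(coords):
                contrib[(m, v)] += c
        (m, v), _ = min(contrib.items(), key=lambda kv: kv[1])
        current = [(co, c) for co, c in current if co[m] != v]
        if current:
            d = density(current, k)
            if d > best_density:
                best_block, best_density = list(current), d
    return best_block, best_density
```

On a toy (reviewer, product) count matrix with four lockstep entries of mass 10 plus two noise entries of mass 1, the greedy search strips the noise first and returns the four lockstep entries as the densest block, which is exactly the kind of signal used above to flag fraudulent behavior.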
1905.06246
2945523701
Product reviews and ratings on e-commerce websites provide customers with detailed insights about various aspects of the product such as quality, usefulness, etc. Since they influence customers' buying decisions, product reviews have become a fertile ground for abuse by sellers (colluding with reviewers) to promote their own products or tarnish the reputation of competitor's products. In this paper, our focus is on detecting such abusive entities (both sellers and reviewers) via a tensor decomposition on the product reviews data. While tensor decomposition is mostly unsupervised, we formulate our problem as a semi-supervised binary multi-target tensor decomposition, to take advantage of currently known abusive entities. We empirically show that our multi-target semi-supervised model achieves higher precision and recall in detecting abusive entities as compared with unsupervised techniques. Finally, we show that our proposed stochastic partial natural gradient inference for our model empirically achieves faster convergence than stochastic gradient and Online-EM with sufficient statistics.
We have a small set of known abusive sellers and reviewers identified via manual audits. We leverage this partial supervision to propose a semi-supervised extension of tensor decomposition that detects new abusive entities with greater fidelity. To leverage correlations between different forms of abuse (e.g., paid-reviews abuse, abuse related to compromised accounts), we incorporate multiple binary targets based on a logistic model with Pólya-Gamma data augmentation. Natural gradient learning: Natural gradient learning @cite_8 is an alternative to traditional gradient descent based learning. We develop stochastic partial natural gradient learning for the semi-supervised tensor decomposition model and show that it empirically achieves faster convergence compared to stochastic gradient descent and EM with sufficient statistics.
{ "cite_N": [ "@cite_8" ], "mid": [ "1970789124" ], "abstract": [ "When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed." ] }
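The natural-gradient idea described above, preconditioning the ordinary gradient with the inverse Fisher information, can be illustrated on a one-parameter toy problem. The model and learning rate below are illustrative assumptions, not the paper's decomposition model: we fit a Bernoulli probability p = sigmoid(theta) to an observed mean, where the Fisher information in the logit parametrisation is F(theta) = p(1 - p).

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def fit_logit(y_mean, steps, lr, natural):
    """Fit p = sigmoid(theta) to an observed Bernoulli mean y_mean by
    descending the expected negative log-likelihood.  The plain gradient
    w.r.t. theta is (p - y_mean); the natural gradient divides it by the
    Fisher information F(theta) = p * (1 - p)."""
    theta = 0.0
    for _ in range(steps):
        p = sigmoid(theta)
        grad = p - y_mean
        if natural:
            grad /= p * (1.0 - p)   # F(theta)^{-1} * grad
        theta -= lr * grad
    return sigmoid(theta)
```

With the same learning rate and only five steps, the natural-gradient run already sits on the target mean 0.9 while plain gradient descent is still far off, the kind of convergence speed-up the paragraph above refers to, here in a deliberately tiny setting.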
1905.06203
2946104016
Sharing multimodal information (typically images, videos or text) in Social Network Sites (SNS) occupies a relevant part of our time. The particular way how users expose themselves in SNS can provide useful information to infer human behaviors. This paper proposes to use multimodal data gathered from Instagram accounts to predict the perceived prototypical needs described in Glasser's choice theory. The contribution is two-fold: (i) we provide a large multimodal database from Instagram public profiles (more than 30,000 images and text captions) annotated by expert Psychologists on each perceived behavior according to Glasser's theory, and (ii) we propose to automate the recognition of the (unconsciously) perceived needs by the users. Particularly, we propose a baseline using three different feature sets: visual descriptors based on pixel images (SURF and Visual Bag of Words), a high-level descriptor based on the automated scene description using Convolutional Neural Networks, and a text-based descriptor (Word2vec) obtained from processing the captions provided by the users. Finally, we propose a multimodal fusion of these descriptors obtaining promising results in the multi-label classification problem.
Different researchers have used SNS data to track the course of disease outbreaks @cite_11 @cite_45. Predictive screening methods have also successfully found signs of mental health issues in social media data @cite_21 @cite_10 @cite_46.
{ "cite_N": [ "@cite_21", "@cite_45", "@cite_46", "@cite_10", "@cite_11" ], "mid": [ "2162051395", "2962848499", "2513928994", "", "2140095656" ], "abstract": [ "We consider social media as a promising tool for public health, focusing on the use of Twitter posts to build predictive models about the forthcoming influence of childbirth on the behavior and mood of new mothers. Using Twitter posts, we quantify postpartum changes in 376 mothers along dimensions of social engagement, emotion, social network, and linguistic style. We then construct statistical models from a training set of observations of these measures before and after the reported childbirth, to forecast significant postpartum changes in mothers. The predictive models can classify mothers who will change significantly following childbirth with an accuracy of 71 , using observations about their prenatal behavior, and as accurately as 80-83 when additionally leveraging the initial 2-3 weeks of postnatal data. The study is motivated by the opportunity to use social media to identify mothers at risk of postpartum depression, an underreported health concern among large populations, and to inform the design of low-cost, privacy-sensitive early-warning systems and intervention programs aimed at promoting wellness postpartum.", "We developed computational models to predict the emergence of depression and Post-Traumatic Stress Disorder in Twitter users. Twitter data and details of depression history were collected from 204 individuals (105 depressed, 99 healthy). We extracted predictive features measuring affect, linguistic style, and context from participant tweets (N = 279,951) and built models using these features with supervised learning algorithms. Resulting models successfully discriminated between depressed and healthy content, and compared favorably to general practitioners’ average success rates in diagnosing depression, albeit in a separate population. Results held even when the analysis was restricted to content posted before first depression diagnosis. State-space temporal analysis suggests that onset of depression may be detectable from Twitter data several months prior to diagnosis. Predictive results were replicated with a separate sample of individuals diagnosed with PTSD (N_users = 174, N_tweets = 243,775). A state-space time series model revealed indicators of PTSD almost immediately post-trauma, often many months prior to clinical diagnosis. These methods suggest a data-driven, predictive approach for early screening and detection of mental illness.", "Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These results suggest new avenues for early screening and detection of mental illness.", "", "Current methods for the detection of contagious outbreaks give contemporaneous information about the course of an epidemic at best. It is known that individuals near the center of a social network are likely to be infected sooner during the course of an outbreak, on average, than those at the periphery. Unfortunately, mapping a whole network to identify central individuals who might be monitored for infection is typically very difficult. We propose an alternative strategy that does not require ascertainment of global network structure, namely, simply monitoring the friends of randomly selected individuals. Such individuals are known to be more central. To evaluate whether such a friend group could indeed provide early detection, we studied a flu outbreak at Harvard College in late 2009. We followed 744 students who were either members of a group of randomly chosen individuals or a group of their friends. Based on clinical diagnoses, the progression of the epidemic in the friend group occurred 13.9 days (95% C.I. 9.9–16.6) in advance of the randomly chosen group (i.e., the population as a whole). The friend group also showed a significant lead time (p<0.05) on day 16 of the epidemic, a full 46 days before the peak in daily incidence in the population as a whole. This sensor method could provide significant additional time to react to epidemics in small or large populations under surveillance. The amount of lead time will depend on features of the outbreak and the network at hand. The method could in principle be generalized to other biological, psychological, informational, or behavioral contagions that spread in networks." ] }
1905.06203
2946104016
Sharing multimodal information (typically images, videos or text) in Social Network Sites (SNS) occupies a relevant part of our time. The particular way how users expose themselves in SNS can provide useful information to infer human behaviors. This paper proposes to use multimodal data gathered from Instagram accounts to predict the perceived prototypical needs described in Glasser's choice theory. The contribution is two-fold: (i) we provide a large multimodal database from Instagram public profiles (more than 30,000 images and text captions) annotated by expert Psychologists on each perceived behavior according to Glasser's theory, and (ii) we propose to automate the recognition of the (unconsciously) perceived needs by the users. Particularly, we propose a baseline using three different feature sets: visual descriptors based on pixel images (SURF and Visual Bag of Words), a high-level descriptor based on the automated scene description using Convolutional Neural Networks, and a text-based descriptor (Word2vec) obtained from processing the captions provided by the users. Finally, we propose a multimodal fusion of these descriptors obtaining promising results in the multi-label classification problem.
@cite_46 used a computational model to predict depression signs in users' Twitter data and showed that screening Twitter posts can identify this condition earlier and more accurately than health professionals. The results showed that depression indicators are identifiable up to six months before the trauma appears in an individual. Compared to the average 19-month delay between a trauma event and diagnosis, this progress can provide a framework for accessible, accurate, and inexpensive depression screening where in-person assessments are difficult or costly.
{ "cite_N": [ "@cite_46" ], "mid": [ "2513928994" ], "abstract": [ "Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average unassisted diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally-generated features. These results suggest new avenues for early screening and detection of mental illness." ] }
1905.06203
2946104016
Sharing multimodal information (typically images, videos or text) in Social Network Sites (SNS) occupies a relevant part of our time. The particular way how users expose themselves in SNS can provide useful information to infer human behaviors. This paper proposes to use multimodal data gathered from Instagram accounts to predict the perceived prototypical needs described in Glasser's choice theory. The contribution is two-fold: (i) we provide a large multimodal database from Instagram public profiles (more than 30,000 images and text captions) annotated by expert Psychologists on each perceived behavior according to Glasser's theory, and (ii) we propose to automate the recognition of the (unconsciously) perceived needs by the users. Particularly, we propose a baseline using three different feature sets: visual descriptors based on pixel images (SURF and Visual Bag of Words), a high-level descriptor based on the automated scene description using Convolutional Neural Networks, and a text-based descriptor (Word2vec) obtained from processing the captions provided by the users. Finally, we propose a multimodal fusion of these descriptors obtaining promising results in the multi-label classification problem.
Kircaburun and Griffiths @cite_18 examined the relationships between personality, self-liking, daily Internet use, and Instagram addiction. They asked 752 university students to complete a self-report survey, including the Instagram Addiction Scale and the Self-Liking Scale. They reported that agreeableness, conscientiousness, and self-liking are negatively associated with Instagram addiction, while daily Internet usage is positively associated with it. However, the majority of content shared on Instagram is not only about selfies and self-liking; users also tend to share personal interests through images, videos, and text overlaid on photos.
{ "cite_N": [ "@cite_18" ], "mid": [ "2788268593" ], "abstract": [ "Background and aimsRecent research has suggested that social networking site use can be addictive. Although extensive research has been carried out on potential addiction to social networking sites, such as Facebook, Twitter, YouTube, and Tinder, only one very small study has previously examined potential addiction to Instagram. Consequently, the objectives of this study were to examine the relationships between personality, self-liking, daily Internet use, and Instagram addiction, as well as exploring the mediating role of self-liking between personality and Instagram addiction using path analysis.MethodsA total of 752 university students completed a self-report survey, including the Instagram Addiction Scale (IAS), the Big Five Inventory (BFI), and the Self-Liking Scale.ResultsResults indicated that agreeableness, conscientiousness, and self-liking were negatively associated with Instagram addiction, whereas daily Internet use was positively associated with Instagram addiction. The results also showed t..." ] }
1905.06203
2946104016
Sharing multimodal information (typically images, videos or text) in Social Network Sites (SNS) occupies a relevant part of our time. The particular way how users expose themselves in SNS can provide useful information to infer human behaviors. This paper proposes to use multimodal data gathered from Instagram accounts to predict the perceived prototypical needs described in Glasser's choice theory. The contribution is two-fold: (i) we provide a large multimodal database from Instagram public profiles (more than 30,000 images and text captions) annotated by expert Psychologists on each perceived behavior according to Glasser's theory, and (ii) we propose to automate the recognition of the (unconsciously) perceived needs by the users. Particularly, we propose a baseline using three different feature sets: visual descriptors based on pixel images (SURF and Visual Bag of Words), a high-level descriptor based on the automated scene description using Convolutional Neural Networks, and a text-based descriptor (Word2vec) obtained from processing the captions provided by the users. Finally, we propose a multimodal fusion of these descriptors obtaining promising results in the multi-label classification problem.
@cite_13 surveyed methods published from 2005 to 2017 for automatic depression assessment based on visual cues. They addressed several research questions, including the number of modalities employed, facial signs, experimental protocols for dataset acquisition, feature descriptors, decision methods, and scores. Through their quantitative analysis, they concluded that the results are consistent with the social withdrawal, emotion-context insensitivity, and reduced reactivity hypotheses of depression, and that dynamic features and multimodal approaches are important. They also noted that the multitude of reported approaches to automatic depression assessment is not yet mature, because clinical research questions, such as the capacity to distinguish between different depression sub-types or the influence of ethnicity and culture on the progress of mental health, were not addressed systematically. Finally, they argued that visual cues need to be supplemented by information from other modalities to achieve clinically useful results.
{ "cite_N": [ "@cite_13" ], "mid": [ "2760537051" ], "abstract": [ "Automatic depression assessment based on visual cues is a rapidly growing research domain. The present exhaustive review of existing approaches as reported in over sixty publications during the last ten years focuses on image processing and machine learning algorithms. Visual manifestations of depression, various procedures used for data collection, and existing datasets are summarized. The review outlines methods and algorithms for visual feature extraction, dimensionality reduction, decision methods for classification and regression approaches, as well as different fusion strategies. A quantitative meta-analysis of reported results, relying on performance metrics robust to chance, is included, identifying general trends and key unresolved issues to be considered in future studies of automatic depression assessment utilizing visual cues alone or in combination with vocal or verbal cues." ] }
1905.06233
2945193749
Rewriting is a formalism widely used in computer science and mathematical logic. The classical formalism has been extended, in the context of functional languages, with an order over the rules and, in the context of rewrite based languages, with the negation over patterns. We propose in this paper a concise and clear algorithm computing the difference over patterns which can be used to define generic encodings of constructor term rewriting systems with negation and order into classical term rewriting systems. As a direct consequence, established methods used for term rewriting systems can be applied to analyze properties of the extended systems. The approach can also be seen as a generic compiler which targets any language providing basic pattern matching primitives. The formalism provides also a new method for deciding if a set of patterns subsumes a given pattern and thus, for checking the presence of useless patterns or the completeness of a set of patterns.
L. Maranget has proposed an algorithm for detecting useless patterns @cite_8 for OCaml and Haskell. As mentioned previously, the algorithm in Figure can also be used to check whether a pattern is useless w.r.t. a set of patterns (Proposition ), but it computes the difference between the pattern and the set in order to make the decision. The minimization algorithm in Figure can thus use the two algorithms interchangeably. Both algorithms have been implemented, and we measured the execution time of the minimization function on various examples. When the order of rule application is expressed with a left-to-right strategy choice-operator, the gain with our new approach is even more significant than for the examples in @cite_6, which involved only plain, order-independent TRSs.
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2287769028", "2164474969" ], "abstract": [ "Rewriting is a formalism widely used in computer science and mathematical logic. When using rewriting as a programming or modeling paradigm, the rewrite rules describe the transformations one wants to operate and declarative rewriting strategies are used to control their application. The operational semantics of these strategies are generally accepted and approaches for analyzing the termination of specific strategies have been studied. We propose in this paper a generic encoding of classic control and traversal strategies used in rewrite based languages such as Maude, Stratego and Tom into a plain term rewriting system. The encoding is proven sound and complete and, as a direct consequence, established termination methods used for term rewriting systems can be applied to analyze the termination of strategy controlled term rewriting systems. The corresponding implementation in Tom generates term rewriting systems compatible with the syntax of termination tools such as AProVE and TTT2, tools which turned out to be very effective in (dis)proving the termination of the generated term rewriting systems. The approach can also be seen as a generic strategy compiler which can be integrated into languages providing pattern matching primitives; this has been experimented for Tom and performances comparable to the native Tom strategies have been observed. 1998 ACM Subject Classification F.4 Mathematical Logic and Formal Languages", "We examine the ML pattern-matching anomalies of useless clauses and non-exhaustive matches. We state the definition of these anomalies, building upon pattern matching semantics, and propose a simple algorithm to detect them. We have integrated the algorithm in the Objective Caml compiler, but we show that the same algorithm is also usable in a non-strict language such as Haskell. Or-patterns are considered for both strict and nonstrict languages." ] }
1905.06233
2945193749
Rewriting is a formalism widely used in computer science and mathematical logic. The classical formalism has been extended, in the context of functional languages, with an order over the rules and, in the context of rewrite based languages, with the negation over patterns. We propose in this paper a concise and clear algorithm computing the difference over patterns which can be used to define generic encodings of constructor term rewriting systems with negation and order into classical term rewriting systems. As a direct consequence, established methods used for term rewriting systems can be applied to analyze properties of the extended systems. The approach can also be seen as a generic compiler which targets any language providing basic pattern matching primitives. The formalism provides also a new method for deciding if a set of patterns subsumes a given pattern and thus, for checking the presence of useless patterns or the completeness of a set of patterns.
There are many works @cite_7 @cite_3 @cite_5 @cite_4 targeting the analysis of functional languages, essentially in terms of termination and complexity, and they usually involve some encoding of the match construction. These encodings are generally deep and take into account the evaluation strategy of the targeted language, leading to powerful analysis tools. Our encodings are shallow and independent of the reduction strategy. Even if it turned out to be very practical for encoding ordered CTRSs involving anti-patterns and proving the (innermost) termination of the corresponding CTRSs with , in the context of functional program analysis we see our approach more as a helper that will hopefully be used as an add-on by the existing analysis tools.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_4", "@cite_7" ], "mid": [ "2020692742", "1836046881", "2115215132", "2099473324" ], "abstract": [ "We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to a defunctionalised versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflection of the employed transformations such that the complexity of the obtained term rewrite system reflects on the complexity of the initial program. Further, we describe suitable strategies for the application of the studied transformations and provide ample experimental data for assessing the viability of our method.", "We show how to automate termination proofs for recursive functions in (a first-order subset of) Isabelle HOL by encoding them as term rewrite systems and invoking an external termination prover. Our link to the external prover includes full proof reconstruction, where all necessary properties are derived inside Isabelle HOL without oracles. Apart from the certification of the imported proof, the main challenge is the formal reduction of the proof obligation produced by Isabelle HOL to the termination of the corresponding term rewrite system. We automate this reduction via suitable tactics which we added to the IsaFoR library.", "AProVE is a system for automatic termination and complexity proofs of Java, C, Haskell, Prolog, and term rewrite systems (TRSs). To analyze programs in high-level languages, AProVE automatically converts them to TRSs. Then, a wide range of techniques is employed to prove termination and to infer complexity bounds for the resulting TRSs. The generated proofs can be exported to check their correctness using automatic certifiers. For use in software construction, we present an AProVE plug-in for the popular Eclipse software development environment.", "There are many powerful techniques for automated termination analysis of term rewriting. However, up to now they have hardly been used for real programming languages. We present a new approach which permits the application of existing techniques from term rewriting to prove termination of most functions defined in Haskell programs. In particular, we show how termination techniques for ordinary rewriting can be used to handle those features of Haskell which are missing in term rewriting (e.g., lazy evaluation, polymorphic types, and higher-order functions). We implemented our results in the termination prover AProVE and successfully evaluated them on existing Haskell libraries." ] }
1905.06139
2945598945
In image-grounded text generation, fine-grained representations of the image are considered to be of paramount importance. Most of the current systems incorporate visual features and textual concepts as a sketch of an image. However, plainly inferred representations are usually undesirable in that they are composed of separate components, the relations of which are elusive. In this work, we aim at representing an image with a set of integrated visual regions and corresponding textual concepts. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts, respectively, by aligning the two modalities. We evaluate the proposed approach on the COCO dataset for image captioning. Extensive experiments show that the refined image representations boost the baseline models by up to 12 in terms of CIDEr, demonstrating that our method is effective and generalizes well to a wide range of models.
A number of neural approaches have been proposed to obtain image representations in various forms. An intuitive method is to extract visual features using a CNN or an RCNN. The former splits an image into a uniform grid of visual regions (Figure (a)), while the latter produces object-level visual features based on bounding boxes (Figure (b)), which has proven to be more effective. For image captioning, , and augmented the information source with textual concepts given by a predictor trained to find the most frequent words in the captions. A more recent advance @cite_18 built graphs over the RCNN-detected visual regions, whose relationships are modeled as directed edges in a scene graph, which is further encoded via a Graph Convolutional Network (GCN).
{ "cite_N": [ "@cite_18" ], "mid": [ "2890531016" ], "abstract": [ "It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1 to 128.7 on COCO testing set." ] }
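The graph-convolutional encoding of region relationships described above can be illustrated with a minimal message-passing step. This is a generic sketch (mean aggregation over self-loops and neighbours, plain Python, nonlinearity omitted), not the GCN-LSTM implementation of @cite_18, which uses directed, relation-typed edges:

```python
def gcn_step(adj, feats, weight):
    """One graph-convolution step over region features (sketch):
    each node averages its own and its neighbours' features, then
    applies a shared linear transform. adj: n x n 0/1 matrix,
    feats: n x d, weight: d x d_out (plain nested lists)."""
    n, d_out = len(adj), len(weight[0])
    # add self-loops so each node keeps its own representation
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # mean-aggregate neighbour features
    agg = [[sum(a[i][k] * feats[k][f] for k in range(n)) / deg[i]
            for f in range(len(feats[0]))] for i in range(n)]
    # shared linear transform (nonlinearity omitted for brevity)
    return [[sum(agg[i][f] * weight[f][o] for f in range(len(weight)))
             for o in range(d_out)] for i in range(n)]
```

Stacking such steps lets each region's feature absorb information from increasingly distant regions in the scene graph before the decoder attends over them.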
1905.06139
2945598945
In image-grounded text generation, fine-grained representations of the image are considered to be of paramount importance. Most of the current systems incorporate visual features and textual concepts as a sketch of an image. However, plainly inferred representations are usually undesirable in that they are composed of separate components, the relations of which are elusive. In this work, we aim at representing an image with a set of integrated visual regions and corresponding textual concepts. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts, respectively, by aligning the two modalities. We evaluate the proposed approach on the COCO dataset for image captioning. Extensive experiments show that the refined image representations boost the baseline models by up to 12 in terms of CIDEr, demonstrating that our method is effective and generalizes well to a wide range of models.
To acquire integrated image representations, we introduce the Mutual Iterative Attention (MIA) strategy, which is based on the self-attention mechanism @cite_8 , to align the visual features and textual concepts. It is worth mentioning that prior work also introduced the notion of visual-semantic alignment: it endowed the RCNN-based visual features with semantic information by minimizing their distance in a multimodal embedding space to corresponding segments of the ground-truth caption, which is quite different from our concept-based alignment.
{ "cite_N": [ "@cite_8" ], "mid": [ "2963403868" ], "abstract": [ "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ] }
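For a single head, the self-attention mechanism of @cite_8 that MIA builds on reduces to scaled dot-product attention. A minimal pure-Python sketch (illustrative function names; the multi-head linear projections are omitted):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Single-head scaled dot-product attention (sketch).
    queries: n_q x d, keys: n_k x d, values: n_k x d_v (nested lists).
    Each query is answered by a weighted average of the values, with
    weights softmax(q . k / sqrt(d))."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

In a cross-modal setting such as MIA, the queries can come from one modality (e.g., textual concepts) and the keys/values from the other (visual regions), so each output is a mixture of the other modality's features.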
1905.06139
2945598945
In image-grounded text generation, fine-grained representations of the image are considered to be of paramount importance. Most of the current systems incorporate visual features and textual concepts as a sketch of an image. However, plainly inferred representations are usually undesirable in that they are composed of separate components, the relations of which are elusive. In this work, we aim at representing an image with a set of integrated visual regions and corresponding textual concepts. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts, respectively, by aligning the two modalities. We evaluate the proposed approach on the COCO dataset for image captioning. Extensive experiments show that the refined image representations boost the baseline models by up to 12 in terms of CIDEr, demonstrating that our method is effective and generalizes well to a wide range of models.
In the field of image captioning, a prevailing paradigm is the encoder-decoder framework, where a CNN encoder and an RNN decoder are trained end-to-end, translating an image into a coherent description. To bridge the gap between the image and the half-finished caption, visual attention @cite_23 and semantic attention @cite_27 were separately proposed to force the decoder to focus on the most relevant visual regions and textual concepts, respectively, according to the generated context. As a result, the burden falls entirely on the decoder to associate the individual features, the relations of which are elusive. The contribution of this work is providing fine-grained image representations, which can be used in conjunction with the decoder-based attention mechanisms and ultimately give rise to higher-quality captions. It is worth noticing that prior work used the Transformer to replace the RNN and showed that the Transformer was less effective than the RNN in image captioning, while we use the multi-head attention only as a means for aligning the visual features and textual concepts; the decoder still follows the baselines and is not replaced.
{ "cite_N": [ "@cite_27", "@cite_23" ], "mid": [ "2302086703", "2950178297" ], "abstract": [ "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO." ] }
1905.05583
2945824677
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
Multi-task learning @cite_7 @cite_21 is another relevant direction. Previous studies use this method to train the language model and the main task model jointly, and the MT-DNN model has been extended by incorporating BERT as its shared text encoding layers. MTL requires training tasks from scratch every time, which makes it inefficient, and it usually requires careful weighting of task-specific objective functions @cite_24 . However, we can use multi-task BERT fine-tuning to avoid this problem by making full use of the shared pre-trained model.
{ "cite_N": [ "@cite_24", "@cite_21", "@cite_7" ], "mid": [ "2767434619", "2117130368", "1614862348" ], "abstract": [ "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter @math . Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.", "We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. 
We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance.", "This paper suggests that it may be easier to learn several hard tasks at one time than to learn these same tasks separately. In effect, the information provided by the training signal for each task serves as a domain-specific inductive bias for the other tasks. Frequently the world gives us clusters of related tasks to learn. When it does not, it is often straightforward to create additional tasks. For many domains, acquiring inductive bias by collecting additional teaching signal may be more practical than the traditional approach of codifying domain-specific biases acquired from human expertise. We call this approach Multitask Learning (MTL). Since much of the power of an inductive learner follows directly from its inductive bias, multitask learning may yield more powerful learning. An empirical example of multitask connectionist learning is presented where learning improves by training one network on several related tasks at the same time. Multitask decision tree induction is also outlined." ] }
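GradNorm @cite_24 tunes task weights by gradient descent on a surrogate loss over gradient magnitudes. The toy helper below illustrates only its central idea, equalising the weighted per-task gradient norms, and is not the actual algorithm (which also accounts for per-task training rates via a hyperparameter alpha):

```python
def balance_task_weights(grad_norms):
    """Toy gradient-norm balancing (illustrative, not GradNorm itself):
    give each task a loss weight inversely proportional to its current
    gradient norm, renormalised so the weights sum to the number of
    tasks, following GradNorm's convention."""
    inv = [1.0 / g for g in grad_norms]
    scale = len(grad_norms) / sum(inv)
    return [scale * x for x in inv]
```

With weights chosen this way, a task whose gradients currently dominate is down-weighted so that no single objective swamps the shared encoder.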
1905.05835
2946113133
Despite the robust structure of the Internet, it is still susceptible to disruptive routing updates that prevent network traffic from reaching its destination. In this work, we propose a method for early detection of large-scale disruptions based on the analysis of bursty BGP announcements. We hypothesize that the occurrence of large-scale disruptions is preceded by bursty announcements. Our method is grounded in analysis of changes in the inter-arrival times of announcements. BGP announcements that are associated with disruptive updates tend to occur in groups of relatively high frequency, followed by periods of infrequent activity. To test our hypothesis, we quantify the burstiness of inter-arrival times around the date and times of three large-scale incidents: the Indosat hijacking event in April 2014, the Telecom Malaysia leak in June 2015, and the Bharti Airtel Ltd. hijack in November 2015. We show that we can detect these events several hours prior to when they were originally detected. We propose an algorithm that leverages the burstiness of disruptive updates to provide early detection of large-scale malicious incidents using local collector data. We describe limitations, open challenges, and how this method can be used for large-scale routing anomaly detection.
Data-plane approaches use ping/traceroute to detect anomalies in the route of data @cite_75 @cite_76 . These approaches rely on monitoring the reachability of routes from the victim to detect anomalies. The work in @cite_75 proposed a distributed scheme for detecting BGP anomalies based on departures from hop-count stability and AS-path similarity. Following this methodology, the work in @cite_7 proposed iSPY, which generates an alarm every time the reachability of a predefined prefix is not observable from multiple vantage points. Data-plane approaches are able to pinpoint suspicious path changes in the traffic, which results in higher detection accuracy. However, they do not scale well, since they require a considerable number of active measurements for characterizing regular paths, and they incur large latency @cite_54 . Data-plane approaches are complementary to the proposed method, but they are reactive in the sense that they detect anomalies only once these are widely spread, and they do not provide the ability to anticipate when an event is incipient.
{ "cite_N": [ "@cite_54", "@cite_76", "@cite_75", "@cite_7" ], "mid": [ "2091563247", "", "2162695692", "2963936748" ], "abstract": [ "The de facto inter-domain routing protocol, Border Gateway Protocol (BGP), plays a critical role in the Internet routing reliability. Invalid routes generated by mis-configurations or malicious attacks will devastate the Internet routing system. In the near future, deploying a secure BGP in the Internet to completely prevent hijacking is impossible. As a result, lots of hijacking detection systems have emerged. However, they have more or less weaknesses such as long detection delay, high false alarm rate or deploy hardness. This paper proposes Argus, an agile system to fast and accurate detect prefix hijacking. Argus already keeps on running in the Internet for two months and identified several possible hijackings. Initial results show that it usually discovers a hijacking in less than ten seconds, and can significantly decrease the false alarm rate.", "", "As more and more Internet IP prefix hijacking incidents are being reported, the value of hijacking detection services has become evident. Most of the current hijacking detection approaches monitor IP prefixes on the control plane and detect inconsistencies in route advertisements and route qualities. We propose a different approach that utilizes information collected mostly from the data plane. Our method is motivated by two key observations: when a prefix is not hijacked, 1) the hop count of the path from a source to this prefix is generally stable; and 2) the path from a source to this prefix is almost always a super-path of the path from the same source to a reference point along the previous path, as long as the reference point is topologically close to the prefix. 
By carefully selecting multiple vantage points and monitoring from these vantage points for any departure from these two observations, our method is able to detect prefix hijacking with high accuracy in a light-weight, distributed, and real-time fashion. Through simulations constructed based on real Internet measurement traces, we demonstrate that our scheme is accurate with both false positive and false negative ratios below 0.5%.", "BGP prefix hijacking is a threat to Internet operators and users. Several mechanisms or modifications to BGP that protect the Internet against it have been proposed. However, the reality is that most operators have not deployed them and are reluctant to do so in the near future. Instead, they rely on basic - and often inefficient - proactive defenses to reduce the impact of hijacking events, or on detection based on third party services and reactive approaches that might take up to several hours. In this work, we present the results of a survey we conducted among 75 network operators to study: (a) the operators' awareness of BGP prefix hijacking attacks, (b) presently used defenses (if any) against BGP prefix hijacking, (c) the willingness to adopt new defense mechanisms, and (d) reasons that may hinder the deployment of BGP prefix hijacking defenses. We expect the findings of this survey to increase the understanding of existing BGP hijacking defenses and the needs of network operators, as well as contribute towards designing new defense mechanisms that satisfy the requirements of the operators." ] }
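The burstiness of inter-arrival times that the abstract above hypothesises can be quantified, for example, with the Goh-Barabasi burstiness coefficient. Whether this matches the paper's exact statistic is an assumption; treat the sketch as illustrative:

```python
import statistics

def burstiness(arrival_times):
    """Burstiness coefficient B = (sigma - mu) / (sigma + mu) of the
    inter-arrival times (Goh & Barabasi). B = -1 for a perfectly
    regular stream, near 0 for a Poisson process, and approaches 1
    for streams dominated by dense bursts and long quiet periods."""
    taus = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    mu = statistics.mean(taus)
    sigma = statistics.pstdev(taus)
    return (sigma - mu) / (sigma + mu)
```

Computing this statistic over a sliding window of BGP announcement timestamps, and alarming when it departs from its baseline, is one simple way to operationalise the early-warning idea on local collector data.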
1905.05835
2946113133
Despite the robust structure of the Internet, it is still susceptible to disruptive routing updates that prevent network traffic from reaching its destination. In this work, we propose a method for early detection of large-scale disruptions based on the analysis of bursty BGP announcements. We hypothesize that the occurrence of large-scale disruptions is preceded by bursty announcements. Our method is grounded in analysis of changes in the inter-arrival times of announcements. BGP announcements that are associated with disruptive updates tend to occur in groups of relatively high frequency, followed by periods of infrequent activity. To test our hypothesis, we quantify the burstiness of inter-arrival times around the date and times of three large-scale incidents: the Indosat hijacking event in April 2014, the Telecom Malaysia leak in June 2015, and the Bharti Airtel Ltd. hijack in November 2015. We show that we can detect these events several hours prior to when they were originally detected. We propose an algorithm that leverages the burstiness of disruptive updates to provide early detection of large-scale malicious incidents using local collector data. We describe limitations, open challenges, and how this method can be used for large-scale routing anomaly detection.
Hybrid approaches have been developed to address the limitations of exclusively control- and data-plane methods @cite_20 @cite_60 . The main idea behind hybrid approaches is to use control-plane inconsistencies to inform data-plane measurements, i.e., by exploring the reachability of packets in a particular network. The work in @cite_20 explored this idea by proposing a framework that launches data-plane probes only when anomalous update messages are received. This system was intended to be used as customized software installed in the routers. Following this idea, the work in @cite_51 introduced Argus, an automated system that detects prefix hijacking and deduces the origin of the anomaly. Argus is based on pervasively correlating control- and data-plane data during a given time period to detect anomalies, including sub-prefix hijacks. The proposed method is able to identify sophisticated attacks, such as those identified by hybrid approaches, without using data-plane information, and it allows anticipating anomalies relying on control-plane information alone.
{ "cite_N": [ "@cite_51", "@cite_20", "@cite_60" ], "mid": [ "2069222612", "2162877415", "" ], "abstract": [ "Border Gateway Protocol (BGP) plays a critical role in the Internet inter-domain routing reliability. Invalid routes generated by mis-configurations or forged by malicious attacks may hijack the traffic and devastate the Internet routing system, but it is unlikely that a secure BGP can be deployed in the near future to completely prevent them. Although many hijacking detection systems have been developed, they more or less have weaknesses such as long detection delay, high false alarm rate and deployment difficulty, and no systematic detection results have been studied. This paper proposes Argus, an agile system that can accurately detect prefix hijackings and deduce the underlying cause of route anomalies in a very fast way. Argus is based on correlating the control and data plane information closely and pervasively, and has been continuously monitoring the Internet for more than one year. During this period, around 40K routing anomalies were detected, from which 220 stable prefix hijackings were identified. Our analysis on these events shows that, hijackings that have only been theoretically studied before do exist in the Internet. Although the frequency of new hijackings is nearly stable, more specific prefixes are hijacked more frequently. Around 20 of the hijackings last less than ten minutes, and some can pollute 90 of the Internet in less than two minutes. These characteristics make especially useful in practice. We further analyze some representative cases in detail to help increase the understanding of prefix hijackings in the Internet.", "We present novel and practical techniques to accurately detect IP prefix hijacking attacks in real time to facilitate mitigation. Attacks may hijack victim's address space to disrupt network services or perpetrate malicious activities such as spamming and DoS attacks without disclosing identity. 
We propose novel ways to significantly improve the detection accuracy by combining analysis of passively collected BGP routing updates with data plane fingerprints of suspicious prefixes. The key insight is to use data plane information in the form of edge network fingerprinting to disambiguate suspect IP hijacking incidences based on routing anomaly detection. Conflicts in data plane fingerprints provide much more definitive evidence of successful IP prefix hijacking. Utilizing multiple real-time BGP feeds, we demonstrate the ability of our system to distinguish between legitimate routing changes and actual attacks. Strong correlation with addresses that originate spam emails from a spam honeypot confirms the accuracy of our techniques.", "" ] }
1905.05835
2946113133
Despite the robust structure of the Internet, it is still susceptible to disruptive routing updates that prevent network traffic from reaching its destination. In this work, we propose a method for early detection of large-scale disruptions based on the analysis of bursty BGP announcements. We hypothesize that the occurrence of large-scale disruptions is preceded by bursty announcements. Our method is grounded in analysis of changes in the inter-arrival times of announcements. BGP announcements that are associated with disruptive updates tend to occur in groups of relatively high frequency, followed by periods of infrequent activity. To test our hypothesis, we quantify the burstiness of inter-arrival times around the date and times of three large-scale incidents: the Indosat hijacking event in April 2014, the Telecom Malaysia leak in June 2015, and the Bharti Airtel Ltd. hijack in November 2015. We show that we can detect these events several hours prior to when they were originally detected. We propose an algorithm that leverages the burstiness of disruptive updates to provide early detection of large-scale malicious incidents using local collector data. We describe limitations, open challenges, and how this method can be used for large-scale routing anomaly detection.
The use of the RPKI alone does not require changes to the BGP protocol. The RPKI is an out-of-band mechanism in which routers download information for decision making, and it does not require the use of online cryptography. However, there are reasons that limit the scope of the RPKI for securing BGP. Researchers have debated the need to agree on a trusted Certificate Authority @cite_69 , the difficulty of correctly configuring the RPKI @cite_46 , a general lack of commitment and incentives to drive its deployment @cite_42 , and its permissiveness to certain types of attacks, e.g., path-shortening attacks @cite_14 .
{ "cite_N": [ "@cite_46", "@cite_14", "@cite_69", "@cite_42" ], "mid": [ "1973489837", "1988292309", "2010896385", "2087039608" ], "abstract": [ "Prefix hijacking has always been a big concern in the Internet. Some events made it into the international world-news, but most of them remain unreported or even unnoticed. The scale of the problem can only be estimated. The Resource Publication Infrastructure (RPKI) is an effort by the IETF to secure the inter-domain routing system. It includes a formally verifiable way of identifying who owns legitimately which portion of the IP address space. The RPKI has been standardized and prototype implementations are tested by Internet Service Providers (ISPs). Currently the system holds already about 2 of the Internet routing table. Therefore, in theory, it should be easy to detect hijacking of prefixes within that address space. We take an early look at BGP update data and check those updates against the RPKI---in the same way a router would do, once the system goes operational. We find many interesting dynamics, not all can be easily explained as hijacking, but a significant number are likely operational testing or misconfigurations.", "", "The RPKI is a new security infrastructure that relies on trusted authorities to prevent some of the most devastating attacks on interdomain routing. The threat model for the RPKI supposes that authorities are trusted and routing is under attack. Here we discuss the risks that arise when this threat model is flipped: when RPKI authorities are faulty, misconfigured, compromised, or compelled to misbehave. We show how design decisions that elegantly address the vulnerabilities in the original threat model have unexpected side effects in this flipped threat model. In particular, we show new targeted attacks that allow RPKI authorities, under certain conditions, to limit access to IP prefixes, and discuss the risk that transient RPKI faults can take IP prefixes offline. 
Our results suggest promising directions for future research, and have implications on the design of security architectures that are appropriate for the untrusted and error-prone Internet.", "With a cryptographic root-of-trust for Internet routing(RPKI [17]) on the horizon, we can finally start planning the deployment of one of the secure interdomain routing protocols proposed over a decade ago (Secure BGP [22], secure origin BGP [37]). However, if experience with IPv6 is any indicator, this will be no easy task. Security concerns alone seem unlikely to provide sufficient local incentive to drive the deployment process forward. Worse yet, the security benefits provided by the S*BGP protocols do not even kick in until a large number of ASes have deployed them. Instead, we appeal to ISPs' interest in increasing revenue-generating traffic. We propose a strategy that governments and industry groups can use to harness ISPs' local business objectives and drive global S*BGP deployment. We evaluate our deployment strategy using theoretical analysis and large-scale simulations on empirical data. Our results give evidence that the market dynamics created by our proposal can transition the majority of the Internet to S*BGP." ] }
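The origin check that the RPKI enables can be made concrete with the RFC 6811 route-origin-validation states. The sketch below is a simplified validator (in-memory ROA tuples, no caching or certificate handling); note that only the origin AS is checked, which is one reason forged-origin and path-shortening attacks @cite_14 remain possible:

```python
import ipaddress

def rov_state(prefix, origin_asn, roas):
    """RFC 6811 route origin validation, simplified.
    roas: iterable of (roa_prefix, max_length, authorized_asn).
    Returns 'valid', 'invalid', or 'not-found'."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        # skip ROAs of a different address family or not covering the prefix
        if net.version != roa_net.version or not net.subnet_of(roa_net):
            continue
        covered = True  # at least one ROA covers this prefix
        if net.prefixlen <= max_len and origin_asn == asn:
            return "valid"
    return "invalid" if covered else "not-found"
```

A sub-prefix hijack of a covered prefix therefore shows up as 'invalid' (prefix length exceeds maxLength), while announcements of unregistered space fall through to 'not-found' and are typically still accepted by operators.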
1905.05642
2951431118
We present Scratchy-a modular, lightweight robot built for low budget competition attendances. Its base is mainly built with standard 4040 aluminium profiles and the robot is driven by four mecanum wheels on brushless DC motors. In combination with a laser range finder we use estimated odometry - which is calculated by encoders - for creating maps using a particle filter. A RGB-D camera is utilized for object detection and pose estimation. Additionally, there is the option to use a 6-DOF arm to grip objects from an estimated pose or generally for manipulation tasks. The robot can be assembled in less than one hour and fits into two pieces of hand luggage or one bigger suitcase. Therefore, it provides a huge advantage for student teams that participate in robot competitions like the European Robotics League or RoboCup. Thus, this keeps the funding required for participation, which is often a big hurdle for student teams to overcome, low. The software and additional hardware descriptions are available under: https: github.com homer-robotics scratchy.
Piperidis @cite_12 proposed a modular low-cost platform for research and development. The platform is differential-drive and built from minimalist, low-budget parts. Our proposed design is approximately double the size, in order to manipulate objects such as domestic furniture and to interact with humans. Further, we adopt a mecanum base platform to be more flexible during manipulation tasks, where adjusting the robot with a differential design usually requires a sequence of linear and rotary motions.
{ "cite_N": [ "@cite_12" ], "mid": [ "2085716503" ], "abstract": [ "In this paper the development of a low cost robotic vehicle for research and education is presented. The vehicle was designed considering minimum cost and maximum capabilities. As a base for testing devices and different type of sensors, a commercially available vehicle was used and modified. Two different version of the prototype vehicle were developed accompanied by the proper software that allows the end user to operate the vehicles as an educational or research platform. The functionality of the vehicles was verified after extensive experimentation." ] }
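The flexibility argument for the mecanum base follows from its kinematics: lateral motion needs no reorientation. Below is one common inverse-kinematics convention for a four-wheel mecanum platform; the geometry and wheel-radius values are illustrative placeholders, not the robot's actual dimensions:

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.2, ly=0.2, r=0.05):
    """Inverse kinematics for a 4-mecanum-wheel base (one common
    convention; axis signs vary between sources). vx is forward
    velocity, vy leftward velocity, wz counter-clockwise yaw rate;
    lx/ly are half the wheelbase/track and r the wheel radius
    (illustrative values). Returns angular speeds in rad/s for
    (front_left, front_right, rear_left, rear_right)."""
    k = lx + ly
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr
```

Because any (vx, vy, wz) maps directly to wheel speeds, the base can strafe sideways during grasping, whereas a differential drive would have to rotate, translate, and rotate back.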
1905.05540
2946562230
This paper presents a novel time series clustering method, the self-organising eigenspace map (SOEM), based on a generalisation of the well-known self-organising feature map (SOFM). The SOEM operates on the eigenspaces of the embedded covariance structures of time series which are related directly to modes in those time series. Approximate joint diagonalisation acts as a pseudo-metric across these spaces allowing us to generalise the SOFM to a neural network with matrix input. The technique is empirically validated against three sets of experiments; univariate and multivariate time series clustering, and application to (clustered) multi-variate time series forecasting. Results indicate that the technique performs a valid topologically ordered clustering of the time series. The clustering is superior in comparison to standard benchmarks when the data is non-aligned, gives the best clustering stage for when used in forecasting, and can be used with partial non-overlapping time series, multivariate clustering and produces a topological representation of the time series objects.
Time series clustering generally involves using time series representations in order to reduce their dimensionality. Proposed representations include the Discrete Fourier Transform @cite_36 , the Discrete Wavelet Transform @cite_40 , Singular Value Decomposition @cite_28 , Adaptive Piecewise Constant Approximation @cite_28 , and symbolic representations @cite_4 . The reduced representations or features can then be used to cluster the time series. Almost all clustering techniques require a measure to compute the distance or similarity between the time series being compared. The distance or similarity measures most commonly used in the literature ( @cite_20 @cite_15 @cite_22 @cite_37 ) are the Euclidean distance, Dynamic Time Warping (DTW), distance based on the Longest Common Sub-sequence, the Sequence Weighted Alignment model, Edit Distance on Real sequences, and Spatial Assembling Distance.
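Among the measures listed above, DTW is the one that tolerates temporal misalignment between series. A textbook O(nm) dynamic-programming sketch (absolute-difference local cost, no warping-window constraint):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    classic O(n*m) dynamic programme that allows each element to be
    matched to a run of elements in the other sequence."""
    inf = float("inf")
    n, m = len(a), len(b)
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # one-to-one match
    return dp[n][m]
```

This is why DTW-based clustering handles the non-aligned data mentioned in the abstract better than the Euclidean distance, which compares points strictly index by index.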
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_22", "@cite_36", "@cite_28", "@cite_40", "@cite_15", "@cite_20" ], "mid": [ "1894414046", "2164274563", "2161078209", "1499049447", "2163336863", "2042591571", "1853995153", "" ], "abstract": [ "Clustering is a solution for classifying enormous data when there is not any early knowledge about classes. With emerging new concepts like cloud computing and big data and their vast applications in recent years, research works have been increased on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data. Clustering time-series data has been used in diverse scientific areas to discover patterns which empower data analysts to extract valuable information from complex and massive datasets. In case of huge datasets, using supervised classification solutions is almost impossible, while clustering can solve this problem using un-supervised approaches. In this research work, the focus is on time-series data, which is one of the popular data types in clustering problems and is broadly used from gene expression data in biology to stock market analysis in finance. This review will expose four main components of time-series clustering and is aimed to represent an updated investigation on the trend of improvements in efficiency, quality and complexity of clustering time-series approaches during the last decade and enlighten new paths for future works. Anatomy of time-series clustering is revealed by introducing its 4 main component.Research works in each of the four main components are reviewed in detail and compared.Analysis of research works published in the last decade.Enlighten new paths for future works for time-series clustering and its components.", "Many high level representations of time series have been proposed for data mining, including Fourier transforms, wavelets, eigenwaves, piecewise polynomial models, etc. 
Many researchers have also considered symbolic representations of time series, noting that such representations would potentiality allow researchers to avail of the wealth of data structures and algorithms from the text processing and bioinformatics communities. While many symbolic representations of time series have been introduced over the past decades, they all suffer from two fatal flaws. First, the dimensionality of the symbolic representation is the same as the original data, and virtually all data mining algorithms scale poorly with dimensionality. Second, although distance measures can be defined on the symbolic approaches, these distance measures have little correlation with distance measures defined on the original time series. In this work we formulate a new symbolic representation of time series. Our representation is unique in that it allows dimensionality numerosity reduction, and it also allows distance measures to be defined on the symbolic approach that lower bound corresponding distance measures defined on the original series. As we shall demonstrate, this latter feature is particularly exciting because it allows one to run certain data mining algorithms on the efficiently manipulated symbolic representation, while producing identical results to the algorithms that operate on the original data. In particular, we will demonstrate the utility of our representation on various data mining tasks of clustering, classification, query by content, anomaly detection, motif discovery, and visualization.", "Time-Series clustering is one of the important concepts of data mining that is used to gain insight into the mechanism that generate the time-series and predicting the future values of the given time-series. Time-series data are frequently very large and elements of these kinds of data have temporal ordering. 
The clustering of time series is organized into three groups depending upon whether they work directly on raw data either in frequency or time domain, indirectly with the features extracted from the raw data or with model built from raw data. In this paper, we have shown the survey and summarization of previous work that investigated the clustering of time series in various application domains ranging from science, engineering, business, finance, economic, health care, to government.", "We propose an indexing method for time sequences for processing similarity queries. We use the Discrete Fourier Transform (DFT) to map time sequences to the frequency domain, the crucial observation being that, for most sequences of practical interest, only the first few frequencies are strong. Another important observation is Parseval's theorem, which specifies that the Fourier transform preserves the Euclidean distance in the time or frequency domain. Having thus mapped sequences to a lower-dimensionality space by using only the first few Fourier coefficients, we use R * -trees to index the sequences and efficiently answer similarity queries. We provide experimental results which show that our method is superior to search based on sequential scanning. Our experiments show that a few coefficients (1–3) are adequate to provide good performance. The performance gain of our method increases with the number and length of sequences.", "Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). 
In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searching, and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.", "Time series stored as feature vectors can be indexed by multidimensional index trees like R-Trees for fast retrieval. Due to the dimensionality curse problem, transformations are applied to time series to reduce the number of dimensions of the feature vectors. Different transformations like Discrete Fourier Transform (DFT) Discrete Wavelet Transform (DWT), Karhunen-Loeve (KL) transform or Singular Value Decomposition (SVD) can be applied. While the use of DFT and K-L transform or SVD have been studied on the literature, to our knowledge, there is no in-depth study on the application of DWT. In this paper we propose to use Haar Wavelet Transform for time series indexing. 
The major contributions are: (1) we show that Euclidean distance is preserved in the Haar transformed domain and no false dismissal will occur, (2) we show that Haar transform can outperform DFT through experiments, (3) a new similarity model is suggested to accommodate vertical shift of time series, and (4) a two-phase method is proposed for efficient n-nearest neighbor query in time series databases.", "In the last decade there has been an explosion of interest in mining time series data. Literally hundreds of papers have introduced new algorithms to index, classify, cluster and segment time series. In this work we make the following claim. Much of this work has very little utility because the contribution made (speed in the case of indexing, accuracy in the case of classification and clustering, model accuracy in the case of segmentation) offer an amount of “improvement” that would have been completely dwarfed by the variance that would have been observed by testing on many real world datasets, or the variance that would have been observed by changing minor (unstated) implementation details. To illustrate our point, we have undertaken the most exhaustive set of time series experiments ever attempted, re-implementing the contribution of more than two dozen papers, and testing them on 50 real world, highly diverse datasets. Our empirical results strongly support our assertion, and suggest the need for a set of time series benchmarks and more careful empirical evaluation in the data mining community.", "" ] }
1905.05540
2946562230
However, these measures assume that the time series are aligned or at least semi-aligned, i.e. time series in the same cluster evolve closely at the same or at similar times, see @cite_37 @cite_22 @cite_28 . For example, DTW seeks the optimal alignment between two time series by warping time (adjusting the time index dynamically to obtain a good match) and then, for instance, calculating the Euclidean distance between them @cite_8 . The optimal alignment is achieved by mapping the time axis of one time series onto the time axis of the other, which amounts to finding the optimal warping function. See @cite_9 for a thorough description of DTW. In the literature there are several variants of the warping function, such as Weighted Dynamic Time Warping (WDTW), Derivative Dynamic Time Warping, and multiscale DTW, designed specifically to speed up DTW (we refer to @cite_38 for further explanation of these techniques). However, if the time series are not even semi-aligned, then DTW by itself fails to cluster them accurately. As an example, Figure shows three time series with the same dynamics which are clearly neither aligned nor semi-aligned.
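Since DTW is central to the discussion above, a minimal dynamic-programming implementation (a textbook sketch, not the code of any cited method) makes the warping idea concrete: a time shift that Euclidean distance penalises heavily is absorbed entirely by the warping path.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW.

    Returns the cost of the optimal warping path between two 1-D
    sequences, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: stretch a, stretch b, or advance both
            D[i, j] = cost + min(D[i - 1, j],
                                 D[i, j - 1],
                                 D[i - 1, j - 1])
    return D[n, m]

# Two series with identical shape, shifted by one step in time:
a = [0, 0, 1, 2, 1, 0, 0]
b = [0, 1, 2, 1, 0, 0, 0]
print(dtw_distance(a, b))               # 0.0 -- DTW absorbs the shift
print(np.abs(np.subtract(a, b)).sum())  # 4   -- pointwise cost does not
```

Note this only rescues series that are at least semi-aligned: if the common dynamics occur at entirely different times (or only partially overlap), the warping path cannot recover them, which is the failure mode the paragraph above describes.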
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_22", "@cite_8", "@cite_28", "@cite_9" ], "mid": [ "", "1894414046", "2161078209", "2072241861", "2163336863", "2128160875" ], "abstract": [ "", "Clustering is a solution for classifying enormous data when there is not any early knowledge about classes. With emerging new concepts like cloud computing and big data and their vast applications in recent years, research works have been increased on unsupervised solutions like clustering algorithms to extract knowledge from this avalanche of data. Clustering time-series data has been used in diverse scientific areas to discover patterns which empower data analysts to extract valuable information from complex and massive datasets. In case of huge datasets, using supervised classification solutions is almost impossible, while clustering can solve this problem using un-supervised approaches. In this research work, the focus is on time-series data, which is one of the popular data types in clustering problems and is broadly used from gene expression data in biology to stock market analysis in finance. This review will expose four main components of time-series clustering and is aimed to represent an updated investigation on the trend of improvements in efficiency, quality and complexity of clustering time-series approaches during the last decade and enlighten new paths for future works. Anatomy of time-series clustering is revealed by introducing its 4 main component.Research works in each of the four main components are reviewed in detail and compared.Analysis of research works published in the last decade.Enlighten new paths for future works for time-series clustering and its components.", "Time-Series clustering is one of the important concepts of data mining that is used to gain insight into the mechanism that generate the time-series and predicting the future values of the given time-series. 
Time-series data are frequently very large and elements of these kinds of data have temporal ordering. The clustering of time series is organized into three groups depending upon whether they work directly on raw data either in frequency or time domain, indirectly with the features extracted from the raw data or with model built from raw data. In this paper, we have shown the survey and summarization of previous work that investigated the clustering of time series in various application domains ranging from science, engineering, business, finance, economic, health care, to government.", "Condition-based maintenance is believed to be a cost-effective and safety-assured strategy for railroad track management. Implementation of the strategy strongly relies on reliable and complete track condition data, reliable track deterioration models, and efficient and solvable mathematical models for optimal track maintenance scheduling. In practice, reliability of track condition inspection data is often in question; therefore, collected inspection data need to be preprocessed before it is used to implement a condition-based maintenance strategy. Reliable track condition inspection data means accurate positioning data and noiseless condition parameter measurements. Based on dynamic time warping, which is a widely used technique in the area of speech signal processing and biomedical engineering, this paper presents a robust optimization model for correcting positional errors of inspection data from a track geometry car, which is a kind of specialized instrument that is extensively used to measure the condition of tracks under wheel loadings. An efficient solution algorithm for the model is proposed as well. 
Applications of the model to inspection data from the track geometry car show that positional errors are almost removed from the inspection data, regardless of noises in condition parameter measurements and track maintenance interventions, and the model takes 1.5004 s, on average, to complete the positional error correction for a 1-km-long track segment. The presented model is adjustable to alignment of data sequences in many other areas, e.g., railroad inspection by track geometry trolley, highway roughness inspection by Light Detection and Ranging (LiDAR) vehicles, and railroad catenary wire geometry inspection.", "Similarity search in large time series databases has attracted much research interest recently. It is a difficult problem because of the typically high dimensionality of the data.. The most promising solutions involve performing dimensionality reduction on the data, then indexing the reduced data with a multidimensional index structure. Many dimensionality reduction techniques have been proposed, including Singular Value Decomposition (SVD), the Discrete Fourier transform (DFT), and the Discrete Wavelet Transform (DWT). In this work we introduce a new dimensionality reduction technique which we call Adaptive Piecewise Constant Approximation (APCA). While previous techniques (e.g., SVD, DFT and DWT) choose a common representation for all the items in the database that minimizes the global reconstruction error, APCA approximates each time series by a set of constant value segments of varying lengths such that their individual reconstruction errors are minimal. We show how APCA can be indexed using a multidimensional index structure. 
We propose two distance measures in the indexed space that exploit the high fidelity of APCA for fast searching: a lower bounding Euclidean distance approximation, and a non-lower bounding, but very tight Euclidean distance approximation and show how they can support fast exact searching, and even faster approximate searching on the same index structure. We theoretically and empirically compare APCA to all the other techniques and demonstrate its superiority.", "This paper reports on an optimum dynamic progxamming (DP) based time-normalization algorithm for spoken word recognition. First, a general principle of time-normalization is given using time-warping function. Then, two time-normalized distance definitions, called symmetric and asymmetric forms, are derived from the principle. These two forms are compared with each other through theoretical discussions and experimental studies. The symmetric form algorithm superiority is established. A new technique, called slope constraint, is successfully introduced, in which the warping function slope is restricted so as to improve discrimination between words in different categories. The effective slope constraint characteristic is qualitatively analyzed, and the optimum slope constraint condition is determined through experiments. The optimized algorithm is then extensively subjected to experimental comparison with various DP-algorithms, previously applied to spoken word recognition by different research groups. The experiment shows that the present algorithm gives no more than about two-thirds errors, even compared to the best conventional algorithm." ] }
1905.05540
2946562230
In classification, non-aligned time series are often handled by other means, such as kernel methods or motif extraction. In essence, these algorithms seek some characteristic of the data that is not localised to a particular time. @cite_14 proposed a new kernel function based on a WDTW distance for multiclass support vector machines on non-aligned time series. It provides an optimal match between two time series not only by allowing a non-linear mapping between the two sequences, but also by weighting points according to the phase difference between them. Clustering time series under time warp measures remains very challenging and as yet unresolved, as it requires aligning multiple unlabelled temporal series simultaneously @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_14" ], "mid": [ "2298278275", "2071192814" ], "abstract": [ "Generalize k-means-based clustering to temporal data under time warp.Extend time warp measures and temporal kernels to capture local temporal differences.Propose a tractable estimation of the cluster representatives under extended measures.Propose fast solutions that capture both global and local temporal features.Deep analysis on a wide range of 20 non-isotropic, linearly non-separable public data. Temporal data naturally arise in various emerging applications, such as sensor networks, human mobility or internet of things. Clustering is an important task, usually applied a priori to pattern analysis tasks, for summarization, group and prototype extraction; it is all the more crucial for dimensionality reduction in a big data context. Clustering temporal data under time warp measures is challenging because it requires aligning multiple temporal data simultaneously. To circumvent this problem, costly k-medoids and kernel k-means algorithms are generally used. This work investigates a different approach to temporal data clustering through weighted and kernel time warp measures and a tractable and fast estimation of the representative of the clusters that captures both global and local temporal features. A wide range of 20 public and challenging datasets, encompassing images, traces and ecg data that are non-isotropic (i.e., non-spherical), not well-isolated and linearly non-separable, is used to evaluate the efficiency of the proposed temporal data clustering. The results of this comparison illustrate the benefits of the method proposed, which outperforms the baselines on all datasets. 
A deep analysis is conducted to study the impact of the data specifications on the effectiveness of the studied clustering methods.", "In this paper, we propose support vector-based supervised learning algorithms, called multiclass support vector data description with weighted dynamic time warping kernel function (MSVDD-WDTWK) and multiclass support vector machines with weighted dynamic time warping kernel function (MSVM-WDTWK), which provides a flexible and robust kernel function for time series classification between non-aligned time series data resulting in improved accuracy. The proposed WDTW kernel function provides an optimal match between two time series data by not only allowing a non-linear mapping between two data sequences, but also considering relative significance depending on the phase difference between points on time series data. We validate the proposed approaches using extensive numerical experiments on a number of multiclass UCR time series data mining archive, and demonstrate that our proposed methods provide lower classification error rates compared with existing techniques." ] }
1905.05540
2946562230
In a recent paper, Keogh and Kasetty @cite_15 re-examined the performance of multiple clustering algorithms over multiple benchmark data sets. They conclude that most existing clustering techniques do not work well, owing to the complexity of their underlying structure and their data dependency. This poses a real challenge for clustering temporal data of high dimensionality and unequal length, with complicated temporal correlation and a substantial amount of noise @cite_5 . For example, the SOFM, an unsupervised learning algorithm, does not work well with time series of unequal length because of the difficulty of defining the dimension of the weight vectors @cite_20 . In summary, the choice of clustering algorithm can be highly problem dependent, and real-world data sets often do not adhere to the underlying assumptions of the algorithm.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "2298278275", "1853995153", "" ], "abstract": [ "Generalize k-means-based clustering to temporal data under time warp.Extend time warp measures and temporal kernels to capture local temporal differences.Propose a tractable estimation of the cluster representatives under extended measures.Propose fast solutions that capture both global and local temporal features.Deep analysis on a wide range of 20 non-isotropic, linearly non-separable public data. Temporal data naturally arise in various emerging applications, such as sensor networks, human mobility or internet of things. Clustering is an important task, usually applied a priori to pattern analysis tasks, for summarization, group and prototype extraction; it is all the more crucial for dimensionality reduction in a big data context. Clustering temporal data under time warp measures is challenging because it requires aligning multiple temporal data simultaneously. To circumvent this problem, costly k-medoids and kernel k-means algorithms are generally used. This work investigates a different approach to temporal data clustering through weighted and kernel time warp measures and a tractable and fast estimation of the representative of the clusters that captures both global and local temporal features. A wide range of 20 public and challenging datasets, encompassing images, traces and ecg data that are non-isotropic (i.e., non-spherical), not well-isolated and linearly non-separable, is used to evaluate the efficiency of the proposed temporal data clustering. The results of this comparison illustrate the benefits of the method proposed, which outperforms the baselines on all datasets. A deep analysis is conducted to study the impact of the data specifications on the effectiveness of the studied clustering methods.", "In the last decade there has been an explosion of interest in mining time series data. 
Literally hundreds of papers have introduced new algorithms to index, classify, cluster and segment time series. In this work we make the following claim. Much of this work has very little utility because the contribution made (speed in the case of indexing, accuracy in the case of classification and clustering, model accuracy in the case of segmentation) offer an amount of “improvement” that would have been completely dwarfed by the variance that would have been observed by testing on many real world datasets, or the variance that would have been observed by changing minor (unstated) implementation details. To illustrate our point, we have undertaken the most exhaustive set of time series experiments ever attempted, re-implementing the contribution of more than two dozen papers, and testing them on 50 real world, highly diverse datasets. Our empirical results strongly support our assertion, and suggest the need for a set of time series benchmarks and more careful empirical evaluation in the data mining community.", "" ] }
1905.05540
2946562230
We propose here a new approach to clustering time series, based on a self-organising map for temporal data described by matrices rather than feature vectors. This provides an advance over the existing methodology. A predecessor to this work @cite_10 clusters pairwise data and co-occurrence data using a variant of the SOFM; however, that approach is based on a single matrix derived from the data, as opposed to clustering multiple matrices as presented here. To the best of our knowledge, the proposed technique is the first to present a truly matrix-based generalisation of the SOFM. In addition, our technique is applicable to non-aligned time series, time series of differing lengths, and multivariate time series clustering, and requires minimal tuning.
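One way to see why a matrix representation tolerates non-alignment is the following sketch. It is an illustrative assumption only: it delay-embeds a series and takes the covariance of the delay vectors, a simple stand-in for the embedded covariance structures the SOEM operates on, whose exact construction may differ. For stationary dynamics, the resulting matrix is largely insensitive to a pure time shift.

```python
import numpy as np

def embedded_covariance(x, dim=5):
    """Delay-embed a 1-D series (sliding windows of length `dim`)
    and return the covariance of the delay vectors. For stationary
    dynamics the result barely changes under a pure time shift of
    the series, which is what lets clustering in matrix space
    tolerate non-aligned inputs."""
    x = np.asarray(x, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(x, dim)
    return np.cov(windows, rowvar=False)

# Two shifted copies of the same dynamics give near-identical matrices:
t = np.linspace(0.0, 20.0 * np.pi, 2000)
C1 = embedded_covariance(np.sin(t))
C2 = embedded_covariance(np.sin(t + 1.0))
print(np.allclose(C1, C2, atol=0.05))  # True
```

Clustering such matrices then requires a notion of distance between eigenspaces, which is where approximate joint diagonalisation enters as a pseudo-metric in the proposed method.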
{ "cite_N": [ "@cite_10" ], "mid": [ "2075073685" ], "abstract": [ "In this contribution we present extensions of the Self Organizing Map and clustering methods for the categorization and visualization of data which are described by matrices rather than feature vectors. Rows and Columns of these matrices correspond to objects which may or may not belong to the same set, and the entries in the matrix describe the relationships between them. The clustering task is formulated as an optimization problem: Model complexity is minimized under the constraint, that the error one makes when reconstructing objects from class information is fixed, usually to a small value. The data is then visualized with help of modified Self Organizing Maps methods, i.e. by constructing a neighborhood preserving non-linear projection into a low-dimensional \"map-space\". Grouping of data objects is done using an improved optimization technique, which combines deterministic annealing with \"growing\" techniques. Performance of the new methods is evaluated by applying them to two kinds of matrix data: (i) pairwise data, where row and column objects are from the same set and where matrix elements denote dissimilarity values and (ii) co-occurrence data, where row and column objects are from different sets and where the matrix elements describe how often object pairs occur." ] }
1905.05659
2946053549
Heterogeneous network embedding (HNE) is a challenging task due to the diverse node types and or diverse relationships between nodes. Existing HNE methods are typically unsupervised. To maximize the profit of utilizing the rare and valuable supervised information in HNEs, we develop a novel Active Heterogeneous Network Embedding (ActiveHNE) framework, which includes two components: Discriminative Heterogeneous Network Embedding (DHNE) and Active Query in Heterogeneous Networks (AQHN). In DHNE, we introduce a novel semi-supervised heterogeneous network embedding method based on graph convolutional neural network. In AQHN, we first introduce three active selection strategies based on uncertainty and representativeness, and then derive a batch selection method that assembles these strategies using a multi-armed bandit mechanism. ActiveHNE aims at improving the performance of HNE by feeding the most valuable supervision obtained by AQHN into DHNE. Experiments on public datasets demonstrate the effectiveness of ActiveHNE and its advantage on reducing the query cost.
Most previous approaches to HNE are unsupervised @cite_27 @cite_16 @cite_18 @cite_17 . Recently, methods have been proposed that leverage meta-paths, either specified by users or derived from additional supervision @cite_3 @cite_19 @cite_4 . However, the choice of meta-paths strongly depends on the task at hand, which limits these methods' ability to generalize @cite_27 . In addition, meta-paths enrich the neighborhood of nodes, resulting in a denser network and higher training costs @cite_31 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_3", "@cite_19", "@cite_27", "@cite_31", "@cite_16", "@cite_17" ], "mid": [ "2584848220", "2963844113", "2767774008", "2743104969", "2809435521", "2154851992", "2062797058", "" ], "abstract": [ "Heterogeneous events, which are defined as events connecting strongly-typed objects, are ubiquitous in the real world. We propose a HyperEdge-Based Embedding (Hebe) framework for heterogeneous event data, where a hyperedge represents the interaction among a set of involving objects in an event. The Hebe framework models the proximity among objects in an event by predicting a target object given the other participating objects in the event (hyperedge). Since each hyperedge encapsulates more information on a given event, Hebe is robust to data sparseness. In addition, Hebe is scalable when the data size spirals. Extensive experiments on large-scale real-world datasets demonstrate the efficacy and robustness of Hebe.", "", "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. 
Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6 to 23.8 of @math - @math in multi-label node classification and 5 to 70.8 of @math in link prediction.", "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.", "Heterogeneous information networks (HINs) are ubiquitous in real-world applications. In the meantime, network embedding has emerged as a convenient tool to mine and learn from networked data. As a result, it is of interest to develop HIN embedding methods. However, the heterogeneity in HINs introduces not only rich information but also potentially incompatible semantics, which poses special challenges to embedding learning in HINs. With the intention to preserve the rich yet potentially incompatible information in HIN embedding, we propose to study the problem of comprehensive transcription of heterogeneous information networks. 
The comprehensive transcription of HINs also provides an easy-to-use approach to unleash the power of HINs, since it requires no additional supervision, expertise, or feature engineering. To cope with the challenges in the comprehensive transcription of HINs, we propose the HEER algorithm, which embeds HINs via edge representations that are further coupled with properly-learned heterogeneous metrics. To corroborate the efficacy of HEER, we conducted experiments on two large-scale real-words datasets with an edge reconstruction task and multiple case studies. Experiment results demonstrate the effectiveness of the proposed HEER model and the utility of edge representations and heterogeneous metrics. The code and data are available at https: github.com GentleZhu HEER.", "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. 
It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "Data embedding is used in many machine learning applications to create low-dimensional feature representations, which preserves the structure of data points in their original space. In this paper, we examine the scenario of a heterogeneous network with nodes and content of various types. Such networks are notoriously difficult to mine because of the bewildering combination of heterogeneous contents and structures. The creation of a multidimensional embedding of such data opens the door to the use of a wide variety of off-the-shelf mining techniques for multidimensional data. Despite the importance of this problem, limited efforts have been made on embedding a network of scalable, dynamic and heterogeneous data. In such cases, both the content and linkage structure provide important cues for creating a unified feature representation of the underlying network. In this paper, we design a deep embedding algorithm for networked data. A highly nonlinear multi-layered embedding function is used to capture the complex interactions between the heterogeneous data in a network. Our goal is to create a multi-resolution deep embedding function, that reflects both the local and global network structures, and makes the resulting embedding useful for a variety of data mining tasks. In particular, we demonstrate that the rich content and linkage information in a heterogeneous network can be captured by such an approach, so that similarities among cross-modal data can be measured directly in a common embedding space. Once this goal has been achieved, a wide variety of data mining problems can be solved by applying off-the-shelf algorithms designed for handling vector representations. 
Our experiments on real-world network datasets show the effectiveness and scalability of the proposed algorithm as compared to the state-of-the-art embedding methods.", "" ] }
1905.05659
2946053549
Heterogeneous network embedding (HNE) is a challenging task due to the diverse node types and or diverse relationships between nodes. Existing HNE methods are typically unsupervised. To maximize the profit of utilizing the rare and valuable supervised information in HNEs, we develop a novel Active Heterogeneous Network Embedding (ActiveHNE) framework, which includes two components: Discriminative Heterogeneous Network Embedding (DHNE) and Active Query in Heterogeneous Networks (AQHN). In DHNE, we introduce a novel semi-supervised heterogeneous network embedding method based on graph convolutional neural network. In AQHN, we first introduce three active selection strategies based on uncertainty and representativeness, and then derive a batch selection method that assembles these strategies using a multi-armed bandit mechanism. ActiveHNE aims at improving the performance of HNE by feeding the most valuable supervision obtained by AQHN into DHNE. Experiments on public datasets demonstrate the effectiveness of ActiveHNE and its advantage on reducing the query cost.
One can improve the embedding performance by acquiring the labels of the most valuable nodes via AL. However, AL on non-i.i.d. network data is seldom studied. In addition, the diversity of node types in HINs makes the query criterion of AL even harder to design. Although attempts have been made to improve the embedding performance by incorporating AL, they consider neither the dependence between nodes nor the heterogeneity of networks @cite_13 @cite_26 @cite_2 .
{ "cite_N": [ "@cite_26", "@cite_13", "@cite_2" ], "mid": [ "2614195334", "2552147134", "2808130788" ], "abstract": [ "Graph embedding provides an efficient solution for graph analysis by converting the graph into a low-dimensional space which preserves the structure information. In contrast to the graph structure data, the i.i.d. node embedding can be processed efficiently in terms of both time and space. Current semi-supervised graph embedding algorithms assume the labelled nodes are given, which may not be always true in the real world. While manually label all training data is inapplicable, how to select the subset of training data to label so as to maximize the graph analysis task performance is of great importance. This motivates our proposed active graph embedding (AGE) framework, in which we design a general active learning query strategy for any semi-supervised graph embedding algorithm. AGE selects the most informative nodes as the training labelled nodes based on the graphical information (i.e., node centrality) as well as the learnt node embedding (i.e., node classification uncertainty and node embedding representativeness). Different query criteria are combined with the time-sensitive parameters which shift the focus from graph based query criteria to embedding based criteria as the learning progresses. Experiments have been conducted on three public data sets and the results verified the effectiveness of each component of our query strategy and the power of combining them using time-sensitive parameters. Our code is available online at: this https URL.", "We propose a new active learning (AL) method for text classification with convolutional neural networks (CNNs). In AL, one selects the instances to be manually labeled with the aim of maximizing model performance with minimal effort. Neural models capitalize on word embeddings as representations (features), tuning these to the task at hand. 
We argue that AL strategies for multi-layered neural models should focus on selecting instances that most affect the embedding space (i.e., induce discriminative word representations). This is in contrast to traditional AL approaches (e.g., entropy-based uncertainty sampling), which specify higher level objectives. We propose a simple approach for sentence classification that selects instances containing words whose embeddings are likely to be updated with the greatest magnitude, thereby rapidly learning discriminative, task-specific embeddings. We extend this approach to document classification by jointly considering: (1) the expected changes to the constituent word representations; and (2) the model's current overall uncertainty regarding the instance. The relative emphasis placed on these criteria is governed by a stochastic process that favors selecting instances likely to improve representations at the outset of learning, and then shifts toward general uncertainty sampling as AL progresses. Empirical results show that our method outperforms baseline AL approaches on both sentence and document classification tasks. We also show that, as expected, the method quickly learns discriminative word embeddings. To the best of our knowledge, this is the first work on AL addressing neural models for text classification.", "Most of current network representation models are learned in unsupervised fashions, which usually lack the capability of discrimination when applied to network analysis tasks, such as node classification. It is worth noting that label information is valuable for learning the discriminative network representations. However, labels of all training nodes are always difficult or expensive to obtain and manually labeling all nodes for training is inapplicable. Different sets of labeled nodes for model learning lead to different network representation results. 
In this paper, we propose a novel method, termed as ANRMAB, to learn the active discriminative network representations with a multi-armed bandit mechanism in active learning setting. Specifically, based on the networking data and the learned network representations, we design three active learning query strategies. By deriving an effective reward scheme that is closely related to the estimated performance measure of interest, ANRMAB uses a multi-armed bandit mechanism for adaptive decision making to select the most informative nodes for labeling. The updated labeled nodes are then used for further discriminative network representation learning. Experiments are conducted on three public data sets to verify the effectiveness of ANRMAB." ] }
1905.05613
2945928209
In this paper we study the time required for a @math -biased ( @math ) walk to visit all nodes of a supercritical Galton-Watson tree up to generation @math . Inspired by the extremal landscape approach in [ 2018] for simple random walk on binary trees, we establish the near-independent nature of extremal points for the @math -biased walk, and deduce the scaling limit of the cover time.
Bounds in terms of hitting times were given by Matthews @cite_17 : @math , where @math is the expected time for the walk started at @math to hit @math .
{ "cite_N": [ "@cite_17" ], "mid": [ "2068008593" ], "abstract": [ "Upper and lower bounds are given on the moment generating function of the time taken by a Markov chain to visit at least n of N selected subsets of its state space." ] }
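The bound elided as @math above is Matthews' method; in its standard form (a reconstruction from the standard statement, not the record's own notation):

```latex
% Matthews' upper bound: with n states and t_hit = max_{x,y} E_x[tau_y],
\mathbb{E}[\tau_{\mathrm{cov}}]
  \;\le\; \Big(\max_{x,y}\mathbb{E}_x[\tau_y]\Big)\, H_{n-1},
\qquad H_{n-1} \;=\; \sum_{k=1}^{n-1}\frac{1}{k},
```

with a matching lower bound in which the maximum is replaced by a minimum over any chosen subset @math of states and @math by the harmonic number of @math .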
1905.05613
2945928209
In this paper we study the time required for a @math -biased ( @math ) walk to visit all nodes of a supercritical Galton-Watson tree up to generation @math . Inspired by the extremal landscape approach in [ 2018] for simple random walk on binary trees, we establish the near-independent nature of extremal points for the @math -biased walk, and deduce the scaling limit of the cover time.
More precise results can be obtained by restricting to particular graphs. Postponing our topic of trees to the next paragraph, the most studied setting is the two-dimensional torus. The first-order estimate of its cover time was determined in @cite_7 ; the result was then refined in Ding @cite_12 , Belius and Kistler @cite_9 , Abe @cite_8 , and most recently @cite_1 , to the extent that @math where @math is a 2-dimensional manifold satisfying some regularity conditions, @math is the area of @math , and @math is the time for the walk to intersect every ball of radius @math on @math .
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_12" ], "mid": [ "2120076230", "2760496220", "2257118316", "2786790317", "2114344097" ], "abstract": [ "LetT (x;\") denote the rst hitting time of the disc of radius \" centered at x for Brownian motion on the two dimensional torus T 2 . We prove that sup x2T2T (x;\")=j log\"j 2 ! 2= as \" ! 0. The same applies to Brownian motion on any smooth, compact connected, two- dimensional, Riemannian manifold with unit area and no boundary. As a consequence, we prove a conjecture, due to Aldous (1989), that the number of steps it takes a simple random walk to cover all points of the lattice torus Z 2 is asymptotic to 4n 2 (logn) 2 = . Determining these asymptotics is an essential step toward analyzing the fractal structure of the set of uncovered sites before coverage is complete; so far, this structure was only studied non-rigorously in the physics literature. We also establish a conjecture, due to Kesten and R ev esz, that describes the asymptotics for the number of steps needed by simple random walk in Z 2 to cover the disc of radius n.", "We consider the cover time for a simple random walk on the two-dimensional discrete torus of side length @math . Dembo, Peres, Rosen, and Zeitouni [Ann. Math. 160:433-464, 2004] identified the leading term in the asymptotics for the cover time as @math goes to infinity. In this paper, we study the exact second order term. This is a discrete analogue of the work on the cover time for planar Brownian motion by Belius and Kistler [Probab. Theory Relat Fields. 167:461-552, 2017].", "The ( )-cover time of the two dimensional torus by Brownian motion is the time it takes for the process to come within distance ( >0 ) from any point. Its leading order in the small ( )-regime has been established by (Ann Math 160:433–464, 2004). In this work, the second order correction is identified. 
The approach relies on a multi-scale refinement of the second moment method, and draws on ideas from the study of the extremes of branching Brownian motion.", "Let @math denote the cover time of a two dimensional compact manifold @math by a Wiener sausage of radius @math . We prove that @math is tight, where @math denotes the area of @math .", "We study the cover time @math by (continuous-time) random walk on the 2D box of side length @math with wired boundary or on the 2D torus,and show that in both cases with probability approaching @math as @math increases, @math . This improves a result of Dembo, Peres, Rosen, and Zeitouni (2004) and makes progresstowards a conjecture of Bramson and Zeitouni (2009)." ] }
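At leading order, the torus results referenced above can be written out explicitly (restating the asymptotics as given in the quoted abstracts):

```latex
% Dembo-Peres-Rosen-Zeitouni: Brownian motion on the unit-area torus T^2,
% T(x, eps) the first hitting time of the disc of radius eps centered at x:
\sup_{x\in\mathbb{T}^2}\frac{T(x,\varepsilon)}{|\log\varepsilon|^{2}}
  \;\longrightarrow\; \frac{2}{\pi}
\qquad (\varepsilon\to 0),
% and, correspondingly, for simple random walk on the n x n lattice torus:
\tau_{\mathrm{cov}} \;\sim\; \frac{4}{\pi}\, n^{2}(\log n)^{2}.
```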
1905.05613
2945928209
In this paper we study the time required for a @math -biased ( @math ) walk to visit all nodes of a supercritical Galton-Watson tree up to generation @math . Inspired by the extremal landscape approach in [ 2018] for simple random walk on binary trees, we establish the near-independent nature of extremal points for the @math -biased walk, and deduce the scaling limit of the cover time.
As for trees, the first-order approximation for @math -ary trees was first obtained by Aldous @cite_3 using recursive equations, @math General walks on Galton-Watson trees were studied by Andreoletti and Debs @cite_2 via a second moment method: in the recurrent case, under some regularity assumptions, @math generations are covered in @math steps, where @math is an explicit constant (the reciprocal of the constant in the law of large numbers for branching random walks).
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1971599265", "2047393285" ], "abstract": [ "Abstract For simple random walk on a finite tree, the cover time is the time taken to visit every vertex. For the balanced b-ary tree of height m, the cover time is shown to be asymptotic to 2m 2 b m + 1 ( log b) (b − 1) as m → ∞ . On the uniform random labeled tree on n vertices, we give a convincing heuristic argument that the mean time to cover and return to the root is asymptotic to 6(2π) 1 2 n 3 2 , and prove a weak O(n 3 2 ) upper bound. The argument rests upon a recursive formula for cover time of trees generated by a simple branching process.", "In this paper we deal with a random walk in a random environment on a super-critical Galton–Watson tree. We focus on the recurrent cases already studied by Hu and Shi (Ann. Probab. 35:1978–1997, 2007; Probab. Theory Relat. Fields 138:521–549, 2007), (Probab. Theory Relat. Fields, 2011, in press), and Faraud (Electron. J. Probab. 16(6):174–215, 2011). We prove that the largest generation entirely visited by these walks behaves like logn, and that the constant of normalization, which differs from one case to another, is a function of the inverse of the constant of Biggins’ law of large numbers for branching random walks (Biggins in Adv. Appl. Probab. 8:446–459, 1976)." ] }
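Aldous' first-order formula above can be sanity-checked numerically; a minimal simulation sketch (the heap-indexed tree layout and the function name are illustrative, not taken from the cited works):

```python
import random

def cover_time_binary_tree(m, seed=0):
    """Simulate simple random walk on the complete binary tree of height m,
    started at the root; return the number of steps until every node is visited."""
    rng = random.Random(seed)
    n = 2 ** (m + 1) - 1              # nodes indexed 1..n in heap layout

    def neighbors(v):
        nbrs = []
        if v > 1:
            nbrs.append(v // 2)       # parent
        if 2 * v <= n:
            nbrs.append(2 * v)        # left child
        if 2 * v + 1 <= n:
            nbrs.append(2 * v + 1)    # right child
        return nbrs

    v, visited, steps = 1, {1}, 0
    while len(visited) < n:
        v = rng.choice(neighbors(v))  # uniform step to a neighbor
        visited.add(v)
        steps += 1
    return steps
```

For small heights the empirical cover time can be averaged over seeds and compared against the leading term 2 m^2 b^{m+1} (log b)/(b-1) with b = 2; each step visits at most one new node, so the result is always at least the number of nodes minus one.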
1905.05613
2945928209
In this paper we study the time required for a @math -biased ( @math ) walk to visit all nodes of a supercritical Galton-Watson tree up to generation @math . Inspired by the extremal landscape approach in [ 2018] for simple random walk on binary trees, we establish the near-independent nature of extremal points for the @math -biased walk, and deduce the scaling limit of the cover time.
The case of simple random walk on binary trees has received extensive study recently, originally as a counterexample showing that, at second order, the cover time is no longer determined by the corresponding GFF (cf. @cite_4 ). A second-order result with error @math was given by Ding and Zeitouni @cite_4 , then refined to @math in @cite_5 by second moment methods; a scaling limit was given in @cite_6 using an extremal landscape approach, @math for some implicit constant @math and explicit distribution @math (the sum of two independent copies of the limit of the derivative martingale associated with the branching random walk).
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_6" ], "mid": [ "2760681307", "2001123643", "2906272554" ], "abstract": [ "", "We compute the second order correction for the cover time of the binary tree of depth n by (continuous-time) random walk, and show that with probability approaching 1 as n increases, $\sqrt{\tau_{\mathrm{cov}}/|E|} = \sqrt{2\log 2}\, n - \frac{\log n}{\sqrt{2\log 2}} + O((\log\log n)^{8})$, thus showing that the second order correction differs from the corresponding one for the maximum of the Gaussian free field on the tree.", "We consider a continuous time random walk on the rooted binary tree of depth @math with all transition rates equal to one and study its cover time, namely the time until all vertices of the tree have been visited. We prove that, normalized by @math and then centered by @math , the cover time admits a weak limit as the depth of the tree tends to infinity. The limiting distribution is identified as that of a Gumbel random variable with rate one, shifted randomly by the logarithm of the sum of the limits of the derivative martingales associated with two negatively correlated discrete Gaussian free fields on the infinite version of the tree. The existence of the limit and its overall form were conjectured in the literature. Our approach is quite different from those taken in earlier works on this subject and relies in great part on a comparison with the extremal landscape of the discrete Gaussian free field on the tree." ] }
1905.05613
2945928209
In this paper we study the time required for a @math -biased ( @math ) walk to visit all nodes of a supercritical Galton-Watson tree up to generation @math . Inspired by the extremal landscape approach in [ 2018] for simple random walk on binary trees, we establish the near-independent nature of extremal points for the @math -biased walk, and deduce the scaling limit of the cover time.
Notably, in @cite_6 the authors observed a clustering extremal landscape (Theorem 5.1, @cite_6 ): at a suitable time, if two nodes with low local time share the same ancestor in a generation of order @math , then (with high probability) they have the same ancestors all the way up to a generation of order @math . This inspired us to look for similar properties in the biased case, leading to the key observation in our proof (Lemma ) that non-visited nodes (up to a suitable time, with high probability) never share ancestors after generations of order @math .
{ "cite_N": [ "@cite_6" ], "mid": [ "2906272554" ], "abstract": [ "We consider a continuous time random walk on the rooted binary tree of depth @math with all transition rates equal to one and study its cover time, namely the time until all vertices of the tree have been visited. We prove that, normalized by @math and then centered by @math , the cover time admits a weak limit as the depth of the tree tends to infinity. The limiting distribution is identified as that of a Gumbel random variable with rate one, shifted randomly by the logarithm of the sum of the limits of the derivative martingales associated with two negatively correlated discrete Gaussian free fields on the infinite version of the tree. The existence of the limit and its overall form were conjectured in the literature. Our approach is quite different from those taken in earlier works on this subject and relies in great part on a comparison with the extremal landscape of the discrete Gaussian free field on the tree." ] }
1905.05761
2945942189
Online anomaly detection of time-series data is an important and challenging task in machine learning. Gaussian processes (GPs) are powerful and flexible models for modeling time-series data. However, the high time complexity of GPs limits their applications in online anomaly detection. Attributed to some internal or external changes, concept drift usually occurs in time-series data, where the characteristics of data and meanings of abnormal behaviors alter over time. Online anomaly detection methods should have the ability to adapt to concept drift. Motivated by the above facts, this paper proposes the method of sparse Gaussian processes with Q-function (SGP-Q). The SGP-Q employs sparse Gaussian processes (SGPs) whose time complexity is lower than that of GPs, thus significantly speeding up online anomaly detection. By using Q-function properly, the SGP-Q can adapt to concept drift well. Moreover, the SGP-Q makes use of few abnormal data in the training data by its strategy of updating training data, resulting in more accurate sparse Gaussian process regression models and better anomaly detection results. We evaluate the SGP-Q on various artificial and real-world datasets. Experimental results validate the effectiveness of the SGP-Q.
GPs can be seen as Gaussian distributions over real-valued functions. Just as a Gaussian distribution is uniquely determined by its mean and covariance matrix, a GP is uniquely specified by its mean and covariance function @cite_28 . A noiseless GP @math can be expressed as follows, where @math is the mean function of the GP and @math is its covariance function, also known as the kernel function. The choice of kernel function plays an important role in the GP model. Common kernel functions include the radial basis function (RBF) kernel, the periodic kernel and the linear kernel; the periodic kernel is capable of modeling periodicity in data. These kernel functions are computed as follows, where @math , @math and @math are the variances, @math is the length-scale and @math is the periodic parameter.
{ "cite_N": [ "@cite_28" ], "mid": [ "1746819321" ], "abstract": [ "Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes." ] }
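The kernel formulas themselves are elided above (@math placeholders); a minimal sketch of the standard RBF, periodic and linear kernels for scalar inputs, using common textbook parameterizations rather than the record's own notation:

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    # k(x, x') = sigma^2 * exp(-(x - x')^2 / (2 * l^2))
    return variance * np.exp(-0.5 * ((x1 - x2) / lengthscale) ** 2)

def periodic_kernel(x1, x2, variance=1.0, lengthscale=1.0, period=1.0):
    # k(x, x') = sigma^2 * exp(-2 * sin^2(pi * |x - x'| / p) / l^2)
    return variance * np.exp(
        -2.0 * np.sin(np.pi * np.abs(x1 - x2) / period) ** 2 / lengthscale ** 2
    )

def linear_kernel(x1, x2, variance=1.0):
    # k(x, x') = sigma^2 * x * x'
    return variance * x1 * x2
```

Evaluating any of these kernels on all pairs of training inputs yields the covariance matrix of the GP prior; note that the periodic kernel returns to its maximum whenever the inputs differ by a multiple of the period.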
1905.05437
2945729670
The notion of socioeconomic status (SES) of a person or family reflects the corresponding entity's social and economic rank in society. Such information may help applications like bank loaning decisions and provide measurable inputs for related studies like social stratification, social welfare and business planning. Traditionally, estimating SES for a large population is performed by national statistical institutes through a large number of household interviews, which is highly expensive and time-consuming. Recently researchers try to estimate SES from data sources like mobile phone call records and online social network platforms, which is much cheaper and faster. Instead of relying on these data about users' cyberspace behaviors, various alternative data sources on real-world users' behavior such as mobility may offer new insights for SES estimation. In this paper, we leverage Smart Card Data (SCD) for public transport systems which records the temporal and spatial mobility behavior of a large population of users. More specifically, we develop S2S, a deep learning based approach for estimating people's SES based on their SCD. Essentially, S2S models two types of SES-related features, namely the temporal-sequential feature and general statistical feature, and leverages deep learning for SES estimation. We evaluate our approach in an actual dataset, Shanghai SCD, which involves millions of users. The proposed model clearly outperforms several state-of-art methods in terms of various evaluation metrics.
SES is a widely studied concept in the social sciences, especially in health and education analysis @cite_28 . In recent years, companies and researchers have paid increasing attention to SES estimation because of its potential in numerous high-value applications such as personalized recommendation and online banking. Though there has been great improvement in estimating other demographic attributes like age, ethnicity and gender @cite_27 @cite_29 , SES estimation still needs more effort. One of the main obstacles is that SES ground-truth data covering a large group of people is much harder to obtain than attributes like age and gender: users are usually more reluctant to disclose their education, occupation and income, and the organizations that hold such data seldom open it to the public, for privacy reasons. Recently, researchers have begun to use indirect SES indicators from big data sources, which may cover millions of people and record different aspects of their lifestyles.
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_29" ], "mid": [ "2171357886", "2030214288", "2179234838" ], "abstract": [ "▪ Abstract Socioeconomic status (SES) is one of the most widely studied constructs in the social sciences. Several ways of measuring SES have been proposed, but most include some quantification of family income, parental education, and occupational status. Research shows that SES is associated with a wide array of health, cognitive, and socioemotional outcomes in children, with effects beginning prior to birth and continuing into adulthood. A variety of mechanisms linking SES to child well-being have been proposed, with most involving differences in access to material and social resources or reactions to stress-inducing conditions by both the children themselves and their parents. For children, SES impacts well-being at multiple levels, including both family and neighborhood. Its effects are moderated by children's own characteristics, family characteristics, and external support systems.", "User profiling is crucial to many online services. Several recent studies suggest that demographic attributes are predictable from different online behavioral data, such as users' \"Likes\" on Facebook, friendship relations, and the linguistic characteristics of tweets. But location check-ins, as a bridge of users' offline and online lives, have by and large been overlooked in inferring user profiles. In this paper, we investigate the predictive power of location check-ins for inferring users' demographics and propose a simple yet general location to profile (L2P) framework. More specifically, we extract rich semantics of users' check-ins in terms of spatiality, temporality, and location knowledge, where the location knowledge is enriched with semantics mined from heterogeneous domains including both online customer review sites and social networks. Additionally, tensor factorization is employed to draw out low dimensional representations of users' intrinsic check-in preferences considering the above factors. Meanwhile, the extracted features are used to train predictive models for inferring various demographic attributes. We collect a large dataset consisting of profiles of 159,530 verified users from an online social network. Extensive experimental results based upon this dataset validate that: 1) Location check-ins are diagnostic representations of a variety of demographic attributes, such as gender, age, education background, and marital status; 2) The proposed framework substantially outperforms compared models for profile inference in terms of various evaluation metrics, such as precision, recall, F-measure, and AUC.", "Obtained the record gender recognition performance of 97.31 on the LFW dataset.Used about 10 times fewer training images than the previous state-of-the-art.Only publicly available training images are used.The trained model is optimized in terms of running time and required memory.The trained model is made public for download. It can be also tested via a web demo. Despite being extensively studied in the literature, the problem of gender recognition from face images remains difficult when dealing with unconstrained images in a cross-dataset protocol. In this work, we propose a convolutional neural network ensemble model to improve the state-of-the-art accuracy of gender recognition from face images on one of the most challenging face image datasets today, LFW (Labeled Faces in the Wild). We find that convolutional neural networks need significantly less training data to obtain the state-of-the-art performance than previously proposed methods. Furthermore, our ensemble model is deliberately designed in a way that both its memory requirements and running time are minimized. This allows us to envision a potential usage of the constructed model in embedded devices or in a cloud platform for an intensive use on massive image databases." ] }
1905.05437
2945729670
The notion of socioeconomic status (SES) of a person or family reflects that entity's social and economic rank in society. Such information may inform applications like bank loan decisions and provide measurable inputs for related studies of social stratification, social welfare and business planning. Traditionally, estimating SES for a large population is performed by national statistical institutes through large numbers of household interviews, which is highly expensive and time-consuming. Recently, researchers have tried to estimate SES from data sources like mobile phone call records and online social network platforms, which is much cheaper and faster. Instead of relying on such data about users' cyberspace behaviors, alternative data sources on users' real-world behavior, such as mobility, may offer new insights for SES estimation. In this paper, we leverage Smart Card Data (SCD) from public transport systems, which records the temporal and spatial mobility behavior of a large population of users. More specifically, we develop S2S, a deep learning based approach for estimating people's SES from their SCD. Essentially, S2S models two types of SES-related features, namely temporal-sequential features and general statistical features, and leverages deep learning for SES estimation. We evaluate our approach on a real-world dataset, Shanghai SCD, which involves millions of users. The proposed model clearly outperforms several state-of-the-art methods in terms of various evaluation metrics.
Social networks are another important data source to which researchers pay considerable attention. @cite_15 @cite_0 @cite_18 all explore how to estimate people's SES from their tweets, using the job information in users' profiles as ground truth. @cite_0 use features such as topics and emotions to estimate people's income; their predictions reach a correlation of 0.633 with actual user income, showing that tweets can be used to predict income. @cite_0 @cite_18 further improve the features and significantly increase the accuracy. @cite_13 analyze the relationship between SES and people's activity patterns extracted from Twitter, finding that while SES is highly important, the urban spatial structure also plays a critical role in shaping the activity patterns of users in different communities.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_18", "@cite_13" ], "mid": [ "1948823840", "2166434810", "2336223491", "2290531422" ], "abstract": [ "Automatically inferring user demographics from social media posts is useful for both social science research and a range of downstream applications in marketing and politics. We present the first extensive study where user behaviour on Twitter is used to build a predictive model of income. We apply non-linear methods for regression, i.e. Gaussian Processes, achieving strong correlation between predicted and actual user income. This allows us to shed light on the factors that characterise income on Twitter and analyse their interplay with user emotions and sentiment, perceived psycho-demographics and language use expressed through the topics of their posts. Our analysis uncovers correlations between different feature categories and income, some of which reflect common belief e.g. higher perceived education and intelligence indicates higher earnings, known differences e.g. gender and age differences, however, others show novel findings e.g. higher income users express more fear and anger, whereas lower income users express more of the time emotion and opinions.", "Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user’s occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications.", "This paper presents a method to classify social media users based on their socioeconomic status. Our experiments are conducted on a curated set of Twitter profiles, where each user is represented by the posted text, topics of discussion, interactive behaviour and estimated impact on the microblogging platform. Initially, we formulate a 3-way classification task, where users are classified as having an upper, middle or lower socioeconomic status. A nonlinear, generative learning approach using a composite Gaussian Process kernel provides significantly better classification accuracy ( (75 , )) than a competitive linear alternative. By turning this task into a binary classification – upper vs. medium and lower class – the proposed classifier reaches an accuracy of (82 , ).", "Individual activity patterns are influenced by a wide variety of factors. The more important ones include socioeconomic status SES and urban spatial structure. While most previous studies relied heavily on the expensive travel-diary type data, the feasibility of using social media data to support activity pattern analysis has not been evaluated. Despite the various appealing aspects of social media data, including low acquisition cost and relatively wide geographical and international coverage, these data also have many limitations, including the lack of background information of users, such as home locations and SES. A major objective of this study is to explore the extent that Twitter data can be used to support activity pattern analysis. We introduce an approach to determine users’ home and work locations in order to examine the activity patterns of individuals. To infer the SES of individuals, we incorporate the American Community Survey ACS data. Using Twitter data for Washington, DC, we analyzed the activity patterns of Twitter users with different SESs. The study clearly demonstrates that while SES is highly important, the urban spatial structure, particularly where jobs are mainly found and the geographical layout of the region, plays a critical role in affecting the variation in activity patterns between users from different communities." ] }
1905.05233
2945824285
We investigated a wider range of Winograd-family convolution algorithms for deep neural networks. We presented the explicit Winograd convolution algorithm in the general case, using polynomials of degree higher than one. This allows us to construct more versions of the algorithm, differing in performance, than the commonly used Winograd convolution algorithms, and to improve the accuracy and performance of convolution computations. We found that in @math this approach gives better image-recognition accuracy while keeping the same number of general multiplications per output point as the commonly used Winograd algorithm for a kernel of size @math and output size @math . We demonstrated that in @math it is possible to perform the convolution computation faster while keeping the image-recognition accuracy the same as for the direct convolution method. We tested our approach on a subset of @math images from the ImageNet validation set. We present results for three precisions of computation: @math , @math and @math .
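To make the family of algorithms concrete, below is a minimal NumPy sketch of the commonly used Winograd algorithm the abstract compares against: F(2, 3), i.e. two outputs for a kernel of size 3, with the standard transform matrices derived Toom-Cook style from the interpolation points {0, 1, -1, inf}. The matrix names BT, G and AT are illustrative conventions, not taken from the paper, and this sketch does not implement the authors' higher-degree-polynomial variant.

```python
import numpy as np

# Transform matrices for F(2, 3): 2 outputs, kernel of size 3, built
# from the interpolation points {0, 1, -1, inf} (Toom-Cook style).
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                # kernel transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """Two outputs of the size-3 correlation of a 4-element input tile d
    with kernel g, using 4 general multiplications (the element-wise
    product) where the direct method needs 6."""
    return AT @ ((G @ g) * (BT @ d))

# Sanity check against the direct method (np.correlate matches the
# "convolution" convention used in DNNs).
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, -1.0, 2.0])
assert np.allclose(winograd_f23(d, g), np.correlate(d, g, 'valid'))
```

Longer inputs are handled by sliding this 4-element tile with stride 2, which is where the per-output-point multiplication counts discussed in the abstract come from.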
In the DNN research literature the term ``Winograd convolution algorithm'' is used for both Winograd and Toom-Cook convolution, and in practice the Toom-Cook algorithm is used to generate the convolution matrices. The general Winograd algorithm described in this paper has not been explored much in the literature. A description of the approach can be found in Winograd @cite_5 , but not for the multi-channel, multiple-kernel convolution used in DNNs. A simple example of how to construct the matrices is presented in @cite_10 , and a more general and detailed description can be found in @cite_6 . Selesnick and Burrus @cite_12 considered cyclic convolution methods using cyclotomic polynomials in their theoretical work.
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_12", "@cite_6" ], "mid": [ "", "565312106", "1846148069", "1990860452" ], "abstract": [ "", "1. Introduction 2. Introduction to abstract algebra 3. Fast algorithms for the discrete Fourier transform 4. Fast algorithms based on doubling strategies 5. Fast algorithms for short convolutions 6. Architecture of filters and transforms 7. Fast algorithms for solving Toeplitz systems 8. Fast algorithms for trellis search 9. Numbers and fields 10. Computation in finite fields and rings 11. Fast algorithms and multidimensional convolutions 12. Fast algorithms and multidimensional transforms Appendices: A. A collection of cyclic convolution algorithms B. A collection of Winograd small FFT algorithms.", "For short data sequences, Winograd's convolution algorithms attaining the minimum number of multiplications also attain a low number of additions, making them very efficient. However, for longer lengths they require a larger number of additions. Winograd's approach is usually extended to longer lengths by using a nesting approach such as the Agarwal-Cooley (1977) or Split-Nesting algorithms. Although these nesting algorithms are organizationally quite simple, they do not make the greatest use of the factorability of the data sequence length. The algorithm we propose adheres to Winograd's original approach more closely than do the nesting algorithms. By evaluating polynomials over simple matrices we retain, in algorithms for longer lengths, the basic structure and strategy of Winograd's approach, thereby designing computationally refined algorithms. This tactic is arithmetically profitable because Winograd's approach is based on a theory of minimum multiplicative complexity.", "Contents: Introduction to Abstract Algebra.- Tensor Product and Stride Permutation.- Cooley-Tukey FFF Algorithms.- Variants of FFT Algorithms and Their Implementations.- Good-Thomas PFA.- Linear and Cyclic Convolutions.- Agarwal-Cooley Convolution Algorithm.- Introduction to Multiplicative Fourier Transform Algorithms (MFTA).- MFTA: The Prime Case.- MFTA: Product of Two Distinct Primes.- MFTA: Transform Size N = Mr. M-Composite Integer and r-Prime.- MFTA: Transform Size N = p2.- Periodization and Decimation.- Multiplicative Character and the FFT.- Rationality.- Index." ] }
1905.05233
2945824285
We investigated a wider range of Winograd-family convolution algorithms for deep neural networks. We presented the explicit Winograd convolution algorithm in the general case, using polynomials of degree higher than one. This allows us to construct more versions of the algorithm, differing in performance, than the commonly used Winograd convolution algorithms, and to improve the accuracy and performance of convolution computations. We found that in @math this approach gives better image-recognition accuracy while keeping the same number of general multiplications per output point as the commonly used Winograd algorithm for a kernel of size @math and output size @math . We demonstrated that in @math it is possible to perform the convolution computation faster while keeping the image-recognition accuracy the same as for the direct convolution method. We tested our approach on a subset of @math images from the ImageNet validation set. We present results for three precisions of computation: @math , @math and @math .
Meng and Brothers @cite_8 apply the idea of using the complex points @math and @math (roots of the polynomial @math ) to quantized networks. We give a general definition of the method and report floating-point accuracy for several different versions of the algorithm.
{ "cite_N": [ "@cite_8" ], "mid": [ "2907172645" ], "abstract": [ "Convolution is the core operation for many deep neural networks. The Winograd convolution algorithms have been shown to accelerate the widely-used small convolution sizes. Quantized neural networks can effectively reduce model sizes and improve inference speed, which leads to a wide variety of kernels and hardware accelerators that work with integer data. The state-of-the-art Winograd algorithms pose challenges for efficient implementation and execution by the integer kernels and accelerators. We introduce a new class of Winograd algorithms by extending the construction to the field of complex and propose optimizations that reduce the number of general multiplications. The new algorithm achieves an arithmetic complexity reduction of @math x over the direct method and an efficiency gain up to @math over the rational algorithms. Furthermore, we design and implement an integer-based filter scaling scheme to effectively reduce the filter bit width by @math without any significant accuracy loss." ] }
1905.05178
2952832237
We consider the problem of representation learning for graph data. Convolutional neural networks operate naturally on images but face significant challenges with graph data. Since images are special cases of graphs whose nodes lie on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Nets have been successfully applied to many image pixel-wise prediction tasks, similar methods are lacking for graph data, because pooling and up-sampling operations are not natural on graphs. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer; it restores the graph to its original structure using the positions of the nodes selected in the corresponding gPool layer. Based on the proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Net. Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.
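The gPool/gUnpool pair described above can be sketched in a few lines of NumPy. This is a simplified, non-trainable illustration under stated assumptions: the projection vector p would normally be learned, and the sigmoid gate is one plausible way of keeping p reachable by gradients; it is not the authors' exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gpool(X, A, p, k):
    """Score nodes by their scalar projection on the vector p, keep the
    top-k, and gate the kept features so that p receives gradients.
    X: (N, C) node features, A: (N, N) adjacency, p: (C,) projection."""
    y = X @ p / np.linalg.norm(p)          # scalar projection per node
    idx = np.argsort(y)[-k:]               # indices of the k highest-scoring nodes
    gate = sigmoid(y[idx])[:, None]        # feature gate for the kept nodes
    return X[idx] * gate, A[np.ix_(idx, idx)], idx

def gunpool(X_pooled, idx, n_nodes):
    """Inverse of gPool: write pooled features back to their original
    positions in the larger graph and leave the other rows at zero."""
    X = np.zeros((n_nodes, X_pooled.shape[1]))
    X[idx] = X_pooled
    return X
```

The recorded `idx` is exactly the position information the gUnpool layer uses to restore the original graph structure.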
Recently, there has been a rich line of research on graph neural networks. Inspired by first-order graph Laplacian methods, @cite_6 proposed graph convolutional networks (GCNs), which achieved promising performance on graph node classification tasks. The layer-wise forward-propagation operation of GCNs is defined as: where @math is used to add self-loops to the input adjacency matrix @math , and @math is the feature matrix of layer @math . The GCN layer uses the diagonal node degree matrix @math to normalize @math , and @math is a trainable weight matrix that applies a linear transformation to feature vectors. GCNs essentially perform aggregation and transformation on node features without learning trainable filters. @cite_1 sample a fixed number of neighboring nodes to keep the computational footprint consistent, @cite_11 use attention mechanisms to assign different weights to neighboring nodes, and @cite_0 use relational graph convolutional networks for link prediction and entity classification. Some studies applied GNNs to graph classification tasks. @cite_9 discussed possible ways of applying deep learning to graph data, while @cite_3 and @cite_4 proposed spectral networks for large-scale graph classification tasks. Some studies also applied graph kernels to traditional computer vision tasks.
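The propagation rule described in words above (add self-loops, normalize by node degree, aggregate, transform) can be written out as a plain NumPy sketch. The symmetric degree normalization and the ReLU nonlinearity used here are the common choices from the GCN paper, stated as assumptions rather than taken from this text.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: add self-loops to A, symmetrically
    normalize by node degree, aggregate neighbor features, then apply
    the trainable linear transform W and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])              # adjacency with self-loops
    d = A_hat.sum(axis=1)                       # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Note that the only learnable parameter here is W: the aggregation itself is fixed by the (normalized) graph structure, which is what "without learning trainable filters" refers to.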
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_11" ], "mid": [ "2964311892", "2558748708", "2962767366", "2964015378", "637153065", "2604314403", "2766453196" ], "abstract": [ "Abstract: Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to handle the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved through the use of an R-GCN encoder model to accumulate evidence over multiple inference steps in the graph, demonstrating a large improvement of 29.8 on FB15k-237 over a decoder-only baseline.", "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training)." ] }
1905.05178
2952832237
We consider the problem of representation learning for graph data. Convolutional neural networks operate naturally on images but face significant challenges with graph data. Since images are special cases of graphs whose nodes lie on 2D lattices, graph embedding tasks have a natural correspondence with image pixel-wise prediction tasks such as segmentation. While encoder-decoder architectures like U-Nets have been successfully applied to many image pixel-wise prediction tasks, similar methods are lacking for graph data, because pooling and up-sampling operations are not natural on graphs. To address these challenges, we propose novel graph pooling (gPool) and unpooling (gUnpool) operations in this work. The gPool layer adaptively selects some nodes to form a smaller graph based on their scalar projection values on a trainable projection vector. We further propose the gUnpool layer as the inverse operation of the gPool layer; it restores the graph to its original structure using the positions of the nodes selected in the corresponding gPool layer. Based on the proposed gPool and gUnpool layers, we develop an encoder-decoder model on graphs, known as the graph U-Net. Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.
In addition to convolution, some studies have extended pooling operations to graphs. @cite_7 proposed binary tree indexing for graph coarsening, which fixes node indices before applying 1-D pooling operations. @cite_10 used a deterministic graph clustering algorithm to determine pooling patterns. @cite_2 used an assignment matrix to achieve pooling by assigning nodes to different clusters of the next layer.
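The assignment-matrix idea in the last sentence can be sketched as follows: a soft assignment S maps the N input nodes to K clusters, and simple matrix products yield the coarsened features and adjacency for the next layer. In the cited work the assignment logits are produced by a GNN; here they are just an input, so this is an illustrative sketch rather than that method's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def assignment_pool(A, X, S_logits):
    """Assignment-matrix pooling: each of the N nodes is softly assigned
    to one of K clusters, which become the nodes of the coarser graph.
    A: (N, N) adjacency, X: (N, C) features, S_logits: (N, K) logits."""
    S = softmax(S_logits, axis=1)   # row i: node i's cluster distribution
    X_coarse = S.T @ X              # (K, C) pooled cluster features
    A_coarse = S.T @ A @ S          # (K, K) cluster-level connectivity
    return X_coarse, A_coarse
```

Because S is soft (rows are probability distributions), the whole pooling step stays differentiable, which is what allows it to be trained end-to-end with the surrounding GNN layers.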
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_2" ], "mid": [ "2963712507", "2964321699", "2951659295" ], "abstract": [ "Convolutional neural networks have shown great success on feature extraction from raw input data such as images. Although convolutional neural networks are invariant to translations on the inputs, they are not invariant to other transformations, including rotation and flip. Recent attempts have been made to incorporate more invariance in image recognition applications, but they are not applicable to dense prediction tasks, such as image segmentation. In this paper, we propose a set of methods based on kernel rotation and flip to enable rotation and flip invariance in convolutional neural networks. The kernel rotation can be achieved on kernels of 3 × 3, while kernel flip can be applied on kernels of any size. By rotating in eight or four angles, the convolutional layers could produce the corresponding number of feature maps based on eight or four different kernels. By using flip, the convolution layer can produce three feature maps. By combining produced feature maps using maxout, the resource requirement could be significantly reduced while still retain the invariance properties. Experimental results demonstrate that the proposed methods can achieve various invariance at reasonable resource requirements in terms of both memory and time.", "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs---a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10 accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets." ] }
1905.05222
2944334252
Abstract While the public claim concern for their privacy, they frequently appear to overlook it. This disparity between concern and behaviour is known as the Privacy Paradox. Such issues are particularly prevalent on wearable devices. These products can store personal data, such as text messages and contact details. However, owners rarely use protective features. Educational games can be effective in encouraging changes in behaviour. Therefore, we developed the first privacy game for (Android) Wear OS watches. 10 participants used smartwatches for two months, allowing their high-level settings to be monitored. Five individuals were randomly assigned to our treatment group, and they played a dynamically-customised privacy-themed game. To minimise confounding variables, the other five received the same app but lacking the privacy topic. The treatment group improved their protection, with their usage of screen locks significantly increasing (p = 0.043). In contrast, 80% of the control group continued to never restrict their settings. After the posttest phase, we evaluated behavioural rationale through semi-structured interviews. Privacy concerns became more nuanced in the treatment group, with opinions aligning with behaviour. Actions appeared influenced primarily by three factors: convenience, privacy salience and data sensitivity. This is the first smartwatch game to encourage privacy-protective behaviour.
Awareness can highlight the existence of a particular risk. However, this is often insufficient to change privacy behaviour @cite_94 . Sasse2007 recommended a three-stage approach: raise awareness, give education and provide training. In this manner, individuals have an opportunity to practice and refine their behaviour. Finally, even if users possess the knowledge, they must be incentivised to act @cite_94 . Our game, introduced in Section , seeks to implement all these approaches.
{ "cite_N": [ "@cite_94" ], "mid": [ "2497179706" ], "abstract": [ "The present paper focuses on Cyber Security Awareness Campaigns, and aims to identify key factors regarding security which may lead them to failing to appropriately change people’s behaviour. Past and current efforts to improve information-security practices and promote a sustainable society have not had the desired impact. It is important therefore to critically reflect on the challenges involved in improving information security behaviours for citizens, consumers and employees. In particular, our work considers these challenges from a Psychology perspective, as we believe that understanding how people perceive risks is critical to creating effective awareness campaigns. Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions. These antecedents of behaviour change are identified in several psychological models of behaviour. We review the suitability of persuasion techniques, including the widely used ‘fear appeals’. From this range of literature, we extract essential components for an awareness campaign as well as factors which can lead to a campaign’s success or failure. Finally, we present examples of existing awareness campaigns in different cultures (the UK and Africa) and reflect on these." ] }
1905.05222
2944334252
Abstract While the public claim concern for their privacy, they frequently appear to overlook it. This disparity between concern and behaviour is known as the Privacy Paradox. Such issues are particularly prevalent on wearable devices. These products can store personal data, such as text messages and contact details. However, owners rarely use protective features. Educational games can be effective in encouraging changes in behaviour. Therefore, we developed the first privacy game for (Android) Wear OS watches. 10 participants used smartwatches for two months, allowing their high-level settings to be monitored. Five individuals were randomly assigned to our treatment group, and they played a dynamically-customised privacy-themed game. To minimise confounding variables, the other five received the same app but lacking the privacy topic. The treatment group improved their protection, with their usage of screen locks significantly increasing (p = 0.043). In contrast, 80% of the control group continued to never restrict their settings. After the posttest phase, we evaluated behavioural rationale through semi-structured interviews. Privacy concerns became more nuanced in the treatment group, with opinions aligning with behaviour. Actions appeared influenced primarily by three factors: convenience, privacy salience and data sensitivity. This is the first smartwatch game to encourage privacy-protective behaviour.
Albayram2017a later explored whether videos can encourage the use of Two-Factor Authentication (2FA). Through a 2x2x2 design, they generated and evaluated eight videos. Their content varied on whether risk, self-efficacy and contingency were included. When the first two components were highlighted in the videos, participants were found to adopt 2FA. Both risk and self-efficacy are considered within PMT, and we also use the theory to encourage alterations. However, while Albayram2017a used Amazon Mechanical Turk, we analyse participants through a field study. Our in-person approach delivers several advantages. Since the behaviour we record is observed rather than self-reported, it should be less prone to falsehood @cite_38 . With participants using a real smartwatch in a native environment, our findings should also have external validity. Finally, although our in-person approach limited our sample size, it supported rationale extraction through rich interviews.
{ "cite_N": [ "@cite_38" ], "mid": [ "2551851611" ], "abstract": [ "Technological innovations are increasingly helping people expand their social capital through online networks by offering new opportunities for sharing personal information. Online social networks are perceived to provide individuals new benefits and have led to a surge of personal data uploaded, stored, and shared. While privacy concerns are a major issue for many users of social networking sites, studies have shown that their information disclosing behavior does not align with their concerns. This gap between behavior and concern is called the privacy paradox. Several theories have been explored to explain this, but with inconsistent and incomplete results. This study investigates the paradox using a construal level theory lens. We show how a privacy breach, not yet experienced and psychologically distant, has less weight in everyday choices than more concrete and psychologically-near social networking activities and discuss the implications for research and practice. An explanation of the information privacy paradox using Construal Level Theory.Intentions mediate the relationship between privacy concerns and self-disclosure behavior.Social Rewards predict online behavior through near-future intentions.Privacy Concerns relate to distant-future intentions, but do not directly affect the online behavior.Privacy concerns indirectly affect online behavior through near-future intentions." ] }
1905.05222
2944334252
Abstract While the public claim concern for their privacy, they frequently appear to overlook it. This disparity between concern and behaviour is known as the Privacy Paradox. Such issues are particularly prevalent on wearable devices. These products can store personal data, such as text messages and contact details. However, owners rarely use protective features. Educational games can be effective in encouraging changes in behaviour. Therefore, we developed the first privacy game for (Android) Wear OS watches. 10 participants used smartwatches for two months, allowing their high-level settings to be monitored. Five individuals were randomly assigned to our treatment group, and they played a dynamically-customised privacy-themed game. To minimise confounding variables, the other five received the same app but lacking the privacy topic. The treatment group improved their protection, with their usage of screen locks significantly increasing (p = 0.043). In contrast, 80% of the control group continued to never restrict their settings. After the posttest phase, we evaluated behavioural rationale through semi-structured interviews. Privacy concerns became more nuanced in the treatment group, with opinions aligning with behaviour. Actions appeared influenced primarily by three factors: convenience, privacy salience and data sensitivity. This is the first smartwatch game to encourage privacy-protective behaviour.
'Nudging' has become a popular approach to encourage protection @cite_73 . Wang2014 augmented Facebook to highlight the audience of a person's posts. Through their six-week trial, they found unintended disclosures were decreased. Although temporarily influential, behaviour can revert when nudges are removed @cite_68 . This approach differs from techniques within serious games. Nudging modifies the choice architecture to encourage certain decisions. In contrast, serious games seek to instil lessons through education and positive reinforcement @cite_39 . Since intrinsic motivation can be highly persuasive @cite_71 , the latter approach might prove more persistent.
{ "cite_N": [ "@cite_68", "@cite_73", "@cite_71", "@cite_39" ], "mid": [ "2525125280", "2520213061", "2170899200", "2165136898" ], "abstract": [ "", "Social Network Sites (SNSs) offer a plethora of privacy controls, but users rarely exploit all of these mechanisms, nor do they do so in the same manner. We demonstrate that SNS users instead adhere to one of a small set of distinct privacy management strategies that are partially related to their level of privacy feature awareness. Using advanced Factor Analysis methods on the self-reported privacy behaviors and feature awareness of 308 Facebook users, we extrapolate six distinct privacy management strategies, including: Privacy Maximizers, Selective Sharers, Privacy Balancers, Self-Censors, Time Savers Consumers, and Privacy Minimalists and six classes of privacy proficiency based on feature awareness, ranging from Novices to Experts. We then cluster users on these dimensions to form six distinct behavioral profiles of privacy management strategies and six awareness profiles for privacy proficiency. We further analyze these privacy profiles to suggest opportunities for training and education, interface redesign, and new approaches for personalized privacy recommendations. We show that Facebook users' privacy behaviors and awareness are multi-dimensional.Feature awareness is a significant predictor of Facebook users' privacy behaviors.Six unique user profiles emerged to reveal different privacy management strategies.Six privacy proficiency profiles emerged from the dimensions of feature awareness.The privacy profiles can be used to personalize user education and nudging.", "Intrinsic and extrinsic types of motivation have been widely studied, and the distinction between them has shed important light on both developmental and educational practices. In this review we revisit the classic definitions of intrinsic and extrinsic motivation in light of contemporary research and theory. 
Intrinsic motivation remains an important construct, reflecting the natural human propensity to learn and assimilate. However, extrinsic motivation is argued to vary considerably in its relative autonomy and thus can either reflect external control or true self-regulation. The relations of both classes of motives to basic human needs for autonomy, competence and relatedness are discussed. © 2000 Academic Press To be motivated means to be moved to do something. A person who feels no impetus or inspiration to act is thus characterized as unmotivated, whereas someone who is energized or activated toward an end is considered motivated. Most everyone who works or plays with others is, accordingly, concerned with motivation, facing the question of how much motivation those others, or oneself, has for a task, and practitioners of all types face the perennial task of fostering more versus less motivation in those around them. Most theories of motivation reflect these concerns by viewing motivation as a unitary phenomenon, one that varies from very little motivation to act to a great deal of it. Yet, even brief reflection suggests that motivation is hardly a unitary phenomenon. People have not only different amounts, but also different kinds of motivation. That is, they vary not only in level of motivation (i.e., how much motivation), but also in the orientation of that motivation (i.e., what type of motivation). Orientation of motivation concerns the underlying attitudes and goals that give rise to action—that is, it concerns the why of actions. As an example, a student can be highly motivated to do homework out of curiosity and interest or, alternatively, because he or she wants to procure the approval of a teacher or parent. 
A student could be motivated", "This paper examines the literature on computer games and serious games in regard to the potential positive impacts of gaming on users aged 14 years or above, especially with respect to learning, skill enhancement and engagement. Search terms identified 129 papers reporting empirical evidence about the impacts and outcomes of computer games and serious games with respect to learning and engagement and a multidimensional approach to categorizing games was developed. The findings revealed that playing computer games is linked to a range of perceptual, cognitive, behavioural, affective and motivational impacts and outcomes. The most frequently occurring outcomes and impacts were knowledge acquisition content understanding and affective and motivational outcomes. The range of indicators and measures used in the included papers are discussed, together with methodological limitations and recommendations for further work in this area." ] }
1905.05222
2944334252
Abstract While the public claim concern for their privacy, they frequently appear to overlook it. This disparity between concern and behaviour is known as the Privacy Paradox. Such issues are particularly prevalent on wearable devices. These products can store personal data, such as text messages and contact details. However, owners rarely use protective features. Educational games can be effective in encouraging changes in behaviour. Therefore, we developed the first privacy game for (Android) Wear OS watches. 10 participants used smartwatches for two months, allowing their high-level settings to be monitored. Five individuals were randomly assigned to our treatment group, and they played a dynamically-customised privacy-themed game. To minimise confounding variables, the other five received the same app but lacking the privacy topic. The treatment group improved their protection, with their usage of screen locks significantly increasing (p = 0.043). In contrast, 80% of the control group continued to never restrict their settings. After the posttest phase, we evaluated behavioural rationale through semi-structured interviews. Privacy concerns became more nuanced in the treatment group, with opinions aligning with behaviour. Actions appeared influenced primarily by three factors: convenience, privacy salience and data sensitivity. This is the first smartwatch game to encourage privacy-protective behaviour.
Immaculacy @cite_74 is a proposed privacy game, in which the user faces dystopian scenarios. Characters progress through challenges by undertaking privacy-protective actions. This encourages reflection on behaviour, and we adopt a similar approach. Vaidya2014 considered interactive techniques to teach privacy. Since privacy is inherently complex, they recommended that scenarios be used. We implement scenario-based challenges, developing the first smartwatch privacy game.
{ "cite_N": [ "@cite_74" ], "mid": [ "2027195022" ], "abstract": [ "With the intent of addressing growing concerns regarding online privacy, Immaculacy is an interactive story that immerses the player in a slightly dystopian world littered with privacy issues. Events unfold in the narrative based on hidden scores kept during gameplay and calculated based on specific decisions made by the player. Ultimately, we hope to create an engaging environment that helps players consider the decisions they are making in their own lives. We give the player experience with many privacy issues through their explorations of a world of hyper surveillance and connectivity." ] }
1905.05179
2946549239
Assemblies of modular subsystems are being pressed into service to perform sensing, reasoning, and decision making in high-stakes, time-critical tasks in such areas as transportation, healthcare, and industrial automation. We address the opportunity to maximize the utility of an overall computing system by employing reinforcement learning to guide the configuration of the set of interacting modules that comprise the system. The challenge of doing system-wide optimization is a combinatorial problem. Local attempts to boost the performance of a specific module by modifying its configuration often leads to losses in overall utility of the system's performance as the distribution of inputs to downstream modules changes drastically. We present metareasoning techniques which consider a rich representation of the input, monitor the state of the entire pipeline, and adjust the configuration of modules on-the-fly so as to maximize the utility of a system's operation. We show significant improvement in both real-world and synthetic pipelines across a variety of reinforcement learning techniques.
Decisions about computation under uncertainties in time and context have been described in @cite_7 , which presented the use of metareasoning to guide graphics rendering under changing computational resources, considering probabilistic models of human attention so as to maximize the perceived quality of rendered content. The metareasoning guided tradeoffs in rendering quality under shifting content and time constraints in accordance with preferences encoded in a utility function. Principles for guiding proactive computation were formalized in @cite_3 . @cite_6 characterize a tradeoff between computation and performance in data processing and ML pipelines, and provide a message-passing algorithm (derived by viewing pipelines as graphical models) that allows a human operator to manually navigate this tradeoff. Our work focuses on the use of metareasoning to replace the operator by setting the best operating point for any pipeline automatically.
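The operator-free choice of an operating point can be illustrated with a toy bandit-style sketch (this is not the paper's metareasoning algorithm; the configuration names and utility values below are hypothetical):

```python
import random

def choose_config(configs, utility, episodes=200, eps=0.1, seed=0):
    """Epsilon-greedy search for the pipeline configuration with the
    highest observed utility. utility(cfg) may be noisy in general."""
    rng = random.Random(seed)
    counts, means = {}, {}
    for cfg in configs:            # initialise: try every configuration once
        counts[cfg] = 1
        means[cfg] = utility(cfg)
    for _ in range(episodes):
        if rng.random() < eps:     # explore a random configuration
            cfg = rng.choice(configs)
        else:                      # exploit the current best estimate
            cfg = max(configs, key=lambda c: means[c])
        r = utility(cfg)
        counts[cfg] += 1
        means[cfg] += (r - means[cfg]) / counts[cfg]   # running mean
    return max(configs, key=lambda c: means[c])

# Hypothetical operating points and (noise-free) utilities.
UTILITY = {"fast": 0.3, "balanced": 0.6, "accurate": 0.5}
best = choose_config(list(UTILITY), lambda c: UTILITY[c])
print(best)  # → balanced
```

A real metareasoner would condition the choice on a rich representation of the input and pipeline state rather than treating configurations as context-free arms, but the exploration/exploitation structure is the same.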
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_7" ], "mid": [ "2050880818", "1987331701", "1485188102" ], "abstract": [ "Automated problem solving is viewed typically as the allocation of computational resources to solve one or more problems passed to a reasoning system. In response to each problem received, effort is applied in real time to generate a solution and problem solving ends when a solution is rendered. We examine continual computation, reasoning policies that capture a broader conception of problem by considering the proactive allocation of computational resources to potential future challenges. We explore policies for allocating idle time for several settings and present applications that highlight opportunities for harnessing continual computation in real-world tasks.  2001 Elsevier Science B.V. All rights reserved.", "Big Data Pipelines decompose complex analyses of large data sets into a series of simpler tasks, with independently tuned components for each task. This modular setup allows re-use of components across several different pipelines. However, the interaction of independently tuned pipeline components yields poor end-to-end performance as errors introduced by one component cascade through the whole pipeline, affecting overall accuracy. We propose a novel model for reasoning across components of Big Data Pipelines in a probabilistically well-founded manner. Our key idea is to view the interaction of components as dependencies on an underlying graphical model. Different message passing schemes on this graphical model provide various inference algorithms to trade-off end-to-end performance and computational cost. 
We instantiate our framework with an efficient beam search algorithm, and demonstrate its efficiency on two Big Data Pipelines: parsing and relation extraction.", "We describe work to control graphics rendering under limited computational resources by taking a decision-theoretic perspective on perceptual costs and computational savings of approximations. The work extends earlier work on the control of rendering by introducing methods and models for computing the expected cost associated with degradations of scene components. The expected cost is computed by considering the perceptual cost of degradations and a probability distribution over the attentional focus of viewers. We review the critical literature describing findings on visual search and attention, discuss the implications of the findings, and introduce models of expected perceptual cost. Finally, we discuss policies that harness information about the expected cost of scene components." ] }
1905.05382
2945691630
Unsupervised domain adaptation in person re-identification resorts to labeled source data to promote the model training on target domain, facing the dilemmas caused by large domain shift and large camera variations. The non-overlapping labels challenge that source domain and target domain have entirely different persons further increases the re-identification difficulty. In this paper, we propose a novel algorithm to narrow such domain gaps. We derive a camera style adaptation framework to learn the style-based mappings between different camera views, from the target domain to the source domain, and then we can transfer the identity-based distribution from the source domain to the target domain on the camera level. To overcome the non-overlapping labels challenge and guide the person re-identification model to narrow the gap further, an efficient and effective soft-labeling method is proposed to mine the intrinsic local structure of the target domain through building the connection between GAN-translated source domain and the target domain. Experiment results conducted on real benchmark datasets indicate that our method gets state-of-the-art results.
. Image-to-image translation aims to translate an image to another one with given attributes changed. Recent literature based on GANs @cite_12 has shown impressive results in image-to-image translation. Typically, GANs consist of a generator @math and a discriminator @math , which learn the true data distribution through a min-max game. Let @math be an image (usually a tensor with three channels) sampled from the given dataset, and @math be a random vector drawn from a Gaussian or other distribution. @math and @math are the corresponding probability distributions. The generator @math tries to generate fake images, such as @math , to fool the discriminator @math , while the discriminator tries to distinguish real images from fake ones. It is essentially a generative framework in which the discriminator @math is introduced to play a min-max game against the generator @math , as below. Because @math and @math have the same optimization direction, the latter is often used for the sake of stability. Blurry images are not tolerated since they look obviously fake, so the generator eventually learns the distribution of real images and produces samples that look exactly like the real ones.
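Since the concrete equations are elided above (the @math placeholders), the standard GAN min-max objective from @cite_12 , in conventional notation, is:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z(z)}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

In practice the generator is often trained to maximize \(\log D(G(z))\) instead of minimizing \(\log(1 - D(G(z)))\): both push in the same optimization direction, but the former yields stronger gradients early in training, which matches the stability remark above.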
{ "cite_N": [ "@cite_12" ], "mid": [ "2099471712" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1905.05382
2945691630
Unsupervised domain adaptation in person re-identification resorts to labeled source data to promote the model training on target domain, facing the dilemmas caused by large domain shift and large camera variations. The non-overlapping labels challenge that source domain and target domain have entirely different persons further increases the re-identification difficulty. In this paper, we propose a novel algorithm to narrow such domain gaps. We derive a camera style adaptation framework to learn the style-based mappings between different camera views, from the target domain to the source domain, and then we can transfer the identity-based distribution from the source domain to the target domain on the camera level. To overcome the non-overlapping labels challenge and guide the person re-identification model to narrow the gap further, an efficient and effective soft-labeling method is proposed to mine the intrinsic local structure of the target domain through building the connection between GAN-translated source domain and the target domain. Experiment results conducted on real benchmark datasets indicate that our method gets state-of-the-art results.
A stream of relevant methods have been proposed to improve the learning capacity of GANs. cGANs @cite_1 and its variant @cite_24 learn generators by combining the original adversarial loss with a @math loss which forces the generated images to be close to the ground-truth output under the @math distance. However, they require paired training data, which constrains scalability. Thus unpaired image-to-image frameworks @cite_38 @cite_46 @cite_40 @cite_29 have been proposed to alleviate this limitation. In @cite_38 @cite_46 @cite_40 , a cycle consistency loss is introduced to preserve the image contents and only change the domain-related parts. However, in these frameworks, a separate model must be trained for every pair of domains, which does not scale to multiple domains. StarGAN @cite_21 tackles this problem by introducing an auxiliary classifier @cite_31 that allows the discriminator to handle multiple domains. An iterative training scheme that alternates between multiple domains makes it possible for the generator to learn multiple mappings simultaneously.
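For reference, the cycle consistency loss in its standard CycleGAN form, with mappings \(G: X \to Y\) and \(F: Y \to X\), is:

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\lVert F(G(x)) - x \rVert_1\right]
+ \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\!\left[\lVert G(F(y)) - y \rVert_1\right]
```

Penalizing the round trip \(F(G(x)) \approx x\) (and vice versa) is what preserves image content, leaving the adversarial losses free to change only the domain-related parts.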
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_29", "@cite_21", "@cite_1", "@cite_24", "@cite_40", "@cite_46" ], "mid": [ "2962793481", "2950776302", "", "2768626898", "2125389028", "2552465644", "", "" ], "abstract": [ "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. 
Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7% of the classes have samples exhibiting diversity comparable to real ImageNet data.", "", "Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.", "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. 
These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "", "" ] }
1905.05382
2945691630
Unsupervised domain adaptation in person re-identification resorts to labeled source data to promote the model training on target domain, facing the dilemmas caused by large domain shift and large camera variations. The non-overlapping labels challenge that source domain and target domain have entirely different persons further increases the re-identification difficulty. In this paper, we propose a novel algorithm to narrow such domain gaps. We derive a camera style adaptation framework to learn the style-based mappings between different camera views, from the target domain to the source domain, and then we can transfer the identity-based distribution from the source domain to the target domain on the camera level. To overcome the non-overlapping labels challenge and guide the person re-identification model to narrow the gap further, an efficient and effective soft-labeling method is proposed to mine the intrinsic local structure of the target domain through building the connection between GAN-translated source domain and the target domain. Experiment results conducted on real benchmark datasets indicate that our method gets state-of-the-art results.
. While supervised person re-ID methods have achieved high accuracies thanks to deep learning algorithms and large-scale datasets, the great demand for labeled data limits their generalization and application. Unsupervised methods avoid expensive manual data labeling or annotation. One typical class of unsupervised methods extracts hand-crafted features @cite_15 @cite_41 @cite_19 @cite_37 without learning. While straightforward, such methods may lose valuable information in labeled data from external domains, which could be exploited to obtain discriminative features for UDA tasks. UMDL @cite_16 resorts to dictionary learning to obtain a dataset-shared but target-data-biased representation with the labeled source domain. SPGAN @cite_10 and PTGAN @cite_11 adopt a similar setting to UMDL, learning image translation for unsupervised person re-ID. Specifically, SPGAN uses an additional Siamese network to preserve the ID-related information, while PTGAN uses an extra PSPNet @cite_26 so that person identity is ignored by the generator. Neither can efficiently capture the camera variations with a single generator.
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_41", "@cite_19", "@cite_15", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "", "2560023338", "", "", "2125447566", "2441160157", "2769088658", "2769994766" ], "abstract": [ "", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "", "", "Abstract Avoiding the use of complicated pre-processing steps such as accurate face and body part segmentation or image normalization, this paper proposes a novel face person image representation which can properly handle background and illumination variations. Denoted as gBiCov, this representation relies on the combination of Biologically Inspired Features (BIF) and Covariance descriptors [1]. More precisely, gBiCov is obtained by computing and encoding the difference between BIF features at different scales. The distance between two persons can then be efficiently measured by computing the Euclidean distance of their signatures, avoiding some time consuming operations in Riemannian manifold required by the use of Covariance descriptors. In addition, the recently proposed KISSME framework [2] is adopted to learn a metric adapted to the representation. 
To show the effectiveness of gBiCov, experiments are conducted on three person re-identification tasks (VIPeR, i-LIDS and ETHZ) and one face verification task (LFW), on which competitive results are obtained. As an example, the matching rate at rank 1 on the VIPeR dataset is of 31.11 , improving the best previously published result by more than 10.", "Most existing person re-identification (Re-ID) approaches follow a supervised learning framework, in which a large number of labelled matching pairs are required for training. This severely limits their scalability in realworld applications. To overcome this limitation, we develop a novel cross-dataset transfer learning approach to learn a discriminative representation. It is unsupervised in the sense that the target dataset is completely unlabelled. Specifically, we present an multi-task dictionary learning method which is able to learn a dataset-shared but targetdata-biased representation. Experimental results on five benchmark datasets demonstrate that the method significantly outperforms the state-of-the-art.", "Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In our attempt, we present a \"learning via translation\" framework. In the baseline, we translate the labeled images from source to target domain in an unsupervised manner. We then train re-ID models with the translated images by supervised methods. Yet, being an essential part of this framework, unsupervised image-image translation suffers from the information loss of source-domain labels during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be maintained after translation. Second, given the fact that two domains have entirely different persons, a translated image should be dissimilar to any of the target IDs. 
To this end, we propose to preserve two types of unsupervised similarities, 1) self-similarity of an image before and after translation, and 2) domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN) which consists of an Siamese network and a CycleGAN. Through domain adaptation experiment, we show that images generated by SPGAN are more suitable for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.", "Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT17 with many important features, e.g., 1) the raw videos are taken by an 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that, domain gap commonly exists between datasets, which essentially causes severe performance drop when training and testing on different datasets. This results in that available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed-down by the PTGAN." ] }
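The similarity-preserving idea behind SPGAN's SiaNet, described above, amounts to a contrastive constraint: an image and its translated version should stay close in embedding space, while a translated source image should stay far from any target-domain image. The sketch below is a minimal illustration, not the authors' exact implementation; the function name, the precomputed embeddings, and the margin value are assumptions made for the example.

```python
import numpy as np

def sia_loss(f_src, f_trans, f_tgt, margin=2.0):
    """Contrastive-style similarity-preserving loss in the spirit of SPGAN.

    f_src   : embedding of a source image before translation
    f_trans : embedding of the same image after translation (positive pair)
    f_tgt   : embedding of an arbitrary target-domain image (negative pair)
    """
    # self-similarity: an image and its translated version should stay close
    d_pos = np.linalg.norm(f_src - f_trans)
    # domain-dissimilarity: a translated source image should differ from target IDs
    d_neg = np.linalg.norm(f_trans - f_tgt)
    return d_pos ** 2 + max(margin - d_neg, 0.0) ** 2
```

The loss is zero only when the translated image keeps its original embedding and is at least `margin` away from the target image.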
1905.05382
2945691630
Unsupervised domain adaptation in person re-identification resorts to labeled source data to promote model training on the target domain, facing the dilemmas caused by large domain shift and large camera variations. The non-overlapping label challenge, i.e., that the source and target domains contain entirely different persons, further increases the re-identification difficulty. In this paper, we propose a novel algorithm to narrow such domain gaps. We derive a camera style adaptation framework to learn style-based mappings between different camera views, from the target domain to the source domain, so that the identity-based distribution can be transferred from the source domain to the target domain at the camera level. To overcome the non-overlapping label challenge and guide the person re-identification model to narrow the gap further, an efficient and effective soft-labeling method is proposed to mine the intrinsic local structure of the target domain by building a connection between the GAN-translated source domain and the target domain. Experimental results on real benchmark datasets indicate that our method achieves state-of-the-art results.
We note that a Hetero-Homogeneous Learning (HHL) method @cite_36 has been proposed to address the domain adaptive person re-ID problem, in which the StarGAN approach is also used for camera style adaptation. However, there are significant differences between HHL and CSGLP. We show the flowcharts of their camera style translation schemes in Fig. . HHL considers style translation between images inside the target domain only, while CSGLP makes efforts to transfer camera styles from the target domain to the source domain. The cross-domain style translator is expected to play an active role in distribution approximation and feature matching.
{ "cite_N": [ "@cite_36" ], "mid": [ "2896016251" ], "abstract": [ "Person re-identification (re-ID) poses unique challenges for unsupervised domain adaptation (UDA) in that classes in the source and target sets (domains) are entirely different and that image variations are largely caused by cameras. Given a labeled source training set and an unlabeled target training set, we aim to improve the generalization ability of re-ID models on the target testing set. To this end, we introduce a Hetero-Homogeneous Learning (HHL) method. Our method enforces two properties simultaneously: (1) camera invariance, learned via positive pairs formed by unlabeled target images and their camera style transferred counterparts; (2) domain connectedness, by regarding source target images as negative matching pairs to the target source images. The first property is implemented by homogeneous learning because training pairs are collected from the same domain. The second property is achieved by heterogeneous learning because we sample training pairs from both the source and target domains. On Market-1501, DukeMTMC-reID and CUHK03, we show that the two properties contribute indispensably and that very competitive re-ID UDA accuracy is achieved. Code is available at: https: github.com zhunzhong07 HHL." ] }
1905.05180
2945263287
Hierarchical Reinforcement Learning (HRL) exploits temporally extended actions, or options, to make decisions from a higher-dimensional perspective to alleviate the sparse reward problem, one of the most challenging problems in reinforcement learning. The majority of existing HRL algorithms require either significant manual design with respect to the specific environment or enormous exploration to automatically learn options from data. To achieve fast exploration without using manual design, we devise a multi-goal HRL algorithm, consisting of a high-level policy Manager and a low-level policy Worker. The Manager provides the Worker multiple subgoals at each time step. Each subgoal corresponds to an option to control the environment. Although the agent may show some confusion at the beginning of training since it is guided by three diverse subgoals, the agent's behavior policy will quickly learn how to respond to multiple subgoals from the high-level controller on different occasions. By exploiting multiple subgoals, the exploration efficiency is significantly improved. We conduct experiments in Atari's Montezuma's Revenge environment, a well-known sparse reward environment, and in doing so achieve the same performance as state-of-the-art HRL methods with substantially reduced training time cost.
Intrinsic motivation aims to provide qualitative guidance for exploration. In HRL algorithms, a subgoal is the manifestation of intrinsic motivation, and how subgoals are defined greatly affects the agent's exploration efficiency. Endowing an agent with a form of intrinsic motivation has been explored in several previous works. The FuN @cite_19 algorithm is a well-known two-level hierarchical reinforcement learning algorithm that defines subgoals as directions in the latent state space. The HDQN @cite_15 algorithm manually designates certain target locations as subgoals. These are different manifestations of intrinsic motivation; however, it is unclear how to introduce multiple subgoals into these algorithms. In some works, intrinsic motivation is defined to find bottleneck states @cite_8 @cite_20 @cite_1 in the environment. However, discovering bottleneck states requires extensive environmental statistics, which can be infeasible in complex environments, especially sparse reward environments.
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_19", "@cite_15", "@cite_20" ], "mid": [ "1968768508", "2950614095", "2949267040", "2963262099", "2143435603" ], "abstract": [ "We present a new method for automatically creating useful temporal abstractions in reinforcement learning. We argue that states that allow the agent to transition to a different region of the state space are useful subgoals, and propose a method for identifying them using the concept of relative novelty. When such a state is identified, a temporally-extended activity (e.g., an option) is generated that takes the agent efficiently to this state. We illustrate the utility of the method in a number of tasks.", "Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. 
This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.", "We introduce FeUdal Networks (FuNs): a novel architecture for hierarchical reinforcement learning. Our approach is inspired by the feudal reinforcement learning proposal of Dayan and Hinton, and gains power and efficacy by decoupling end-to-end learning across multiple levels -- allowing it to utilise different resolutions of time. Our framework employs a Manager module and a Worker module. The Manager operates at a lower temporal resolution and sets abstract goals which are conveyed to and enacted by the Worker. The Worker generates primitive actions at every tick of the environment. The decoupled structure of FuN conveys several benefits -- in addition to facilitating very long timescale credit assignment it also encourages the emergence of sub-policies associated with different goals set by the Manager. These properties allow FuN to dramatically outperform a strong baseline agent on tasks that involve long-term credit assignment or memorisation. We demonstrate the performance of our proposed system on a range of tasks from the ATARI suite and also from a 3D DeepMind Lab environment.", "Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. 
One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for their own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game -'Montezuma's Revenge'.", "This paper presents a method by which a reinforcement learning agent can automatically discover certain types of subgoals online. By creating useful new subgoals while learning, the agent is able to accelerate learning on the current task and to transfer its expertise to other, related tasks through the reuse of its ability to attain subgoals. The agent discovers subgoals based on commonalities across multiple paths to a solution. We cast the task of finding these commonalities as a multiple-instance learning problem and use the concept of diverse density to find solutions. We illustrate this approach using several gridworld tasks." ] }
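The FuN-style directional subgoal mentioned above can be illustrated through its intrinsic reward: the cosine similarity between the actual latent-state transition and the direction proposed by the Manager. The sketch below uses a single-step simplification (FuN averages over a horizon) and an epsilon term for numerical stability; both are assumptions made for the example.

```python
import numpy as np

def directional_intrinsic_reward(s_prev, s_curr, goal_dir, eps=1e-8):
    """FuN-style intrinsic reward for the Worker: cosine similarity between
    the realized latent-state transition and the Manager's goal direction."""
    delta = s_curr - s_prev  # actual movement in latent state space
    num = float(np.dot(delta, goal_dir))
    denom = np.linalg.norm(delta) * np.linalg.norm(goal_dir) + eps
    return num / denom
```

The Worker receives a reward close to 1 when it moves along the proposed direction and close to -1 when it moves against it, regardless of step size.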
1905.05180
2945263287
Hierarchical Reinforcement Learning (HRL) exploits temporally extended actions, or options, to make decisions from a higher-dimensional perspective to alleviate the sparse reward problem, one of the most challenging problems in reinforcement learning. The majority of existing HRL algorithms require either significant manual design with respect to the specific environment or enormous exploration to automatically learn options from data. To achieve fast exploration without using manual design, we devise a multi-goal HRL algorithm, consisting of a high-level policy Manager and a low-level policy Worker. The Manager provides the Worker multiple subgoals at each time step. Each subgoal corresponds to an option to control the environment. Although the agent may show some confusion at the beginning of training since it is guided by three diverse subgoals, the agent's behavior policy will quickly learn how to respond to multiple subgoals from the high-level controller on different occasions. By exploiting multiple subgoals, the exploration efficiency is significantly improved. We conduct experiments in Atari's Montezuma's Revenge environment, a well-known sparse reward environment, and in doing so achieve the same performance as state-of-the-art HRL methods with substantially reduced training time cost.
The UNREAL @cite_18 architecture introduced the concept of auxiliary control tasks and designed pixel-control and feature-control tasks in the vision domain. These two auxiliary tasks greatly improved performance in the Atari environment. Both tasks are formulated as intrinsic motivation: the corresponding subgoals are to change the pixels in a specified region or to change specified higher-order environmental features. Based on the idea of feature-control, we design a new auxiliary subgoal, direction-control. Compared to the auxiliary control tasks mentioned above, the new task is more concise and closer to the agent itself.
{ "cite_N": [ "@cite_18" ], "mid": [ "2950872548" ], "abstract": [ "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth." ] }
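The pixel-control pseudo-reward from UNREAL, the average absolute pixel change within each cell of a grid over the frame, can be sketched as follows. This is a minimal version for illustration; the cell size, the channel averaging, and the truncation to whole cells are assumptions of the sketch (UNREAL also crops the frame and learns an auxiliary Q-function per cell).

```python
import numpy as np

def pixel_control_rewards(frame_prev, frame_curr, cell=4):
    """UNREAL-style pixel-control pseudo-reward: the mean absolute pixel
    change inside each (cell x cell) region of a grid over the frame."""
    diff = np.abs(frame_curr.astype(np.float32) - frame_prev.astype(np.float32))
    if diff.ndim == 3:  # average over color channels first
        diff = diff.mean(axis=2)
    h, w = diff.shape
    hc, wc = h // cell, w // cell
    diff = diff[:hc * cell, :wc * cell]          # drop any partial cells
    return diff.reshape(hc, cell, wc, cell).mean(axis=(1, 3))
```

Each entry of the returned (hc, wc) grid is the pseudo-reward for the option that tries to maximally change that region of the screen.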
1905.05355
2946234575
Multi-person pose estimation is a fundamental yet challenging task in computer vision. Both rich context information and spatial information are required to precisely locate the keypoints for all persons in an image. In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information. Specifically, we design a Context Aware Path with structure supervision strategy and spatial pyramid pooling strategy to enhance the context information. Meanwhile, a Spatial Aware Path is proposed to preserve the spatial information, which also shortens the information propagation path from low-level features to high-level features. On top of these two paths, we employ a Heavy Head Path to further combine and enhance the features effectively. Experimentally, our proposed network outperforms state-of-the-art methods on the COCO keypoint benchmark, which verifies the effectiveness of our method and further corroborates the above proposition.
Recently, many approaches based on Convolutional Neural Networks (CNNs) have achieved high performance on various multi-person pose estimation benchmarks @cite_27 @cite_23 @cite_11 @cite_18 @cite_2 @cite_5 @cite_4 @cite_26 @cite_28 @cite_20 @cite_16 . Several principles proposed for designing networks in scene parsing are also effective for our work, where we pay particular attention to context information extraction and spatial information preservation @cite_15 @cite_21 @cite_7 @cite_14 @cite_25 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_14", "@cite_7", "@cite_28", "@cite_21", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2796779902", "", "2952025147", "2592939477", "2412782625", "2307770531", "2560023338", "", "2113325037", "2555751471", "2789129057", "2630837129", "2559085405", "2950045474", "2795262365", "2769331938" ], "abstract": [ "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and evaluation more difficult. This work provides baseline methods that are surprisingly simple and effective, thus helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be released.", "", "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multi-branch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. 
We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at this https URL.", "Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\"caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-theart overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https: github.com TuSimple TuSimple-DUC.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. 
It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. 
State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. 
This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.", "", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. 
Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.", "Semantic segmentation requires both rich spatial information and sizeable receptive field. However, modern approaches usually compromise spatial resolution to achieve real-time inference speed, which leads to poor performance. In this paper, we address this dilemma with a novel Bilateral Segmentation Network (BiSeNet). We first design a Spatial Path with a small stride to preserve the spatial information and generate high-resolution features. Meanwhile, a Context Path with a fast downsampling strategy is employed to obtain sufficient receptive field. On top of the two paths, we introduce a new Feature Fusion Module to combine features efficiently. The proposed architecture makes a right balance between the speed and segmentation performance on Cityscapes, CamVid, and COCO-Stuff datasets. Specifically, for a 2048x1024 input, we achieve 68.4 Mean IOU on the Cityscapes test dataset with speed of 105 FPS on one NVIDIA Titan XP card, which is significantly faster than the existing methods with comparable performance.", "We develop a robust multi-scale structure-aware neural network for human pose estimation. 
This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multi-scale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multi-scale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-the-art pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.", "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. 
GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge.Code (this https URL) and the detection results are publicly available for further research." ] }
1905.05355
2946234575
Multi-person pose estimation is a fundamental yet challenging task in computer vision. Both rich context information and spatial information are required to precisely locate the keypoints for all persons in an image. In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information. Specifically, we design a Context Aware Path with structure supervision strategy and spatial pyramid pooling strategy to enhance the context information. Meanwhile, a Spatial Aware Path is proposed to preserve the spatial information, which also shortens the information propagation path from low-level features to high-level features. On top of these two paths, we employ a Heavy Head Path to further combine and enhance the features effectively. Experimentally, our proposed network outperforms state-of-the-art methods on the COCO keypoint benchmark, which verifies the effectiveness of our method and further corroborates the above proposition.
Recently, significant progress has been made in multi-person pose estimation with the development of CNNs. In @cite_16 , a real-time Convolutional Pose Machine (CPM) is proposed to locate body keypoints and assemble them into individuals using learned part affinity fields (PAFs). Based on a ResNet backbone, the Simple Baseline Network (SBN) @cite_18 employs a deconvolution head network to predict human keypoints. Spatial detail information, which is useful for refining keypoint localization, is inevitably lost along the information propagation path in CPM and SBN. To avoid this problem, Newell et al. @cite_2 integrate associative embedding with a stacked hourglass network to produce joint score heatmaps and embedded tags for grouping joints into individual people. The Cascaded Pyramid Network (CPN) @cite_11 adopts a GlobalNet to learn a good feature representation and a RefineNet to further recalibrate that representation for accurate keypoint localization. The Hourglass network and the Cascaded Pyramid Network preserve spatial features at each resolution by adding skip layers, and capture sufficient context information to accurately infer both simple and challenging keypoints.
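The heatmap-based formulation shared by these pose estimators can be made concrete with a small sketch. The helper below is hypothetical and not taken from any of the cited works; it builds the kind of Gaussian score map that networks such as CPM, SBN, and CPN regress for each keypoint:

```python
import numpy as np

def keypoint_heatmap(h, w, cx, cy, sigma=2.0):
    """Gaussian score map centred on keypoint (cx, cy): the per-joint
    regression target used by heatmap-based pose estimators."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

At test time the peak of each predicted map is decoded back to image coordinates, which is why preserving spatial detail in the feature maps matters for localization accuracy.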
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_11", "@cite_2" ], "mid": [ "2796779902", "2559085405", "2769331938", "2555751471" ], "abstract": [ "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and evaluation more difficult. This work provides baseline methods that are surprisingly simple and effective, thus helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be released.", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.", "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. 
GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge.Code (this https URL) and the detection results are publicly available for further research.", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets." ] }
1905.05355
2946234575
Multi-person pose estimation is a fundamental yet challenging task in computer vision. Both rich context information and spatial information are required to precisely locate the keypoints for all persons in an image. In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information. Specifically, we design a Context Aware Path with structure supervision strategy and spatial pyramid pooling strategy to enhance the context information. Meanwhile, a Spatial Aware Path is proposed to preserve the spatial information, which also shortens the information propagation path from low-level features to high-level features. On top of these two paths, we employ a Heavy Head Path to further combine and enhance the features effectively. Experimentally, our proposed network outperforms state-of-the-art methods on the COCO keypoint benchmark, which verifies the effectiveness of our method and further corroborates the above proposition.
Generally, as the network goes deeper, the high-level features can capture context information with a large receptive field. Alternatively, Atrous Spatial Pyramid Pooling (ASPP) @cite_7 and the Pyramid Pooling Module (PPM) @cite_21 are widely used to extract rich context information in scene parsing. The ASPP module employs atrous convolutions with different dilation rates together with a global pooling branch to capture diverse context information. The PPM module fuses features under different pyramid pooling scales to obtain a global contextual prior.
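The PPM idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the PSPNet implementation: it average-pools a (C, H, W) feature map into several grids, upsamples each grid back with nearest-neighbour interpolation, and concatenates everything along the channel axis:

```python
import numpy as np

def pyramid_pooling(feat, scales=(1, 2, 3, 6)):
    """Toy Pyramid Pooling Module: pool the (C, H, W) feature map into
    s x s grids, upsample each grid back to H x W (nearest neighbour),
    and concatenate the results with the input along the channel axis."""
    c, h, w = feat.shape
    outs = [feat]
    for s in scales:
        pooled = np.zeros((c, s, s))
        for i in range(s):
            for j in range(s):
                pooled[:, i, j] = feat[:, i * h // s:(i + 1) * h // s,
                                          j * w // s:(j + 1) * w // s].mean(axis=(1, 2))
        rows = np.arange(h) * s // h
        cols = np.arange(w) * s // w
        outs.append(pooled[:, rows][:, :, cols])  # nearest-neighbour upsample
    return np.concatenate(outs, axis=0)
```

The scale-1 branch reduces to global average pooling, which is the "global contextual prior" the PPM paper describes; the finer grids keep coarser-to-finer regional context.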
{ "cite_N": [ "@cite_21", "@cite_7" ], "mid": [ "2560023338", "2412782625" ], "abstract": [ "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. 
The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online." ] }
1905.05355
2946234575
Multi-person pose estimation is a fundamental yet challenging task in computer vision. Both rich context information and spatial information are required to precisely locate the keypoints for all persons in an image. In this paper, a novel Context-and-Spatial Aware Network (CSANet), which integrates both a Context Aware Path and Spatial Aware Path, is proposed to obtain effective features involving both context information and spatial information. Specifically, we design a Context Aware Path with structure supervision strategy and spatial pyramid pooling strategy to enhance the context information. Meanwhile, a Spatial Aware Path is proposed to preserve the spatial information, which also shortens the information propagation path from low-level features to high-level features. On top of these two paths, we employ a Heavy Head Path to further combine and enhance the features effectively. Experimentally, our proposed network outperforms state-of-the-art methods on the COCO keypoint benchmark, which verifies the effectiveness of our method and further corroborates the above proposition.
Consecutive down-sampling or pooling operations in a convolutional neural network may lose the spatial information that is crucial for predicting detailed output in scene parsing and pose estimation tasks. Some existing methods @cite_15 @cite_21 @cite_7 @cite_14 use dilated convolution to preserve the spatial size of the feature map. Other methods employ the feature pyramid network @cite_17 , the U-shape structure @cite_30 , or the Hourglass network @cite_28 to shorten the information path between low-level and high-level features. Such skip-connected network structures can recover spatial information to a certain extent.
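The trade-off that dilated convolution resolves can be shown with a little receptive-field arithmetic (a generic sketch, not tied to any cited implementation):

```python
def effective_kernel(k, d):
    """Spatial extent of a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride, dilation) conv layers."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (effective_kernel(k, d) - 1) * jump
        jump *= s  # spacing between output samples in input coordinates
    return rf
```

Two stride-2 3x3 layers reach a receptive field of 7 but shrink the feature map 4x; two stride-1 3x3 layers with dilation 2 reach a receptive field of 9 while keeping full resolution, which is exactly why dilated convolutions preserve spatial detail.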
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_28", "@cite_21", "@cite_15", "@cite_17" ], "mid": [ "1901129140", "2592939477", "2412782625", "2307770531", "2560023338", "2630837129", "2565639579" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "Recent advances in deep learning, especially deep convolutional neural networks (CNNs), have led to significant improvement over previous semantic segmentation systems. Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are of both theoretical and practical value. First, we design dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. 
This framework 1) effectively enlarges the receptive fields (RF) of the network to aggregate global information; 2) alleviates what we call the \"gridding issue\"caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a state-of-art result of 80.1 mIOU in the test set at the time of submission. We also have achieved state-of-theart overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Our source code can be found at https: github.com TuSimple TuSimple-DUC.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. 
Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. 
A single PSPNet yields the new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. 
Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available." ] }
1905.05243
2944830665
Face obscuration is often needed by law enforcement or mass media outlets to provide privacy protection. Sharing sensitive content where the obscuration or redaction technique may have failed to completely remove all identifiable traces can lead to life-threatening consequences. Hence, it is critical to be able to systematically measure the face obscuration performance of a given technique. In this paper we propose to measure the effectiveness of three obscuration techniques: Gaussian blurring, median blurring, and pixelation. We do so by identifying the redacted faces under two scenarios: classifying an obscured face into a group of identities and comparing the similarity of an obscured face with a clear face. Threat modeling is also considered to provide a vulnerability analysis for each studied obscuration technique. Based on our evaluation, we show that pixelation-based face obscuration approaches are the most effective.
The first set of approaches, known as @math -same methods @cite_12 @cite_13 @cite_11 , attempt to group faces into clusters based on personal attributes such as age, gender, or facial expression. Then, a template face is generated for each cluster. These methods guarantee that no face recognition system can do better than @math at recognizing which person a particular image corresponds to, where @math is the minimum number of faces among all clusters @cite_1 . Newton et al. @cite_12 and Gross et al. @cite_1 simply compute the average face for each cluster; consequently, the obscured faces are blurry and cannot handle various facial poses. Du et al. @cite_21 use the active appearance model @cite_20 to learn the shape and appearance of faces, and then generate a template face for each cluster to produce obscured faces with better visual quality.
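The pixel-averaging variant (k-Same-Pixel) reduces to a per-cluster mean. The NumPy sketch below is illustrative only; cluster construction and the @math -anonymity bookkeeping of the original algorithm are omitted. It replaces every face in a cluster with the cluster's pixel-wise average, so the faces within a cluster become indistinguishable:

```python
import numpy as np

def k_same_pixel(faces, clusters):
    """Replace each face (one row of `faces`) by the pixel-wise average of
    its cluster; a recogniser then cannot do better than 1/k, where k is
    the size of the smallest cluster."""
    out = np.empty_like(faces)
    for idx in clusters:
        out[idx] = faces[idx].mean(axis=0)
    return out
```

Averaging in pixel space is also why the resulting template faces look blurry, motivating the appearance-model-based variants mentioned above.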
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_20", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2003921219", "2897263486", "2152826865", "", "2103958416", "" ], "abstract": [ "Face de-identification, the process of preventing a person’ identity from being connected with personal information, is an important privacy protection tool in multimedia data processing. With the advance of face detection algorithms, a natural solution is to blur or block facial regions in visual data so as to obscure identity information. Such solutions however often destroy privacy-insensitive information and hence limit the data utility, e.g., gender and age information. In this paper we address the de-identification problem by proposing a simple yet effective framework, named GARP-Face, that balances utility preservation in face deidentification. In particular, we use modern facial analysis technologies to determine the Gender, Age, and Race attributes of facial images, and Preserving these attributes by seeking corresponding representatives constructed through a gallery dataset. We evaluate the proposed approach using the MORPH dataset in comparison with several stateof-the-art face de-identification solutions. The results show that our method outperforms previous solutions in preserving data utility while achieving similar degree of privacy protection.", "With the proliferation of inexpensive video surveillance and face recognition technologies, it is increasingly possible to track and match people as they move through public spaces. To protect the privacy of subjects visible in video sequences, prior research suggests using ad hoc obfuscation methods, such as blurring or pixelation of the face. However, there has been little investigation into how obfuscation influences the usability of images, such as for classification tasks. 
In this paper, we demonstrate that at high obfuscation levels, ad hoc methods fail to preserve utility for various tasks, whereas at low obfuscation levels, they fail to prevent recognition. To overcome the implied tradeoff between privacy and utility, we introduce a new algorithm, k-Same-Select, which is a formal privacy protection schema based on k-anonymity that provably protects privacy and preserves data utility. We empirically validate our findings through evaluations on the FERET database, a large real world dataset of facial images.", "We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.", "", "In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers' license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by deidentifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to deidentifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes, fail to thwart face recognition because of the robustness of face recognition methods. This work presents a new privacy-enabling algorithm, named k-Same, that guarantees face recognition software cannot reliably recognize deidentified faces, even though many facial details are preserved. 
The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Results are presented on a standard collection of real face images with varying k.", "" ] }
1905.05393
2952217990
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46 , which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at this https URL.
Augmentation has been shown to have a large impact on image modalities where data is scarce or expensive to generate, such as medical imaging @cite_7 @cite_20 , and on self-supervised learning approaches @cite_43 .
{ "cite_N": [ "@cite_43", "@cite_20", "@cite_7" ], "mid": [ "2769857323", "", "2898091194" ], "abstract": [ "We develop a set of methods to improve on the results of self-supervised learning using context. We start with a baseline of patch based arrangement context learning and go from there. Our methods address some overt problems such as chromatic aberration as well as other potential problems such as spatial skew and mid-level feature neglect. We prevent problems with testing generalization on common self-supervised benchmark tests by using different datasets during our development. The results of our methods combined yield top scores on all standard self-supervised benchmarks, including classification and detection on PASCAL VOC 2007, segmentation on PASCAL VOC 2012, and \"linear tests\" on the ImageNet and CSAIL Places datasets. We obtain an improvement over our baseline method of between 4.0 to 7.1 percentage points on transfer learning classification tests. We also show results on different standard network architectures to demonstrate generalization as well as portability.", "", "One of the biggest issues facing the use of machine learning in medical imaging is the lack of availability of large, labelled datasets. The annotation of medical images is not only expensive and time consuming but also highly dependent on the availability of expert observers. The limited amount of training data can inhibit the performance of supervised machine learning algorithms which often need very large quantities of data on which to train to avoid overfitting. So far, much effort has been directed at extracting as much information as possible from what data is available. Generative Adversarial Networks (GANs) offer a novel way to unlock additional information from a dataset by generating synthetic samples with the appearance of real images. 
This paper demonstrates the feasibility of introducing GAN derived synthetic data to the training datasets in two brain segmentation tasks, leading to improvements in Dice Similarity Coefficient (DSC) of between 1 and 5 percentage points under different conditions, with the strongest effects seen when fewer than ten training image stacks are available." ] }
1905.05393
2952217990
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46 , which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at this https URL.
Several papers have attempted to automate the generation of data augmentations with data-driven learning. These use methods such as manifold learning @cite_54 , Bayesian Optimization @cite_32 , and generative adversarial networks which generate transformation sequences @cite_9 . Additionally, @cite_44 uses a network to combine pairs of images to train a target network, and @cite_34 injects noise and interpolates images in an autoencoder learned feature space. AutoAugment @cite_6 uses reinforcement learning to optimize for accuracy in a discrete search space of augmentation policies.
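A policy in the AutoAugment-style search space is a list of sub-policies, each a sequence of (operation, probability, magnitude) triples. The sketch below uses toy operations and policies of our own invention, not AutoAugment's actual operation set, and shows how one randomly chosen sub-policy is applied per training sample:

```python
import random
import numpy as np

def shear_x(img, m):            # toy stand-ins for real augmentation ops
    return np.roll(img, m, axis=1)

def invert(img, m):
    return img.max() - img

SUBPOLICIES = [                 # each entry: [(operation, probability, magnitude), ...]
    [(shear_x, 0.9, 2), (invert, 0.2, 0)],
    [(invert, 0.6, 0), (shear_x, 0.3, 1)],
]

def augment(img, rng=random):
    """Pick one sub-policy at random; each of its ops fires independently
    with its own probability, mirroring the AutoAugment policy structure."""
    for op, p, m in rng.choice(SUBPOLICIES):
        if rng.random() < p:
            img = op(img, m)
    return img
```

The search problem the cited methods tackle is choosing the probabilities and magnitudes in such a structure; PBA's departure is to let those values change over the course of training rather than staying fixed.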
{ "cite_N": [ "@cite_9", "@cite_54", "@cite_32", "@cite_6", "@cite_44", "@cite_34" ], "mid": [ "", "2908750566", "2963552443", "2804047946", "2604262106", "2594477595" ], "abstract": [ "", "In this paper we propose a novel augmentation technique that improves not only the performance of deep neural networks on clean test data, but also significantly increases their robustness to random transformations, both affine and projective. Inspired by ManiFool, the augmentation is performed by a line-search manifold-exploration method that learns affine geometric transformations that lead to the misclassification on an image, while ensuring that it remains on the same manifold as the training data. This augmentation method populates any training dataset with images that lie on the border of the manifolds between two-classes and maximizes the variance the network is exposed to during training. Our method was thoroughly evaluated on the challenging tasks of fine-grained skin lesion classification from limited data, and breast tumor classification of mammograms. Compared with traditional augmentation methods, and with images synthesized by Generative Adversarial Networks our method not only achieves state-of-the-art performance but also significantly improves the network's robustness.", "Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. 
The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm --- generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above --- the results also show that our approach produces better classification results than similar GAN models.", "Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). 
On ImageNet, we attain a Top-1 accuracy of 83.5 which is 0.4 better than the previous record of 83.1 . On CIFAR-10, we achieve an error rate of 1.5 , which is 0.6 better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars.", "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.", "Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. 
We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A re-kindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to-date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data." ] }
1905.05393
2952217990
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. Properly chosen augmentation policies can lead to significant generalization improvements; however, state-of-the-art approaches such as AutoAugment are computationally infeasible to run for the ordinary user. In this paper, we introduce a new data augmentation algorithm, Population Based Augmentation (PBA), which generates nonstationary augmentation policy schedules instead of a fixed augmentation policy. We show that PBA can match the performance of AutoAugment on CIFAR-10, CIFAR-100, and SVHN, with three orders of magnitude less overall compute. On CIFAR-10 we achieve a mean test error of 1.46%, which is a slight improvement upon the current state-of-the-art. The code for PBA is open source and is available at this https URL.
Our approach was inspired by work in hyperparameter optimization. There is much prior work on tuning hyperparameters well, especially with Bayesian optimization @cite_36 @cite_15 @cite_48 @cite_14, which is sequential in nature and computationally expensive. Other methods incorporate parallelization or use non-Bayesian techniques @cite_17 @cite_47 @cite_16 @cite_50 @cite_38, but still require either multiple rounds of optimization or large amounts of compute. These issues are resolved in Population Based Training @cite_28, which builds upon both evolutionary strategies @cite_41 and random search @cite_30 to generate non-stationary, adaptive hyperparameter schedules in a single round of model training.
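The exploit-and-explore loop of Population Based Training can be sketched on a toy problem. The scalar "model", the quadratic objective, and the perturbation factors below are illustrative assumptions, not the cited implementation:

```python
import random

def score(theta):
    return -(theta - 3.0) ** 2  # toy objective, maximum at theta = 3

def pbt(pop_size=4, steps=30, seed=0):
    rng = random.Random(seed)
    # Each worker has model parameters (theta) and a hyperparameter (h),
    # here acting as a step size.
    workers = [{"theta": 0.0, "h": rng.uniform(0.01, 1.0)} for _ in range(pop_size)]
    for step in range(steps):
        for w in workers:                       # partial training with each worker's own h
            w["theta"] += w["h"] * (3.0 - w["theta"])
        if step % 5 == 4:                       # periodic exploit-and-explore
            workers.sort(key=lambda w: score(w["theta"]), reverse=True)
            top = workers[0]
            for w in workers[pop_size // 2:]:   # bottom half copies the leader...
                w["theta"] = top["theta"]
                w["h"] = top["h"] * rng.choice([0.8, 1.2])  # ...and perturbs h
    return max(score(w["theta"]) for w in workers)
```

The key property is that hyperparameters change during training, producing a schedule rather than a single fixed setting.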
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_47", "@cite_14", "@cite_36", "@cite_48", "@cite_28", "@cite_41", "@cite_50", "@cite_15", "@cite_16", "@cite_17" ], "mid": [ "2950646495", "", "", "", "1481631336", "", "2770298516", "2096368961", "", "", "", "2304209433" ], "abstract": [ "We develop parallel predictive entropy search (PPES), a novel algorithm for Bayesian optimization of expensive black-box objective functions. At each iteration, PPES aims to select a batch of points which will maximize the information gain about the global maximizer of the objective. Well known strategies exist for suggesting a single evaluation point based on previous observations, while far fewer are known for selecting batches of points to evaluate in parallel. The few batch selection schemes that have been studied all resort to greedy methods to compute an optimal batch. To the best of our knowledge, PPES is the first non-greedy batch Bayesian optimization strategy. We demonstrate the benefit of this approach in optimization performance on both synthetic and real world applications, including problems in machine learning, rocket science and robotics.", "", "", "", "An apparatus and method are provided for testing memory circuits in a microprocessor. The apparatus includes test management logic and test execution logic located within the microprocessor. The test management logic has a non-specific test program stored therein, and it accepts test parameters provided by an external test controller. The test parameters are applied to the non-specific test program to produce a specific test program. 
The test execution logic executes the specific test program to test the memory circuits within the microprocessor at the internal speed of the microprocessor.", "", "Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.", "Mutations are required for adaptation, yet most mutations with phenotypic effects are deleterious. As a consequence, the mutation rate that maximizes adaptation will be some intermediate value. 
This abstract summarizes a previous publication in which we used Avida, a well-studied artificial life platform, to investigate the ability of natural selection to adjust and optimize mutation rates. Our initial experiments occurred in a previously studied environment with a complex fitness landscape ( Nature, 423, 2003) where Avidians were rewarded for performing any of nine logic tasks. We assessed the optimal mutation rate by empirically determining which unchanging mutation rate produced the highest rate of adaptation. Then, we allowed mutation rates to evolve and we evaluated their proximity to the optimum. Although we chose conditions favorable for mutation rate optimization (asexual organisms not yet adapted to a new environment), the evolved rates were invariably far below the optimum across a wide range of experimental parameter settings (Fig. 1). We hypothesized that the reason mutation rates evolved to be suboptimal was the ruggedness of fitness landscapes. To test this hypothesis, we created a simplified 'counting ones' landscape without any fitness valleys and found that, in such conditions, populations evolved near-optimal mutation rates (Fig. 2, top row). In contrast, once moderate fitness valleys were added to this simple landscape, the ability of evolving populations to find the optimal mutation rate was lost (Fig. 2, bottom two rows). Additional experiments revealed that lowering the rate at which mutation rates evolved did not preclude the evolution of suboptimal mutation rates (see original manuscript). We conclude that rugged fitness landscapes can prevent the evolution of mutation rates that are optimal for long-term adaptation because of the short-term costs of traversing fitness valleys. This finding has important implications for evolutionary research in both biological and computational realms.", "", "", "", "Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. 
While current methods offer efficiencies by adaptively choosing new configurations to train, an alternative strategy is to adaptively allocate resources across the selected configurations. We formulate hyperparameter optimization as a pure-exploration non-stochastic infinitely many armed bandit problem where allocation of additional resources to an arm corresponds to training a configuration on larger subsets of the data. We introduce Hyperband for this framework and analyze its theoretical properties, providing several desirable guarantees. We compare Hyperband with state-of-the-art Bayesian optimization methods and a random search baseline on a comprehensive benchmark including 117 datasets. Our results on this benchmark demonstrate that while Bayesian optimization methods do not outperform random search trained for twice as long, Hyperband in favorable settings offers valuable speedups." ] }
1905.05478
2946555364
The paper presents an algorithm for planning safe and optimal routes for transport facilities with unrestricted movement direction that travel within areas with obstacles. The paper explains the algorithm using a ship as an example of such a transport facility, and also surveys several existing solutions to the problem. The method employs an evolutionary algorithm to plan several locally optimal routes and a parallel genetic algorithm to create the final route by optimizing this set of routes. The routes are optimized against the arrival time, assuming that the optimal route is the one with the lowest arrival time. Additional restrictions can also be applied to the routes.
Paper @cite_6 describes the task of planning a ship route while taking weather conditions into account. It uses the A* algorithm with a heuristic and takes wind waves (speed and height) into account, so the resulting routes bypass areas with high waves that may pose danger. Paper @cite_14 by the same authors optimizes a route in terms of costs. The authors conclude that route optimization is a multi-objective optimization problem, and thus a genetic algorithm could be used, but instead they use the A* algorithm with a heuristic that takes costs into account. Both papers use the A* algorithm, which requires a graph or grid describing the area where the action takes place, and wave information must also be part of this grid. In both papers wave information is provided by external systems and is assumed to be always available.
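The grid-based A* scheme these papers build on can be sketched as follows, with a per-cell wave-height penalty standing in for the cited heuristics. The grid, penalty weight, and cost model are illustrative assumptions:

```python
import heapq

def a_star(grid_waves, start, goal, wave_weight=10.0):
    """Plan a path on a grid where each cell carries a wave-height penalty,
    so the planner detours around rough areas."""
    rows, cols = len(grid_waves), len(grid_waves[0])

    def h(cell):  # admissible Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0.0, start, [start])]  # (f, g, cell, path)
    best_g = {}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in best_g and best_g[cell] <= g:
            continue
        best_g[cell] = g
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                # base move cost plus a penalty for entering a high-wave cell
                ng = g + 1.0 + wave_weight * grid_waves[nxt[0]][nxt[1]]
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

With a high-wave cell on the direct route, the returned path routes around it at equal length but much lower cost.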
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2799743150", "2888625733" ], "abstract": [ "This contribution investigates the economic benefits of using weather ship routing on Short Sea Shipping (SSS) activities. The investigation is supported with the development of a ship routing system based on pathfinding algorithm, the parametrization of the wave effect on navigation, and the use of high-resolution meteo-oceanographic predictions. The optimal ship routing analysis is investigated in a European SSS system: the link between Spanish and Italian ports. The results show the economic benefits using ship routing in SSS during energetic wave episodes. The rate of cost savings may reach 18 of the total costs under particular bad weather conditions in the navigation area. The work establishes the basis of further developments in optimal route applied in relatively short distances and its systematic use in the SSS maritime industry.", "Abstract Weather ship routing has become a recognized measure to target safe, sustainable and economical ship activities. Academic research has focused the ship routing optimization through pathfinding algorithms which take into account the meteo-oceanographic forecasts (i.e. wind, waves or currents predictions). This contribution shows the results of the numerical simulations carried out during the development of a weather ship routing applied to a ferry service in the Mediterranean Sea: Barcelona – Palma de Mallorca. From a methodological point of view, the pathfinding A* algorithm is applied to optimize the travel time considering the wave action. Under severe weather conditions, a reduction of the 7 of the travel time is obtained comparing the optimized route and the minimum distance route. The results show also a non-significant correlation between the travel time reduction and wave height. In consequence the benefit of ship routing depends not only of the wave height but also in the spatial sequence of the storm." ] }
1905.05478
2946555364
The paper presents an algorithm for planning safe and optimal routes for transport facilities with unrestricted movement direction that travel within areas with obstacles. The paper explains the algorithm using a ship as an example of such a transport facility, and also surveys several existing solutions to the problem. The method employs an evolutionary algorithm to plan several locally optimal routes and a parallel genetic algorithm to create the final route by optimizing this set of routes. The routes are optimized against the arrival time, assuming that the optimal route is the one with the lowest arrival time. Additional restrictions can also be applied to the routes.
Paper @cite_18 describes a dynamic programming method for plotting a ship's route. In this case a route is a sequence of tuples, each of which describes the engine power and the ship's heading during the voyage. The method requires a hydrodynamic model of the ship for which the route is being planned, and its precision depends on the precision of that model. Moreover, the method assumes that an external optimization method is used to solve the dynamic programming problem.
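The stage-wise structure of such a dynamic programming planner, with engine power and heading as the controls, might look as follows. The cost model (fuel growing with power and waves, plus a time penalty favoring higher power) is an invented placeholder, not the paper's hydrodynamic model:

```python
import math

def plan(waves, powers=(0.7, 1.0), time_weight=2.0, start_lane=0):
    """Forward DP over stages: at every stage pick a heading change
    (lateral lane move) and an engine power level.
    waves[stage][lane] is the wave height encountered there."""
    n_stages, n_lanes = len(waves), len(waves[0])
    best = {lane: (math.inf, []) for lane in range(n_lanes)}  # lane -> (cost, controls)
    best[start_lane] = (0.0, [])
    for stage in range(n_stages):
        nxt = {lane: (math.inf, []) for lane in range(n_lanes)}
        for lane, (cost, controls) in best.items():
            if math.isinf(cost):
                continue
            for dlane in (-1, 0, 1):            # heading control: lateral move
                new = lane + dlane
                if not 0 <= new < n_lanes:
                    continue
                for p in powers:                # engine power control
                    # toy cost: fuel ~ p^2 scaled by waves, plus time ~ 1/p
                    step_cost = p ** 2 * (1.0 + waves[stage][new]) + time_weight / p
                    if cost + step_cost < nxt[new][0]:
                        nxt[new] = (cost + step_cost, controls + [(p, dlane)])
        best = nxt
    return min(best.values(), key=lambda v: v[0])
```

The returned control sequence is the (power, heading) tuple per stage that the cited method's route consists of; a real planner would score each transition with the ship's hydrodynamic model instead.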
{ "cite_N": [ "@cite_18" ], "mid": [ "2034412030" ], "abstract": [ "This paper presents a novel forward dynamic programming method for weather routing to minimise ship fuel consumption during a voyage. Compared with traditional weather routing methods which only optimise the ship’s heading, while the engine power or propeller rotation speed is set as a constant throughout the voyage, this new method considers both the ship power settings and heading controls. A float state technique is used to reduce the iterations required during optimisation and thus save computation time. This new method could lead to quasi-global optimal routing in comparison with the traditional weather routing methods." ] }
1905.05265
2946180323
Autonomous vehicles may make wrong decisions due to inaccurate detection and recognition. Therefore, an intelligent vehicle can combine its own data with that of other vehicles to enhance its perceptive ability, and thus improve detection accuracy and driving safety. However, multi-vehicle cooperative perception requires the integration of real-world scenes, and the traffic of raw sensor data exchange far exceeds the bandwidth of existing vehicular networks. To the best of our knowledge, we are the first to conduct a study on raw-data-level cooperative perception for enhancing the detection ability of self-driving systems. In this work, relying on LiDAR 3D point clouds, we fuse the sensor data collected from different positions and angles of connected vehicles. A point cloud based 3D object detection method is proposed to work on a diversity of aligned point clouds. Experimental results on KITTI and our collected dataset show that the proposed system outperforms single-vehicle perception by extending the sensing area, improving detection accuracy and producing augmented results. Most importantly, we demonstrate that it is possible to transmit point cloud data for cooperative perception via existing vehicular network technologies.
Current works make use of low-level sensor fusion to extract features or objects for the purpose of tracking @cite_1. However, they do not use the raw data as-is for fusion and object detection. Papers such as @cite_24 and @cite_30 discuss fusion methods that construct theoretical architectures for low-level fusion and detection. Other approaches, such as @cite_6 and @cite_9, propose 3D object detection methods that fuse image and point cloud data from the same vehicle. Rather than improving the detection methods of a single vehicle, we focus our method on the fusion of data between different vehicles.
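The raw-data-level fusion step, transforming each sender's point cloud into the receiver's frame before merging, can be sketched as below. The planar (yaw-only) pose model is a simplifying assumption; in practice the relative pose comes from GPS/localization:

```python
import numpy as np

def to_receiver_frame(points, yaw, translation):
    """Rigidly transform an (N, 3) point cloud given the sender's yaw and
    position relative to the receiver (rotation about the z axis only)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T + translation

def fuse(receiver_points, sender_points, sender_yaw, sender_xyz):
    """Align the sender's cloud to the receiver's frame and concatenate."""
    aligned = to_receiver_frame(sender_points, sender_yaw, np.asarray(sender_xyz))
    return np.vstack([receiver_points, aligned])
```

The merged cloud can then be fed to any point cloud based detector, which is what extends the effective sensing area beyond a single vehicle's view.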
{ "cite_N": [ "@cite_30", "@cite_9", "@cite_1", "@cite_6", "@cite_24" ], "mid": [ "", "2964062501", "2053782190", "2962888833", "2141184637" ], "abstract": [ "", "In this work, we study 3D object detection from RGBD data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.", "We propose a new cooperative fusion approach between stereovision and laser scanner in order to take advantage of the best features and cope with the drawbacks of these two sensors to perform robust, accurate and real time-detection of multi-obstacles in the automotive context. The proposed system is able to estimate the position and the height, width and depth of generic obstacles at video frame rate (25 frames per second). The vehicle pitch, estimated by stereovision, is used to filter laser scanner raw data. Objects out of the road are removed using road lane information computed by stereovision. Various fusion schemes are proposed and one is experimented. 
Results of experiments in real driving situations (multi-pedestrians and multi-vehicles detection) are presented and stress the benefits of our approach.", "We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information. Unlike existing methods that either use multistage pipelines or hold sensor and dataset-specific assumptions, PointFusion is conceptually simple and application-agnostic. The image data and the raw point cloud data are independently processed by a CNN and a PointNet architecture, respectively. The resulting outputs are then combined by a novel fusion network, which predicts multiple 3D box hypotheses and their confidences, using the input 3D points as spatial anchors. We evaluate PointFusion on two distinctive datasets: the KITTI dataset that features driving scenes captured with a lidar-camera setup, and the SUN-RGBD dataset that captures indoor environments with RGB-D cameras. Our model is the first one that is able to perform better or on-par with the state-of-the-art on these diverse datasets without any dataset-specific model tuning.", "A scalable feature-level sensor fusion architecture combining the data of a multi-layer laserscanner and a monocular video has been developed. The approach aims at a maximization of synergetic effects by combining low-level measurement features and at the same time trying to keep the fusion architecture as general as possible. A new concept for the geometric modeling of diverse object shapes found in real traffic scenes, including free form models, enhances the precision of the object tracking. Results from real sensor data demonstrate the performance of the new algorithms compared to robust algorithms known from the literature." ] }
1905.05143
2943833595
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, such as CNN and Non-Local. While successful in learning temporal concepts, these fall short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method that achieves the best of both worlds: it represents minutes-long human activities and learns their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes, and its edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on the Epic-Kitchen and Breakfast benchmarks. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
Orderless vs. Order-aware Temporal Modeling. Be it short-, mid-, or long-range human activities, when it comes to temporal modeling, related methods fall into two main families: orderless and order-aware. Orderless methods focus on the statistical pooling of temporal signals in videos, without considering their temporal order or structure. Different pooling strategies are used, such as max and average pooling @cite_26, attention pooling @cite_70, and context gating @cite_11, to name a few. A similar approach is vector aggregation, for example Fisher Vectors @cite_22 and VLAD @cite_48 @cite_19. Although statistical pooling can, in theory, trivially scale up to extremely long sequences, this comes at the cost of losing the temporal structure, reminiscent of Bag-of-Words losing spatial understanding.
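A minimal sketch makes the orderless property concrete: pooling per-frame features over time yields a clip descriptor that is invariant to any permutation of the frames, which is exactly why the temporal structure is lost:

```python
import numpy as np

def avg_pool(frames):   # frames: (T, D) per-frame features
    return frames.mean(axis=0)

def max_pool(frames):
    return frames.max(axis=0)
```

Shuffling the frames of a clip leaves both descriptors unchanged, so any activity defined by the order of its steps becomes indistinguishable from its reordering.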
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_70", "@cite_48", "@cite_19", "@cite_11" ], "mid": [ "", "2131042978", "2964233791", "2567033548", "", "2706729717" ], "abstract": [ "", "Action recognition in uncontrolled video is an important and challenging computer vision problem. Recent progress in this area is due to new local features and models that capture spatio-temporal structure between local features, or human-object interactions. Instead of working towards more complex models, we focus on the low-level features and their encoding. We evaluate the use of Fisher vectors as an alternative to bag-of-word histograms to aggregate a small set of state-of-the-art low-level descriptors, in combination with linear classifiers. We present a large and varied set of evaluations, considering (i) classification of short actions in five datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that for basic action recognition and localization MBH features alone are enough for state-of-the-art performance. For complex events we find that SIFT and MFCC features provide complementary cues. On all three problems we obtain state-of-the-art results, while using fewer features and less complex models.", "We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5 relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. 
In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.", "Encoding is one of the key factors for building an effective video representation. In the recent works, super vector-based encoding approaches are highlighted as one of the most powerful representation generators. Vector of Locally Aggregated Descriptors (VLAD) is one of the most widely used super vector methods. However, one of the limitations of VLAD encoding is the lack of spatial information captured from the data. This is critical, especially when dealing with video information. In this work, we propose Spatio-temporal VLAD (ST-VLAD), an extended encoding method which incorporates spatio-temporal information within the encoding process. This is carried out by proposing a video division and extracting specific information over the feature group of each video split. Experimental validation is performed using both hand-crafted and deep features. Our pipeline for action recognition with the proposed encoding method obtains state-of-the-art performance over three challenging datasets: HMDB51 (67.6 ), UCF50 (97.8 ) and UCF101 (91.5 ).", "", "Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors , NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling in-terdependencies between features. 
We evaluate the method on the multi-modal Youtube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle Youtube-8M Large-Scale Video Understanding challenge." ] }
1905.05143
2943833595
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, such as CNN and Non-Local. While successful in learning temporal concepts, these fall short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method that achieves the best of both worlds: it represents minutes-long human activities and learns their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes, and its edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on the Epic-Kitchen and Breakfast benchmarks. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
In order-aware methods, the main attention is paid to learning structured or ordered temporal patterns in videos, for example with LSTMs @cite_14 @cite_55, CRFs @cite_27, and 3D CNNs @cite_41 @cite_69 @cite_51 @cite_59 @cite_68. Others propose temporal modeling layers on top of backbone CNNs, as in Temporal-Segments @cite_12, Temporal-Relations @cite_38 and Rank-Pool @cite_17. Order-aware methods achieve substantial improvements over their orderless counterparts on standard benchmarks @cite_50 @cite_37 @cite_61. Nevertheless, both the temporal footprint and the computational cost remain the main bottlenecks to learning long-range temporal dependencies. The best methods @cite_56 @cite_68 can model at most 1k frames ( @math 30 seconds), which is no match for minutes-long videos.
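As a contrast to orderless pooling, a toy order-aware descriptor in the spirit of Temporal-Relations can be built from ordered frame pairs. The tiny hand-set MLP below is purely illustrative, chosen so that reversing the frame order visibly changes the output:

```python
import numpy as np

def pairwise_relations(frames, w1, w2):
    """Sum a small ReLU-MLP score over all ordered frame pairs (i < j);
    because the pairs are ordered, the result depends on frame order."""
    out = 0.0
    t = len(frames)
    for i in range(t):
        for j in range(i + 1, t):               # ordered pairs only: i before j
            pair = np.concatenate([frames[i], frames[j]])
            out = out + w2 @ np.maximum(w1 @ pair, 0.0)
    return float(out)
```

With weights that score relu(f_i - f_j), an increasing sequence of frame features gives zero while its reversal does not, unlike the permutation-invariant pooled descriptors above.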
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_69", "@cite_14", "@cite_37", "@cite_41", "@cite_55", "@cite_56", "@cite_27", "@cite_59", "@cite_50", "@cite_68", "@cite_51", "@cite_12", "@cite_17" ], "mid": [ "", "2950870964", "", "2594590268", "", "2963247196", "", "2963722382", "2886620625", "", "2619947201", "", "", "2950971447", "2193384753" ], "abstract": [ "", "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.", "", "We introduce a system that recognizes concurrent activities from real-world data captured by multiple sensors of different types. The recognition is achieved in two steps. First, we extract spatial and temporal features from the multimodal data. 
We feed each datatype into a convolutional neural network that extracts spatial features, followed by a long-short term memory network that extracts temporal information in the sensory data. The extracted features are then fused for decision making in the second step. Second, we achieve concurrent activity recognition with a single classifier that encodes a binary output vector in which elements indicate whether the corresponding activity types are currently in progress. We tested our system with three datasets from different domains recorded using different sensors and achieved performance comparable to existing systems designed specifically for those domains. Our system is the first to address the concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train and easy to deploy.", "", "We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities, accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS’14. We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. 
Our code is available at http: ai.bu.edu r-c3d", "", "", "Local features at neighboring spatial positions in feature maps have high correlation since their receptive fields are often overlapped. Self-attention usually uses the weighted sum (or other functions) with internal elements of each local feature to obtain its weight score, which ignores interactions among local features. To address this, we propose an effective interaction-aware self-attention model inspired by PCA to learn attention maps. Furthermore, since different layers in a deep network capture feature maps of different scales, we use these feature maps to construct a spatial pyramid and then utilize multi-scale information to obtain more accurate attention scores, which are used to weight the local features in all spatial positions of feature maps to calculate attention maps. Moreover, our spatial pyramid attention is unrestricted to the number of its input feature maps so it is easily extended to a spatio-temporal version. Finally, our model is embedded in general CNNs to form end-to-end attention networks for action classification. Experimental results show that our method achieves the state-of-the-art results on the UCF101, HMDB51 and untrimmed Charades.", "", "We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. 
We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.", "", "", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( @math ) and UCF101 ( @math ). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices.", "We propose a function-based temporal pooling method that captures the latent structure of the video sequence data - e.g., how frame-level features evolve over time in a video. We show how the parameters of a function that has been fit to the video data can serve as a robust new video representation. As a specific example, we learn a pooling function via ranking machines. By learning to rank the frame-level features of a video in chronological order, we obtain a new representation that captures the video-wide temporal dynamics of a video, suitable for action recognition. Other than ranking functions, we explore different parametric models that could also explain the temporal changes in videos. 
The proposed functional pooling methods, and rank pooling in particular, is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We evaluate our method on various benchmarks for generic action, fine-grained action and gesture recognition. Results show that rank pooling brings an absolute improvement of 7-10 average pooling baseline. At the same time, rank pooling is compatible with and complementary to several appearance and local motion based methods and features, such as improved trajectories and deep learning features." ] }
1905.05143
2943833595
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, as CNN and Non-Local. While successful in learning temporal concepts, they are short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method to achieve the best of two worlds: represent minutes-long human activities and learn their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes and edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on benchmarks: Epic-Kitchen and Breakfast. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
Short-range Actions vs. Long-range Activities. A huge body of work is dedicated to recognizing human actions that take a few seconds to unfold. Examples of well-established benchmarks are Kinetics @cite_50 , Sports-1M @cite_67 , YouTube-8M @cite_29 , Moments in Time @cite_28 , 20B-Something @cite_34 and AVA @cite_30 . For these short- or mid-range actions, @cite_65 demonstrates that a few frames suffice to successfully recognize them.
{ "cite_N": [ "@cite_30", "@cite_67", "@cite_28", "@cite_29", "@cite_65", "@cite_50", "@cite_34" ], "mid": [ "2949827582", "2016053056", "2962711930", "2524365899", "2964260135", "2619947201", "2949901290" ], "abstract": [ "This paper introduces a video dataset of spatio-temporally localized Atomic Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual actions in 430 15-minute video clips, where actions are localized in space and time, resulting in 1.58M action labels with multiple labels per person occurring frequently. The key characteristics of our dataset are: (1) the definition of atomic visual actions, rather than composite actions; (2) precise spatio-temporal annotations with possibly multiple annotations for each person; (3) exhaustive annotation of these atomic actions over 15-minute video clips; (4) people temporally linked across consecutive segments; and (5) using movies to gather a varied set of action representations. This departs from existing datasets for spatio-temporal action recognition, which typically provide sparse annotations for composite actions in short video clips. We will release the dataset publicly. AVA, with its realistic scene and action complexity, exposes the intrinsic difficulty of action recognition. To benchmark this, we present a novel approach for action localization that builds upon the current state-of-the-art methods, and demonstrates better performance on JHMDB and UCF101-24 categories. While setting a new state of the art on existing datasets, the overall results on AVA are low at 15.6 mAP, underscoring the need for developing new approaches for video understanding.", "Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. 
We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3 to 63.9 ), but only a surprisingly modest improvement compared to single-frame models (59.3 to 60.9 ). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3 up from 43.9 ).", "We present the Moments in Time Dataset, a large-scale human-annotated collection of one million short videos corresponding to dynamic events unfolding within three seconds. Modeling the spatial-audio-temporal dynamics even for actions occurring in 3 second videos poses many challenges: meaningful events do not include only people, but also objects, animals, and natural phenomena; visual and auditory events can be symmetrical or not in time (\"opening\" means \"closing\" in reverse order), and transient or sustained. We describe the annotation process of our dataset (each video is tagged with one action or activity label among 339 different classes), analyze its scale and diversity in comparison to other large-scale video datasets for action recognition, and report results of several baseline models addressing separately, and jointly, three modalities: spatial, temporal and auditory. The Moments in Time dataset, designed to have a large coverage and diversity of events in both visual and auditory modalities, can serve as a new challenge to develop models that scale to the level of complexity and abstract reasoning that a human processes on a daily basis.", "Many recent advancements in Computer Vision are attributed to large datasets. 
Open-source software packages for Machine Learning and inexpensive commodity hardware have reduced the barrier of entry for exploring novel approaches at scale. It is possible to train models over millions of examples within a few days. Although large-scale datasets exist for image understanding, such as ImageNet, there are no comparable size video classification datasets. In this paper, we introduce YouTube-8M, the largest multi-label video classification dataset, composed of 8 million videos (500K hours of video), annotated with a vocabulary of 4800 visual entities. To get the videos and their labels, we used a YouTube video annotation system, which labels videos with their main topics. While the labels are machine-generated, they have high-precision and are derived from a variety of human-based signals including metadata and query click signals. We filtered the video labels (Knowledge Graph entities) using both automated and manual curation strategies, including asking human raters if the labels are visually recognizable. Then, we decoded each video at one-frame-per-second, and used a Deep CNN pre-trained on ImageNet to extract the hidden representation immediately prior to the classification layer. Finally, we compressed the frame features and make both the features and video-level labels available for download. We trained various (modest) classification models on the dataset, evaluated them using popular evaluation metrics, and report them as baselines. Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow. We plan to release code for training a TensorFlow model and for computing metrics.", "What is the right way to reason about human activities? What directions forward are most promising? In this work, we analyze the current state of human activity understanding in videos. The goal of this paper is to examine datasets, evaluation metrics, algorithms, and potential future directions. 
We look at the qualitative attributes that define activities such as pose variability, brevity, and density. The experiments consider multiple state-of-the-art algorithms and multiple datasets. The results demonstrate that while there is inherent ambiguity in the temporal extent of activities, current datasets still permit effective benchmarking. We discover that fine-grained understanding of objects and pose when combined with temporal reasoning is likely to yield substantial improvements in algorithmic accuracy. We present the many kinds of information that will be needed to achieve substantial gains in activity understanding: objects, verbs, intent, and sequential reasoning. The software and additional information will be made available to provide other researchers detailed diagnostics to understand their own algorithms.", "We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.", "Neural networks trained on datasets such as ImageNet have led to major advances in visual object classification. One obstacle that prevents networks from reasoning more deeply about complex scenes and situations, and from integrating visual knowledge with natural language, like humans do, is their lack of common sense knowledge about the physical world. 
Videos, unlike still images, contain a wealth of detailed information about the physical world. However, most labelled video datasets represent high-level concepts rather than detailed physical aspects about actions and scenes. In this work, we describe our ongoing collection of the \"something-something\" database of video prediction tasks whose solutions require a common sense understanding of the depicted situation. The database currently contains more than 100,000 videos across 174 classes, which are defined as caption-templates. We also describe the challenges in crowd-sourcing this data at scale." ] }
1905.05143
2943833595
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, as CNN and Non-Local. While successful in learning temporal concepts, they are short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method to achieve the best of two worlds: represent minutes-long human activities and learn their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes and edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on benchmarks: Epic-Kitchen and Breakfast. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
Other strands of work shift their attention to human activities that take minutes or even an hour to unfold. Cooking-related activities are good examples, as in YouCook @cite_63 , Breakfast @cite_0 , Epic-Kitchens @cite_49 , MPII Cooking @cite_35 or 50-Salads @cite_31 . Other examples include instructional videos: Charades @cite_57 , and unscripted activities: EventNet @cite_45 , Multi-THUMOS @cite_52 .
{ "cite_N": [ "@cite_35", "@cite_52", "@cite_0", "@cite_57", "@cite_45", "@cite_49", "@cite_63", "@cite_31" ], "mid": [ "2156798932", "2952835694", "2099614498", "2337252826", "2035607533", "2964242760", "2784025607", "2109698606" ], "abstract": [ "Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.", "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. 
We define a novel variant of long short-term memory (LSTM) deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.", "This paper describes a framework for modeling human activities as temporally structured processes. Our approach is motivated by the inherently hierarchical nature of human activities and the close correspondence between human actions and speech: We model action units using Hidden Markov Models, much like words in speech. These action units then form the building blocks to model complex human activities as sentences using an action grammar. To evaluate our approach, we collected a large dataset of daily cooking activities: The dataset includes a total of 52 participants, each performing a total of 10 cooking activities in multiple real-life kitchens, resulting in over 77 hours of video footage. We evaluate the HTK toolkit, a state-of-the-art speech recognition engine, in combination with multiple video feature descriptors, for both the recognition of cooking activities (e.g., making pancakes) as well as the semantic parsing of videos into action units (e.g., cracking eggs). Our results demonstrate the benefits of structured temporal generative approaches over existing discriminative approaches in coping with the complexity of human daily life activities.", "Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? 
We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for computer vision community.", "Event-specific concepts are the semantic concepts specifically designed for the events of interest, which can be used as a mid-level representation of complex events in videos. Existing methods only focus on defining event-specific concepts for a small number of pre-defined events, but cannot handle novel unseen events. This motivates us to build a large scale event-specific concept library that covers as many real-world events and their concepts as possible. Specifically, we choose WikiHow, an online forum containing a large number of how-to articles on human daily life events. We perform a coarse-to-fine event discovery process and discover 500 events from WikiHow articles. Then we use each event name as query to search YouTube and discover event-specific concepts from the tags of returned videos. 
After an automatic filter process, we end up with 95,321 videos and 4,490 concepts. We train a Convolutional Neural Network (CNN) model on the 95,321 videos over the 500 events, and use the model to extract deep learning feature from video content. With the learned deep learning feature, we train 4,490 binary SVM classifiers as the event-specific concept library. The concepts and events are further organized in a hierarchical structure defined by WikiHow, and the resultant concept library is called EventNet. Finally, the EventNet concept library is used to generate concept based representation of event videos. To the best of our knowledge, EventNet represents the first video event ontology that organizes events and their concepts into a semantic structure. It offers great potential for event retrieval and browsing. Extensive experiments over the zero-shot event retrieval task when no training samples are available show that the proposed EventNet concept library consistently and significantly outperforms the state-of-the-art (such as the 20K ImageNet concepts trained with CNN) by a large margin up to 207 . We will also show that EventNet structure can help users find relevant concepts for novel event queries that cannot be well handled by conventional text based semantic analysis alone. The unique two-step approach of first applying event detection models followed by detection of event-specific concepts also provides great potential to improve the efficiency and accuracy of Event Recounting since only a very small number of event-specific concept classifiers need to be fired after event detection.", "", "", "This paper introduces a publicly available dataset of complex activities that involve manipulative gestures. The dataset captures people preparing mixed salads and contains more than 4.5 hours of accelerometer and RGB-D video data, detailed annotations, and an evaluation protocol for comparison of activity recognition algorithms. 
Providing baseline results for one possible activity recognition task, this paper further investigates modality fusion methods at different stages of the recognition pipeline: (i) prior to feature extraction through accelerometer localization, (ii) at feature level via feature concatenation, and (iii) at classification level by combining classifier outputs. Empirical evaluation shows that fusing information captured by these sensor types can considerably improve recognition performance." ] }
1905.05143
2943833595
Many human activities take minutes to unfold. To represent them, related works opt for statistical pooling, which neglects the temporal structure. Others opt for convolutional methods, as CNN and Non-Local. While successful in learning temporal concepts, they are short of modeling minutes-long temporal dependencies. We propose VideoGraph, a method to achieve the best of two worlds: represent minutes-long human activities and learn their underlying temporal structure. VideoGraph learns a graph-based representation for human activities. The graph, its nodes and edges are learned entirely from video datasets, making VideoGraph applicable to problems without node-level annotation. The result is improvements over related works on benchmarks: Epic-Kitchen and Breakfast. Besides, we demonstrate that VideoGraph is able to learn the temporal structure of human activities in minutes-long videos.
Graph-based Representation. Graph-based representations have previously been used for storytelling @cite_60 @cite_32 and video retrieval @cite_16 . Several works use graph convolutions to learn concepts and/or relationships from data @cite_71 @cite_20 @cite_24 . Recently, graph convolutions have been applied to image understanding @cite_23 , video understanding @cite_3 @cite_53 @cite_36 @cite_7 and question answering @cite_44 . Despite their success in learning structured representations from video datasets, the main limitation of graph-convolution methods is that they require the graph nodes and/or edges to be known a priori. Consequently, these methods are hard to apply when node- or frame-level annotations are not available.
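The graph convolution these works build on propagates node features over a normalized adjacency matrix, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). A minimal numpy sketch of one such layer follows; note that the adjacency matrix `A` must be given, which is exactly the a-priori graph structure whose absence limits these methods (the helper name `gcn_layer` is illustrative, not an API from the cited works):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer over known graph structure.

    A: (N, N) adjacency matrix of the graph (assumed known a priori).
    H: (N, D_in) node features.
    W: (D_in, D_out) learnable weight matrix.
    Returns (N, D_out) updated node features.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalisation
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU

# toy graph: a 4-node chain with 3-dim features, projected to 2 dims
rng = np.random.RandomState(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.randn(4, 3)
W = rng.randn(3, 2)
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2)
```

Each output row mixes a node's features with those of its neighbors, so stacking such layers spreads information along graph edges; without annotated nodes or edges there is nothing to build `A` from, which motivates learning the graph itself as VideoGraph does.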
{ "cite_N": [ "@cite_7", "@cite_60", "@cite_36", "@cite_53", "@cite_32", "@cite_3", "@cite_24", "@cite_44", "@cite_71", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "", "1984899418", "", "", "", "2806331055", "2964015378", "2903401627", "2964145825", "2902591409", "2039380761", "" ], "abstract": [ "", "In this paper, we investigate an approach for reconstructing storyline graphs from large-scale collections of Internet images, and optionally other side information such as friendship graphs. The storyline graphs can be an effective summary that visualizes various branching narrative structure of events or activities recurring across the input photo sets of a topic class. In order to explore further the usefulness of the storyline graphs, we leverage them to perform the image sequential prediction tasks, from which photo recommendation applications can benefit. We formulate the storyline reconstruction problem as an inference of sparse time-varying directed graphs, and develop an optimization algorithm that successfully addresses a number of key challenges of Web-scale problems, including global optimality, linear complexity, and easy parallelization. With experiments on more than 3.3 millions of images of 24 classes and user studies via Amazon Mechanical Turk, we show that the proposed algorithm improves other candidate methods for both storyline reconstruction and image prediction tasks.", "", "", "", "How do humans recognize the action “opening a book”? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. 
These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on the Charades and Something-Something datasets. Especially for Charades with complex environments, we obtain a huge (4.4 ) gain when our model is applied in complex environments.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.", "Understanding web instructional videos is an essential branch of video understanding in two aspects. First, most existing video methods focus on short-term actions for a-few-second-long video clips; these methods are not directly applicable to long videos. Second, unlike unconstrained long videos, e.g., movies, instructional videos are more structured in that they have step-by-step procedure constraining the understanding task. In this paper, we study reasoning on instructional videos via question-answering (QA). Surprisingly, it has not been an emphasis in the video community despite its rich applications. We thereby introduce YouQuek, an annotated QA dataset for instructional videos based on the recent YouCook2. 
The questions in YouQuek are not limited to cues on one frame but related to logical reasoning in the temporal dimension. Observing the lack of effective representations for modeling long videos, we propose a set of carefully designed models including a novel Recurrent Graph Convolutional Network (RGCN) that captures both temporal order and relation information. Furthermore, we study multiple modalities including description and transcripts for the purpose of boosting video understanding. Extensive experiments on YouQuek suggest that RGCN performs the best in terms of QA accuracy and a better performance is gained by introducing human annotated description.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. 
After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet, ResNeXt, SE-Net and DPN, for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.", "This paper introduces Videograph, a new tool for video mining and visu alizing the structure of the plot of a video sequence. The main idea is to lstitchr together similar scenes which are apart in time. We give a fast algorithm to do stitching and we show case studies, where our approach (a) gives good features for classification (91 accuracy), and (b) results in Videographs which reveal the logical structure of the plot of the video clips.", "" ] }