1709.05254
2756166446
Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits or fraud investigations. Nowadays, the majority of applied techniques refer to handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios and fraudsters gradually find ways to circumvent them. To overcome this disadvantage, and inspired by the recent success of deep learning, we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network's reconstruction error obtainable for a journal entry, regularized by the entry's individual attribute probabilities, can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries show the effectiveness of the approach, resulting in high f1-scores of 32.93 (dataset A) and 16.95 (dataset B) and fewer false positive alerts compared to state-of-the-art baseline methods. Initial feedback received from chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies.
The authors of @cite_3 and @cite_37 created transaction profiles of SAP ERP users. The profiles are derived from journal-entry-based user activity patterns recorded in two SAP R/3 ERP systems in order to detect suspicious user behavior and segregation-of-duties violations. Similarly, SAP R/3 system audit logs were used to detect known fraud scenarios and collusion fraud via a "red-flag" based matching of fraud scenarios @cite_6 .
{ "cite_N": [ "@cite_37", "@cite_6", "@cite_3" ], "mid": [ "2077180684", "1498749478", "2117732769" ], "abstract": [ "Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection, to combat fraud. In this paper we present a role mining inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour, and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph to represent relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R 3, presently the predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach.", "ERP systems generally implement controls to prevent certain common kinds of fraud. In addition however, there is an imperative need for detection of more sophisticated patterns of fraudulent activity as evidenced by the legal requirement for company audits and the common incidence of fraud. This paper describes the design and implementation of a framework for detecting patterns of fraudulent activity in ERP systems. We include the description of six fraud scenarios and the process of specifying and detecting the occurrence of those scenarios in ERP user log data using the prototype software which we have developed. 
The test results for detecting these scenarios in log data have been verified and confirm the success of our approach which can be generalized to ERP systems in general.", "Abstract. Despite all attempts to prevent fraud, it continues to be a major threat to industry and government. Traditionally, organizations have focused on fraud prevention rather than detection, to combat fraud. In this paper we present a role mining inspired approach to represent user behaviour in Enterprise Resource Planning (ERP) systems, primarily aimed at detecting opportunities to commit fraud or potentially suspicious activities. We have adapted an approach which uses set theory to create transaction profiles based on analysis of user activity records. Based on these transaction profiles, we propose a set of (1) anomaly types to detect potentially suspicious user behaviour and (2) scenarios to identify inadequate segregation of duties in an ERP environment. In addition, we present two algorithms to construct a directed acyclic graph to represent relationships between transaction profiles. Experiments were conducted using a real dataset obtained from a teaching environment and a demonstration dataset, both using SAP R 3, presently the most predominant ERP system. The results of this empirical research demonstrate the effectiveness of the proposed approach." ] }
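The abstract above scores journal entries by an autoencoder's reconstruction error. As an illustrative sketch only (toy data and dimensions are made up; the paper's deep, attribute-probability-regularized network is more elaborate), the core idea can be shown with a linear autoencoder whose optimum has a closed form via the SVD of the normal data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for encoded journal entries: frequent attribute patterns
# (one-hot codes 0 and 1) plus a few rare, anomalous ones (codes 5-7).
normal = np.eye(8)[rng.integers(0, 2, size=200)] + 0.02 * rng.normal(size=(200, 8))
anomalies = np.eye(8)[np.array([5, 6, 7])] + 0.02 * rng.normal(size=(3, 8))
X = np.vstack([normal, anomalies])

# A linear autoencoder with a 2-dim bottleneck has a closed-form optimum:
# encode/decode with the top-2 right singular vectors of the normal data.
_, _, Vt = np.linalg.svd(normal, full_matrices=False)
V = Vt[:2].T                      # 8 x 2 encoder; the decoder is its transpose

# Anomaly score = per-entry reconstruction error.
recon = X @ V @ V.T
scores = ((X - recon) ** 2).sum(axis=1)

# The three injected anomalies lie outside the learned subspace and
# therefore receive the highest reconstruction errors.
ranked = np.argsort(scores)[::-1]
print(sorted(ranked[:3].tolist()))
```

The anomalous rows reconstruct poorly because their attribute codes never appear in the training data, which is exactly the adaptivity argument the abstract makes.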
The authors of @cite_8 used latent class clustering to conduct uni- and multivariate clustering of SAP ERP purchase order transactions. Transactions that deviate significantly from the cluster centroids are flagged as anomalous and proposed for detailed review by auditors. The approach was enhanced in @cite_39 by means of process mining to detect deviating process flows in an organization's procure-to-pay process.
{ "cite_N": [ "@cite_39", "@cite_8" ], "mid": [ "2150960152", "2074923226" ], "abstract": [ "Corporate fraud these days represents a huge cost to our economy. In the paper we address one specific type of corporate fraud, internal transaction fraud. Given the omnipresence of stored history logs, the field of process mining rises as an adequate answer to mitigating internal transaction fraud. Process mining diagnoses processes by mining event logs. This way we can expose opportunities to commit fraud in the followed process. In this paper we report on an application of process mining at a case company. The procurement process was selected as example for internal transaction fraud mitigation. The results confirm the contribution process mining can provide to business practice.", "Corporate fraud represents a huge cost to the current economy. Academic literature has demonstrated how data mining techniques can be of value in the fight against fraud. This research has focused on fraud detection, mostly in a context of external fraud. In this paper, we discuss the use of a data mining approach to reduce the risk of internal fraud. Reducing fraud risk involves both detection and prevention. Accordingly, a descriptive data mining strategy is applied as opposed to the widely used prediction data mining techniques in the literature. The results of using a multivariate latent class clustering algorithm to a case company's procurement data suggest that applying this technique in a descriptive data mining approach is useful in assessing the current risk of internal fraud. The same results could not be obtained by applying a univariate analysis." ] }
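The flag-by-deviation idea above can be sketched minimally. Note the cited work uses latent class (model-based) clustering; the k-means stand-in below, with entirely made-up purchase-order features, only illustrates flagging transactions far from their cluster centroid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy purchase-order features (e.g. scaled amount and quantity):
# two dense groups of routine transactions plus two deviating ones.
routine = np.vstack([
    rng.normal([1.0, 1.0], 0.1, size=(50, 2)),
    rng.normal([5.0, 2.0], 0.1, size=(50, 2)),
])
deviating = np.array([[3.0, 8.0], [9.0, 9.0]])
X = np.vstack([routine, deviating])

# Plain k-means with one deterministic seed point per group.
centroids = X[[0, 50]]
for _ in range(20):
    dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(2)])

# Flag transactions far from their cluster centroid for auditor review.
dev = np.linalg.norm(X - centroids[labels], axis=1)
flagged = np.where(dev > dev.mean() + 3 * dev.std())[0]
print(flagged)
```

Rows 100 and 101 (the deviating transactions) end up several standard deviations from their nearest centroid and are the only ones flagged.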
Concluding from the reviewed literature, the majority of references draw either (1) on historical accounting and forensic knowledge about various "red flags" and fraud schemes or (2) on traditional, non-deep learning techniques. As a result, and in agreement with @cite_36 , we see a demand for novel unsupervised approaches capable of detecting so far unknown scenarios of fraudulent journal entries.
{ "cite_N": [ "@cite_36" ], "mid": [ "2082621278" ], "abstract": [ "This survey paper categorizes, compares, and summarizes the data set, algorithm and performance measurement in almost all published technical and review articles in automated accounting fraud detection. Most researches regard fraud companies and non-fraud companies as data subjects, Eigenvalue covers auditor data, company governance data, financial statement data, industries, trading data and other categories. Most data in earlier research were auditor data; Later research establish model by using sharing data and public statement data. Company governance data have been widely used. It is generally believed that ratio data is more effective than accounting data; Seldom research on time Series Data Mining were conducted. The retrieved literature used mining algorithms including statistical test, regression analysis, neural networks, decision tree, Bayesian network, and stack variables etc.. Regression Analysis is widely used on hiding data. Generally the detecting effect and accuracy of NN are superior to regression model. General conclusion is that model detecting is better than auditor detecting rate without assisting. There is a need to introduce other algorithms of no-tag data mining. Owing to the small size of fraud samples, some literature reached conclusion based on training samples and may overestimated the effect of model." ] }
Nowadays, autoencoder networks are widely used in image classification @cite_45 , machine translation @cite_42 and speech processing @cite_38 for their unsupervised data compression capabilities. To the best of our knowledge, autoencoder (replicator) networks were first proposed for anomaly detection in @cite_22 , @cite_19 .
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_42", "@cite_19", "@cite_45" ], "mid": [ "1519114972", "1876967670", "2950682695", "2111160323", "2100495367" ], "abstract": [ "This paper deals with outlier modeling within a very special framework: a segment-based speech recognizer. The recognizer is built on a neural net that, besides classifying speech segments, has to identify outliers as well. One possibility is to artificially generate outlier samples, but this is tedious, error-prone and significantly increases the training time. This study examines the alternative of applying a replicator neural net for this task, originally proposed for outlier modeling in data mining. Our findings show that with a replicator net the recognizer is capable of a very similar performance, but this time without the need for a large amount of outlier data.", "We consider the problem of finding outliers in large multivariate databases. Outlier detection can be applied during the data cleansing process of data mining to identify problems with the data itself, and to fraud detection where groups of outliers are often of particular interest. We use replicator neural networks (RNNs) to provide a measure of the outlyingness of data records. The performance of the RNNs is assessed using a ranked score measure. The effectiveness of the RNNs for outlier detection is demonstrated on two publicly available databases.", "Cross-language learning allows us to use training data from one language to build models for a different language. Many approaches to bilingual learning require that we have word-level alignment of sentences from parallel corpora. In this work we explore the use of autoencoder-based methods for cross-language learning of vectorial word representations that are aligned between two languages, while not relying on word-level alignments. 
We show that by simply learning to reconstruct the bag-of-words representations of aligned sentences, within and between languages, we can in fact learn high-quality representations and do without word alignments. Since training autoencoders on word observations presents certain computational issues, we propose and compare different variations adapted to this setting. We also propose an explicit correlation maximizing regularizer that leads to significant improvement in the performance. We empirically investigate the success of our approach on the problem of cross-language test classification, where a classifier trained on a given language (e.g., English) must learn to generalize to a different language (e.g., German). These experiments demonstrate that our approaches are competitive with the state-of-the-art, achieving up to 10-14 percentage point improvements over the best reported results on this task.", "We have proposed replicator neural networks (RNNs) for outlier detection. We compare RNN for outlier detection with three other methods using both publicly available statistical datasets (generally small) and data mining datasets (generally much larger and generally real data). The smaller datasets provide insights into the relative strengths and weaknesses of RNNs. The larger datasets in particular test scalability and practicality of application.", "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data." ] }
1709.05342
2755148105
In this paper, we propose and evaluate the application of unsupervised machine learning to anomaly detection for a Cyber-Physical System (CPS). We compare two methods: Deep Neural Networks (DNN) adapted to time series data generated by a CPS, and one-class Support Vector Machines (SVM). These methods are evaluated against data from the Secure Water Treatment (SWaT) testbed, a scaled-down but fully operational raw water purification plant. For both methods, we first train detectors using a log generated by SWaT operating under normal conditions. Then, we evaluate the performance of both methods using a log generated by SWaT operating under 36 different attack scenarios. We find that our DNN generates fewer false positives than our one-class SVM while our SVM detects slightly more anomalies. Overall, our DNN has a slightly better F measure than our SVM. We discuss the characteristics of the DNN and one-class SVM used in this experiment, and compare the advantages and disadvantages of the two methods.
There is a large body of work on simulation and model-based anomaly detection for CPSs, e.g. @cite_11 @cite_5 @cite_7 @cite_0 @cite_4 @cite_9 @cite_17 . However, these approaches require prior knowledge of the system's configuration, in addition to operation logs.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_17", "@cite_0", "@cite_5", "@cite_11" ], "mid": [ "1592367452", "1495862233", "2129039806", "2145546780", "2109559642", "2158469273", "2963459078" ], "abstract": [ "Model-based diagnosis and mode estimation capabilities excel at diagnosing systems whose symptoms are clearly distinguished from normal behavior. A strength of mode estimation, in particular, is its ability to track a system's discrete dynamics as it moves between different behavioral modes. However, often failures bury their symptoms amongst the signal noise, until their effects become catastrophic.We introduce a hybrid mode estimation system that extracts mode estimates from subtle symptoms. First, we introduce a modeling formalism, called concurrent probabilistic hybrid automata (cPHA), that merge hidden Markov models (HMM) with continuous dynamical system models. Second, we introduce hybrid estimation as a method for tracking and diagnosing cPHA, by unifying traditional continuous state observers with HMM belief update. Finally, we introduce a novel, any-time, any-space algorithm for computing approximate hybrid estimates.", "This article presents a number of complementary algorithms for detecting faults on-board operating robots, where a fault is defined as a deviation from expected behavior. The algorithms focus on faults that cannot directly be detected from current sensor values but require inference from a sequence of time-varying sensor values. Each algorithm provides an independent improvement over the basic approach. These improvements are not mutually exclusive, and the algorithms may be combined to suit the application domain. All the approaches presented require dynamic models representing the behavior of each of the fault and operational states. These models can be built from analytical models of the robot dynamics, data from simulation, or from the real robot. 
All the approaches presented detect faults from a finite number of known fault conditions, although there may potentially be a very large number of these faults.", "Many networked embedded sensing and control systems can be modeled as hybrid systems with interacting continuous and discrete dynamics. These systems present significant challenges for monitoring and diagnosis. Many existing model-based approaches focus on diagnostic reasoning assuming appropriate fault signatures have been generated. However, an important missing piece is the integration of model-based techniques with the acquisition and processing of sensor signals and the modeling of faults to support diagnostic reasoning. This paper addresses key modeling and computational problems at the interface between model-based diagnosis techniques and signature analysis to enable the efficient detection and isolation of incipient and abrupt faults in hybrid systems. A hybrid automata model that parameterizes abrupt and incipient faults is introduced. Based on this model, an approach for diagnoser design is presented. The paper also develops a novel mode estimation algorithm that uses model-based prediction to focus distributed processing signal algorithms. Finally, the paper describes a diagnostic system architecture that integrates the modeling, prediction, and diagnosis components. The implemented architecture is applied to fault diagnosis of a complex electro-mechanical machine, the Xerox DC265 printer, and the experimental results presented validate the approach. A number of design trade-offs that were made to support implementation of the algorithms for online applications are also described.", "Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. 
Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.", "Techniques for diagnosing faults in hybrid systems that combine digital (discrete) supervisory controllers with analog (continuous) plants need to be different from those used for discrete or continuous systems. This paper presents a methodology for online tracking and diagnosis of hybrid systems. We demonstrate the effectiveness of the approach with experiments conducted on the fuel-transfer system of fighter aircraft", "Cyber-secure networked control is modeled, analyzed, and experimentally illustrated in this paper. An attack space defined by the adversary's system knowledge, disclosure, and disruption resources is introduced. Adversaries constrained by these resources are modeled for a networked control system architecture. It is shown that attack scenarios corresponding to replay, zero dynamics, and bias injection attacks can be analyzed using this framework. 
An experimental setup based on a quadruple-tank process controlled over a wireless network is used to illustrate the attack scenarios, their consequences, and potential counter-measures.", "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study." ] }
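The abstract above compares the DNN and one-class SVM detectors by false positives and F measure. As a small illustration with hypothetical per-time-step labels (1 = attack, 0 = normal), these scores are computed from a detector's alert sequence and the ground truth as follows:

```python
# Hypothetical per-time-step labels: 1 = attack/anomaly, 0 = normal.
truth  = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
alerts = [0, 1, 1, 1, 0, 0, 0, 1, 0, 0]   # detector output

# Count true positives, false positives, and false negatives.
tp = sum(t == 1 and a == 1 for t, a in zip(truth, alerts))
fp = sum(t == 0 and a == 1 for t, a in zip(truth, alerts))
fn = sum(t == 1 and a == 0 for t, a in zip(truth, alerts))

precision = tp / (tp + fp)                         # fraction of alerts that are real
recall = tp / (tp + fn)                            # fraction of attacks detected
f1 = 2 * precision * recall / (precision + recall) # harmonic mean of the two
print(precision, recall, f1)  # 0.75 0.75 0.75
```

A detector with fewer false positives raises precision, while one that "detects slightly more anomalies" raises recall; the F measure trades the two off, which is why it is the headline comparison in the abstract.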
There is a proposal @cite_25 to obtain a model for anomaly detection by combining machine learning with mutation testing, which requires access to the source code of the control elements. In this approach, the detector is trained on correct and incorrect behaviors, the latter generated by randomly injecting faults into the control code. So far, only a preliminary investigation of the approach has been performed.
{ "cite_N": [ "@cite_25" ], "mid": [ "2511988939" ], "abstract": [ "Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them." ] }
Jones et al. @cite_13 propose an SVM-like algorithm that finds a signal temporal logic (STL) formula describing the known region of behaviors. An advantage of this approach is that it often yields a readable description of the known behaviors. However, if the system behavior does not admit a short description in STL, the method will not work; because SWaT is dynamic, non-linear, stochastic, and high-dimensional, a short description is unlikely. Moreover, their tightness function is heuristic and no justification for it is given.
{ "cite_N": [ "@cite_13" ], "mid": [ "2086359741" ], "abstract": [ "As the complexity of cyber-physical systems increases, so does the number of ways an adversary can disrupt them. This necessitates automated anomaly detection methods to detect possible threats. In this paper, we extend our recent results in the field of inference via formal methods to develop an unsupervised learning algorithm. Our procedure constructs from data a signal temporal logic (STL) formula that describes normal system behavior. Trajectories that do not satisfy the learned formula are flagged as anomalous. STL can be used to formulate properties such as “If the train brakes within 500 m of the platform at a speed of 50 km hr, then it will stop in at least 30 s and at most 50 s.” STL gives a more human-readable representation of behavior than classifiers represented as surfaces in high-dimensional feature spaces. STL formulae can also be used for early detection via online monitoring and for anomaly mitigation via formal synthesis. We demonstrate the power of our method with a physical model of a train's brake system. To our knowledge, this paper is the first instance of formal methods being applied to anomaly detection." ] }
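The cited abstract gives an STL property of the form "if the signal crosses a trigger, it must return to a safe level within a time bound". As a loose sketch only (the trace, thresholds, and deadline below are invented, and full STL semantics with real-valued robustness is richer than this boolean check), monitoring such a property over a sampled trace looks like:

```python
# Toy monitor for one STL-style bounded-response property over a 1 Hz trace:
# "whenever the signal exceeds `trigger`, it must drop below `safe`
# within `deadline` subsequent samples." All values are made up.
def satisfies(trace, trigger=8.0, safe=2.0, deadline=3):
    for i, v in enumerate(trace):
        if v > trigger:
            window = trace[i + 1 : i + 1 + deadline]
            if not any(w < safe for w in window):
                return False   # violation: no timely return to the safe level
    return True

ok_trace  = [1.0, 9.0, 5.0, 1.5, 1.0]   # exceeds trigger, recovers in time
bad_trace = [1.0, 9.0, 5.0, 4.0, 3.5]   # never returns below the safe level
print(satisfies(ok_trace), satisfies(bad_trace))  # True False
```

Trajectories for which the monitor returns False would be flagged as anomalous, which is the readable-description advantage the paragraph above attributes to STL-based detection.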
Anomaly detection, beyond the specific application to CPSs, is a well-studied area of research (see, e.g., the survey @cite_27 and textbook @cite_26 ). Harada et al. @cite_10 apply one of the most widely used anomaly detection methods, the local outlier factor (LOF) @cite_21 , to an automated aquarium management system and detect failures of mutual exclusion. However, LOF finds outliers without prior knowledge of the normal behaviors; because the normal behaviors are known in our case, LOF is not suitable for our task.
{ "cite_N": [ "@cite_27", "@cite_26", "@cite_10", "@cite_21" ], "mid": [ "2122646361", "", "2583152362", "2144182447" ], "abstract": [ "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "", "Detecting anomalies of a cyber physical system (CPS), which is a complex system consisting of both physical and software parts, is important because a CPS often operates autonomously in an unpredictable environment. However, because of the ever-changing nature and lack of a precise model for a CPS, detecting anomalies is still a challenging task. To address this problem, we propose applying an outlier detection method to a CPS log. By using a log obtained from an actual aquarium management system, we evaluated the effectiveness of our proposed method by analyzing outliers that it detected. By investigating the outliers with the developer of the system, we confirmed that some outliers indicate actual faults in the system. For example, our method detected failures of mutual exclusion in the control system that were unknown to the developer. Our method also detected transient losses of functionalities and unexpected reboots. On the other hand, our method did not detect anomalies that were too many and similar. In addition, our method reported rare but unproblematic concurrent combinations of operations as anomalies. Thus, our approach is effective at finding anomalies, but there is still room for improvement.", "For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical." ] }
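The local outlier factor (LOF) method @cite_21 discussed above can be sketched with scikit-learn's off-the-shelf implementation; the two-dimensional data, the neighborhood size, and the contamination rate here are synthetic, illustrative choices, not taken from any of the cited studies.

```python
# A minimal sketch of LOF-based outlier detection as in @cite_21, using
# scikit-learn. LOF scores each point by how isolated it is relative to its
# k nearest neighbors; fit_predict returns -1 for outliers and 1 for inliers.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # dense cluster
outlier = np.array([[8.0, 8.0]])                        # one isolated point
X = np.vstack([normal, outlier])

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
labels = lof.fit_predict(X)

print(labels[-1])  # the isolated point is flagged as an outlier: -1
```

Note that, as the related-work paragraph points out, LOF needs no prior model of normal behavior: the score depends only on the local density of the data itself.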
1709.05342
2755148105
In this paper, we propose and evaluate the application of unsupervised machine learning to anomaly detection for a Cyber-Physical System (CPS). We compare two methods: Deep Neural Networks (DNN) adapted to time series data generated by a CPS, and one-class Support Vector Machines (SVM). These methods are evaluated against data from the Secure Water Treatment (SWaT) testbed, a scaled-down but fully operational raw water purification plant. For both methods, we first train detectors using a log generated by SWaT operating under normal conditions. Then, we evaluate the performance of both methods using a log generated by SWaT operating under 36 different attack scenarios. We find that our DNN generates fewer false positives than our one-class SVM while our SVM detects slightly more anomalies. Overall, our DNN has a slightly better F measure than our SVM. We discuss the characteristics of the DNN and one-class SVM used in this experiment, and compare the advantages and disadvantages of the two methods.
The SWaT testbed and its dataset @cite_12 have been used to evaluate a number of other approaches to cyber-attack prevention, including learning classifiers from data @cite_25 @cite_23 , monitoring network traffic @cite_20 , and monitoring process invariants @cite_14 @cite_8 . These process invariants are derived from the physical laws governing different aspects of the SWaT system, and thus, in our terminology, fall into the category of rule-based anomaly detection methods.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_23", "@cite_12", "@cite_25", "@cite_20" ], "mid": [ "2494784360", "2399043755", "2608911009", "2768947629", "2511988939", "2542991636" ], "abstract": [ "An experimental investigation was undertaken to assess the effectiveness of process invariants in detecting cyber-attacks on an Industrial Control System (ICS). An invariant was derived from one selected sub-process and coded into the corresponding controller. Experiments were performed each with an attack selected from a set of three stealthy attack types and launched in different states of the system to cause tank overflow and degrade system productivity. The impact of power failure, possibly due to an attack on the power source, was also studied. The effectiveness of the detection method was investigated against several design parameters. Despite the apparent simplicity of the experiment, results point to challenges in implementing invariant-based attack detection in an operational Industrial Control System.", "A distributed detection method is proposed to detect single stage multi-point (SSMP) attacks on a Cyber Physical System (CPS). Such attacks aim at compromising two or more sensors or actuators at any one stage of a CPS and could totally compromise a controller and prevent it from detecting the attack. However, as demonstrated in this work, using the flow properties of water from one stage to the other, a neighboring controller was found effective in detecting such attacks. The method is based on physical invariants derived for each stage of the CPS from its design. The attack detection effectiveness of the method was evaluated experimentally against an operational water treatment testbed containing 42 sensors and actuators. Results from the experiments point to high effectiveness of the method in detecting a variety of SSMP attacks but also point to its limitations. Distributing the attack detection code among various controllers adds to the scalability of the proposed method.", "This paper presents a novel unsupervised approach to detect cyber attacks in Cyber-Physical Systems (CPS). We describe an unsupervised learning approach using a Recurrent Neural network which is a time series predictor as our model. We then use the Cumulative Sum method to identify anomalies in a replicate of a water treatment plant. The proposed method not only detects anomalies in the CPS but also identifies the sensor that was attacked. The experiments were performed on a complex dataset which is collected through a Secure Water Treatment Testbed (SWaT). Through the experiments, we show that the proposed technique is able to detect majority of the attacks designed by our research team with low false positive rates.", "This paper presents a dataset to support research in the design of secure Cyber Physical Systems (CPS). The data collection process was implemented on a six-stage Secure Water Treatment (SWaT) testbed. SWaT represents a scaled down version of a real-world industrial water treatment plant producing 5 gallons per minute of water filtered via membrane based ultrafiltration and reverse osmosis units. This plant allowed data collection under two behavioral modes: normal and attacked. SWaT was run non-stop from its “empty” state to fully operational state for a total of 11-days. During this period, the first seven days the system operated normally i.e. without any attacks or faults. During the remaining days certain cyber and physical attacks were launched on SWaT while data collection continued. The dataset reported here contains the physical properties related to the plant and the water treatment process, as well as network traffic in the testbed. The data of both physical properties and network traffic contains attacks that were created and generated by our research team.", "Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them.", "In this paper, we propose a hierarchical monitoring intrusion detection system (HAMIDS) for industrial control systems (ICS). The HAMIDS framework detects the anomalies in both level 0 and level 1 of an industrial control plant. In addition, the framework aggregates the cyber-physical process data in one point for further analysis as part of the intrusion detection process. The novelty of this framework is its ability to detect anomalies that have a distributed impact on the cyber-physical process. The performance of the proposed framework evaluated as part of SWaT security showdown (S3) in which six international teams were invited to test the framework in a real industrial control system. The proposed framework outperformed other proposed academic IDS in term of detection of ICS threats during the S3 event, which was held from July 25-29, 2016 at Singapore University of Technology and Design." ] }
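The process-invariant idea of @cite_14 @cite_8 — deriving rules from physical laws and flagging readings that violate them — can be illustrated with a toy mass-balance check for a single tank. All names, constants, and sensor readings below are hypothetical and not taken from the SWaT testbed.

```python
# A toy sketch of invariant-based (rule-based) anomaly detection in the spirit
# of @cite_14 / @cite_8: conservation of mass for one tank. If the observed
# level change cannot be explained by inflow minus outflow, raise an alarm.
def check_mass_balance(levels, inflow, outflow, area=1.0, dt=1.0, tol=0.05):
    """Return the time steps at which the level change violates mass balance."""
    alarms = []
    for t in range(len(levels) - 1):
        expected = levels[t] + (inflow[t] - outflow[t]) * dt / area
        if abs(levels[t + 1] - expected) > tol:
            alarms.append(t + 1)  # physically impossible jump -> possible attack
    return alarms

levels  = [1.0, 1.1, 1.2, 2.5, 2.6]  # sudden jump at step 3 (spoofed sensor?)
inflow  = [0.1, 0.1, 0.1, 0.1]
outflow = [0.0, 0.0, 0.0, 0.0]
print(check_mass_balance(levels, inflow, outflow))  # -> [3]
```

Unlike the learned detectors discussed elsewhere in this record, such a rule encodes domain knowledge directly and therefore needs no training data, at the cost of having to be derived per subsystem.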
1709.05342
2755148105
In this paper, we propose and evaluate the application of unsupervised machine learning to anomaly detection for a Cyber-Physical System (CPS). We compare two methods: Deep Neural Networks (DNN) adapted to time series data generated by a CPS, and one-class Support Vector Machines (SVM). These methods are evaluated against data from the Secure Water Treatment (SWaT) testbed, a scaled-down but fully operational raw water purification plant. For both methods, we first train detectors using a log generated by SWaT operating under normal conditions. Then, we evaluate the performance of both methods using a log generated by SWaT operating under 36 different attack scenarios. We find that our DNN generates fewer false positives than our one-class SVM while our SVM detects slightly more anomalies. Overall, our DNN has a slightly better F measure than our SVM. We discuss the characteristics of the DNN and one-class SVM used in this experiment, and compare the advantages and disadvantages of the two methods.
Goh et al. @cite_23 propose a similar unsupervised machine learning approach to learn a model of SWaT. They use stacked LSTMs to detect anomalies, and use the same SWaT dataset @cite_12 in their evaluation. However, they apply their approach only to the first SWaT subsystem (of six), and consider only the ten attacks targeted at that subsystem in their evaluation. Their anomaly detection is based on cumulative sums of the prediction error for each sensor, and their evaluation is based on the number of attacks detected: nine of the ten attacks are detected, with four false positives reported. In contrast, we apply our method to the SWaT testbed in its entirety (i.e. all six subsystems) and evaluate against the full attack log, spanning 36 attacks. We achieve very high precision, i.e. very few false positives, while maintaining a moderate recall rate. Our precision and recall are calculated based on the number of detected log entries rather than the number of attacks, which naturally leads to smaller recall rates. Moreover, our methods are based on probabilistic density estimation rather than prediction error.
{ "cite_N": [ "@cite_12", "@cite_23" ], "mid": [ "2768947629", "2608911009" ], "abstract": [ "This paper presents a dataset to support research in the design of secure Cyber Physical Systems (CPS). The data collection process was implemented on a six-stage Secure Water Treatment (SWaT) testbed. SWaT represents a scaled down version of a real-world industrial water treatment plant producing 5 gallons per minute of water filtered via membrane based ultrafiltration and reverse osmosis units. This plant allowed data collection under two behavioral modes: normal and attacked. SWaT was run non-stop from its “empty” state to fully operational state for a total of 11-days. During this period, the first seven days the system operated normally i.e. without any attacks or faults. During the remaining days certain cyber and physical attacks were launched on SWaT while data collection continued. The dataset reported here contains the physical properties related to the plant and the water treatment process, as well as network traffic in the testbed. The data of both physical properties and network traffic contains attacks that were created and generated by our research team.", "This paper presents a novel unsupervised approach to detect cyber attacks in Cyber-Physical Systems (CPS). We describe an unsupervised learning approach using a Recurrent Neural network which is a time series predictor as our model. We then use the Cumulative Sum method to identify anomalies in a replicate of a water treatment plant. The proposed method not only detects anomalies in the CPS but also identifies the sensor that was attacked. The experiments were performed on a complex dataset which is collected through a Secure Water Treatment Testbed (SWaT). Through the experiments, we show that the proposed technique is able to detect majority of the attacks designed by our research team with low false positive rates." ] }
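The cumulative-sum (CUSUM) detection that @cite_23 applies to per-sensor prediction errors can be sketched as a one-sided CUSUM with a drift term and an alarm threshold. The drift, threshold, and error sequence below are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of CUSUM-based change detection as used in @cite_23:
# accumulate the excess of |prediction error| over a drift allowance and
# raise an alarm when the running sum crosses a threshold.
def cusum_alarms(errors, drift=0.1, threshold=1.0):
    """Return indices where the one-sided CUSUM of |error| exceeds threshold."""
    s, alarms = 0.0, []
    for t, e in enumerate(errors):
        s = max(0.0, s + abs(e) - drift)  # accumulate excess error, floor at 0
        if s > threshold:
            alarms.append(t)
            s = 0.0  # reset after raising an alarm
    return alarms

# small errors under normal operation, then a sustained deviation (attack)
errors = [0.05] * 10 + [0.5] * 5
print(cusum_alarms(errors))  # -> [12]
```

The drift term makes the statistic insensitive to small, persistent noise, while a sustained deviation accumulates and eventually trips the threshold — which is why CUSUM flags attack intervals rather than single noisy samples.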
1709.05296
2571028936
Voice over Internet Protocol (VoIP) applications (apps) provide convenient and low cost means for users to communicate and share information with each other in real-time. Day by day, the popularity of such apps is increasing, and people produce and share a huge amount of data, including their personal and sensitive information. This might lead to several privacy issues, such as revealing user contacts, private messages or personal photos. Therefore, having an up-to-date forensic understanding of these apps is necessary. This chapter presents analysis of forensically valuable remnants of three popular Mobile VoIP (mVoIP) apps on Google Play store, namely: Viber, Skype, and WhatsApp Messenger, in order to figure out to what extent these apps reveal forensically valuable information about the users activities. We performed a thorough investigative study of these three mVoIP apps on smartphone devices. Our experimental results show that several artefacts, such as messages, contact details, phone numbers, images, and video files, are recoverable from the smartphone device that is equipped with these mVoIP apps.
A comparison of forensic evidence recovery techniques for a Windows Mobile smartphone demonstrates that different techniques can be used to acquire and decode information of potential forensic interest from such a device @cite_36 . Furthermore, forensic examination of the Windows Mobile device database file ( pim.vol ) confirmed that pim.vol contains information related to contacts, call history, speed-dial settings, appointments, and tasks @cite_47 . Moreover, in @cite_63 , the authors provided a number of possible methods for acquiring and examining data on Windows Mobile devices, as well as the locations of potentially useful data, such as text messages, multimedia, e-mail, Web browsing artefacts and Registry entries. They also used monitoring software as a case example to highlight its importance for forensic analysis, showing that the presence of such malicious monitoring software on a Windows phone can be detected by a forensic analyst examining the device. In another recent study, Yang et al. @cite_45 carried out an investigative study of two popular Windows instant messaging apps, i.e., Facebook and Skype. The authors showed that several artefacts are recoverable, such as contact lists, conversations, and transferred files.
{ "cite_N": [ "@cite_36", "@cite_47", "@cite_45", "@cite_63" ], "mid": [ "2016917250", "1975642608", "2300707846", "2035963588" ], "abstract": [ "Acquisition, decoding and presentation of information from mobile devices is complex and challenging. Device memory is usually integrated into the device, making isolation prior to recovery difficult. In addition, manufacturers have adopted a variety of file systems and formats complicating decoding and presentation. A variety of tools and methods have been developed (both commercially and in the open source community) to assist mobile forensics investigators. However, it is unclear to what extent these tools can present a complete view of the information held on a mobile device, or the extent the results produced by different tools are consistent. This paper investigates what information held on a Windows Mobile smart phone can be recovered using several different approaches to acquisition and decoding. The paper demonstrates that no one technique recovers all information of potential forensic interest from a Windows Mobile device; and that in some cases the information recovered is conflicting.", "Abstract Forensic examination of Windows Mobile devices and devices running its successor Windows Phone 7 remains relevant for the digital forensic community. In these devices, the file pim.vol is a Microsoft Embedded Database (EDB) volume that contains information related to contacts, appointments, call history, speed-dial settings and tasks. Current literature shows that analysis of the pim.vol file is less than optimal. We succeeded in reverse-engineering significant parts of the EDB volume format and this article presents our current understanding of the format. In addition we provide a mapping from internal column identifiers to human readable application-level property names for the pim.vol database. We implemented a parser and compared our results to the traditional approach using an emulator and the API provided by the Windows CE operating system. We were able to recover additional databases, additional properties per record and unallocated records.", "Instant messaging (IM) has changed the way people communicate with each other. However, the interactive and instant nature of these applications (apps) made them an attractive choice for malicious cyber activities such as phishing. The forensic examination of IM apps for modern Windows 8.1 (or later) has been largely unexplored, as the platform is relatively new. In this paper, we seek to determine the data remnants from the use of two popular Windows Store application software for instant messaging, namely Facebook and Skype on a Windows 8.1 client machine. This research contributes to an in-depth understanding of the types of terrestrial artefacts that are likely to remain after the use of instant messaging services and application software on a contemporary Windows operating system. Potential artefacts detected during the research include data relating to the installation or uninstallation of the instant messaging application software, log-in and log-off information, contact lists, conversations, and transferred files.", "Windows Mobile devices are becoming more widely used and can be a valuable source of evidence in a variety of investigations. These portable devices can contain details about an individual's communications, contacts, calendar, online activities, and whereabouts at specific times. Although forensic analysts can apply their knowledge of other Microsoft operating systems to Windows Mobile devices, there are sufficient differences that require specialized knowledge and tools to locate and interpret digital evidence on these systems. This paper provides an overview of Windows Mobile Forensics, describing various methods of acquiring and examining data on Windows Mobile devices. The locations and data formats of useful information on these systems are described, including text messages, multimedia, e-mail, Web browsing artifacts, and Registry entries. This paper concludes with an illustrative scenario involving MobileSpy monitoring software." ] }
1709.05296
2571028936
Voice over Internet Protocol (VoIP) applications (apps) provide convenient and low cost means for users to communicate and share information with each other in real-time. Day by day, the popularity of such apps is increasing, and people produce and share a huge amount of data, including their personal and sensitive information. This might lead to several privacy issues, such as revealing user contacts, private messages or personal photos. Therefore, having an up-to-date forensic understanding of these apps is necessary. This chapter presents analysis of forensically valuable remnants of three popular Mobile VoIP (mVoIP) apps on Google Play store, namely: Viber, Skype, and WhatsApp Messenger, in order to figure out to what extent these apps reveal forensically valuable information about the users activities. We performed a thorough investigative study of these three mVoIP apps on smartphone devices. Our experimental results show that several artefacts, such as messages, contact details, phone numbers, images, and video files, are recoverable from the smartphone device that is equipped with these mVoIP apps.
In order to facilitate the forensic investigation of mobile devices, whose structure changes rapidly, Do et al. @cite_0 proposed a forensically sound adversary model. Azfar et al. @cite_21 used this adversary model as a template to map a potential adversary's capabilities and constraints, evaluating the usefulness of the model by carrying out a forensic analysis of five popular Android social apps (Twitter, POF Dating, Snapchat, Fling and Pinterest). They showed that useful artefacts are recoverable using this model, including databases, user account information, contact lists, images and profile pictures. They also recovered timestamps for notifications and tweets, as well as the Facebook authentication token string used by the apps.
{ "cite_N": [ "@cite_0", "@cite_21" ], "mid": [ "2281202115", "2293466296" ], "abstract": [ "In this paper, we propose an adversary model to facilitate forensic investigations of mobile devices (e.g. Android, iOS and Windows smartphones) that can be readily adapted to the latest mobile device technologies. This is essential given the ongoing and rapidly changing nature of mobile device technologies. An integral principle and significant constraint upon forensic practitioners is that of forensic soundness. Our adversary model specifically considers and integrates the constraints of forensic soundness on the adversary, in our case, a forensic practitioner. One construction of the adversary model is an evidence collection and analysis methodology for Android devices. Using the methodology with six popular cloud apps, we were successful in extracting various information of forensic interest in both the external and internal storage of the mobile device.", "Android forensics is one of the most studied topics in the mobile forensics literature, partly due to the popularity of Android devices and apps. However, there does not appear to have a formal model that captures the activities undertaken during a forensic investigation. In this paper, we adapt a widely used adversary model from the cryptographic literature to formally capture a forensic investigator's capabilities during the collection and analysis of evidentiary materials from mobile devices. We demonstrate the utility of the model using five popular Android social apps (Twitter, POF Dating, Snapchat, Fling and Pinterest). We recover various information of forensic interest, such as databases, user account information, sent-received images, profile pictures, contact lists, unviewed text messages. We are also able to determine when a notification was sent, a tweet was posted, as well as identifying the Facebook authentication token string used in the apps." ] }
1709.05006
2756048956
The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely-many multivariate samples. When the distributions are locally low-dimensional, the proposed test can be made more powerful to distinguish certain alternatives by incorporating local covariance matrices and constructing an anisotropic kernel. The kernel matrix is asymmetric; it computes the affinity between @math data points and a set of @math reference points, where @math can be drastically smaller than @math . While the proposed statistic can be viewed as a special class of Reproducing Kernel Hilbert Space MMD, the consistency of the test is proved, under mild assumptions of the kernel, as long as @math for any @math , based on a result of convergence in distribution of the test statistic. Applications to flow cytometry and diffusion MRI datasets are demonstrated, which motivate the proposed approach to compare distributions.
The 1D Kolmogorov-Smirnov statistic can be seen as a special case of the MMD discrepancy, which is generally defined as [ \mathrm{MMD}(p,q; \mathcal{F}) = \sup_{f \in \mathcal{F}} \int f(x) \, (p(x)-q(x)) \, dx, ] where @math is a certain family of integrable functions. When @math equals the set of all indicator functions of intervals @math in @math , the MMD discrepancy gives the Kolmogorov-Smirnov distance. Kernel-based MMD has been studied in @cite_18 , where the function class @math consists of all functions s.t. @math , where @math indicates the norm of the Hilbert space associated with the reproducing kernel. Specifically, suppose the PSD kernel is @math ; then the (squared) RKHS MMD can be written as [ \mathrm{MMD}^2(p,q) = \mathbb{E}_{x,x' \sim p}\, k(x,x') + \mathbb{E}_{y,y' \sim q}\, k(y,y') - 2\, \mathbb{E}_{x \sim p,\, y \sim q}\, k(x,y), ] and can be estimated by (here we refer to the biased estimator in @cite_18 which includes the diagonal terms) [ \widehat{\mathrm{MMD}}^2_b = \frac{1}{n^2} \sum_{i,j=1}^{n} k(x_i,x_j) + \frac{1}{m^2} \sum_{i,j=1}^{m} k(y_i,y_j) - \frac{2}{nm} \sum_{i=1}^{n} \sum_{j=1}^{m} k(x_i,y_j). ] The methodology and theory apply to data of any dimension as long as the kernel can be evaluated.
{ "cite_N": [ "@cite_18" ], "mid": [ "2212660284" ], "abstract": [ "We propose a framework for analyzing and comparing distributions, which we use to construct statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS), and is called the maximum mean discrepancy (MMD).We present two distribution free tests based on large deviation bounds for the MMD, and a third test based on the asymptotic distribution of this statistic. The MMD can be computed in quadratic time, although efficient linear time approximations are available. Our statistic is an instance of an integral probability metric, and various classical metrics on distributions are obtained when alternative function classes are used in place of an RKHS. We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests." ] }
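The biased estimator described above — the sum of the two within-sample kernel means (diagonal terms included) minus twice the cross-sample kernel mean — can be sketched in a few lines. The Gaussian kernel, bandwidth, and sample sizes are illustrative choices, not those of @cite_18.

```python
# A sketch of the biased (squared) RKHS MMD estimator of @cite_18 with a
# Gaussian kernel: mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y), keeping the
# diagonal terms of the within-sample kernel matrices.
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    """Biased squared-MMD estimate between samples X (n,d) and Y (m,d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dist
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same  = mmd2_biased(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
shift = mmd2_biased(rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2)))
print(same < shift)  # samples from different distributions give a larger MMD
```

The paper's anisotropic-kernel statistic replaces this isotropic Gaussian kernel with one built from local covariance matrices and evaluated against a smaller set of reference points; the sketch above only illustrates the standard RKHS-MMD baseline it generalizes.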
1709.05006
2756048956
The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely-many multivariate samples. When the distributions are locally low-dimensional, the proposed test can be made more powerful to distinguish certain alternatives by incorporating local covariance matrices and constructing an anisotropic kernel. The kernel matrix is asymmetric; it computes the affinity between @math data points and a set of @math reference points, where @math can be drastically smaller than @math . While the proposed statistic can be viewed as a special class of Reproducing Kernel Hilbert Space MMD, the consistency of the test is proved, under mild assumptions of the kernel, as long as @math for any @math , based on a result of convergence in distribution of the test statistic. Applications to flow cytometry and diffusion MRI datasets are demonstrated, which motivate the proposed approach to compare distributions.
Our approach is also closely related to a previous study of distribution distances based on kernel density estimation @cite_24 . We generalize the results in @cite_24 by considering non-translation-invariant kernels, which greatly increases the separation between the expectation of @math under the null hypothesis and the expectation of @math under an alternative hypothesis. Moreover, it is well known that kernel density estimation, on which @cite_24 is based, converges poorly in high dimensions. In the manifold setting, this problem was remedied by normalizing the (isometric) kernel in a modified way, and the estimation accuracy was shown to depend only on the intrinsic dimension @cite_3 . Our proposed approach takes further advantage of the locally low-dimensional structure, and obtains improved distinguishing power over isotropic kernels when possible.
{ "cite_N": [ "@cite_24", "@cite_3" ], "mid": [ "2030150661", "2133326466" ], "abstract": [ "Test statistics are proposed for testing equality of two p-variate probability density functions. The statistics are based on the integrated square distance between two kernel-based density estimates and are two-sample versions of the statistic studied by Hall (1984, J. Multivariate Anal. 14 1-16). Particular emphasis is laid on the case where the two bandwidths are fixed and equal. Asymptotic distributional results and power calculations are supplemented by an empirical study based on univariate examples.", "Kernel density estimation is the most widely-used practical method for accurate nonparametric density estimation. However, long-standing worst-case theoretical results showing that its performance worsens exponentially with the dimension of the data have quashed its application to modern high-dimensional datasets for decades. In practice, it has been recognized that often such data have a much lower-dimensional intrinsic structure. We propose a small modification to kernel density estimation for estimating probability density functions on Riemannian submanifolds of Euclidean space. Using ideas from Riemannian geometry, we prove the consistency of this modified estimator and show that the convergence rate is determined by the intrinsic dimension of the submanifold. We conclude with empirical results demonstrating the behavior predicted by our theory." ] }
1709.05006
2756048956
The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely-many multivariate samples. When the distributions are locally low-dimensional, the proposed test can be made more powerful to distinguish certain alternatives by incorporating local covariance matrices and constructing an anisotropic kernel. The kernel matrix is asymmetric; it computes the affinity between @math data points and a set of @math reference points, where @math can be drastically smaller than @math . While the proposed statistic can be viewed as a special class of Reproducing Kernel Hilbert Space MMD, the consistency of the test is proved, under mild assumptions of the kernel, as long as @math for any @math , based on a result of convergence in distribution of the test statistic. Applications to flow cytometry and diffusion MRI datasets are demonstrated, which motivate the proposed approach to compare distributions.
Finally, the proposed approach is related to two-sample testing via nearest neighbors @cite_2 . In @cite_2 , one computes the nearest neighbors of a reference point @math among the data @math and derives a statistical test based on how much the empirical ratio @math , where @math is the number of neighbors drawn from @math (and similarly @math ), deviates from the expected ratio under the null hypothesis, namely @math . Because the nearest-neighbor algorithm is based on Euclidean distance, it is equivalent to a kernel-based MMD with a hard-thresholded isotropic kernel. In principle, that approach could be combined with a local Mahalanobis distance as we do here, but this has not been explored.
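The nearest-neighbor ratio test sketched above is simple enough to state in a few lines. This is a toy sketch of the neighbor-counting statistic only (no significance calibration), using plain Euclidean distance; the function names and the choice of k are illustrative assumptions.

```python
import numpy as np

def nn_fraction(ref, x, y, k=50):
    # Among the k Euclidean nearest neighbors of the reference point,
    # count how many come from sample x; return n1 / (n1 + n2).
    pooled = np.vstack([x, y])
    labels = np.array([0] * len(x) + [1] * len(y))  # 0 -> from x, 1 -> from y
    d = np.linalg.norm(pooled - ref, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return (nearest == 0).mean()

rng = np.random.default_rng(1)
x = rng.normal(0, 1, (300, 2))
y = rng.normal(0, 1, (300, 2))        # same distribution as x
y_shift = rng.normal(4, 1, (300, 2))  # clearly different distribution
ref = np.zeros(2)
frac_null = nn_fraction(ref, x, y)       # near n1/(n1+n2) = 0.5 under H0
frac_alt = nn_fraction(ref, x, y_shift)  # near 1.0: neighbors almost all from x
```

Replacing the Euclidean distance in `nn_fraction` by a local Mahalanobis distance would be the direct analogue of the anisotropic-kernel idea.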
{ "cite_N": [ "@cite_2" ], "mid": [ "2111527197" ], "abstract": [ "For independent random samples X 1 X 2 …X n1 distributed according to f(x) and Y 1 Y 2 …Y n2 distributed according to g(x), we present a test with the following properties: the probability of a type I error does not depend on f; as min (n 1 , n 2 )→∞, the test statistic is asymptotically independent of the distribution under H0 and the limiting distribution is known; the test is consistent against general alternatives" ] }
1709.04763
2755231175
Time series prediction is of great significance in many applications and has attracted extensive attention from the data mining community. Existing work suggests that for many problems, the shape in the current time series may correlate an upcoming shape in the same or another series. Therefore, it is a promising strategy to associate two recurring patterns as a rule's antecedent and consequent: the occurrence of the antecedent can foretell the occurrence of the consequent, and the learned shape of consequent will give accurate predictions. Earlier work employs symbolization methods, but the symbolized representation maintains too little information of the original series to mine valid rules. The state-of-the-art work, though directly manipulating the series, fails to segment the series precisely for seeking antecedents consequents, resulting in inaccurate rules in common scenarios. In this paper, we propose a novel motif-based rule discovery method, which utilizes motif discovery to accurately extract frequently occurring consecutive subsequences, i.e. motifs, as antecedents consequents. It then investigates the underlying relationships between motifs by matching motifs as rule candidates and ranking them based on the similarities. Experimental results on real open datasets show that the proposed approach outperforms the baseline method by 23.9 . Furthermore, it extends the applicability from single time series to multiple ones.
Early work extracts association rules from frequent patterns in transactional databases @cite_16 . The authors of @cite_0 are the first to study the problem of finding rules from real-valued time series from a symbolization-based perspective, followed by a series of work @cite_3 @cite_14 .
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_16", "@cite_3" ], "mid": [ "151863654", "", "2140190241", "1613577429" ], "abstract": [ "We consider the problem of finding rules relating patterns in a time series to other patterns in that series, or patterns in one series to patterns in another series. A simple example is a rule such as \"a period of low telephone call activity is usually followed by a sharp rise in call volume\". Examples of rules relating two or more time series are \"if the Microsoft stock price goes up and Intel falls, then IBM goes up the next day,\" and \"if Microsoft goes up strongly for one day, then declines strongly on the next day, and on the same days Intel stays about level, then IBM stays about level.\" Our emphasis is in the discovery of local patterns in multivariate time series, in contrast to traditional time series analysis which largely focuses on global models. Thus, we search for rules whose conditions refer to patterns in time series. However, we do not want to define beforehand which patterns are to be used; rather, we want the patterns to be formed from the data in the context of rule discovery. We describe adaptive methods for finding rules of the above type from time-series data. The methods are based on discretizing the sequence by methods resembling vector quantization. We first form subsequences by sliding a window through the time series, and then cluster these subsequences by using a suitable measure of time-series similarity. The discretized version of the time series is obtained by taking the cluster identifiers corresponding to the subsequence. Once the time-series is discretized, we use simple rule finding methods to obtain rules from the sequence. We present empirical results on the behavior of the method.", "", "The increasing volume of data in modern business and science calls for more complex and sophisticated tools. 
Although advances in data mining technology have made extensive data collection much easier, it's still always evolving and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge. Since the previous edition's publication, great advances have been made in the field of data mining. Not only does the third of edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, mining stream, mining social networks, and mining spatial, multimedia and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today's most powerful data mining techniques to meet real business challenges. * Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects. * Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields. *Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data", "This paper presents techniques for discovering and matching rules with elastic patterns. Elastic patterns are ordered lists of elements that can be stretched along the time axis. Elastic patterns are useful for discovering rules from data sequences with different sampling rates. 
For fast discovery of rules whose heads (left-hand sides) and bodies (right-hand sides) are elastic patterns, we construct a trimmed suffix tree from succinct forms of data sequences and keep the tree as a compact representation of rules. The trimmed suffix tree is also used as an index structure for finding rules matched to a target head sequence. When matched rules cannot be found, the concept of rule relaxation is introduced. Using a cluster hierarchy and relaxation error as a new distance function, we find the least relaxed rules that provide the most specific information on a target head sequence. Experiments on synthetic data sequences reveal the effectiveness of our proposed approach." ] }
1709.04763
2755231175
Time series prediction is of great significance in many applications and has attracted extensive attention from the data mining community. Existing work suggests that for many problems, the shape in the current time series may correlate an upcoming shape in the same or another series. Therefore, it is a promising strategy to associate two recurring patterns as a rule's antecedent and consequent: the occurrence of the antecedent can foretell the occurrence of the consequent, and the learned shape of consequent will give accurate predictions. Earlier work employs symbolization methods, but the symbolized representation maintains too little information of the original series to mine valid rules. The state-of-the-art work, though directly manipulating the series, fails to segment the series precisely for seeking antecedents consequents, resulting in inaccurate rules in common scenarios. In this paper, we propose a novel motif-based rule discovery method, which utilizes motif discovery to accurately extract frequently occurring consecutive subsequences, i.e. motifs, as antecedents consequents. It then investigates the underlying relationships between motifs by matching motifs as rule candidates and ranking them based on the similarities. Experimental results on real open datasets show that the proposed approach outperforms the baseline method by 23.9 . Furthermore, it extends the applicability from single time series to multiple ones.
The state-of-the-art work (Y15) directly manipulates the real-valued series @cite_4 . Their method is founded on the assumption that a rule is contained in a single subsequence, which is split into the rule's antecedent and consequent. Usually, however, there is an interval between a rule's antecedent and consequent, and the splitting method appends this extra series to the antecedent or consequent. The diversity of these intervals can thus result in rules with poor prediction performance. Moreover, the splitting method cannot be applied to discover rules across two series, since a motif is a subsequence of a single series.
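The subsequence-splitting rule mechanism described above can be sketched in a few lines: a subsequence is cut into antecedent and consequent, and the consequent shape is predicted wherever the antecedent recurs. This is a toy illustration of the splitting idea, not the motif-based method proposed in the paper; all names and parameters are illustrative assumptions.

```python
import numpy as np

def split_rule(subseq, s):
    # Y15-style split: the first s points form the rule's antecedent,
    # the remaining points its consequent.
    return subseq[:s], subseq[s:]

def predict(series, antecedent, consequent):
    # Slide the antecedent over the series; at the best match (smallest
    # Euclidean distance), predict that the consequent shape follows.
    m, c = len(antecedent), len(consequent)
    dists = [np.linalg.norm(series[i:i + m] - antecedent)
             for i in range(len(series) - m - c + 1)]
    best = int(np.argmin(dists))
    return best + m, consequent  # predicted start index and predicted shape

t = np.arange(200)
series = np.sin(0.3 * t)       # a periodic toy series: patterns recur
rule = series[40:60]           # a recurring 20-point segment as the rule
ante, cons = split_rule(rule, 10)
start, pred = predict(series[100:], ante, cons)
# How far the predicted consequent is from the actual continuation:
err = np.linalg.norm(series[100 + start:100 + start + len(pred)] - pred)
```

When an unmodeled gap sits between the true antecedent and consequent, the fixed split point absorbs the gap into one side of the rule, which is exactly the failure mode the paper's motif-based discovery avoids.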
{ "cite_N": [ "@cite_4" ], "mid": [ "2002608093" ], "abstract": [ "The ability to make predictions about future events is at the heart of much of science; so, it is not surprising that prediction has been a topic of great interest in the data mining community for the last decade. Most of the previous work has attempted to predict the future based on the current value of a stream. However, for many problems the actual values are irrelevant, whereas the shape of the current time series pattern may foretell the future. The handful of research efforts that consider this variant of the problem have met with limited success. In particular, it is now understood that most of these efforts allow the discovery of spurious rules. We believe the reason why rule discovery in real-valued time series has failed thus far is because most efforts have more or less indiscriminately applied the ideas of symbolic stream rule discovery to real-valued rule discovery. In this work, we show why these ideas are not directly suitable for rule discovery in time series. Beyond our novel definitions representations, which allow for meaningful and extendable specifications of rules, we further show novel algorithms that allow us to quickly discover high quality rules in very large datasets that accurately predict the occurrence of future events." ] }
1709.04751
2754006806
Agricultural robots are expected to increase yields in a sustainable way and automate precision tasks, such as weeding and plant monitoring. At the same time, they move in a continuously changing, semi-structured field environment, in which features can hardly be found and reproduced at a later time. Challenges for Lidar and visual detection systems stem from the fact that plants can be very small, overlapping and have a steadily changing appearance. Therefore, a popular way to localize vehicles with high accuracy is based on ex- pensive global navigation satellite systems and not on natural landmarks. The contribution of this work is a novel image- based plant localization technique that uses the time-invariant stem emerging point as a reference. Our approach is based on a fully convolutional neural network that learns landmark localization from RGB and NIR image input in an end-to-end manner. The network performs pose regression to generate a plant location likelihood map. Our approach allows us to cope with visual variances of plants both for different species and different growth stages. We achieve high localization accuracies as shown in detailed evaluations of a sugar beet cultivation phase. In experiments with our BoniRob we demonstrate that detections can be robustly reproduced with centimeter accuracy.
Plant localization is useful for agricultural robots in a number of ways. In orchards, the semi-structured environment has been used for mapping and localization purposes: the authors of @cite_4 detect olive tree stems with a camera and a laser scanner, and these stems then serve as landmarks in a SLAM system.
{ "cite_N": [ "@cite_4" ], "mid": [ "2048269893" ], "abstract": [ "Precision agricultural maps are required for agricultural machinery navigation, path planning and plantation supervision. In this work we present a Simultaneous Localization and Mapping (SLAM) algorithm solved by an Extended Information Filter (EIF) for agricultural environments (olive groves). The SLAM algorithm is implemented on an unmanned non-holonomic car-like mobile robot. The map of the environment is based on the detection of olive stems from the plantation. The olive stems are acquired by means of both: a range sensor laser and a monocular vision system. A support vector machine (SVM) is implemented on the vision system to detect olive stems on the images acquired from the environment. Also, the SLAM algorithm has an optimization criterion associated with it. This optimization criterion is based on the correction of the SLAM system state vector using only the most meaningful stems - from an estimation convergence perspective - extracted from the environment information without compromising the estimation consistency. The optimization criterion, its demonstration and experimental results within real agricultural environments showing the performance of our proposal are also included in this work." ] }
1709.04751
2754006806
Agricultural robots are expected to increase yields in a sustainable way and automate precision tasks, such as weeding and plant monitoring. At the same time, they move in a continuously changing, semi-structured field environment, in which features can hardly be found and reproduced at a later time. Challenges for Lidar and visual detection systems stem from the fact that plants can be very small, overlapping and have a steadily changing appearance. Therefore, a popular way to localize vehicles with high accuracy is based on ex- pensive global navigation satellite systems and not on natural landmarks. The contribution of this work is a novel image- based plant localization technique that uses the time-invariant stem emerging point as a reference. Our approach is based on a fully convolutional neural network that learns landmark localization from RGB and NIR image input in an end-to-end manner. The network performs pose regression to generate a plant location likelihood map. Our approach allows us to cope with visual variances of plants both for different species and different growth stages. We achieve high localization accuracies as shown in detailed evaluations of a sugar beet cultivation phase. In experiments with our BoniRob we demonstrate that detections can be robustly reproduced with centimeter accuracy.
The field of image landmark localization is important to our pose regression approach. The authors of @cite_6 estimate likelihood maps of hand joint locations from depth images. However, without any upsampling operation in their network architecture, they generate a coarse output less than a fifth of the input size. The authors of @cite_8 overcome this drawback by using a fully convolutional neural network (FCNN), which allows them to generate likelihood maps at the input image resolution.
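A likelihood map like the ones described above is typically reduced to a pixel coordinate by taking the location of its maximum. The following is a minimal sketch of that peak-picking step only, not of the FCNN itself; the function name and map size are illustrative assumptions.

```python
import numpy as np

def landmark_from_heatmap(heatmap):
    # The pixel with the highest likelihood is taken as the landmark;
    # np.unravel_index converts the flat argmax back to (row, col).
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return int(col), int(row)  # (x, y) image coordinates

h = np.zeros((240, 320))
h[57, 123] = 1.0  # a synthetic likelihood peak at row 57, column 123
xy = landmark_from_heatmap(h)
```

Because the FCNN output has the input image resolution, this argmax directly yields pixel-accurate landmark coordinates without rescaling.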
{ "cite_N": [ "@cite_6", "@cite_8" ], "mid": [ "2075156252", "2275770195" ], "abstract": [ "We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.", "Pose variation and subtle differences in appearance are key challenges to fine-grained classification. While deep networks have markedly improved general recognition, many approaches to fine-grained recognition rely on anchoring networks to parts for better accuracy. Identifying parts to find correspondence discounts pose variation so that features can be tuned to appearance. To this end previous methods have examined how to find parts and extract pose-normalized features. These methods have generally separated fine-grained recognition into stages which first localize parts using hand-engineered and coarsely-localized proposal features, and then separately learn deep descriptors centered on inferred part positions. We unify these steps in an end-to-end trainable network supervised by keypoint locations and class labels that localizes parts by a fully convolutional network to focus the learning of feature representations for the fine-grained classification task. Experiments on the popular CUB200 dataset show that our method is state-of-the-art and suggest a continuing role for strong supervision." ] }
1709.04747
2955074175
Information retrieval from textual data focuses on the construction of vocabularies that contain weighted term tuples. Such vocabularies can then be exploited by various text analysis algorithms to extract new knowledge, e.g., top-k keywords, top-k documents, etc. Top-k keywords are casually used for various purposes, are often computed on-the-fly, and thus must be efficiently computed. To compare competing weighting schemes and database implementations, benchmarking is customary. To the best of our knowledge, no benchmark currently addresses these problems. Hence, in this paper, we present a top-k keywords benchmark, T @math K @math , which features a real tweet dataset and queries with various complexities and selectivities. T @math K @math helps evaluate weighting schemes and database implementations in terms of computing performance. To illustrate T @math K @math 's relevance and genericity, we successfully performed tests on the TF-IDF and Okapi BM25 weighting schemes, on one hand, and on different relational (Oracle, PostgreSQL) and document-oriented (MongoDB) database implementations, on the other hand.
Term weighting schemes are extensively benchmarked in sentiment analysis @cite_5 , semantic similarity @cite_11 , text classification and categorization @cite_14 @cite_12 @cite_11 @cite_17 , and textual corpus generation @cite_2 . Benchmarks for text analysis focus mainly on algorithm accuracy: term weights are either known before the algorithm is applied or computed as part of preprocessing. Thus, such benchmarks do not evaluate the efficiency of weighting scheme construction as we do. Other benchmarks evaluate parallel text processing in big data applications in the cloud @cite_6 @cite_13 . PRIMEBALL notably specifies several relevant properties characterizing cloud platforms @cite_6 , such as scale-up, elastic speedup, horizontal scalability, latency, durability, consistency and version handling, availability, concurrency, and other data and information retrieval properties. However, PRIMEBALL is only a specification; it is not implemented.
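As a concrete reference point for the weighting schemes under comparison, here is a minimal TF-IDF computation with a raw term-frequency factor and a log(N/df) inverse-document-frequency factor, followed by top-k keyword extraction. This is one common TF-IDF variant, not necessarily the exact formulation evaluated by the benchmark; the toy documents and function names are illustrative assumptions.

```python
import math
from collections import Counter

def tf_idf(docs):
    # Weight of term t in document d: tf(t, d) * log(N / df(t)),
    # where df(t) is the number of documents containing t.
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

def top_k(doc_weights, k=2):
    # Rank one document's terms by weight, descending.
    return [t for t, _ in sorted(doc_weights.items(), key=lambda kv: -kv[1])[:k]]

docs = [["cloud", "big", "data", "data"],
        ["cloud", "text", "mining"],
        ["text", "data", "benchmark"]]
w = tf_idf(docs)
keywords = top_k(w[0])
```

A benchmark such as the one proposed here measures how efficiently a database implementation can build these weights and answer top-k keyword queries, rather than how accurate the weights are for a downstream task.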
{ "cite_N": [ "@cite_14", "@cite_11", "@cite_6", "@cite_2", "@cite_5", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "", "2026481075", "2953285261", "2530223335", "2266540986", "2074306108", "2150102617", "" ], "abstract": [ "", "Short text semantic similarity measurement is a new and rapidly growing field of research. 'Short texts' are typically sentence length but are not required to be grammatically correct. There is great potential for applying these measures in fields such as information retrieval, dialogue management and question answering. A dataset of 65 sentence pairs, with similarity ratings, produced in 2006 has become adopted as a de facto gold standard benchmark. This paper discusses the adoption of the 2006 dataset, lays down a number of criteria that can be used to determine whether a dataset should be awarded a 'gold standard' accolade and illustrates its use as a benchmark. Procedures for the generation of further gold standard datasets in this field are recommended.", "In this paper, we draw the specifications of a novel benchmark for comparing parallel processing frameworks in the context of big data applications hosted in the cloud. We aim at filling several gaps in already existing cloud data processing benchmarks, which lack a real-life context for their processes, thus losing relevance when trying to assess performance for real applications. Hence, we propose a fictitious news site hosted in the cloud that is to be managed by the framework under analysis, together with several objective use case scenarios and measures for evaluating system performance. The main strengths of our benchmark are parallelization capabilities supporting cloud features and big data properties.", "Modern storage systems incorporate data compressors to improve their performance and capacity. As a result, data content can significantly influence the result of a storage system benchmark. 
Because real-world proprietary datasets are too large to be copied onto a test storage system, and most data cannot be shared due to privacy issues, a benchmark needs to generate data synthetically. To ensure that the result is accurate, it is necessary to generate data content based on the characterization of real-world data properties that influence the storage system performance during the execution of a benchmark. The existing approach, called SDGen, cannot guarantee that the benchmark result is accurate in storage systems that have built-in word-based compressors. The reason is that SDGen characterizes the properties that influence compression performance only at the byte level, and no properties are characterized at the word level. To address this problem, we present TextGen, a realistic text data content generation method for modern storage system benchmarks. TextGen builds the word corpus by segmenting real-world text datasets, and creates a word-frequency distribution by counting each word in the corpus. To improve data generation performance, the word-frequency distribution is fitted to a lognormal distribution by maximum likelihood estimation. The Monte Carlo approach is used to generate synthetic data. The running time of TextGen generation depends only on the expected data size, which means that the time complexity of TextGen is O(n). To evaluate TextGen, four real-world datasets were used to perform an experiment. The experimental results show that, compared with SDGen, the compression performance and compression ratio of the datasets generated by TextGen deviate less from real-world datasets when end-tagged dense code, a representative of word-based compressors, is evaluated.", "The emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, bearing profound implications for our understanding of human behavior. 
Given the growing assortment of sentiment measuring instruments, comparisons between them are evidently required. Here, we perform detailed tests of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that a dictionary-based method will only perform both reliably and meaningfully if (1) the dictionary covers a sufficiently large enough portion of a given text's lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale.", "Massive-scale Big Data analytics is representative of a new class of workloads that justifies a rethinking of how computing systems should be optimized. This paper addresses the need for a set of benchmarks that system designers can use to measure the quality of their designs and that customers can use to evaluate competing systems offerings with respect to commonly performed text-oriented workflows in Hadoop™. Additions are needed to existing benchmarks such as HiBench in terms of both scale and relevance. We describe a methodology for creating a petascale data-size text-oriented benchmark that includes representative Big Data workflows and can be used to test total system performance, with demands balanced across storage, network, and computation. Creating such a benchmark requires meeting unique challenges associated with the data size and its often unstructured nature. To be useful, the benchmark also needs to be sufficiently generic to be accepted by the community at large. Here, we focus on a text-oriented Hadoop workflow that consists of three common tasks: categorizing text documents, identifying significant documents within each category, and analyzing significant documents for new topic creation.", "Reuters Corpus Volume I (RCV1) is an archive of over 800,000 manually categorized newswire stories recently made available by Reuters, Ltd. for research purposes. 
Use of this data for research on text categorization requires a detailed understanding of the real world constraints under which the data was produced. Drawing on interviews with Reuters personnel and access to Reuters documentation, we describe the coding policy and quality control procedures used in producing the RCV1 data, the intended semantics of the hierarchical category taxonomies, and the corrections necessary to remove errorful data. We refer to the original data as RCV1-v1, and the corrected data as RCV1-v2. We benchmark several widely used supervised learning methods on RCV1-v2, illustrating the collection's properties, suggesting new directions for research, and providing baseline results for future studies. We make available detailed, per-category experimental results, as well as corrected versions of the category assignments and taxonomy structures, via online appendices.", "" ] }
1709.04666
2803203692
While generic object detection has achieved large improvements with rich feature hierarchies from deep nets, detecting small objects with poor visual cues remains challenging. Motion cues from multiple frames may be more informative for detecting such hard-to-distinguish objects in each frame. However, how to encode discriminative motion patterns, such as deformations and pose changes that characterize objects, has remained an open question. To learn them and thereby realize small object detection, we present a neural model called the Recurrent Correlational Network, where detection and tracking are jointly performed over a multi-frame representation learned through a single, trainable, and end-to-end network. A convolutional long short-term memory network is utilized for learning informative appearance change for detection, while learned representation is shared in tracking for enhancing its performance. In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements in detection performance over deep single-frame detectors and existing motion-based detectors. Furthermore, our network performs as well as state-of-the-art generic object trackers when it was evaluated as a tracker on the bird dataset.
Small object detection. Detection of small objects has been tackled in the surveillance community @cite_49 and has recently attracted much attention since the advent of UAVs @cite_50 @cite_35 . Small pedestrians @cite_61 and faces @cite_62 have also been considered, and some recent studies try to detect small common objects in the generic object detection setting @cite_22 @cite_3 . These studies focus on scale-tuned convnets with moderate depth and a wider field of view; despite its importance, motion has not yet been incorporated in these domains.
{ "cite_N": [ "@cite_61", "@cite_35", "@cite_62", "@cite_22", "@cite_3", "@cite_50", "@cite_49" ], "mid": [ "2419501904", "", "2951230065", "2594258618", "", "2766881959", "2166312434" ], "abstract": [ "Pedestrian detection is a well-studied problem. Even though many datasets contain challenging case studies, the performances of new methods are often only reported on cases of reasonable difficulty. In particular, the issue of small scale pedestrian detection is seldom considered. In this paper, we focus on the detection of small scale pedestrians, i.e., those that are at far distance from the camera. We show that classical features used for pedestrian detection are not well suited for our case of study. Instead, we propose a convolutional neural network based method to learn the features with an end-to-end approach. Experiments on the Caltech Pedestrian Detection Benchmark showed that we outperformed existing methods by more than 10 in terms of log-average miss rate.", "", "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99 of the template extends beyond the object of interest). 
Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82 while prior art ranges from 29-64 ).", "Existing object detection literature focuses on detecting a big object covering a large part of an image. The problem of detecting a small object covering a small part of an image is largely ignored. As a result, the state-of-the-art object detection algorithm renders unsatisfactory performance as applied to detect small objects in images. In this paper, we dedicate an effort to bridge the gap. We first compose a benchmark dataset tailored for the small object detection problem to better evaluate the small object detection performance. We then augment the state-of-the-art R-CNN algorithm with a context model and a small region proposal generator to improve the small object detection performance. We conduct extensive experimental validations for studying various design choices. Experiment results show that the augmented R-CNN algorithm improves the mean average precision by 29.8 over the original R-CNN algorithm on detecting small objects.", "", "Small drones are a rising threat due to their possible misuse for illegal activities, in particular smuggling and terrorism. The project SafeShore, funded by the European Commission under the Horizon 2020 program, has launched the “drone-vs-bird detection challenge” to address one of the many technical issues arising in this context. The goal is to detect a drone appearing at some point in a video where birds may be also present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. 
This paper reports on the challenge proposal, evaluation, and results.", "Under the three-year Video Surveillance and Monitoring (VSAM) project, the Robotics Institute at Carnegie Mellon University (CMU) and the Sarnoff Corporation have developed a system for autonomous Video Surveillance and Monitoring. The technical approach uses multiple, cooperative video sensors to provide continuous coverage of people and vehicles in a cluttered environment. This final report presents an overview of the system, and of the technical accomplishments that have been achieved. Details can be found in a set of previously published papers that together comprise Appendix A." ] }
1709.04666
2803203692
While generic object detection has achieved large improvements with rich feature hierarchies from deep nets, detecting small objects with poor visual cues remains challenging. Motion cues from multiple frames may be more informative for detecting such hard-to-distinguish objects in each frame. However, how to encode discriminative motion patterns, such as deformations and pose changes that characterize objects, has remained an open question. To learn them and thereby realize small object detection, we present a neural model called the Recurrent Correlational Network, where detection and tracking are jointly performed over a multi-frame representation learned through a single, trainable, and end-to-end network. A convolutional long short-term memory network is utilized for learning informative appearance change for detection, while learned representation is shared in tracking for enhancing its performance. In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements in detection performance over deep single-frame detectors and existing motion-based detectors. Furthermore, our network performs as well as state-of-the-art generic object trackers when it was evaluated as a tracker on the bird dataset.
Recurrent nets @cite_9 @cite_16 efficiently handle temporal structure in sequences and thus have been used for tracking @cite_31 @cite_29 @cite_13 @cite_39 . However, most utilize separate convolutional and recurrent layers, with a fully connected recurrent layer that may discard spatial information. Consequently, recurrent trackers do not yet perform as well as the best single-frame convolutional trackers on generic benchmarks. One study used ConvLSTM with simulated robotic sensors to handle occlusion @cite_51 .
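The ConvLSTM idea referenced above replaces the fully connected recurrent transition with convolutions, so the hidden state keeps its spatial layout. A minimal single-channel sketch (illustrative only: the function names and the use of 3x3 "same"-padded kernels are assumptions, and real ConvLSTMs operate on multi-channel feature maps):

```python
import numpy as np

def conv2d_same(x, w):
    """'Same'-padded 2-D cross-correlation of one 2-D map with one odd-sized kernel."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w)
    return out

def convlstm_step(x, h, c, weights):
    """One ConvLSTM step on single-channel H x W maps.

    weights maps each gate name ('i', 'f', 'o', 'g') to a tuple
    (input kernel, hidden kernel, scalar bias).
    Returns the new hidden and cell states.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    pre = {}
    for name in ("i", "f", "o", "g"):
        wx, wh, b = weights[name]
        # Gates are convolutions of input and hidden state, not dense products,
        # so every gate value is itself an H x W map.
        pre[name] = conv2d_same(x, wx) + conv2d_same(h, wh) + b
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because the gates are produced by "same"-padded convolutions, the hidden and cell states retain the H x W layout of the input, which is exactly what the fully connected recurrent layers discussed above give up.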
{ "cite_N": [ "@cite_29", "@cite_9", "@cite_39", "@cite_51", "@cite_31", "@cite_16", "@cite_13" ], "mid": [ "2772135636", "1971129545", "2617855130", "2263483072", "2496665188", "", "2339473870" ], "abstract": [ "Robust object tracking requires knowledge and understanding of the object being tracked: its appearance, its motion, and how it changes over time. A tracker must be able to modify its underlying model and adapt to new observations. We present Re3, a real-time deep object tracker capable of incorporating temporal information into its model. Rather than focusing on a limited set of objects or training a model at test-time to track a specific instance, we pretrain our generic tracker on a large variety of objects and efficiently update on the fly; Re3 simultaneously tracks and updates the appearance model with a single forward pass. This lightweight model is capable of tracking objects at 150 FPS, while attaining competitive results on challenging benchmarks. We also show that our method handles temporary occlusion better than other comparable trackers using experiments that directly measure performance on sequences with occlusion.", "Abstract Backpropagation is often viewed as a method for adapting artificial neural networks to classify patterns. Based on parts of the book by Rumelhart and colleagues, many authors equate backpropagation with the generalized delta rule applied to fully-connected feedforward networks. This paper will summarize a more general formulation of backpropagation, developed in 1974, which does more justice to the roots of the method in numerical analysis and statistics, and also does more justice to creative approaches expressed by neural modelers in the past year or two. It will discuss applications of backpropagation to forecasting over time (where errors have been halved by using methods other than least squares), to optimization, to sensitivity analysis, and to brain research. 
This paper will go on to derive a generalization of backpropagation to recurrent systems (which input their own output), such as hybrids of perceptron-style networks and Grossberg Hopfield networks. Unlike the proposal of Rumelhart, Hinton, and Williams, this generalization does not require the storage of intermediate iterations to deal with continuous recurrence. This generalization was applied in 1981 to a model of natural gas markets, where it located sources of forecast uncertainty related to the use of least squares to estimate the model parameters in the first place.", "Motion models have been proved to be a crucial part in the visual tracking process. In recent trackers, particle filter and sliding windows-based motion models have been widely used. Treating motion models as a sequence prediction problem, we can estimate the motion of objects using their trajectories. Moreover, it is possible to transfer the learned knowledge from annotated trajectories to new objects. Inspired by recent advance in deep learning for visual feature extraction and sequence prediction, we propose a trajectory predictor to learn prior knowledge from annotated trajectories and transfer it to predict the motion of target objects. In this predictor, convolutional neural networks extract the visual features of target objects. Long short-term memory model leverages the annotated trajectory priors as well as sequential visual information, which includes the tracked features and center locations of the target object, to predict the motion. Furthermore, to extend this method to videos in which it is difficult to obtain annotated trajectories, a dynamic weighted motion model that combines the proposed trajectory predictor with a random sampler is proposed. To evaluate the transfer performance of the proposed trajectory predictor, we annotated a real-world vehicle dataset. 
Experiment results on both this real-world vehicle dataset and an online tracker benchmark dataset indicate that the proposed method outperforms several state-of-the-art trackers.", "This paper presents to the best of our knowledge the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models. Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects. We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks. In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations. We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data -- as commonly encountered in robotics applications -- and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise.", "In this paper, we develop a new approach of spatially supervised recurrent convolutional neural networks for visual object tracking. Our recurrent convolutional network exploits the history of locations as well as the distinctive visual features learned by the deep neural networks. Inspired by recent bounding box regression methods for object detection, we study the regression capability of Long Short-Term Memory (LSTM) in the temporal domain, and propose to concatenate high-level visual features produced by convolutional networks with region information. 
In contrast to existing deep learning based trackers that use binary classification for region candidates, we use regression for direct prediction of the tracking locations both at the convolutional layer and at the recurrent unit. Our extensive experimental results and performance comparison with state-of-the-art tracking methods on challenging benchmark video tracking datasets shows that our tracker is more accurate and robust while maintaining low computational cost. For most test video sequences, our method achieves the best tracking performance, often outperforms the second best by a large margin.", "", "We present a novel approach to online multi-target tracking based on recurrent neural networks (RNNs). Tracking multiple objects in real-world scenes involves many challenges, including a) an a-priori unknown and time-varying number of targets, b) a continuous state estimation of all present targets, and c) a discrete combinatorial problem of data association. Most previous methods involve complex models that require tedious tuning of parameters. Here, we propose for the first time, an end-to-end learning approach for online multi-target tracking. Existing deep learning methods are not designed for the above challenges and cannot be trivially applied to the task. Our solution addresses all of the above points in a principled way. Experiments on both synthetic and real data show promising results obtained at 300 Hz on a standard CPU, and pave the way towards future research in this direction." ] }
1709.04666
2803203692
While generic object detection has achieved large improvements with rich feature hierarchies from deep nets, detecting small objects with poor visual cues remains challenging. Motion cues from multiple frames may be more informative for detecting such hard-to-distinguish objects in each frame. However, how to encode discriminative motion patterns, such as deformations and pose changes that characterize objects, has remained an open question. To learn them and thereby realize small object detection, we present a neural model called the Recurrent Correlational Network, where detection and tracking are jointly performed over a multi-frame representation learned through a single, trainable, and end-to-end network. A convolutional long short-term memory network is utilized for learning informative appearance change for detection, while learned representation is shared in tracking for enhancing its performance. In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements in detection performance over deep single-frame detectors and existing motion-based detectors. Furthermore, our network performs as well as state-of-the-art generic object trackers when it was evaluated as a tracker on the bird dataset.
Joint detection and tracking. The relationship between object detection and tracking is a long-standing problem in itself; before the advent of deep learning, it had only been explored with classical tools. In the tracking--learning--detection (TLD) framework @cite_55 , a trained detector enables long-term tracking by re-initializing trackers after the temporal disappearance of objects. Andriluka et al. use a single-frame part-based detector and shallow unsupervised learning based on temporal consistency @cite_40 . Tracking by associating detected bounding boxes @cite_27 is another popular approach. However, in this framework, recovering undetected objects is challenging because tracking is more akin to post-processing following detection than to joint detection and tracking.
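Association-based tracking of the kind cited above typically builds an affinity matrix between existing tracks and new detections and solves an assignment problem (e.g. with the Hungarian algorithm). A greedy IoU-based variant, shown here purely as an illustrative simplification of that step (the function names are made up, not from the cited papers):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily match tracks to detections by descending IoU.

    Returns (matches, unmatched_track_ids, unmatched_detection_ids).
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < thresh:
            break  # remaining pairs are even weaker
        if ti in used_t or di in used_d:
            continue  # one-to-one matching
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```

The limitation noted in the paragraph is visible here: a detection that never appears in `detections` (a miss by the detector) can never be recovered by the association step alone.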
{ "cite_N": [ "@cite_40", "@cite_55", "@cite_27" ], "mid": [ "2138302688", "", "2122469558" ], "abstract": [ "Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.", "", "We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints. At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated as a MAP problem and solved by the Hungarian algorithm. 
At the high level, entries, exits and scene occluders are estimated using the already computed tracklets, which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental results show a great improvement in performance compared to previous methods." ] }
1709.04666
2803203692
While generic object detection has achieved large improvements with rich feature hierarchies from deep nets, detecting small objects with poor visual cues remains challenging. Motion cues from multiple frames may be more informative for detecting such hard-to-distinguish objects in each frame. However, how to encode discriminative motion patterns, such as deformations and pose changes that characterize objects, has remained an open question. To learn them and thereby realize small object detection, we present a neural model called the Recurrent Correlational Network, where detection and tracking are jointly performed over a multi-frame representation learned through a single, trainable, and end-to-end network. A convolutional long short-term memory network is utilized for learning informative appearance change for detection, while learned representation is shared in tracking for enhancing its performance. In experiments with datasets containing images of scenes with small flying objects, such as birds and unmanned aerial vehicles, the proposed method yielded consistent improvements in detection performance over deep single-frame detectors and existing motion-based detectors. Furthermore, our network performs as well as state-of-the-art generic object trackers when it was evaluated as a tracker on the bird dataset.
Motion feature learning. Motion feature learning, and hence the use of recurrent nets, is studied more actively in video classification @cite_59 and action recognition @cite_56 . Studies have shown that LSTMs yield improvements in accuracy @cite_11 @cite_42 @cite_36 . For example, VideoLSTM @cite_18 uses inter-frame correlation to recognize actions with attention. However, on action recognition datasets, the networks may not fully utilize human motion features, as opposed to appearance, backgrounds, and contexts @cite_60 .
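The correlation-with-attention idea attributed to VideoLSTM above can be caricatured as follows. This sketch is an assumption for illustration, not the paper's exact formulation: it scores each spatial location by the correlation between the current feature map and the previous hidden state, then softmax-normalizes the scores into a spatial attention map.

```python
import numpy as np

def correlation_attention(feat, prev_h):
    """Reweight a C x H x W feature map by a spatial attention map
    derived from its per-location correlation with the previous
    hidden state (also C x H x W)."""
    score = np.sum(feat * prev_h, axis=0)   # H x W correlation scores
    score = score - score.max()             # numerical stability for exp
    att = np.exp(score)
    att = att / att.sum()                   # softmax over all locations
    return feat * att[None]                 # broadcast weights over channels
```

Locations whose features correlate with the recurrent state get up-weighted, which is one way motion-consistent regions can dominate the representation.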
{ "cite_N": [ "@cite_18", "@cite_60", "@cite_36", "@cite_42", "@cite_56", "@cite_59", "@cite_11" ], "mid": [ "2464235600", "2511671220", "2951183276", "2950966695", "24089286", "2308045930", "" ], "abstract": [ "We present a new architecture for end-to-end sequence learning of actions in video, we call VideoLSTM. Rather than adapting the video to the peculiarities of established recurrent or convolutional architectures, we adapt the architecture to fit the requirements of the video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video has a spatial layout. To exploit the spatial correlation we hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about the action content, but also guides better the attention towards the relevant spatio-temporal locations. We introduce motion-based attention. And finally, we demonstrate how the attention from VideoLSTM can be used for action localization by relying on just the action class label. Experiments and comparisons on challenging datasets for action classification and localization support our claims.", "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101).", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. 
We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. 
Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15 , 7 and 12 respectively in mAP.", "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "", "" ] }
1709.04893
2754770188
The success of convolutional networks in learning problems involving planar signals such as images is due to their ability to exploit the translation symmetry of the data distribution through weight sharing. Many areas of science and engineering deal with signals with other symmetries, such as rotation invariant data on the sphere. Examples include climate and weather science, astrophysics, and chemistry. In this paper we present spherical convolutional networks. These networks use convolutions on the sphere and rotation group, which results in rotational weight sharing and rotation equivariance. Using a synthetic spherical MNIST dataset, we show that spherical convolutional networks are very effective at dealing with rotationally invariant classification problems.
Much recent work has focused on exploiting symmetries for data-efficient deep learning. The theoretical investigations of @cite_9 point out the potential for significant improvements in sample efficiency from deep networks that incorporate geometrical prior knowledge. Successful implementations of equivariant neural networks include @cite_14 @cite_13 @cite_0 @cite_2 @cite_10 @cite_18 @cite_4 . Group-invariant scattering networks were explored by @cite_1 @cite_8 .
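The group-equivariant convolution idea surveyed above can be illustrated for the group of 90-degree rotations: correlating the input with all four rotations of a single filter yields a stack of output channels that rotate and cyclically permute when the input is rotated. A small single-channel numpy sketch (illustrative only; the function names are ours, not from the cited works):

```python
import numpy as np

def corr_valid(x, w):
    """Valid-mode 2-D cross-correlation."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def lift_p4(x, w):
    """Lifting layer of a p4-style group CNN: correlate the input with
    the four 90-degree rotations of one filter, producing one output
    channel per rotation."""
    return np.stack([corr_valid(x, np.rot90(w, r)) for r in range(4)])
```

Rotating the input by 90 degrees rotates each output map and cyclically shifts the four channels; this equivariance is the weight-sharing property such networks exploit.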
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_13" ], "mid": [ "2952054889", "2576915720", "", "", "2951770173", "2764342511", "2167383966", "2963299736", "2578436806", "2621199038" ], "abstract": [ "We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.", "Translating or rotating an input image should not affect the results of many computer vision tasks. Convolutional neural networks (CNNs) are already translation equivariant: input image translations produce proportionate feature map translations. This is not the case for rotations. Global rotation equivariance is typically sought through data augmentation, but patch-wise equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN exhibiting equivariance to patch-wise translation and 360-rotation. We achieve this by replacing regular CNN filters with circular harmonics, returning a maximal response and orientation for every receptive field patch. H-Nets use a rich, parameter-efficient and fixed computational complexity representation, and we show that deep feature maps within the network encode complicated rotational invariants. We demonstrate that our layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization. 
We also achieve state-of-the-art classification on rotated-MNIST, and competitive results on other benchmark challenges.", "", "", "Many classes of images exhibit rotational symmetry. Convolutional neural networks are sometimes trained using data augmentation to exploit this, but they are still required to learn the rotation equivariance properties from the data. Encoding these properties into the network architecture, as we are already used to doing for translation equivariance by using convolutional layers, could result in a more efficient use of the parameter budget by relieving the model from learning them. We introduce four operations which can be inserted into neural network models as layers, and which can be combined to make these models partially equivariant to rotations. They also enable parameter sharing across different orientations. We evaluate the effect of these architectural modifications on three datasets which exhibit rotational symmetry and demonstrate improved performance with smaller models.", "A rigid bushing is described suitable for reception on a shaft or in a bore which has formed at an axial end thereof an annular sealing formation which extends integrally from the end of the bushing angularly so as to form a sealing surface to engage either a shaft or a bore, sealing being accomplished by the application of axial pressure upon the bushing.", "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. 
State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.", "Deep Convolution Neural Networks (DCNNs) are capable of learning unprecedentedly effective image representations. However, their ability in handling significant local and global image rotations remains limited. In this paper, we propose Active Rotating Filters (ARFs) that actively rotate during convolution and produce feature maps with location and orientation explicitly encoded. An ARF acts as a virtual filter bank containing the filter itself and its multiple unmaterialised rotated versions. During back-propagation, an ARF is collectively updated using errors from all its rotated versions. DCNNs using ARFs, referred to as Oriented Response Networks (ORNs), can produce within-class rotation-invariant deep features while maintaining inter-class discrimination for classification tasks. The oriented response produced by ORNs can also be used for image and object orientation estimation tasks. Over multiple state-of-the-art DCNN architectures, such as VGG, ResNet, and STN, we consistently observe that replacing regular filters with the proposed ARFs leads to significant reduction in the number of network parameters and improvement in classification performance. We report the best results on several commonly used benchmarks.", "We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST digit summation, where in both cases the output is invariant to permutations of the input.
In a semi-supervised setting, where the goal is make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side-information.", "Filters in convolutional networks are typically parameterized in a pixel basis, that does not take prior knowledge about the visual world into account. We investigate the generalized notion of frames designed with image properties in mind, as alternatives to this parametrization. We show that frame-based ResNets and Densenets can improve performance on Cifar-10+ consistently, while having additional pleasant properties like steerability. By exploiting these transformation properties explicitly, we arrive at dynamic steerable blocks. They are an extension of residual blocks, that are able to seamlessly transform filters under pre-defined transformations, conditioned on the input at training and inference time. Dynamic steerable blocks learn the degree of invariance from data and locally adapt filters, allowing them to apply a different geometrical variant of the same filter to each location of the feature map. When evaluated on the Berkeley Segmentation contour detection dataset, our approach outperforms all competing approaches that do not utilize pre-training. Our results highlight the benefits of image-based regularization to deep networks." ] }
1709.04885
2756397983
Large computer networks are an essential part of modern technology, and quite often information needs to be broadcast to all the computers in the network. If all computers work perfectly all the time, this is simple. Suppose, however, that some of the computers fail occasionally. What is the fastest way to ensure that with high probability all working computers get the information? In this paper, we analyze three algorithms to do so. All algorithms terminate in logarithmic time, assuming computers fail with probability @math independently of each other. We prove that the third algorithm, which runs in time @math , is asymptotically optimal.
The topic of broadcasting information to all nodes of a network has been extensively studied for many different models. For a survey of the different models, see Pelc @cite_4 . Gasieniec and Pelc @cite_2 gave an algorithm working in @math (under slightly different assumptions than ours), and Diks and Pelc @cite_6 improved this to @math (see also @cite_15 ). Results for some variations on the model can be found in @cite_8 @cite_9 @cite_1 @cite_12 @cite_16 . In all of these the analysis is up to a constant factor, whereas we determine the optimal running time up to @math . For examples of real-world systems using such broadcasting algorithms, see @cite_7 @cite_11 . Frieze and Grimmett @cite_3 studied the running time of the naive algorithm when there are no faults ( @math ). Their result was further refined by Pittel @cite_5 . Our Theorem generalizes these results to all @math . Doerr, Huber and Levavi @cite_13 analyzed the naive algorithm in the case of faulty links, instead of nodes, obtaining the same running time as in our Theorem .
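The naive algorithm analyzed by Frieze and Grimmett is the push protocol: in each round, every informed node calls one node chosen uniformly at random and passes on the rumor. A simulation sketch, under the assumption that nodes fail independently before the protocol starts and that failed nodes neither send nor receive (the function name and failure model details are illustrative):

```python
import random

def push_broadcast(n, q=0.0, seed=0):
    """Simulate naive push rumor spreading on n nodes.

    Each node independently fails with probability q; failed nodes
    never send or receive. Returns the number of rounds until every
    working node is informed, or None if the source itself failed.
    """
    rng = random.Random(seed)
    working = [rng.random() >= q for _ in range(n)]
    if not working[0]:
        return None  # source failed; nothing can be broadcast
    informed = {0}
    target = sum(working)  # only working nodes need the rumor
    rounds = 0
    while len(informed) < target:
        rounds += 1
        for u in list(informed):
            v = rng.randrange(n)       # call a uniformly random node
            if working[v]:
                informed.add(v)        # calls to failed nodes are wasted
    return rounds
```

Since the informed set can at most double per round, the protocol needs at least log2(n) rounds, consistent with the logarithmic running times discussed above.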
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_3", "@cite_2", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2063105299", "1857436971", "", "1980048622", "2061408752", "2173651310", "2059014957", "2056496732", "1984977694", "2102879646", "", "1508237135", "2052326735", "" ], "abstract": [ "Broadcasting and gossiping are fundamental tasks in network communication. In broadcasting, or one-to-all communication, information originally held in one node of the network (called the source) must be transmitted to all other nodes. In gossiping, or all-to-all communication, every node holds a message which has to be transmitted to all other nodes. As communication networks grow in size, they become increasingly vulnerable to component failures. Thus, capabilities for fault-tolerant broadcasting and gossiping gain importance. The present paper is a survey of the fast-growing area of research investigating these capabilities. We focus on two most important efficiency measures of broadcasting and gossiping algorithms: running time and number of elementary transmissions required by the communication process. We emphasize the unifying thread in most results from the research in fault-tolerant communication: the trade-offs between efficiency of communication schemes and their fault-tolerance. © 1996 John Wiley & Sons, Inc.", "Management of forthcoming exascale clusters requires frequent collection of run-time information about the nodes and the running applications. This paper presents a new paradigm for providing online information to the management system of scalable clusters, consisting of a large number of nodes and one or more masters that manage these nodes. We describe the details of resilient gossip algorithms for sharing local information within subsets of nodes and for sending global information to a master, which holds information on all the nodes. 
The presented algorithms are decentralized, scalable and resilient, working well even when some nodes fail, without needing any recovery protocol. The paper gives formal expressions for approximating the average ages of the local information at each node and the information collected by the master. It then shows that these results closely match the results of simulations and measurements on a real cluster. The paper also investigates the resilience of the algorithms and the impact on the average age when nodes or masters fail. The main outcome of this paper is that partitioning of large clusters can improve the quality of information available to the management system without increasing the number of messages per node. Copyright © 2015 John Wiley & Sons, Ltd.", "", "A variant of the well-known gossip problem is studied. Each of n members of a communication network has a piece of information that should be made known to everybody else. This is to be done by placing a sequence of two-party phone calls along the lines of the network. During each call, the two participants exchange all information they currently have, in a unit of time. It is assumed that calls fail independently with fixed probability @math and that no information is exchanged during a failed call. For communication networks of bounded degree, efficient schemes of calls are shown that assure complete communication with probability converging to 1 as n grows. Both the number of calls and the time they use are of minimal order.", "Broadcasting is a process of transmitting a message held in one node of a communication network to all other nodes. Links of the network are subject to randomly and independently distributed faults with probability0", "We consider broadcasting among n processors, f of which can be faulty. A fault-free processor, called the source, holds a piece of information which has to be transmitted to all other fault-free processors. 
We assume that the fraction f/n of faulty processors is bounded by a constant γ<1 . Transmissions are fault free. Faults are assumed to be of the crash type: faulty processors do not send or receive messages. We use the whispering model: pairs of processors communicating in one round must form a matching. A fault-free processor sending a message to another processor becomes aware of whether this processor is faulty or fault free and can adapt future transmissions accordingly. The main result of the paper is a broadcasting algorithm working in O(log n) rounds and using O(n) messages of logarithmic size, in the worst case. This is an improvement of the result from [17] where O((log n)^2) rounds were used. Our method also gives the first algorithm for adaptive distributed fault diagnosis in O(log n) rounds.", "We consider the problem of finding the shortest distance between all pairs of vertices in a complete digraph on n vertices, whose arc-lengths are non-negative random variables. We describe an algorithm which solves this problem in O(n(m+n log n)) expected time, where m is the expected number of arcs with finite length. If m is small enough, this represents a small improvement over the bound in Bloniarz [3]. We consider also the case when the arc-lengths are random variables which are independently distributed with distribution function F, where F(0)=0 and F is differentiable at 0; for this case, we describe an algorithm which runs in O(n^2 log n) expected time. In our treatment of the shortest-path problem we consider the following problem in combinatorial probability theory. A town contains n people, one of whom knows a rumour. At the first stage he tells someone chosen randomly from the town; at each stage, each person who knows the rumour tells someone else, chosen randomly from the town and independently of all other choices. Let Sn be the number of stages before the whole town knows the rumour. 
We show that Sn / log2 n → 1 + log_e 2 in probability as n → ∞, and estimate the probabilities of large deviations in Sn.", "We consider broadcasting from a fault-free source to all nodes of a completely connected n-node network in the presence of k faulty nodes. Every node can communicate with at most one other node in a unit of time and during this period every pair of communicating nodes can exchange information packets. Faulty nodes cannot send information. Broadcasting is adaptive, i.e., a node schedules its next communication on the basis of information currently available to it. Assuming that the fraction of faulty nodes is bounded by a constant smaller than 1, we construct a broadcasting algorithm working in worst-case time O(log^2 n).", "Suppose that one of n people knows a rumor. At the first stage, he passes the rumor to someone chosen at random; at each stage, each person already informed (“knower”) communicates the rumor to a person chosen at random and independently of all other past and present choices. Denote by @math the random number of stages before everybody is informed. How large is @math typically? Frieze and Grimmett, who introduced this problem, proved that, in probability, @math . In this paper we show that, in fact, @math in probability. Our proof demonstrates that the number @math of persons informed after t stages obeys very closely, with high probability, a deterministic equation @math , @math . A case when each knower passes the rumor to several members at every stage is also discussed.", "We revisit the classic problem of spreading a piece of information in a group of @math n fully connected processors. 
By suitably adding a small dose of randomness to the protocol of Gasieniec and Pelc (Parallel Comput 22:903---912, 1996), we derive for the first time protocols that (i) use a linear number of messages, (ii) are correct even when an arbitrary number of adversarially chosen processors does not participate in the process, and (iii) with high probability have the asymptotically optimal runtime of @math O(logn) when at least an arbitrarily small constant fraction of the processors are working. In addition, our protocols do not require that the system is synchronized nor that all processors are simultaneously woken up at time zero, they are fully based on push-operations, and they do not need an a priori estimate on the number of failed nodes. Our protocols thus overcome the typical disadvantages of the two known approaches, algorithms based on random gossip (typically needing a large number of messages due to their unorganized nature) and algorithms based on fair workload splitting (which are either not time-efficient or require intricate preprocessing steps plus synchronization).", "", "Randomized rumor spreading is a classical protocol to disseminate information across a network. At SODA 2008, a quasirandom version of this protocol was proposed and competitive bounds for its run-time were proven. This prompts the question: to what extent does the quasirandom protocol inherit the second principal advantage of randomized rumor spreading, namely robustness against transmission failures? In this paper, we present a result precise up to (1+ -o(1)) factors. We limit ourselves to the network in which the vertices form a clique. Run-times accurate to their leading constants are unknown for all other non-trivial networks. 
We show that if each transmission reaches its destination with probability p ∈ (0,1], after (1+ε)( (1/log_2(1+p)) log_2 n + (1/p) ln n ) rounds, where ε>0 is fixed, the quasirandom protocol has informed all n nodes in the network with probability at least 1 - n^{-pε/40}. Note that this is slightly faster than the intuitively natural 1/p factor increase over the run-time of approximately log_2 n + ln n for the non-corrupted case. We also provide a corresponding lower bound for the classical model. This demonstrates that the quasirandom model is at least as robust as the fully random model despite the greatly reduced degree of independent randomness.", "We consider broadcasting a message from one node of a tree to all other nodes. In the presence of up to k link failures the tree becomes disconnected, and only nodes in the connected component C containing the source can be informed. The maximum ratio between the time used by a broadcasting scheme B to inform C and the optimal time to inform C, taken over all components C yielded by configurations of at most k faults, is the k-vulnerability of B. This is the maximum slowdown incurred by B due to the lack of a priori knowledge of fault location, for at most k faults. This measure of fault tolerance is similar to the competitive factor of on-line algorithms: in both cases, the performance of an algorithm lacking some crucial information is compared to the performance of an “off-line” algorithm, one that is given this information as input. It is also the first known tool to measure and compare fault tolerance of broadcasting schemes in trees. We seek broadcasting schemes with low vulnerability, working for tree networks. It turns out that schemes that give the best broadcasting time in a fault-free environment may have very high vulnerability, i.e., poor fault tolerance, for some trees. 
The main result of this paper is an algorithm that, given an arbitrary tree T and an integer k, computes a broadcasting scheme B with lowest possible k-vulnerability among all schemes working for T. Our algorithm has running time O(kn^2 + n^2 log n), where n is the size of the tree. We also give an algorithm to find a “universally fault-tolerant” broadcasting scheme in a tree T: one that approximates the lowest possible k-vulnerability, for all k simultaneously.", "" ] }
1709.04417
2752808676
Due to the flexibility, affordability and portability of cloud storage, individuals and companies nowadays envisage cloud storage as one of the preferred storage media. This attracts the eyes of cyber criminals, since much valuable information, such as user credentials and private customer records, is stored in the cloud. There are many ways for criminals to compromise cloud services, ranging from non-technical attack methods, such as social engineering, to deploying advanced malware. Therefore, it is vital for cyber forensics examiners to be equipped and informed about the best methods for investigating different cloud platforms. In this chapter, using pCloud (an extensively used online cloud storage service) as a case study, we elaborate on the different kinds of artefacts retrievable during a forensics examination. We carried out our experiments on four different virtual machines running four popular operating systems: 64-bit Windows 8, Ubuntu 14.04.1 LTS, Android 4.4.2, and iOS 8.1. Moreover, we examined cloud remnants of two different web browsers on Windows: Internet Explorer and Google Chrome. We believe that our study will promote awareness among digital forensic examiners on how to conduct cloud storage forensics examinations.
Along the same line of study, Hale @cite_20 analyzed the digital artefacts remaining on a computer after Amazon Cloud Drive has been accessed or manipulated. They were able to recover several pieces of information, such as the installation path and upload/download operations. @cite_31 presented a new method to analyze the digital artefacts left on all accessible devices, such as mobile phones (e.g., iPhone and Android smartphones) and desktop systems running different OSs (e.g., Windows and Mac), while using Amazon S3, Google Docs, Dropbox, and Evernote. In contrast to most cloud storage services, which are based on open-source platforms, Apple users have their own dedicated cloud storage service called iCloud. Oestreicher @cite_62 investigated the iCloud service in particular, in order to find leftover digital droplets on a native Mac OS X system during synchronization with the cloud. There are also various research studies on several other cloud storage services, which we summarize in Table . We refer the interested reader to @cite_63 @cite_25 for a comprehensive survey in this regard.
{ "cite_N": [ "@cite_62", "@cite_63", "@cite_31", "@cite_25", "@cite_20" ], "mid": [ "2039687910", "2034493311", "1991458033", "", "2040656613" ], "abstract": [ "The acquisition of data stored on cloud services has become increasingly important to digital forensic investigations. Apple, Inc. continues to expand the capabilities of its cloud service, iCloud. As such, it is critical to determine an effective means for forensic acquisition of data from this service and its effect on the original file data and metadata.This research examined files acquired from the iCloud service via the native Mac OS X system synchronization with the service. The goal was to determine the operating system locations of iCloud-synched files. Once located, the secondary goal was to determine if the file hash values match those of the original files and whether file metadata, particularly timestamps, are altered.", "As cloud computing becomes more prevalent, there is a growing need for forensic investigations involving cloud technologies. The field of cloud forensics seeks to address the challenges to digital forensics presented by cloud technologies. This article reviews current research in the field of cloud forensics, with a focus on \"forensics in the cloud\"--that is, cloud computing as an evidence source for forensic investigations.", "Abstract The demand for cloud computing is increasing because of the popularity of digital devices and the wide use of the Internet. Among cloud computing services, most consumers use cloud storage services that provide mass storage. This is because these services give them various additional functions as well as storage. It is easy to access cloud storage services using smartphones. With increasing utilization, it is possible for malicious users to abuse cloud storage services. Therefore, a study on digital forensic investigation of cloud storage services is necessary. 
This paper proposes a new procedure for investigating and analyzing the artifacts of all accessible devices, such as Windows systems, Mac systems, iPhones, and Android smartphones.", "", "Cloud storage is becoming increasingly popular among individuals and businesses. Amazon Cloud Drive is a flavor of cloud-based storage that allows users to transfer files to and from multiple computers, with or without the use of a separate application that must be installed on the user's machine. This paper discusses the digital artifacts left behind after an Amazon Cloud Drive has been accessed or manipulated from a computer. Methods available to a forensic examiner that can be used to determine file transfers that occurred to and from an Amazon Cloud Drive on a computer, as well as retrieving relevant Cloud Drive artifacts from unallocated space is discussed in this paper. Two Perl scripts are also introduced to help automate the process of retrieving information from Amazon Cloud Drive artifacts." ] }
1709.04595
2951364653
Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and cannot produce cropping regions of arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image, which is very time-consuming. Motivated by these challenges, we first formulate aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. In particular, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experimental results show that our method achieves state-of-the-art performance with far fewer candidate windows and much less time compared with previous weakly supervised methods.
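The sequential decision-making formulation in the abstract can be illustrated with a toy agent: the state is the current crop window, actions shift or shrink it, and the reward is the change in an aesthetic score. Everything below — the stand-in scorer, the action set, and the greedy policy — is a hypothetical sketch, not the A2-RL implementation (which uses a learned aesthetic reward and an actor-critic network):

```python
ACTIONS = ["left", "right", "up", "down", "shrink", "stop"]

def aesthetic_score(win, subject=(50, 50)):
    # Stand-in scorer: higher when the window centre is near a
    # hypothetical subject point; a real system would use a trained model.
    x, y, w, h = win
    cx, cy = x + w / 2, y + h / 2
    return -abs(cx - subject[0]) - abs(cy - subject[1])

def step(win, action, delta=5):
    # Apply one crop-adjustment action to the window (x, y, w, h).
    x, y, w, h = win
    if action == "left":
        x -= delta
    elif action == "right":
        x += delta
    elif action == "up":
        y -= delta
    elif action == "down":
        y += delta
    elif action == "shrink":
        w -= delta
        h -= delta
    return (x, y, max(w, delta), max(h, delta))

def greedy_crop(win, max_steps=50):
    """Greedy agent: repeatedly take the action with the best one-step
    reward (score improvement); stop when no action improves the score."""
    for _ in range(max_steps):
        scored = [(aesthetic_score(step(win, a)) - aesthetic_score(win), a)
                  for a in ACTIONS if a != "stop"]
        gain, best = max(scored)
        if gain <= 0:
            break
        win = step(win, best)
    return win

print(greedy_crop((0, 0, 60, 60)))  # window centre converges to (50, 50)
```

The greedy policy here is only a baseline; the point of the RL formulation is that a learned policy can trade off short-term reward against the final composition.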
Image cropping aims at improving the composition of images, which is very important to their aesthetic quality. There are a number of previous works on aesthetic quality assessment. Many early works @cite_28 @cite_27 @cite_30 @cite_2 focus on designing handcrafted features based on intuitions about human perception or on photographic rules. Recently, thanks to the fast development of deep learning and newly proposed large-scale datasets @cite_18 , many new works @cite_12 @cite_13 @cite_33 accomplish aesthetic quality assessment with convolutional neural networks.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_33", "@cite_28", "@cite_27", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "1997095443", "", "2529088810", "2170658603", "1511924373", "2080754665", "2467531333", "2417288846" ], "abstract": [ "Automatically assessing photo quality from the perspective of visual aesthetics is of great interest in high-level vision research and has drawn much attention in recent years. In this paper, we propose content-based photo quality assessment using regional and global features. Under this framework, subject areas, which draw the most attentions of human eyes, are first extracted. Then regional features extracted from subject areas and the background regions are combined with global features to assess the photo quality. Since professional photographers may adopt different photographic techniques and may have different aesthetic criteria in mind when taking different types of photos (e.g. landscape versus portrait), we propose to segment regions and extract visual features in different ways according to the categorization of photo content. Therefore we divide the photos into seven categories based on their content and develop a set of new subject area extraction methods and new visual features, which are specially designed for different categories. This argument is supported by extensive experimental comparisons of existing photo quality assessment approaches as well as our new regional and global features over different categories of photos. Our new features significantly outperform the state-of-the-art methods. Another contribution of this work is to construct a large and diversified benchmark database for the research of photo quality assessment. It includes 17, 613 photos with manually labeled ground truth.", "", "This article reviews recent computer vision techniques used in the assessment of image aesthetic quality. 
Image aesthetic assessment aims at computationally distinguishing high-quality from low-quality photos based on photographic rules, typically in the form of binary classification or quality scoring. A variety of approaches has been proposed in the literature to try to solve this challenging problem. In this article, we summarize these approaches based on visual feature types (hand-crafted features and deep features) and evaluation criteria (data set characteristics and evaluation metrics). The main contributions and novelties of the reviewed approaches are highlighted and discussed. In addition, following the emergence of deep-learning techniques, we systematically evaluate recent deep-learning settings that are useful for developing a robust deep model for aesthetic scoring.", "We propose a principled method for designing high level features forphoto quality assessment. Our resulting system can classify between high quality professional photos and low quality snapshots. Instead of using the bag of low-level features approach, we first determine the perceptual factors that distinguish between professional photos and snapshots. Then, we design high level semantic features to measure the perceptual differences. We test our features on a large and diverse dataset and our system is able to achieve a classification rate of 72 on this difficult task. Since our system is able to achieve a precision of over 90 in low recall scenarios, we show excellent results in a web image search application.", "Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities of photographs is a highly subjective task. Hence, there is no unanimously agreed standard for measuring aesthetic value. In spite of the lack of firm rules, certain features in photographic images are believed, by many, to please humans more than certain others. 
In this paper, we treat the challenge of automatically inferring aesthetic quality of pictures using their visual content as a machine learning problem, with a peer-rated online photo sharing Website as data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between emotions which pictures arouse in people, and their low-level content. Potential applications include content-based image retrieval and digital photography.", "With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. 
We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”.", "Photo aesthetics assessment is challenging. Deep convolutional neural network (ConvNet) methods have recently shown promising results for aesthetics assessment. The performance of these deep ConvNet methods, however, is often compromised by the constraint that the neural network only takes the fixed-size input. To accommodate this requirement, input images need to be transformed via cropping, scaling, or padding, which often damages image composition, reduces image resolution, or causes image distortion, thus compromising the aesthetics of the original images. In this paper, we present a composition-preserving deep Con-vNet method that directly learns aesthetics features from the original input images without any image transformations. Specifically, our method adds an adaptive spatial pooling layer upon the regular convolution and pooling layers to directly handle input images with original sizes and aspect ratios. To allow for multi-scale feature extraction, we develop the Multi-Net Adaptive Spatial Pooling ConvNet architecture which consists of multiple sub-networks with different adaptive spatial pooling sizes and leverage a scene–based aggregation layer to effectively combine the predictions from multiple sub-networks. Our experiments on the large-scale aesthetics assessment benchmark (AVA [29]) demonstrate that our method can significantly improve the state-of-the-art results in photo aesthetics assessment.", "Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. 
In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics are directly modeled in the loss function. Our model incorporates joint learning of meaningful photographic attributes and image content information which can help regularize the complicated photo aesthetics rating problem." ] }
1709.04595
2951364653
Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and cannot produce cropping regions of arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image, which is very time-consuming. Motivated by these challenges, we first formulate aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. In particular, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experimental results show that our method achieves state-of-the-art performance with far fewer candidate windows and much less time compared with previous weakly supervised methods.
Previous automatic image cropping methods can be divided into two classes: attention-based and aesthetics-based methods. The basic approach of attention-based methods @cite_16 @cite_29 @cite_14 @cite_8 is to find the most visually salient regions in the original images. Attention-based methods can find cropping windows that draw more attention from people, but they may not generate very pleasing cropping windows, because they hardly consider image composition @cite_34 . Aesthetics-based methods, in contrast, aim to find the most pleasing cropping windows in the original images. Some of these works @cite_20 @cite_32 use aesthetic quality classifiers to discriminate the quality of candidate windows. Other works use RankSVM @cite_34 or RankNet @cite_11 to grade each candidate window. There are also change-based methods @cite_19 , which compare original images with cropped images so as to discard distracting regions and retain high-quality ones. Image retargeting techniques @cite_6 @cite_17 , which adjust the aspect ratio of an image to fit a target aspect ratio without discarding important content, are also relevant to our task.
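The attention-based search described above — finding the window that maximizes summed saliency — can be scored in O(1) per window with a summed-area (integral) table, as in @cite_8 . A minimal sketch, assuming a precomputed saliency map and a fixed window size (the function names and toy map are ours, not from the cited works):

```python
import numpy as np

def best_crop(saliency, win_h, win_w):
    """Find the win_h x win_w window maximising summed saliency.

    A summed-area (integral) table lets every candidate window be
    scored with four corner lookups instead of re-summing its pixels.
    Returns (top, left, score).
    """
    H, W = saliency.shape
    ii = np.zeros((H + 1, W + 1))           # zero-padded integral image
    ii[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)
    # Window sums for all top-left positions at once (vectorised).
    sums = (ii[win_h:, win_w:] - ii[:H - win_h + 1, win_w:]
            - ii[win_h:, :W - win_w + 1] + ii[:H - win_h + 1, :W - win_w + 1])
    top, left = np.unravel_index(np.argmax(sums), sums.shape)
    return int(top), int(left), float(sums[top, left])

# Toy saliency map with a bright 2x2 blob at rows 3-4, cols 5-6.
sal = np.zeros((8, 10))
sal[3:5, 5:7] = 1.0
print(best_crop(sal, 4, 4))  # the best 4x4 window covers the blob, score 4.0
```

The sliding-window cost the paper criticises comes from evaluating an expensive model per window; with a cheap additive score like summed saliency, the integral-image trick makes the exhaustive search itself inexpensive.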
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_29", "@cite_32", "@cite_34", "@cite_6", "@cite_17", "@cite_19", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "", "2467818129", "", "2082335776", "2575939610", "2744093089", "2613969344", "1975521048", "2060502770", "", "2586372171" ], "abstract": [ "", "Attention based automatic image cropping aims at preserving the most visually important region in an image. A common task in this kind of method is to search for the smallest rectangle inside which the summed attention is maximized. We demonstrate that under appropriate formulations, this task can be achieved using efficient algorithms with low computational complexity. In a practically useful scenario where the aspect ratio of the cropping rectangle is given, the problem can be solved with a computational complexity linear to the number of image pixels. We also study the possibility of multiple rectangle cropping and a new model facilitating fully automated image cropping.", "", "Cropping is one of the most common tasks in image editing for improving the aesthetic quality of a photograph. In this paper, we propose a new, aesthetic photo cropping system which combines three models: visual composition, boundary simplicity, and content preservation. The visual composition model measures the quality of composition for a given crop. Instead of manually defining rules or score functions for composition, we learn the model from a large set of well-composed images via discriminative classifier training. The boundary simplicity model measures the clearness of the crop boundary to avoid object cutting-through. The content preservation model computes the amount of salient information kept in the crop to avoid excluding important content. 
By assigning a hard lower bound constraint on the content preservation and linearly combining the scores from the visual composition and boundary simplicity models, the resulting system achieves significant improvement over recent cropping methods in both quantitative and qualitative evaluation.", "Automatic photo cropping is an important tool for improving visual quality of digital photos without resorting to tedious manual selection. Traditionally, photo cropping is accomplished by determining the best proposal window through visual quality assessment or saliency detection. In essence, the performance of an image cropper highly depends on the ability to correctly rank a number of visually similar proposal windows. Despite the ranking nature of automatic photo cropping, little attention has been paid to learning-to-rank algorithms in tackling such a problem. In this work, we conduct an extensive study on traditional approaches as well as ranking-based croppers trained on various image features. In addition, a new dataset consisting of high quality cropping and pairwise ranking annotations is presented to evaluate the performance of various baselines. The experimental results on the new dataset provide useful insights into the design of better photo cropping algorithms.", "This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image. Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. 
We demonstrate the effectiveness of our proposed method for a retargeting application with insightful analyses.", "Image retargeting techniques that adjust images into different sizes have attracted much attention recently. Objective quality assessment (OQA) of image retargeting results is often desired to automatically select the best results. Existing OQA methods output an absolute score for each retargeted image and use these scores to compare different results. Observing that it is challenging even for human subjects to give consistent scores for retargeting results of different source images, in this paper we propose a learning-based OQA method that predicts the ranking of a set of retargeted images with the same source image. We show that this more manageable task helps achieve more consistent prediction to human preference and is sufficient for most application scenarios. To compute the ranking, we propose a simple yet efficient machine learning framework that uses a General Regression Neural Network (GRNN) to model a combination of seven elaborate OQA metrics. We then propose a simple scheme to transform the relative scores output from GRNN into a global ranking. We train our GRNN model using human preference data collected in the elaborate RetargetMe benchmark and evaluate our method based on the subjective study in RetargetMe. Moreover, we introduce a further subjective benchmark to evaluate the generalizability of different OQA methods. Experimental results demonstrate that our method outperforms eight representative OQA methods in ranking prediction and has better generalizability to different datasets.", "Image cropping is a common operation used to improve the visual quality of photographs. In this paper, we present an automatic cropping technique that accounts for the two primary considerations of people when they crop: removal of distracting content, and enhancement of overall composition. 
Our approach utilizes a large training set consisting of photos before and after cropping by expert photographers to learn how to evaluate these two factors in a crop. In contrast to the many methods that exist for general assessment of image quality, ours specifically examines differences between the original and cropped photo in solving for the crop parameters. To this end, several novel image features are proposed to model the changes in image content and composition when a crop is applied. Our experiments demonstrate improvements of our method over recent cropping algorithms on a broad range of images.", "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.", "", "Photo composition is an important factor affecting the aesthetics in photography. However, it is a highly challenging task to model the aesthetic properties of good compositions due to the lack of globally applicable rules to the wide variety of photographic styles. Inspired by the thinking process of photo taking, we formulate the photo composition problem as a view finding process which successively examines pairs of views and determines their aesthetic preferences. 
We further exploit the rich professional photographs on the web to mine unlimited high-quality ranking samples and demonstrate that an aesthetics-aware deep ranking network can be trained without explicitly modeling any photographic rules. The resulting model is simple and effective in terms of its architectural design and data sampling method. It is also generic since it naturally learns any photographic rules implicitly encoded in professional photographs. The experiments show that the proposed view finding network achieves state-of-the-art performance with sliding window search strategy on two image cropping datasets." ] }
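The saliency-driven window scoring described in the first abstract of this record (content preservation with a hard lower-bound constraint, linearly combined with a boundary-simplicity term) can be sketched as follows. The window size, `min_content` threshold and `alpha` weight are illustrative assumptions, not values from any of the cited papers; the sketch assumes a nonzero saliency map and window sides of at least 3 pixels.

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero first row/column for O(1) window sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def window_sum(ii, top, left, h, w):
    return ii[top + h, left + w] - ii[top, left + w] - ii[top + h, left] + ii[top, left]

def best_crop(saliency, h, w, min_content=0.4, alpha=0.7):
    """Scan all h-by-w windows; discard those keeping less than `min_content`
    of total saliency (the hard lower-bound constraint), then rank the rest
    by a linear combination of content preservation and boundary simplicity
    (little saliency sitting on the crop border)."""
    H, W = saliency.shape
    ii = integral(saliency)
    total = ii[-1, -1]
    best, best_score = None, -np.inf
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            inside = window_sum(ii, top, left, h, w)
            content = inside / total
            if content < min_content:          # hard lower-bound constraint
                continue
            inner = window_sum(ii, top + 1, left + 1, h - 2, w - 2)
            border = inside - inner            # saliency on the border ring
            simplicity = 1.0 - border / max(inside, 1e-9)
            score = alpha * content + (1 - alpha) * simplicity
            if score > best_score:
                best, best_score = (top, left), score
    return best, best_score
```

On a synthetic map with a single salient blob, the best window is the one whose interior exactly covers the blob, since that maximizes both terms at once.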
1709.04595
2951364653
Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and precludes cropping regions of arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image, which is very time-consuming. Motivated by these challenges, we first formulate aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. Particularly, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experimental results show that our method achieves state-of-the-art performance with far fewer candidate windows and much less time compared with previous weakly supervised methods.
RL-based strategies have been successfully applied in many domains of computer vision, including image captioning @cite_26 , object detection @cite_25 @cite_4 and visual relationship detection @cite_7 . The active object localization method @cite_25 achieves the best performance among detection algorithms that do not use region proposals. The tree-RL method @cite_4 uses RL to obtain region proposals and achieves comparable results with far fewer region proposals than RPN @cite_9 . The above RL-based object detection methods use bounding boxes as their supervision; in contrast, our framework uses only aesthetics information as supervision, which requires less label information. To the best of our knowledge, we are the first to put forward a deep reinforcement learning based method for automatic image cropping.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_9", "@cite_25" ], "mid": [ "2952591111", "", "2963650529", "2953106684", "2179488730" ], "abstract": [ "Image captioning is a challenging problem owing to the complexity in understanding the image content and diverse ways of describing it in natural language. Recent advances in deep neural networks have substantially improved the performance of this task. Most state-of-the-art approaches follow an encoder-decoder framework, which generates captions using a sequential recurrent prediction model. However, in this paper, we introduce a novel decision-making framework for image captioning. We utilize a \"policy network\" and a \"value network\" to collaboratively generate captions. The policy network serves as a local guidance by providing the confidence of predicting the next word according to the current state. Additionally, the value network serves as a global and lookahead guidance by evaluating all possible extensions of the current state. In essence, it adjusts the goal of predicting the correct words towards the goal of generating captions similar to the ground truth captions. We train both networks using an actor-critic reinforcement learning model, with a novel reward defined by visual-semantic embedding. Extensive experiments and analyses on the Microsoft COCO dataset show that the proposed framework outperforms state-of-the-art approaches across different evaluation metrics.", "", "Computers still struggle to understand the interdependency of objects in the scene as a whole, e.g., relations between objects or their attributes. Existing methods often ignore global context cues capturing the interactions among different object instances, and can only recognize a handful of types by exhaustively training individual detectors for all possible relationships. 
To capture such global interdependency, we propose a deep Variation-structured Reinforcement Learning (VRL) framework to sequentially discover object relationships and attributes in the whole image. First, a directed semantic action graph is built using language priors to provide a rich and compact representation of semantic correlations between object categories, predicates, and attributes. Next, we use a variation-structured traversal over the action graph to construct a small, adaptive action set for each step based on the current state and historical actions. In particular, an ambiguity-aware object mining scheme is used to resolve semantic ambiguity among object categories that the object detector fails to distinguish. We then make sequential predictions using a deep RL framework, incorporating global context cues and semantic embeddings of previously extracted phrases in the state vector. Our experiments on the Visual Relationship Detection (VRD) dataset and the large-scale Visual Genome dataset validate the superiority of VRL, which can achieve significantly better detection results on datasets involving thousands of relationship and attribute types. We also demonstrate that VRL is able to predict unseen types embedded in our action graph by learning correlations on shared graph nodes.
The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization." ] }
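The sequential decision-making formulation of cropping described in this record can be sketched as below. The action set (shrink one side per step, or stop) and the hand-made "aesthetics" score are illustrative assumptions, not A2-RL's actual learned reward or action space, and a greedy one-step rollout stands in for the trained actor-critic agent.

```python
import numpy as np

def step(window, action):
    """Apply one shrink action to a (top, left, bottom, right) crop window."""
    top, left, bottom, right = window
    dh = max(1, (bottom - top) // 20)
    dw = max(1, (right - left) // 20)
    if action == 'top':
        top += dh
    elif action == 'bottom':
        bottom -= dh
    elif action == 'left':
        left += dw
    elif action == 'right':
        right -= dw
    return (top, left, bottom, right)

def aesthetics(saliency, window):
    """Toy stand-in for a learned aesthetics reward: fraction of saliency
    kept, minus an area penalty that favors tight crops."""
    top, left, bottom, right = window
    kept = saliency[top:bottom, left:right].sum() / saliency.sum()
    area = (bottom - top) * (right - left) / saliency.size
    return kept - 0.5 * area

def greedy_crop(saliency, max_steps=100):
    """Greedy rollout: at each step take the action whose next state scores
    best; stop when no action improves the current score."""
    H, W = saliency.shape
    window = (0, 0, H, W)
    for _ in range(max_steps):
        current = aesthetics(saliency, window)
        candidates = [(aesthetics(saliency, step(window, a)), step(window, a))
                      for a in ('top', 'bottom', 'left', 'right')]
        best_score, best_window = max(candidates)
        if best_score <= current:
            break
        window = best_window
    return window
```

Because cutting into the salient region always loses more reward than any non-cutting shrink gains, the rollout tightens the window around the salient content and then stops.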
1709.04482
2752168051
Neural models have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers, and trained with a connectionist temporal classification (CTC) loss. We use a pre-trained model to generate frame-level features which are given to a classifier that is trained on frame classification into phones. We evaluate representations from different layers of the deep model and compare their quality for predicting phone labels. Our experiments shed light on important aspects of the end-to-end model such as layer depth, model complexity, and other design choices.
End-to-end models for ASR have become increasingly popular in recent years. Important studies include models based on connectionist temporal classification (CTC) @cite_29 @cite_7 @cite_22 @cite_15 and attention-based sequence-to-sequence models @cite_23 @cite_28 @cite_25 . The CTC model is based on a recurrent neural network that takes acoustic features as input and is trained to predict a symbol for each frame. Symbols are typically characters, in addition to a special blank symbol. The CTC loss then marginalizes over all possible sequences of symbols given a transcription. The sequence-to-sequence approach, on the other hand, first encodes the sequence of acoustic features into a single vector and then decodes that vector into the sequence of symbols (characters). The attention mechanism improves upon this method by conditioning on a different summary of the input sequence at each decoding step.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_28", "@cite_29", "@cite_23", "@cite_15", "@cite_25" ], "mid": [ "2158373110", "1736701665", "", "2102113734", "1586532344", "", "2109886035" ], "abstract": [ "Main-stream Automatic Speech Recognition systems are based on modelling acoustic sub-word units such as phonemes. Phonemisation dictionaries and language model based decoding techniques are applied to transform the phoneme hypothesis into orthographic transcriptions. Direct modelling of graphemes as sub-word units using HMM has not been successful. We investigate a novel ASR approach using Bidirectional Long Short-Term Memory Recurrent Neural Networks and Connectionist Temporal Classification, which is capable of transcribing graphemes directly and yields results highly competitive with phoneme transcription. In design of such a grapheme based speech recognition system phonemisation dictionaries are no longer required. All that is needed is text transcribed on the sentence level, which greatly simplifies the training procedure. The novel approach is evaluated extensively on the Wall Street Journal 1 corpus.", "The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. 
A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.", "", "This paper presents a speech recognition system that directly transcribes audio data with text, without requiring an intermediate phonetic representation. The system is based on a combination of the deep bidirectional LSTM recurrent neural network architecture and the Connectionist Temporal Classification objective function. A modification to the objective function is introduced that trains the network to minimise the expectation of an arbitrary transcription loss function. This allows a direct optimisation of the word error rate, even in the absence of a lexicon or language model. The system achieves a word error rate of 27.3 on the Wall Street Journal corpus with no prior linguistic information, 21.9 with only a lexicon of allowed words, and 8.2 with a trigram language model. Combining the network with a baseline system further reduces the error rate to 6.7 .", "We replace the Hidden Markov Model (HMM) which is traditionally used in in continuous speech recognition with a bi-directional recurrent neural network encoder coupled to a recurrent neural network decoder that directly emits a stream of phonemes. The alignment between the input and output sequences is established using an attention mechanism: the decoder emits each symbol based on a context created with a subset of input symbols elected by the attention mechanism. 
We report initial results demonstrating that this new approach achieves phoneme error rates that are comparable to the state-of-the-art HMM-based decoders, on the TIMIT dataset.", "", "Many of the current state-of-the-art Large Vocabulary Continuous Speech Recognition Systems (LVCSR) are hybrids of neural networks and Hidden Markov Models (HMMs). Most of these systems contain separate components that deal with the acoustic modelling, language modelling and sequence decoding. We investigate a more direct approach in which the HMM is replaced with a Recurrent Neural Network (RNN) that performs sequence prediction directly at the character level. Alignment between the input features and the desired character sequence is learned automatically by an attention mechanism built into the RNN. For each predicted character, the attention mechanism scans the input sequence and chooses relevant frames. We propose two methods to speed up this operation: limiting the scan to a subset of most promising frames and pooling over time the information contained in neighboring frames, thereby reducing source sequence length. Integrating an n-gram language model into the decoding process yields recognition accuracies similar to other HMM-free RNN-based approaches." ] }
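The CTC marginalization described in this record's related-work paragraph can be made concrete on a toy example: the probability of a transcription is the sum, over all frame-level symbol paths, of those paths that collapse to it (merge repeats, then drop blanks). The brute-force enumeration below is for illustration only; real systems use the forward-backward recursion instead.

```python
import itertools

def collapse(path, blank='-'):
    """CTC collapse rule: merge repeated symbols, then drop blanks."""
    out = []
    for s in path:
        if not out or s != out[-1]:
            out.append(s)
    return ''.join(s for s in out if s != blank)

def ctc_prob(label, frame_probs, blank='-'):
    """P(label | X) by brute force: sum the probabilities of every
    per-frame path that collapses to `label`.
    frame_probs: one dict per frame, mapping symbol -> probability."""
    symbols = list(frame_probs[0])
    total = 0.0
    for path in itertools.product(symbols, repeat=len(frame_probs)):
        if collapse(path, blank) == label:
            p = 1.0
            for t, s in enumerate(path):
                p *= frame_probs[t][s]
            total += p
    return total
```

With two frames over {a, blank}, the paths "aa", "a-" and "-a" all collapse to "a", while "aa" as a transcription is impossible in two frames because it would need a blank between the two a's.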
1709.04482
2752168051
Neural models have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers, and trained with a connectionist temporal classification (CTC) loss. We use a pre-trained model to generate frame-level features which are given to a classifier that is trained on frame classification into phones. We evaluate representations from different layers of the deep model and compare their quality for predicting phone labels. Our experiments shed light on important aspects of the end-to-end model such as layer depth, model complexity, and other design choices.
While end-to-end neural network models offer an elegant and relatively simple architecture, they are often thought to be opaque and uninterpretable. Thus researchers have started investigating what such models learn during the training process. For instance, previous work evaluated neural network acoustic models on phoneme recognition using different acoustic features @cite_17 or investigated how such models learn invariant representations @cite_32 and encode linguistic features @cite_4 @cite_2 . Others have correlated activations of gated recurrent networks with phoneme boundaries in autoencoders @cite_30 and in a text-to-speech system @cite_27 . A joint audio-visual model of speech and lip movements was developed in @cite_21 , where phoneme embeddings were shown to be closer to certain linguistic features than embeddings based on audio alone. Other joint audio-visual models have also analyzed the learned representations in different ways @cite_20 @cite_19 @cite_11 . Finally, we note that analyzing neural representations has also attracted attention in other domains like vision and natural language processing, including machine translation @cite_0 @cite_26 and joint vision-language models @cite_10 , among others. To our knowledge, hidden representations in end-to-end ASR systems have not been thoroughly analyzed before.
{ "cite_N": [ "@cite_30", "@cite_11", "@cite_4", "@cite_26", "@cite_21", "@cite_32", "@cite_0", "@cite_19", "@cite_27", "@cite_2", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "2964215850", "2295676751", "2605717780", "2963583362", "", "2563574619", "2580178245", "2964060510", "", "2531381952", "2586148577", "2172097686" ], "abstract": [ "", "We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.", "Deep neural networks (DNNs) have become the dominant technique for acoustic-phonetic modeling due to their markedly improved performance over other models. Despite this, little is understood about the computation they implement in creating phonemic categories from highly variable acoustic signals. In this paper, we analyzed a DNN trained for phoneme recognition and characterized its representational properties, both at the single node and population level in each layer.
At the single node level, we found strong selectivity to distinct phonetic features in all layers. Node selectivity to specific manners and places of articulation appeared from the first hidden layer and became more explicit in deeper layers. Furthermore, we found that nodes with similar phonetic feature selectivity were differentially activated to different exemplars of these features. Thus, each node becomes tuned to a particular acoustic manifestation of the same feature, providing an effective representational basis for the formation of invariant phonemic categories. This study reveals that phonetic features organize the activations in different layers of a DNN, a result that mirrors the recent findings of feature encoding in the human auditory system. These insights may provide better understanding of the limitations of current models, leading to new strategies to improve their performance. Index Terms: Deep neural networks, deep learning, automatic speech recognition.", "Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. 
Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.", "", "", "", "Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word 'lighthouse' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.", "Recently, recurrent neural networks (RNNs) as powerful sequence models have re-emerged as a potential acoustic model for statistical parametric speech synthesis (SPSS). The long short-term memory (LSTM) architecture is particularly attractive because it addresses the vanishing gradient problem in standard RNNs, making them easier to train. Although recent studies have demonstrated that LSTMs can achieve significantly better performance on SPSS than deep feedforward neural networks, little is known about why. Here we attempt to answer two questions: a) why do LSTMs work well as a sequence model for SPSS; b) which component (e.g., input gate, output gate, forget gate) is most important. We present a visual analysis alongside a series of experiments, resulting in a proposal for a simplified architecture. 
The simplified architecture has significantly fewer parameters than an LSTM, thus reducing generation complexity considerably without degrading quality.", "", "We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover both structure and meaning from noisy and ambiguous data across modalities. We show that our model indeed learns to predict features of the visual context given phonetically transcribed image descriptions, and show that it represents linguistic information in a hierarchy of levels: lower layers in the stack are comparatively more sensitive to form, whereas higher layers are more sensitive to meaning.", "We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of spoken speech, and show that it learns to extract both form and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.", "Deep Belief Networks (DBNs) are a very competitive alternative to Gaussian mixture models for relating states of a hidden Markov model to frames of coefficients derived from the acoustic input. They are competitive for three reasons: DBNs can be fine-tuned as neural networks; DBNs have many non-linear hidden layers; and DBNs are generatively pre-trained. 
This paper illustrates how each of these three aspects contributes to the DBN's good recognition performance using both phone recognition performance on the TIMIT corpus and a dimensionally reduced visualization of the relationships between the feature vectors learned by the DBNs that preserves the similarity structure of the feature vectors at multiple scales. The same two methods are also used to investigate the most suitable type of input representation for a DBN." ] }
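The probing methodology used in this record (freeze a pre-trained end-to-end model, feed its frame-level features to a classifier trained on phone labels) can be sketched with a plain softmax probe. The synthetic Gaussian "features" in the usage example stand in for real network activations; the probe is deliberately shallow so that its accuracy reflects what the representation encodes, not what the classifier itself can compute.

```python
import numpy as np

def train_probe(feats, labels, n_classes, lr=0.5, epochs=200):
    """Multinomial logistic-regression probe trained with full-batch
    gradient descent on softmax cross-entropy."""
    W = np.zeros((feats.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(feats)          # dL/dlogits
        W -= lr * feats.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def probe_accuracy(feats, labels, W, b):
    """Frame-level classification accuracy of the trained probe."""
    preds = (feats @ W + b).argmax(axis=1)
    return (preds == labels).mean()
```

Comparing this accuracy across features taken from different layers of the frozen model is exactly the layer-wise analysis the abstract describes.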
1709.04518
2949385043
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with the two stages individually, which precluded optimizing a global energy function and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfactory convergence across iterations, such that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, which outperforms the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.
Computer-aided diagnosis (CAD) is an important technique which can assist human doctors in many clinical scenarios. An important prerequisite of CAD is medical imaging analysis. As a popular and cheap way of medical imaging, contrast-enhanced computed tomography (CECT) produces detailed images of internal organs, bones, soft tissues and blood vessels. It is of great value to automatically segment organs and/or soft tissues from these CT volumes for further diagnosis @cite_2 @cite_44 @cite_23 @cite_45 . To capture specific properties of different organs, researchers often design individualized algorithms for each of them. Typical examples include the liver @cite_15 @cite_33 , the spleen @cite_24 , the kidneys @cite_20 @cite_6 , the lungs @cite_3 , the pancreas @cite_19 @cite_32 , etc. Small organs (e.g., the pancreas) are often more difficult to segment, partly due to their low contrast and large anatomical variability in size and (most often irregular) shape.
{ "cite_N": [ "@cite_33", "@cite_32", "@cite_6", "@cite_3", "@cite_44", "@cite_24", "@cite_19", "@cite_45", "@cite_23", "@cite_2", "@cite_15", "@cite_20" ], "mid": [ "2153431772", "", "1584247442", "", "2474421929", "2050542229", "", "", "1884191083", "2342591535", "", "2171417304" ], "abstract": [ "This paper presents a comparison study between 10 automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the \"MICCAI 2007 Grand Challenge\" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level-sets, graph-cuts and rule-based systems. All results were compared to reference segmentations five error measures that highlight different aspects of segmentation accuracy. All measures were combined according to a specific scoring system relating the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured a better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides an insight in performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.", "", "We propose a novel kidney segmentation approach based on the graph cuts technique. The proposed approach depends on both image appearance and shape information. 
Shape information is gathered from a set of training shapes. Then we estimate the shape variations using a new distance probabilistic model which approximates the marginal densities of the kidney and its background in the variability region using a Poisson distribution refined by positive and negative Gaussian components. To segment a kidney slice, we align it with the training slices so we can use the distance probabilistic model. Then its gray level is approximated with an LCG with sign-alternate components. The spatial interaction between the neighboring pixels is identified using a new analytical approach. Finally, we formulate a new energy function using both image appearance models and shape constraints. This function is globally minimized using s/t graph cuts to get the optimal segmentation. Experimental results show that the proposed technique gives promising results compared to others without shape constraints.", "", "The International Symposium on Biomedical Imaging (ISBI) held a grand challenge to evaluate computational systems for the automated detection of metastatic breast cancer in whole slide images of sentinel lymph node biopsies. Our team won both competitions in the grand challenge, obtaining an area under the receiver operating curve (AUC) of 0.925 for the task of whole slide image classification and a score of 0.7051 for the tumor localization task. A pathologist independently reviewed the same images, obtaining a whole slide image classification AUC of 0.966 and a tumor localization score of 0.733. Combining our deep learning system's predictions with the human pathologist's diagnoses increased the pathologist's AUC to 0.995, representing an approximately 85 percent reduction in human error rate.
These results demonstrate the power of using deep learning to produce significant improvements in the accuracy of pathological diagnoses.", "Purpose: To investigate the potential of the normalized probabilistic atlases and computer-aided medical image analysis to automatically segment and quantify livers and spleens for extracting imaging biomarkers (volume and height). Methods: A clinical tool was developed to segment livers and spleen from 257 abdominal contrast-enhanced CT studies. There were 51 normal livers, 44 normal spleens, 128 splenomegaly, 59 hepatomegaly, and 23 partial hepatectomy cases. 20 more contrast-enhanced CT scans from a public site with manual segmentations of mainly pathological livers were used to test the method. Data were acquired on a variety of scanners from different manufacturers and at varying resolution. Probabilistic atlases of livers and spleens were created using manually segmented data from ten noncontrast CT scans (five male and five female). The organ locations were modeled in the physical space and normalized to the position of an anatomical landmark, the xiphoid. The construction and exploitation of liver and spleen atlases enabled the automated quantifications of liver spleen volumes and heights (midhepatic liver height and cephalocaudal spleen height) from abdominal CT data. The quantification was improved incrementally by a geodesic active contour, patient specific contrast-enhancement characteristics passed to an adaptive convolution, and correction for shape and location errors. Results: The livers and spleens were robustly segmented from normal and pathological cases. For the liver, the Dice Tanimoto volume overlaps were 96.2 92.7 , the volume height errors were 2.2 2.8 , the root-mean-squared error (RMSE) was 2.3 mm, and the average surface distance (ASD) was 1.2 mm. The spleen quantification led to 95.2 91 Dice Tanimoto overlaps, 3.3 1.7 volume height errors, 1.1 mm RMSE, and 0.7 ASD. 
The correlations ( R^2 ) with clinical manual height measurements were 0.97 and 0.93 for the spleen and liver, respectively ( p < 0.0001 ). No significant difference ( p > 0.2 ) was found comparing interobserver and automatic-manual volume height errors for liver and spleen. Conclusions: The algorithm is robust to segmenting normal and enlarged spleens and livers, and in the presence of tumors and large morphological changes due to partial hepatectomy. Imaging biomarkers of the liver and spleen from automated computer-assisted tools have the potential to assist the diagnosis of abdominal disorders from routine analysis of clinical data and guide clinical management.", "", "", "Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels.
Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.", "We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.", "", "In this paper, an effective model-based approach for computer-aided kidney segmentation of abdominal CT images with anatomic structure consideration is presented. 
This automatic segmentation system is expected to assist physicians in both clinical diagnosis and educational training. The proposed method is a coarse to fine segmentation approach divided into two stages. First, the candidate kidney region is extracted according to the statistical geometric location of kidney within the abdomen. This approach is applicable to images of different sizes by using the relative distance of the kidney region to the spine. The second stage identifies the kidney by a series of image processing operations. The main elements of the proposed system are: 1) the location of the spine is used as the landmark for coordinate references; 2) elliptic candidate kidney region extraction with progressive positioning on the consecutive CT images; 3) novel directional model for a more reliable kidney region seed point identification; and 4) adaptive region growing controlled by the properties of image homogeneity. In addition, in order to provide different views for the physicians, we have implemented a visualization tool that will automatically show the renal contour through the method of second-order neighborhood edge detection. We considered segmentation of kidney regions from CT scans that contain pathologies in clinical practice. The results of a series of tests on 358 images from 30 patients indicate an average correlation coefficient of up to 88 between automatic and manual segmentation" ] }
1709.04421
2950005135
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
The best-known random program generator is Csmith @cite_15 , based on an earlier system called randprog @cite_13 @cite_9 . Csmith generates complete, self-contained programs that take all their input from initialized global variables and compute an output consisting of a hash over the values of all global variables at the end of execution. The generator is designed to only generate programs with well-defined semantics: Operations that may be undefined in C, such as overflowing signed integer arithmetic, are guarded by conditionals that exclude undefined cases (these guards can be disabled, and we disabled them for the experiments reported above). Like , Csmith performs data-flow analysis during generation, although the details differ due to the differing design goals. Csmith's forward analysis computes points-to facts and uses them for safety checks. If the checks fail, Csmith backtracks, deleting code it generated until a safe state is reached again. In contrast, 's data-flow analysis only deals with liveness, and never backtracks: Full liveness of variables in loops is ensured by construction. Csmith generates a larger subset of C than current or currently planned versions of , including unstructured control flow and less restricted use of pointers.
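The liveness notion that distinguishes the two generators can be illustrated with a small backward data-flow pass. The sketch below is a simplified, hypothetical illustration over straight-line code, not the actual Frama-C plugin analysis: a statement is dead exactly when the variable it assigns is not live immediately after it.

```python
# Minimal backward liveness sketch (illustrative only, not the paper's
# implementation): walk statements in reverse, tracking the set of live
# variables; an assignment to a non-live variable is dead code.
def dead_statements(stmts, live_out):
    """stmts: list of (target, used_vars) in program order.
    live_out: variables live at program exit.
    Returns sorted indices of dead assignments."""
    live = set(live_out)
    dead = []
    for i in range(len(stmts) - 1, -1, -1):   # walk backwards
        target, uses = stmts[i]
        if target not in live:
            dead.append(i)            # result never used -> dead code
        else:
            live.discard(target)      # the assignment kills the target
            live |= set(uses)         # its operands become live
    return sorted(dead)

# x = a*b; x = a+b; y = x+1  with only y live at exit:
prog = [("x", ["a", "b"]), ("x", ["a", "b"]), ("y", ["x"])]
print(dead_statements(prog, {"y"}))   # first assignment to x is dead -> [0]
```

A liveness-driven generator, building bottom-up, would simply never emit a statement that this check would flag.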
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_13" ], "mid": [ "", "2098456636", "2096698236" ], "abstract": [ "", "Compilers should be correct. To improve the quality of C compilers, we created Csmith, a randomized test-case generation tool, and spent three years using it to find compiler bugs. During this period we reported more than 325 previously unknown bugs to compiler developers. Every compiler we tested was found to crash and also to silently generate wrong code when presented with valid input. In this paper we present our compiler-testing tool and the results of our bug-hunting study. Our first contribution is to advance the state of the art in compiler testing. Unlike previous tools, Csmith generates programs that cover a large subset of C while avoiding the undefined and unspecified behaviors that would destroy its ability to automatically find wrong-code bugs. Our second contribution is a collection of qualitative and quantitative results about the bugs we have found in open-source C compilers.", "C's volatile qualifier is intended to provide a reliable link between operations at the source-code level and operations at the memory-system level. We tested thirteen production-quality C compilers and, for each, found situations in which the compiler generated incorrect code for accessing volatile variables. This result is disturbing because it implies that embedded software and operating systems---both typically coded in C, both being bases for many mission-critical and safety-critical applications, and both relying on the correct translation of volatiles---may be being miscompiled. Our contribution is centered on a novel technique for finding volatile bugs and a novel technique for working around them. First, we present access summary testing: an efficient, practical, and automatic way to detect code-generation errors related to the volatile qualifier. 
We have found a number of compiler bugs by performing access summary testing on randomly generated C programs. Some of these bugs have been confirmed and fixed by compiler developers. Second, we present and evaluate a workaround for the compiler defects we discovered. In 96 of the cases in which one of our randomly generated programs is miscompiled, we can cause the faulty C compiler to produce correctly behaving code by applying a straightforward source-level transformation to the test program." ] }
1709.04421
2950005135
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
The JTT program generator @cite_1 is aimed directly at testing compiler optimizations. It uses a model-based approach, where generation is guided using test scripts. These scripts contain code templates and temporal logic specifications of the optimizations to be tested. For example, the authors specify opportunities for dead code elimination as cases where a variable is assigned, then assigned again before being used. The test script contains a temporal logic formula expressing this pattern and the test condition that the compiler should eliminate the first assignment. Using this script, JTT generates test programs containing this pattern. JTT was used successfully to find bugs and increase the test suite's statement coverage for an industrial C compiler.
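The dead-store pattern JTT targets (a variable assigned, then assigned again before being used) can be sketched as a simple scan over a def/use event trace. This is an illustrative approximation of the pattern being matched, not JTT's temporal-logic machinery:

```python
# Hedged sketch: find dead-store opportunities, i.e. definitions of a
# variable that are overwritten by a later definition before any use.
def dead_store_opportunities(trace):
    """trace: list of ('def', var) or ('use', var) events in program order.
    Returns indices of definitions that a compiler's dead code
    elimination should remove."""
    pending = {}                 # var -> index of its latest unused 'def'
    found = []
    for i, (kind, v) in enumerate(trace):
        if kind == 'def':
            if v in pending:
                found.append(pending[v])   # redefined before use: dead store
            pending[v] = i
        else:                              # a 'use' consumes the pending def
            pending.pop(v, None)
    return found

trace = [('def', 'x'), ('def', 'x'), ('use', 'x')]
print(dead_store_opportunities(trace))     # -> [0]
```

A test script built around this pattern can then assert that the compiler under test actually eliminated the flagged assignment.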
{ "cite_N": [ "@cite_1" ], "mid": [ "2141717815" ], "abstract": [ "This paper presents joint research and practice on automated test program generation for an industrial compiler, UniPhier, by Matsushita Electric Industrial Co., Ltd. (MEI) and Institute of Software, Chinese Academy of Sciences (ISCAS) since Sept. 2002. To meet the test requirements of MEI's engineers, we proposed an automated approach to produce test programs for UniPhier, and as a result we developed an integrated tool named JTT. Firstly, we show the script-driven test program generation process in JTT. Secondly, we show how to produce test programs automatically, based on a temporal-logic model of compiler optimizations, to guarantee the execution of optimizing modules under test during compilation. JTT has gained success in testing UniPhier: even after benchmark testing and comprehensive manual testing, JTT still found 6 new serious defects." ] }
1709.04421
2950005135
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
Other work specifically aimed at testing and comparing program verification tools generates code from randomly generated LTL formulae @cite_5 . The generated code is guaranteed to satisfy the specified temporal properties.
{ "cite_N": [ "@cite_5" ], "mid": [ "2124207853" ], "abstract": [ "We present a systematic approach to the automatic generation of platform-independent benchmarks of realistic structure and tailored complexity for evaluating verification tools for reactive systems. The idea is to mimic a systematic constraint-driven software development process by automatically transforming randomly generated temporal-logic-based requirement specifications on the basis of a sequence of property-preserving, randomly generated structural design decisions into executable source code of a chosen target language or platform. Our automated transformation process steps through dedicated representations in terms of Buchi automata, Mealy machines, decision diagram models, and code models. It comprises LTL synthesis, model checking, property-oriented expansion, path condition extraction, theorem proving, SAT solving, and code motion. This setup allows us to address different communities via a growing set of programming languages, tailored sets of programming constructs, different notions of observation, and the full variety of LTL properties--ranging from mere reachability over general safety properties to arbitrary liveness properties. The paper illustrates the corresponding tool chain along accompanying examples, emphasizes the current state of development, and sketches the envisioned potential and impact of our approach." ] }
1709.04329
2755066373
The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. Targeting to solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set, and accelerate the online Re-ID procedure. Extensive experimental results show GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has potential to work better on person Re-ID tasks in real scenarios.
Deep learning shows remarkable performance in computer vision and multimedia tasks and has become the mainstream method for person Re-ID. Current deep learning based person Re-ID methods can be divided into two categories based on the usage of the deep neural network: feature learning and distance metric learning. Feature learning networks aim to learn a robust and discriminative feature to represent pedestrian images. Cheng et al. @cite_25 propose a multi-channel parts-based network to learn a discriminative feature with an improved triplet loss. Wu et al. @cite_34 discover that hand-crafted features are complementary to CNN features. They thus divide one image into five fixed-length part regions. For each part region, a histogram descriptor is generated and concatenated with the full-body CNN feature. Su et al. @cite_52 @cite_9 propose a semi-supervised attribute learning framework to learn binary attribute features. In @cite_40 , the identification model and verification model are combined to learn a discriminative representation. In @cite_46 , a new dropout algorithm is designed for feature learning on a multi-domain dataset, which is generated by combining several existing datasets.
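The feature-fusion idea attributed to @cite_34 (part-wise histograms concatenated with a global CNN feature) can be sketched as follows. The stripe count, bin count, feature dimensions, and the stand-in CNN feature are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

# Illustrative sketch: split a pedestrian image into horizontal stripes,
# compute a normalized intensity histogram per stripe, and concatenate
# the part histograms with a global CNN feature vector.
def part_histograms(image, n_parts=5, bins=16):
    h = image.shape[0]
    feats = []
    for i in range(n_parts):
        stripe = image[i * h // n_parts:(i + 1) * h // n_parts]
        hist, _ = np.histogram(stripe, bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))   # L1-normalize each part
    return np.concatenate(feats)

def fuse(cnn_feature, image):
    # Final descriptor: [global CNN feature | 5 part histograms]
    return np.concatenate([cnn_feature, part_histograms(image)])

img = np.random.randint(0, 256, size=(100, 40))   # toy grayscale "pedestrian"
fused = fuse(np.zeros(128), img)                  # hypothetical 128-d CNN feature
print(fused.shape)                                # (128 + 5*16,) = (208,)
```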
{ "cite_N": [ "@cite_9", "@cite_52", "@cite_40", "@cite_46", "@cite_34", "@cite_25" ], "mid": [ "2754916687", "2361187101", "2549957142", "2342611082", "2344924411", "2467139031" ], "abstract": [ "Abstract One of the major challenges in person Re-Identification (ReID) is the inconsistent visual appearance of a person. Current works on visual feature and distance metric learning have achieved significant achievements, but still suffer from the limited robustness to pose variations, viewpoint changes, etc ., and the high computational complexity. This makes person ReID among multiple cameras still challenging. This work is motivated to learn mid-level human attributes which are robust to visual appearance variations and could be used as efficient features for person matching. We propose a weakly supervised multi-type attribute learning framework which considers the contextual cues among attributes and progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes exhibit promising generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained competitive accuracy on four person ReID datasets. Experiments also show that a simple distance metric learning modular further boosts our method, making it outperform many recent works.", "The visual appearance of a person is easily affected by many factors like pose variations, viewpoint changes and camera parameter differences. 
This makes person Re-Identification (ReID) among multiple cameras a very challenging task. This work is motivated to learn mid-level human attributes which are robust to such visual appearance variations. And we propose a semi-supervised attribute learning framework which progressively boosts the accuracy of attributes only using a limited number of labeled data. Specifically, this framework involves a three-stage training. A deep Convolutional Neural Network (dCNN) is first trained on an independent dataset labeled with attributes. Then it is fine-tuned on another dataset only labeled with person IDs using our defined triplet loss. Finally, the updated dCNN predicts attribute labels for the target dataset, which is combined with the independent dataset for the final round of fine-tuning. The predicted attributes, namely deep attributes exhibit superior generalization ability across different datasets. By directly using the deep attributes with simple Cosine distance, we have obtained surprisingly good accuracy on four person ReID datasets. Experiments also show that a simple distance metric learning modular further boosts our method, making it significantly outperform many recent works.", "In this article, we revisit two popular convolutional neural networks in person re-identification (re-ID): verification and identification models. The two models have their respective advantages and limitations due to different loss functions. Here, we shed light on how to combine the two models to learn more discriminative pedestrian descriptors. Specifically, we propose a Siamese network that simultaneously computes the identification loss and verification loss. Given a pair of training images, the network predicts the identities of the two input images and whether they belong to the same identity. Our network learns a discriminative embedding and a similarity measurement at the same time, thus taking full usage of the re-ID annotations. 
Our method can be easily applied on different pretrained networks. Albeit simple, the learned embedding improves the state-of-the-art performance on two public person re-ID benchmarks. Further, we show that our architecture can also be applied to image retrieval. The code is available at https: github.com layumi 2016_person_re-ID.", "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "Feature representation and metric learning are two critical components in person re-identification models. In this paper, we focus on the feature representation and claim that hand-crafted histogram features can be complementary to Convolutional Neural Network (CNN) features. We propose a novel feature extraction model called Feature Fusion Net (FFN) for pedestrian image representation. In FFN, back propagation makes CNN features constrained by the handcrafted features. Utilizing color histogram features (RGB, HSV, YCbCr, Lab and YIQ) and texture features (multi-scale and multi-orientation Gabor features), we get a new deep feature representation that is more discriminative and compact. 
Experiments on three challenging datasets (VIPeR, CUHK01, PRID450s) validates the effectiveness of our proposal.", "Person re-identification across cameras remains a very challenging problem, especially when there are no overlapping fields of view between cameras. In this paper, we present a novel multi-channel parts-based convolutional neural network (CNN) model under the triplet framework for person re-identification. Specifically, the proposed CNN model consists of multiple channels to jointly learn both the global full-body and local body-parts features of the input persons. The CNN model is trained by an improved triplet loss function that serves to pull the instances of the same person closer, and at the same time push the instances belonging to different persons farther from each other in the learned feature space. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets." ] }
1709.04329
2755066373
The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. Targeting to solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set, and accelerate the online Re-ID procedure. Extensive experimental results show GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has potential to work better on person Re-ID tasks in real scenarios.
Siamese networks are commonly used to learn better distance metrics between the input image pair. Yi et al. @cite_31 propose a siamese network composed of three components: a CNN, a connection function, and a cost function. Similar to @cite_25 , several fixed-length part regions are extracted and trained independently. In @cite_12 , an end-to-end siamese network is proposed. By utilizing small filters, the network goes deeper and obtains remarkable performance. Ahmed et al. @cite_24 design a new layer to capture local relationships between the input image pair. In @cite_29 , a comparative attention network is proposed to adaptively compare the similarity between images.
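The siamese structure with a cosine connection function described in @cite_31 can be sketched minimally. The linear-plus-tanh embedding below stands in for the shared CNN sub-network and is purely illustrative:

```python
import numpy as np

# Hedged sketch of a siamese match: both inputs pass through the SAME
# (shared-weight) embedding, and a cosine connection function scores
# their similarity. A real system would use a CNN here.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))      # weights shared by both branches

def embed(x):
    return np.tanh(W @ x)            # identical parameters for either input

def cosine_score(x1, x2):
    e1, e2 = embed(x1), embed(x2)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

a = rng.standard_normal(4)
print(cosine_score(a, a))            # identical inputs -> similarity 1.0
```

Training then pushes this score toward 1 for same-identity pairs and toward -1 (or 0) for different identities, via a cost such as binomial deviance.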
{ "cite_N": [ "@cite_29", "@cite_24", "@cite_31", "@cite_25", "@cite_12" ], "mid": [ "2432402544", "1928419358", "2135442311", "2467139031", "2259687230" ], "abstract": [ "Person re-identification across disjoint camera views has been widely applied in video surveillance yet it is still a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations of lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse without differentiating various parts of the persons to identify. It is essentially important to examine multiple highly discriminative local regions of the person images in details through multiple glimpses for dealing with the large appearance variance. In this paper, we propose a new soft attention-based model, i.e. , the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively comparing their appearance. The CAN model is able to learn which parts of images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images are from the same person. 
Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CHUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN for person re-identification outperforms well established baselines significantly and offer the new state-of-the-art performance.", "In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).", "Various hand-crafted features and metric learning methods prevail in the field of person re-identification. Compared to these methods, this paper proposes a more general way that can learn a similarity metric from image pixels directly. By using a \"siamese\" deep neural network, the proposed method can jointly learn the color feature, texture feature and metric in a unified framework. The network has a symmetry structure with two sub-networks which are connected by a cosine layer. 
Each sub network includes two convolutional layers and a full connected layer. To deal with the big variations of person images, binomial deviance is used to evaluate the cost between similarities and labels, which is proved to be robust to outliers. Experiments on VIPeR illustrate the superior performance of our method and a cross database experiment also shows its good generalization.", "Person re-identification across cameras remains a very challenging problem, especially when there are no overlapping fields of view between cameras. In this paper, we present a novel multi-channel parts-based convolutional neural network (CNN) model under the triplet framework for person re-identification. Specifically, the proposed CNN model consists of multiple channels to jointly learn both the global full-body and local body-parts features of the input persons. The CNN model is trained by an improved triplet loss function that serves to pull the instances of the same person closer, and at the same time push the instances belonging to different persons farther from each other in the learned feature space. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets.", "In this paper, we propose a deep end-to-end neu- ral network to simultaneously learn high-level features and a corresponding similarity metric for person re-identification. The network takes a pair of raw RGB images as input, and outputs a similarity value indicating whether the two input images depict the same person. A layer of computing neighborhood range differences across two input images is employed to capture local relationship between patches. This operation is to seek a robust feature from input images. 
By increasing the depth to 10 weight layers and using very small (3 @math 3) convolution filters, our architecture achieves a remarkable improvement on the prior-art configurations. Meanwhile, an adaptive Root- Mean-Square (RMSProp) gradient decent algorithm is integrated into our architecture, which is beneficial to deep nets. Our method consistently outperforms state-of-the-art on two large datasets (CUHK03 and Market-1501), and a medium-sized data set (CUHK01)." ] }
1709.04329
2755066373
The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data being produced by video surveillance systems. To solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in the human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set and accelerate the online Re-ID procedure. Extensive experimental results show that GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work has the potential to perform well on person Re-ID tasks in real scenarios.
Human parts provide important local cues about human appearance. Therefore, it was natural for some early person Re-ID works to design part detection algorithms @cite_42 @cite_33 @cite_0 . Motivated by the symmetry and asymmetry properties of the human body, Farenzena et al. @cite_42 propose to detect salient part regions based on the perceptual principles of symmetry and asymmetry. In @cite_1 , Cheng et al. propose a pictorial structure algorithm to detect parts. In @cite_50 , the deformable part model @cite_21 is utilized to detect six body parts. Most recent deep learning based methods directly divide pedestrian images into fixed-length regions and have not paid much attention to leveraging part cues @cite_7 . Recently, Zheng et al. @cite_49 adopt convolutional pose machines @cite_44 to detect fine-grained body parts and then generate a standard pose image, which is subsequently utilized to generate descriptors. Therefore, the representation in @cite_49 is not learned explicitly on local parts. Moreover, fine-grained part extraction is expensive and can easily be affected by image noise, pose variance, and viewpoint variance. These factors degrade both Re-ID accuracy and efficiency.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_42", "@cite_1", "@cite_21", "@cite_0", "@cite_44", "@cite_50", "@cite_49" ], "mid": [ "", "1982925187", "1979260620", "9364628", "2120419212", "", "2964304707", "1985820353", "2583528645" ], "abstract": [ "", "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "In this paper, we present an appearance-based method for person re-identification. 
It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "Re-identification of pedestrians in video-surveillance settings can be effectively approached by treating each human figure as an articulated body, whose pose is estimated through the framework of Pictorial Structures (PS). In this way, we can focus selectively on similarities between the appearance of body parts to recognize a previously seen individual. In fact, this strategy resembles what humans employ to solve the same task in the absence of facial details or other reliable biometric information. Based on these insights, we show how to perform single image re-identification by matching signatures coming from articulated appearances, and how to strengthen this process in multi-shot re-identification by using Custom Pictorial Structures (CPS) to produce improved body localizations and appearance signatures. Moreover, we provide a complete and detailed breakdown-analysis of the system that surrounds these core procedures, with several novel arrangements devised for efficiency and flexibility. 
Finally, we test our approach on several public benchmarks, obtaining convincing results.", "This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.", "", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. 
Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "In this paper we propose an adaptive part-based spatio-temporal model that characterizes person's appearance using color and facial features. Face image selection based on low level cues is used to select usable face images to build a face model. Color features that capture the distribution of colors as well as the representative colors are used to build the color model. The model is built over a sequence of frames of an individual and hence captures the characteristic appearance as well as its variations over time. We also address the problem of multiple person re-identification in the absence of calibration data or prior knowledge about the camera layout. Multiple person re-identification is a open set matching problem with a dynamically evolving and open gallery set and an open probe set. Re-identification is posed as a rectangular assignment problem and is solved to find a bijection that minimizes the overall assignment cost. Open and closed set re-identification is tested on 30 videos collected with nine non-overlapping cameras spanning outdoor and indoor areas, with 40 subjects under observation. A false acceptance reduction scheme based on the developed model is also proposed.", "Pedestrian misalignment, which mainly arises from detector errors and pose variations, is a critical problem for a robust person re-identification (re-ID) system. With bad alignment, the background noise will significantly compromise the feature learning and matching process. 
To address this problem, this paper introduces the pose invariant embedding (PIE) as a pedestrian descriptor. First, in order to align pedestrians to a standard pose, the PoseBox structure is introduced, which is generated through pose estimation followed by affine transformations. Second, to reduce the impact of pose estimation errors and information loss during PoseBox construction, we design a PoseBox fusion (PBF) CNN architecture that takes the original image, the PoseBox, and the pose estimation confidence as input. The proposed PIE descriptor is thus defined as the fully connected layer of the PBF network for the retrieval task. Experiments are conducted on the Market-1501, CUHK03, and VIPeR datasets. We show that PoseBox alone yields decent re-ID accuracy and that when integrated in the PBF network, the learned PIE descriptor produces competitive performance compared with the state-of-the-art approaches." ] }
1709.04514
2756198004
Generative models are used in a wide range of applications building on large amounts of contextually rich information. However, due to possible privacy violations of the individuals whose data is used to train these models, publishing or sharing generative models is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of @math generative neural networks. These are trained together and collectively learn the generator distribution of a dataset. The data is divided into @math clusters using a novel differentially private kernel @math -means; each cluster is then given to a separate generative neural network, such as a Restricted Boltzmann Machine or a Variational Autoencoder, which is trained only on its own cluster using differentially private gradient descent. We evaluate our approach on the MNIST dataset, as well as on call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately compute an arbitrary number of counting queries.
Private Data Release. The @math -anonymity @cite_34 paradigm aims to protect data by generalizing and suppressing certain identifying attributes; however, it does not work well on high-dimensional datasets @cite_55 @cite_43 . Therefore, rather than pursuing input sanitization, prior work has proposed techniques to produce plausible synthetic records with strong privacy guarantees, e.g., focusing on the differentially private release of data @cite_23 @cite_44 @cite_40 @cite_33 @cite_50 @cite_4 @cite_26 . Alas, these can often support only the release of succinct data representations, such as histograms or contingency tables.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_33", "@cite_55", "@cite_44", "@cite_43", "@cite_40", "@cite_23", "@cite_50", "@cite_34" ], "mid": [ "2074006684", "172569136", "2119167955", "1606251440", "1602718999", "1982183556", "2293703278", "1937501050", "2080044359", "2159024459" ], "abstract": [ "One goal of statistical privacy research is to construct a data release mechanism that protects individual privacy while preserving information content. An example is a random mechanism that takes an input database X and outputs a random database Z according to a distribution Qn(⋅|X). Differential privacy is a particular privacy requirement developed by computer scientists in which Qn(⋅|X) is required to be insensitive to changes in one data point in X. This makes it difficult to infer from Z whether a given individual is in the original database X. We consider differential privacy from a statistical perspective. We consider several data-release mechanisms that satisfy the differential privacy requirement. We show that it is useful to compare these schemes by computing the rate of convergence of distributions and densities constructed from the released data. We study a general privacy method, called the exponential mechanism, introduced by McSherry and Talwar (2007). We show that the accuracy of this meth...", "We compare the disclosure risk criterion of e-differential privacy with a criterion based on probabilities that intruders uncover actual values given the released data. To do so, we generate fully synthetic data that satisfy e-differential privacy at different levels of e, make assumptions about the information available to intruders, and compute posterior probabilities of uncovering true values. The simulation results suggest that the two paradigms are not easily reconciled, since differential privacy is agnostic to the specific values in the observed data whereas probabilistic disclosure risk measures depend greatly on them. 
The results also suggest, perhaps surprisingly, that probabilistic disclosure risk measures can be small even when e is large. Motivated by these findings, we present an alternative disclosure risk assessment approach that integrates some of the strong confidentiality protection features in e-differential privacy with the interpretability and data-specific nature of probabilistic disclosure risk measures.", "Handling missing data is a critical step to ensuring good results in data mining. Like most data mining algorithms, existing privacy-preserving data mining algorithms assume data is complete. In order to maintain privacy in the data mining process while cleaning data, privacy-preserving methods of data cleaning will be required. In this paper, we address the problem of privacy-preserving data imputation of missing data. Specifically, we present a privacy-preserving protocol for filling in missing values using a lazy decision tree imputation algorithm for data that is horizontally partitioned between two parties. The participants of the protocol learn only the imputed values; the computed decision tree is not learned by either party.", "In recent years, the wide availability of personal data has made the problem of privacy preserving data mining an important one. A number of methods have recently been proposed for privacy preserving data mining of multidimensional data records. One of the methods for privacy preserving data mining is that of anonymization, in which a record is released only if it is indistinguishable from k other entities in the data. We note that methods such as k-anonymity are highly dependent upon spatial locality in order to effectively implement the technique in a statistically robust way. In high dimensional space the data becomes sparse, and the concept of spatial locality is no longer easy to define from an application point of view. 
In this paper, we view the k-anonymization problem from the perspective of inference attacks over all possible combinations of attributes. We show that when the data contains a large number of attributes which may be considered quasi-identifiers, it becomes difficult to anonymize the data without an unacceptably high amount of information loss. This is because an exponential number of combinations of dimensions can be used to make precise inference attacks, even when individual attributes are partially specified within a range. We provide an analysis of the effect of dimensionality on k-anonymity methods. We conclude that when a data set contains a large number of attributes which are open to inference attacks, we are faced with a choice of either completely suppressing most of the data or losing the desired level of anonymity. Thus, this paper shows that the curse of high dimensionality also applies to the problem of privacy preserving data mining.", "Synthetic datasets generated within the multiple imputation framework are now commonly used by statistical agencies to protect the confidentiality of their respondents. More recently, researchers have also proposed techniques to generate synthetic datasets which offer the formal guarantee of differential privacy. While combining rules were derived for the first type of synthetic datasets, little has been said on the analysis of differentially-private synthetic datasets generated with multiple imputations. In this paper, we show that we can not use the usual combining rules to analyze synthetic datasets which have been generated to achieve differential privacy. We consider specifically the case of generating synthetic count data with the beta-binomial synthetizer, and illustrate our discussion with simulation results. 
We also propose as a simple alternative a Bayesian model which models explicitly the mechanism for synthetic data generation.", "Re-identification is a major privacy threat to public datasets containing individual records. Many privacy protection algorithms rely on generalization and suppression of \"quasi-identifier\" attributes such as ZIP code and birthdate. Their objective is usually syntactic sanitization: for example, k-anonymity requires that each \"quasi-identifier\" tuple appear in at least k records, while l-diversity requires that the distribution of sensitive attributes for each quasi-identifier have high entropy. The utility of sanitized data is also measured syntactically, by the number of generalization steps applied or the number of records with the same quasi-identifier. In this paper, we ask whether generalization and suppression of quasi-identifiers offer any benefits over trivial sanitization which simply separates quasi-identifiers from sensitive attributes. Previous work showed that k-anonymous databases can be useful for data mining, but k-anonymization does not guarantee any privacy. By contrast, we measure the tradeoff between privacy (how much can the adversary learn from the sanitized records?) and utility, measured as accuracy of data-mining algorithms executed on the same sanitized records. For our experimental evaluation, we use the same datasets from the UCI machine learning repository as were used in previous research on generalization and suppression. Our results demonstrate that even modest privacy gains require almost complete destruction of the data-mining utility. In most cases, trivial sanitization provides equivalent utility and better privacy than k-anonymity, l-diversity, and similar methods based on generalization and suppression.", "Set-valued data provides enormous opportunities for various data mining tasks. 
In this paper, we study the problem of publishing set-valued data for data mining tasks under the rigorous differential privacy model. All existing data publishing methods for set-valued data are based on partitionbased privacy models, for example k-anonymity, which are vulnerable to privacy attacks based on background knowledge. In contrast, differential privacy provides strong privacy guarantees independent of an adversary’s background knowledge, computational power or subsequent behavior. Existing data publishing approaches for differential privacy, however, are not adequate in terms of both utility and scalability in the context of set-valued data due to its high dimensionality. We demonstrate that set-valued data could be efficiently released under differential privacy with guaranteed utility with the help of context-free taxonomy trees. We propose a probabilistic top-down partitioning algorithm to generate a differentially private release, which scales linearly with the input data size. We also discuss the applicability of our idea to the context of relational data. We prove that our result is (ǫ,δ)-useful for the class of counting queries, the foundation of many data mining tasks. We show that our approach maintains high utility for counting queries and frequent itemset mining and scales to large datasets through extensive experiments on real-life set-valued datasets.", "This short paper provides a synthesis of the statistical disclosure limitation and computer science data privacy approaches to measuring the confidentiality protections provided by fully synthetic data. Since all elements of the data records in the release file derived from fully synthetic data are sampled from an appropriate probability distribution, they do not represent \"real data,\" but there is still a disclosure risk. In SDL this risk is summarized by the inferential disclosure probability. In privacy-protected database queries, this risk is measured by the differential privacy ratio. 
The two are closely related. This result (not new) is demonstrated and examples are provided from recent work.", "In this paper, we propose the first formal privacy analysis of a data anonymization process known as the synthetic data generation, a technique becoming popular in the statistics community. The target application for this work is a mapping program that shows the commuting patterns of the population of the United States. The source data for this application were collected by the U.S. Census Bureau, but due to privacy constraints, they cannot be used directly by the mapping program. Instead, we generate synthetic data that statistically mimic the original data while providing privacy guarantees. We use these synthetic data as a surrogate for the original data. We find that while some existing definitions of privacy are inapplicable to our target application, others are too conservative and render the synthetic data useless since they guard against privacy breaches that are very unlikely. Moreover, the data in our target application is sparse, and none of the existing solutions are tailored to anonymize sparse data. In this paper, we propose solutions to address the above issues.", "Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. 
This paper also examines re-identification attacks that can be realized on releases that adhere to k- anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, µ-Argus and k-Similar provide guarantees of privacy protection." ] }
1709.04514
2756198004
Generative models are used in a wide range of applications building on large amounts of contextually rich information. However, due to possible privacy violations of the individuals whose data is used to train these models, publishing or sharing generative models is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of @math generative neural networks. These are trained together and collectively learn the generator distribution of a dataset. The data is divided into @math clusters using a novel differentially private kernel @math -means; each cluster is then given to a separate generative neural network, such as a Restricted Boltzmann Machine or a Variational Autoencoder, which is trained only on its own cluster using differentially private gradient descent. We evaluate our approach on the MNIST dataset, as well as on call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately compute an arbitrary number of counting queries.
Other mechanisms add noise directly to a generative model @cite_12 @cite_16 @cite_15 @cite_13 . In this paper, we follow this approach while, in a first-of-its-kind attempt, focusing on building private generative machine learning models based on neural networks. Other approaches @cite_21 @cite_51 @cite_30 generate data records first and then attempt to test their privacy guarantees, i.e., they decouple the generative model from the privacy mechanism. By contrast, we attempt to achieve privacy during the training of the model, thus avoiding potentially high sample rejection rates due to privacy tests.
{ "cite_N": [ "@cite_30", "@cite_21", "@cite_51", "@cite_15", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "28725724", "2950935670", "2168193374", "", "2200949486", "2165157425", "" ], "abstract": [ "Agencies seeking to disseminate public use microdata, i.e., data on individual records, can replace confidential values with multiple draws from statistical models estimated with the collected data. We present a famework for evaluating disclosure risks inherent in releasing multiply-imputed, synthetic data. The basic idea is to mimic an intruder who computes posterior distributions of confidential values given the released synthetic data and prior knowledge. We illustrate the methodology with artificial fully synthetic data and with partial synthesis of the Survey of Youth in Custody.", "Releasing full data records is one of the most challenging problems in data privacy. On the one hand, many of the popular techniques such as data de-identification are problematic because of their dependence on the background knowledge of adversaries. On the other hand, rigorous methods such as the exponential mechanism for differential privacy are often computationally impractical to use for releasing high dimensional data or cannot preserve high utility of original data due to their extensive data perturbation. This paper presents a criterion called plausible deniability that provides a formal privacy guarantee, notably for releasing sensitive datasets: an output record can be released only if a certain amount of input records are indistinguishable, up to a privacy parameter. This notion does not depend on the background knowledge of an adversary. Also, it can efficiently be checked by privacy tests. We present mechanisms to generate synthetic datasets with similar statistical properties to the input data and the same format. We study this technique both theoretically and experimentally. 
A key theoretical result shows that, with proper randomization, the plausible deniability mechanism generates differentially private synthetic data. We demonstrate the efficiency of this generative technique on a large dataset; it is shown to preserve the utility of original data with respect to various statistical analysis and machine learning measures.", "To limit disclosures, statistical agencies and other data disseminators can release partially synthetic, public use microdata sets. These comprise the units originally surveyed; but some collected values, for example, sensitive values at high risk of disclosure or values of key identifiers, are replaced with multiple draws from statistical models. Because the original records are on the file, there remain risks of identifications. In this paper, we describe how to evaluate identification disclosure risks in partially synthetic data, accounting for released information from the multiple datasets, the model used to generate synthetic values, and the approach used to select values to synthesize. We illustrate the computations using the Survey of Youths in Custody.", "", "Differential privacy has recently emerged in private statistical data release as one of the strongest privacy guarantees. Most of the existing techniques that generate differentially private histograms or synthetic data only work well for single dimensional or low-dimensional histograms. They become problematic for high dimensional and large domain data due to increased perturbation error and computation complexity. In this paper, we propose DPCopula, a differentially private data synthesization technique using Copula functions for multi-dimensional data. The core of our method is to compute a differentially private copula function from which we can sample synthetic data. 
Copula functions are used to describe the dependence between multivariate random vectors and allow us to build the multivariate joint distribution using one-dimensional marginal distributions. We present two methods for estimating the parameters of the copula functions with differential privacy: maximum likelihood estimation and Kendall’s τ estimation. We present formal proofs for the privacy guarantee as well as the convergence property of our methods. Extensive experiments using both real datasets and synthetic datasets demonstrate that DPCopula generates highly accurate synthetic multi-dimensional data with significantly better utility than state-of-the-art techniques.", "Privacy-preserving data publishing is an important problem that has been the focus of extensive study. The state-of-the-art solution for this problem is differential privacy, which offers a strong degree of privacy protection without making restrictive assumptions about the adversary. Existing techniques using differential privacy, however, cannot effectively handle the publication of high-dimensional data. In particular, when the input dataset contains a large number of attributes, existing methods require injecting a prohibitive amount of noise compared to the signal in the data, which renders the published data next to useless. To address the deficiency of the existing methods, this paper presents PrivBayes, a differentially private method for releasing high-dimensional data. Given a dataset D, PrivBayes first constructs a Bayesian network N, which (i) provides a succinct model of the correlations among the attributes in D and (ii) allows us to approximate the distribution of data in D using a set P of low-dimensional marginals of D. After that, PrivBayes injects noise into each marginal in P to ensure differential privacy and then uses the noisy marginals and the Bayesian network to construct an approximation of the data distribution in D.
Finally, PrivBayes samples tuples from the approximate distribution to construct a synthetic dataset, and then releases the synthetic data. Intuitively, PrivBayes circumvents the curse of dimensionality, as it injects noise into the low-dimensional marginals in P instead of the high-dimensional dataset D. Private construction of Bayesian networks turns out to be significantly challenging, and we introduce a novel approach that uses a surrogate function for mutual information to build the model more accurately. We experimentally evaluate PrivBayes on real data and demonstrate that it significantly outperforms existing solutions in terms of accuracy.", "" ] }
1709.04514
2756198004
Generative models are used in a wide range of applications building on large amounts of contextually rich information. Due to possible privacy violations of the individuals whose data is used to train these models, however, publishing or sharing generative models is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of @math generative neural networks. These are trained together and collectively learn the generator distribution of a dataset. Data is divided into @math clusters, using a novel differentially private kernel @math -means; then each cluster is given to separate generative neural networks, such as Restricted Boltzmann Machines or Variational Autoencoders, which are trained only on their own cluster using differentially private gradient descent. We evaluate our approach using the MNIST dataset, as well as call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately compute an arbitrary number of counting queries.
Privacy in Deep Learning. Our work builds on the Differential Privacy (DP) framework, specifically, using the Gaussian mechanism @cite_53 . Due to its generality, DP has served as a building block in several recent efforts at the intersection of privacy and machine learning @cite_45 @cite_9 . In general, the majority of privacy-preserving learning schemes focus on convex optimization problems @cite_37 @cite_7 @cite_3 . However, training neural networks typically requires optimizing non-convex objective functions -- as with Restricted Boltzmann Machines (RBMs) @cite_57 and Variational Autoencoders (VAEs) @cite_39 -- which is usually done via Stochastic Gradient Descent (SGD) with weak theoretical guarantees. @cite_3 introduce a privacy-preserving technique that runs SGD for convex problems for a constant number of iterations and only adds noise to the final output. By contrast, we introduce a novel differentially private SGD algorithm for optimizing general non-convex loss functions.
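The core step of such a differentially private SGD — clip each per-example gradient, then add Gaussian noise calibrated to the clipping norm — can be sketched as follows. This is an illustrative toy (a least-squares loss, arbitrary clipping norm and noise multiplier); the accounting that converts the noise into a concrete (ε, δ) guarantee is omitted.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step on a least-squares loss: clip each per-example
    gradient to L2 norm <= clip_norm, sum, add Gaussian noise whose scale
    is tied to the clipping norm (the Gaussian mechanism), then average."""
    rng = np.random.default_rng(0) if rng is None else rng
    # per-example gradients of 0.5*(x.w - y)^2  ->  (x.w - y) * x
    grads = (X @ w - y)[:, None] * X
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)          # clip
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)                    # noisy mean
    return w - lr * g

# toy regression problem: the noisy updates still recover w_true closely
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(500):
    w = dp_sgd_step(w, X, y, rng=rng)
```

Because the noise is divided by the batch size, larger batches give less noisy updates for the same per-step privacy cost, which is one reason DP-SGD remains usable for non-convex models.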
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_9", "@cite_53", "@cite_3", "@cite_39", "@cite_57", "@cite_45" ], "mid": [ "1992926795", "2119874464", "", "2027595342", "", "", "2130314481", "2473418344" ], "abstract": [ "Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex.Our algorithms run in polynomial time, and in some cases even match the optimal non-private running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (aepsi;, 0)- and (aepsi;,a#x03B4;)-differential privacy, perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contributions of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.", "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). 
These algorithms are private under the ε-differential privacy definition due to (2006). First we apply the output perturbation ideas of (2006) to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.", "", "The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition.
Differential Privacy is such a definition. After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed. We then turn from fundamentals to applications other than query-release, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams, is discussed. Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.
Learning typically proceeds by using stochastic gradient descent, and the gradients are estimated with sampling methods. However, the gradient estimation is a computational bottleneck, so better use of the gradients will speed up the descent algorithm. To this end, we rst derive upper bounds on the RBM cost function, then show that descent methods can have natural advantages by operating in the‘1 and Shatten1 norm. We introduce a new method called Spectral Descent\" that updates parameters in the normed space. Empirical results show dramatic improvements over stochastic gradient descent, and have only have a fractional increase on the per-iteration cost.", "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality." ] }
1709.04514
2756198004
Generative models are used in a wide range of applications building on large amounts of contextually rich information. Due to possible privacy violations of the individuals whose data is used to train these models, however, publishing or sharing generative models is not always viable. In this paper, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models. We model the generator distribution of the training data with a mixture of @math generative neural networks. These are trained together and collectively learn the generator distribution of a dataset. Data is divided into @math clusters, using a novel differentially private kernel @math -means; then each cluster is given to separate generative neural networks, such as Restricted Boltzmann Machines or Variational Autoencoders, which are trained only on their own cluster using differentially private gradient descent. We evaluate our approach using the MNIST dataset, as well as call detail records and transit datasets, showing that it produces realistic synthetic samples, which can also be used to accurately compute an arbitrary number of counting queries.
Differentially private k-means has also been studied in prior work @cite_25 ; however, those approaches aim to find linearly separable clusters and add noise proportional to the data dimension @math or to the @math -norm of the data records. By contrast, our private kernel @math -means approach can find even linearly non-separable clusters, and the added noise is independent of @math as well as of the norm of the data points. Moreover, we offer a tighter privacy analysis using the moments accountant method from @cite_45 . Kernel @math -means clustering with random Fourier features (RFF) has already been considered in @cite_2 , albeit without any privacy guarantee. In effect, we combine @cite_2 and @cite_58 , applying DP @math -means to Fourier features to ultimately achieve better accuracy than @cite_58 .
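The non-private core of this combination — ordinary Lloyd's k-means run on random Fourier features that approximate an RBF kernel — can be sketched as below. The DP noise addition and the privacy analysis are omitted, and the feature dimension and kernel bandwidth are illustrative choices.

```python
import numpy as np

def rff(X, D=1000, gamma=0.5, rng=None):
    """Random Fourier features z(x) such that z(x).z(y) approximates the
    RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def kmeans(Z, k, iters=50, rng=None):
    """Plain Lloyd's k-means on the feature vectors (no DP noise here)."""
    rng = np.random.default_rng(1) if rng is None else rng
    C = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        # keep the old centroid if a cluster goes empty
        C = np.array([Z[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Z = rff(X)
# sanity check: the feature inner products approximate the true RBF kernel
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K_true = np.exp(-0.5 * sq)
K_rff = Z @ Z.T
labels = kmeans(Z, k=2)
```

Because k-means in the feature space approximates kernel k-means, the clusters it can separate are those of the kernel, not only linearly separable ones in the input space.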
{ "cite_N": [ "@cite_45", "@cite_58", "@cite_25", "@cite_2" ], "mid": [ "2473418344", "2010523825", "2949118998", "2055663168" ], "abstract": [ "Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowdsourced and contain sensitive information. The models should not expose private information in these datasets. Addressing this goal, we develop new algorithmic techniques for learning and a refined analysis of privacy costs within the framework of differential privacy. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality.", "We consider a statistical database in which a trusted administrator introduces noise to the query responses with the goal of maintaining privacy of individual database entries. In such a database, a query consists of a pair (S, f) where S is a set of rows in the database and f is a function mapping database rows to 0, 1 . The true answer is Σ ieS f(d i ), and a noisy version is released as the response to the query. Results of Dinur, Dwork, and Nissim show that a strong form of privacy can be maintained using a surprisingly small amount of noise -- much less than the sampling error -- provided the total number of queries is sublinear in the number of database rows. We call this query and (slightly) noisy reply the SuLQ (Sub-Linear Queries) primitive. The assumption of sublinearity becomes reasonable as databases grow increasingly large.We extend this work in two ways. First, we modify the privacy analysis to real-valued functions f and arbitrary row types, as a consequence greatly improving the bounds on noise required for privacy. Second, we examine the computational power of the SuLQ primitive. 
We show that it is very powerful indeed, in that slightly noisy versions of the following computations can be carried out with very few invocations of the primitive: principal component analysis, k-means clustering, the Perceptron Algorithm, the ID3 algorithm, and (apparently!) all algorithms that operate in the statistical query learning model [11].", "There are two broad approaches for differentially private data analysis. The interactive approach aims at developing customized differentially private algorithms for various data mining tasks. The non-interactive approach aims at developing differentially private algorithms that can output a synopsis of the input dataset, which can then be used to support various data mining tasks. In this paper we study the tradeoff of interactive vs. non-interactive approaches and propose a hybrid approach that combines interactive and non-interactive, using @math -means clustering as an example. In the hybrid approach to differentially private @math -means clustering, one first uses a non-interactive mechanism to publish a synopsis of the input dataset, then applies the standard @math -means clustering algorithm to learn @math cluster centroids, and finally uses an interactive approach to further improve these cluster centroids. We analyze the error behavior of both non-interactive and interactive approaches and use such analysis to decide how to allocate privacy budget between the non-interactive step and the interactive step. Results from extensive experiments support our analysis and demonstrate the effectiveness of our approach.
In this paper, we employ random Fourier maps, originally proposed for large scale classification, to accelerate kernel clustering. The key idea behind the use of random Fourier maps for clustering is to project the data into a low-dimensional space where the inner product of the transformed data points approximates the kernel similarity between them. An efficient linear clustering algorithm can then be applied to the points in the transformed space. We also propose an improved scheme which uses the top singular vectors of the transformed data matrix to perform clustering, and yields a better approximation of kernel clustering under appropriate conditions. Our empirical studies demonstrate that the proposed schemes can be efficiently applied to large data sets containing millions of data points, while achieving accuracy similar to that achieved by state-of-the-art kernel clustering algorithms." ] }
1709.04121
2754855391
Sketch is an important medium for humans to communicate ideas, which reflects the superiority of human intelligence. Studies on sketch can be roughly summarized into recognition and generation. Existing models on image recognition failed to obtain satisfying performance on sketch classification. But for sketch generation, a recent study proposed a sequence-to-sequence variational-auto-encoder (VAE) model called sketch-rnn which was able to generate sketches based on human inputs. The model achieved amazing results when asked to learn one category of object, such as an animal or a vehicle. However, the performance dropped when multiple categories were fed into the model. Here, we proposed a model called sketch-pix2seq which could learn and draw multiple categories of sketches. Two modifications were made to improve the sketch-rnn model: one is to replace the bidirectional recurrent neural network (BRNN) encoder with a convolutional neural network (CNN); the other is to remove the Kullback-Leibler divergence from the objective function of VAE. Experimental results showed that models with CNN encoders outperformed those with RNN encoders in generating human-style sketches. Visualization of the latent space illustrated that the removal of KL-divergence made the encoder learn a posterior of latent space that reflected the features of different categories. Moreover, the combination of CNN encoder and removal of KL-divergence, i.e., the sketch-pix2seq model, had better performance in learning and generating sketches of multiple categories and showed promising results in creativity tasks.
The sketch-rnn is based on the Variational AutoEncoder (VAE) framework @cite_5 , which has gained increasing popularity in recent years. VAEs have been applied to generating captions for images @cite_11 , learning parse trees @cite_10 , and modeling audience reactions to movies @cite_3 . Attempts have also been made to find a better representation of the latent space for the original VAE: both adversarial variational Bayes @cite_15 and adversarial autoencoders @cite_0 apply the idea of generative adversarial networks (GANs) @cite_1 to learn the latent space. In this work, we propose a VAE-based sketch-pix2seq model which aims to generate sketches of multiple categories.
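Two VAE ingredients recur in this discussion: the closed-form KL regularizer on the diagonal-Gaussian latent posterior (the term sketch-pix2seq removes from the objective) and the reparameterized sampling of the latent code. A minimal numpy sketch, with illustrative shapes and values:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over
    latent dimensions -- the regularizer that sketch-pix2seq drops."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps: the reparameterization trick, which lets the
    encoder stay differentiable while the latent code is sampled."""
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.array([[0.0, 0.0, 0.0, 0.0],
               [1.0, 1.0, 1.0, 1.0]])      # two encoder outputs, 4-dim latent
log_var = np.zeros((2, 4))                 # unit variance in every dimension
kl = kl_to_standard_normal(mu, log_var)    # 0 for the first row, 2.0 for the second
z = reparameterize(mu, log_var, rng)
```

The KL term pulls every posterior toward N(0, I); removing it lets the encoder spread different categories apart in latent space, which is the effect the paper reports.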
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_0", "@cite_5", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2099471712", "2739693027", "", "", "2952673310", "2592725663", "2951326654" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Matrix and tensor factorization methods are often used for finding underlying low-dimensional patterns from noisy data. In this paper, we study non-linear tensor factorization methods based on deep variational autoencoders. Our approach is well-suited for settings where the relationship between the latent representation to be learned and the raw data representation is highly complex. We apply our approach to a large dataset of facial expressions of movie-watching audiences (over 16 million faces). 
Our experiments show that compared to conventional linear factorization methods, our method achieves better reconstruction of the data, and further discovers interpretable latent factors.", "", "", "Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows to rephrase the maximum-likelihood-problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. Contrary to competing approaches which combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders and is easy to implement.", "Deep generative models have been wildly successful at learning coherent latent representations for continuous data such as video and audio. However, generative modeling of discrete data such as arithmetic expressions and molecular structures still poses significant challenges. Crucially, state-of-the-art methods often produce outputs that are not valid. We make the key observation that frequently, discrete data can be represented as a parse tree from a context-free grammar. We propose a variational autoencoder which encodes and decodes directly to and from these parse trees, ensuring the generated outputs are always valid. 
Surprisingly, we show that not only does our model more often generate valid outputs, it also learns a more coherent latent space in which nearby points decode to similar discrete outputs. We demonstrate the effectiveness of our learned models by showing their improved performance in Bayesian optimization for symbolic regression and molecular synthesis.", "A novel variational autoencoder is developed to model images, as well as associated labels or captions. The Deep Generative Deconvolutional Network (DGDN) is used as a decoder of the latent image features, and a deep Convolutional Neural Network (CNN) is used as an image encoder; the CNN is used to approximate a distribution for the latent DGDN features code. The latent code is also linked to generative models for labels (Bayesian support vector machine) or captions (recurrent neural network). When predicting a label caption for a new image at test, averaging is performed across the distribution of latent codes; this is computationally efficient as a consequence of the learned CNN-based encoder. Since the framework is capable of modeling the image in the presence absence of associated labels captions, a new semi-supervised setting is manifested for CNN learning with images; the framework even allows unsupervised CNN learning, based on images alone." ] }
1709.04077
2754461651
We use online convex optimization (OCO) for setpoint tracking with uncertain, flexible loads. We consider full feedback from the loads, bandit feedback, and two intermediate types of feedback: partial bandit where a subset of the loads are individually observed and the rest are observed in aggregate, and Bernoulli feedback where in each round the aggregator receives either full or bandit feedback according to a known probability. We give sublinear regret bounds in all cases. We numerically evaluate our algorithms on examples with thermostatically controlled loads and electric vehicles.
Online learning @cite_15 @cite_17 and online convex optimization @cite_9 @cite_25 @cite_0 have already seen a number of applications in DR. Several variants of the multi-armed bandit framework have been used to curtail flexible loads in @cite_33 @cite_27 @cite_4 @cite_34 . Reference @cite_1 used adversarial bandits to shift load while learning load parameters.
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_9", "@cite_1", "@cite_0", "@cite_27", "@cite_15", "@cite_34", "@cite_25", "@cite_17" ], "mid": [ "2550393554", "2071205284", "2148825261", "2575468172", "2513180554", "2086120035", "2077723394", "2574554136", "2594297742", "2049934117" ], "abstract": [ "Demand response is a key component of existing and future grid systems facing increased variability and peak demands. Scaling demand response requires efficiently predicting individual responses for large numbers of consumers while selecting the right ones to signal. This paper proposes a new online learning problem that captures consumer diversity, messaging fatigue and response prediction. We use the framework of multi-armed bandits model to address this problem. This yields simple and easy to implement index based learning algorithms with provable performance guarantees.", "Demand response programs incentivize loads to actively moderate their energy consumption to aid the power system. Uncertainty is an intrinsic aspect of demand response because a load's capability is often unknown until the load has been deployed. Algorithms must therefore balance utilizing well-characterized, good loads and learning about poorly characterized but potentially good loads; this is a manifestation of the classical tradeoff between exploration and exploitation. We address this tradeoff in a restless bandit framework, a generalization of the well-known multi-armed bandit problem. The formulation yields index policies in which loads are ranked by a scalar index, and those with the highest are deployed. The policy is particularly appropriate for demand response because the indices have explicit analytical expressions that may be evaluated separately for each load, making them both simple and scalable. This formulation serves as a heuristic basis for when only the aggregate effect of demand response is observed, from which the state of each individual load must be inferred. 
We formulate a tractable, analytical approximation for individual state inference based on observations of aggregate load curtailments. In numerical examples, the restless bandit policy outperforms the greedy policy by 5-10% of the total cost. When the states of deployed loads are inferred from aggregate measurements, the resulting performance degradation is on the order of a few percent for the (now heuristic) restless bandit policy.", "Convex programming involves a convex set F ⊆ R^n and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.", "", "This monograph portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary as well as beneficial to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed.
This view of optimization as a process has become prominent in varied fields and has led to some spectacular success in modeling and systems that are now part of our daily lives.", "The capabilities of electric loads participating in load curtailment programs are often unknown until the loads have been told to curtail (i.e., deployed) and observed. In programs in which payments are made each time a load is deployed, we aim to pick the \"best\" loads to deploy in each time step. Our choice is a tradeoff between exploration and exploitation, i.e., curtailing poorly characterized loads in order to better characterize them in the hope of benefiting in the future versus curtailing well-characterized loads so that we benefit now. We formulate this problem as a multi-armed restless bandit problem with controlled bandits. In contrast to past work that has assumed all load parameters are known allowing the use of optimization approaches, we assume the parameters of the controlled system are unknown and develop an online learning approach. Our problem has two features not commonly addressed in the bandit literature: the arms processes evolve according to different probabilistic laws depending on the control, and the reward feedback observed by the decision-maker is the total realized curtailment, not the curtailment of each load. We develop an adaptive demand response learning algorithm and an extended version that works with aggregate feedback, both aimed at approximating the Whittle index policy. We show numerically that the regret of our algorithms with respect to the Whittle index policy is of logarithmic order in time, and significantly outperforms standard learning algorithms like UCB1.", "Online learning is a well established learning paradigm which has both theoretical and practical appeals.
The goal of online learning is to make a sequence of accurate predictions given knowledge of the correct answer to previous prediction tasks and possibly additional available information. Online learning has been studied in several research fields including game theory, information theory, and machine learning. It also became of great interest to practitioners due to the recent emergence of large scale applications such as online advertisement placement and online web ranking. In this survey we provide a modern overview of online learning. Our goal is to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms. We do not mean to be comprehensive but rather to give a high-level, rigorous yet easy to follow, survey.", "The increasing penetration of renewable sources like solar energy adds new dimensions in planning power grid operations. We study the problem of curtailing a subset of prosumers generating solar power with the twin goals of being close to a target collection and maintaining fairness across prosumers. The problem is complicated by the uncertainty in the amount of energy fed-in by each prosumer and the large problem size in terms of number of prosumers. To meet these challenges, we propose an algorithm based on the Combinatorial Multi-Armed Bandit problem with an approximate Knapsack based oracle. With real data on solar panel output across multiple prosumers, we are able to demonstrate the effectiveness of the proposed algorithm.", "", "1: Introduction 2: Stochastic bandits: fundamental results 3: Adversarial bandits: fundamental results 4: Contextual Bandits 5: Linear bandits 6: Nonlinear bandits 7: Variants. Acknowledgements. References" ] }
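The abstracts in the record above repeatedly invoke index policies such as UCB1 for choosing which loads to curtail. As an illustrative aside, here is a minimal, self-contained UCB1 sketch on synthetic arms; the reward model, noise level, and horizon are our assumptions for illustration, not taken from any cited paper.

```python
import math
import random

def ucb1(mean_rewards, horizon, seed=0):
    """Minimal UCB1: pick one arm (load) per round, observe a noisy reward."""
    rng = random.Random(seed)
    n_arms = len(mean_rewards)
    counts = [0] * n_arms      # times each arm was pulled
    sums = [0.0] * n_arms      # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # play every arm once to initialise
        else:
            # UCB index: empirical mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = mean_rewards[arm] + rng.gauss(0.0, 0.05)  # hypothetical noise
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Three hypothetical loads whose mean curtailment values differ clearly
counts = ucb1([0.2, 0.5, 0.9], horizon=2000)
```

With well-separated arm means, the pull counts concentrate on the best arm, which is the exploration-exploitation trade-off the surveyed papers build on.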
1709.04077
2754461651
We use online convex optimization (OCO) for setpoint tracking with uncertain, flexible loads. We consider full feedback from the loads, bandit feedback, and two intermediate types of feedback: partial bandit where a subset of the loads are individually observed and the rest are observed in aggregate, and Bernoulli feedback where in each round the aggregator receives either full or bandit feedback according to a known probability. We give sublinear regret bounds in all cases. We numerically evaluate our algorithms on examples with thermostatically controlled loads and electric vehicles.
Online learning has also been used in models of price-based DR. Reference @cite_22 uses a continuum-armed bandit to do real-time pricing of price-responsive dynamic loads. Reference @cite_8 uses OCO and conditional random fields to predict the price sensitivity of Electric Vehicles (EVs). This model was then used as input to compute real-time prices. Using OCO, @cite_36 develops real-time pricing algorithms to flatten the aggregate load consumption. They later apply their algorithm to EV charging. OCO was also used in @cite_6 to flatten the aggregate power consumption via EV charging scheduling. Reference @cite_10 uses OCO to identify the controllable portion of demand in real time from aggregate measurements.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_36", "@cite_6", "@cite_10" ], "mid": [ "2552794833", "2015555034", "2331891307", "2962806981", "2551993261" ], "abstract": [ "The problem of dynamically pricing of electricity by a retailer for customers in a demand response program is considered. It is assumed that the retailer obtains electricity in a two-settlement wholesale market consisting of a day ahead market and a real-time market. Under a day ahead dynamic pricing mechanism, the retailer aims to learn the aggregated demand function of its customers while maximizing its retail profit. A piecewise linear stochastic approximation algorithm is proposed. It is shown that the accumulative regret of the proposed algorithm grows with the learning horizon T at the order of O(log T). It is also shown that the achieved growth rate cannot be reduced by any piecewise linear policy.", "While electric vehicles (EVs) are expected to provide environmental and economical benefit, judicious coordination of EV charging is necessary to prevent overloading of the distribution grid. Leveraging the smart grid infrastructure, the utility company can adjust the electricity price intelligently for individual customers to elicit desirable load curves. In this context, this paper addresses the problem of predicting the EV charging behavior of the consumers at different prices, which is a prerequisite for optimal price adjustment. The dependencies on price responsiveness among consumers are captured by a conditional random field (CRF) model. To account for temporal dynamics potentially in a strategic setting, the framework of online convex optimization is adopted to develop an efficient online algorithm for tracking the CRF parameters. The proposed model is then used as an input to a stochastic profit maximization module for real-time price setting. 
Numerical tests using simulated and semi-real data verify the effectiveness of the proposed approach.", "Real-time price setting strategies are investigated for use by demand response programs in future power grids. The major challenge is that consumers have varying degrees of responsiveness to price adjustments at different time instants, which must be learned and accounted for by demand response initiatives. To this end, an online learning approach is developed here offering strong performance guarantees with minimal assumptions on the dynamics of load levels and consumer elasticity, even when consumers are adversarial and take actions strategically. The developed algorithms can determine electricity prices sequentially so as to elicit desirable usage behavior and flatten load curves, while implicitly learning individual consumers’ price elasticity based on available feedback information. Two feedback structures are considered: 1) a full information setup, where aggregate load levels as well as individual price elasticity parameters are directly available, and 2) a partial information (bandit) case, where only the aggregate load levels are revealed. Fairness and sparsity constraints are also incorporated via appropriate regularizers. Numerical tests verify the effectiveness of the proposed approach.", "We propose an algorithm for distributed charging control of electric vehicles (EVs) using online learning and online convex optimization. Many distributed charging control algorithms in the literature implicitly assume fast two-way communication between the distribution company and EV customers. This assumption is impractical at present and also raises security and privacy concerns. Our algorithm does not use this assumption; however, at the expense of slower convergence to the optimal solution and by relaxing the sense of optimality. 
The proposed algorithm requires one-way communication, which is implemented through the distribution company publishing the pricing profiles for the previous days. We provide convergence results for the algorithm and illustrate the results through numerical examples.", "In this paper, we apply an emerging method, online learning with dynamics, to deduce properties of distributed energy resources (DERs) from coarse measurements, e.g., measurements taken at distribution substations, rather than household-level measurements. Reduced sensing requirements can lower infrastructure costs associated with reliably incorporating DERs into the distribution network. We specifically investigate whether dynamic mirror descent (DMD), an online learning algorithm, can determine the real-time controllable demand served by a distribution feeder using feeder-level active power demand measurements. In our scenario, DMD incorporates various controllable demand and uncontrollable demand models to generate real-time controllable demand estimates. In a realistic scenario, these estimates have an RMS error of 8.34 of the average controllable demand, which improves to 5.53 by incorporating more accurate models. We propose topics for additional work in modeling, system identification, and the DMD algorithm itself that could improve the RMS errors." ] }
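The record above leans on online convex optimization with sublinear regret guarantees. As a minimal, illustrative sketch (the quadratic per-round loss and the 1/sqrt(t) step size are our assumptions, not from the cited papers), here is projected online gradient descent in the style of Zinkevich's algorithm, applied to a one-dimensional setpoint-tracking toy problem.

```python
import math

def online_gradient_descent(targets, lo=0.0, hi=1.0):
    """Projected OGD on per-round losses f_t(x) = (x - y_t)^2.

    The player commits to x_t before seeing the round's target y_t,
    then takes a gradient step and projects back onto [lo, hi].
    """
    x = 0.5 * (lo + hi)
    losses = []
    for t, y in enumerate(targets, start=1):
        losses.append((x - y) ** 2)      # suffer loss at the committed point
        grad = 2.0 * (x - y)             # gradient of f_t at x_t
        x -= grad / math.sqrt(t)         # step size eta_t = 1/sqrt(t)
        x = min(hi, max(lo, x))          # Euclidean projection onto [lo, hi]
    return losses

# Regret vs. the best fixed point in hindsight; here y = 0.8 every round,
# so the best fixed comparator x = 0.8 suffers zero total loss.
T = 500
losses = online_gradient_descent([0.8] * T)
regret = sum(losses)
```

The average regret `regret / T` vanishes as T grows, which is the sublinear-regret property the abstracts above refer to.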
1709.03968
2754533216
Existing neural conversational models process natural language primarily on a lexico-syntactic level, thereby ignoring one of the most crucial components of human-to-human dialogue: its affective content. We take a step in this direction by proposing three novel ways to incorporate affective emotional aspects into long short-term memory (LSTM) encoder-decoder neural conversation models: (1) affective word embeddings, which are cognitively engineered, (2) affect-based objective functions that augment the standard cross-entropy loss, and (3) affectively diverse beam search for decoding. Experiments show that these techniques improve the open-domain conversational prowess of encoder-decoder networks by enabling them to produce emotionally rich responses that are more interesting and natural.
Affectively cognizant virtual agents are generating interest in both academia @cite_27 and industry (https://www.ald.softbankrobotics.com/en/robots/pepper), due to their ability to provide emotional companionship to humans. Endowing text-based dialogue generation systems with emotions is also an active area of research. Past research has mostly focused on developing hand-crafted speech- and text-based features to incorporate emotions in retrieval-based or slot-based spoken dialogue systems @cite_11 @cite_26 .
{ "cite_N": [ "@cite_27", "@cite_26", "@cite_11" ], "mid": [ "2575473210", "2121139690", "1988085234" ], "abstract": [ "", "In this paper we propose a method for predicting the user mental state for the development of more efficient and usable spoken dialogue systems. This prediction, carried out for each user turn in the dialogue, makes it possible to adapt the system dynamically to the user needs. The mental state is built on the basis of the emotional state of the user and their intention, and is recognized by means of a module conceived as an intermediate phase between natural language understanding and the dialogue management in the architecture of the systems. We have implemented the method in the UAH system, for which the evaluation results with both simulated and real users show that taking into account the user's mental state improves system performance as well as its perceived quality.", "The involvement of emotional states in intelligent spoken human-computer interfaces has evolved to a recent field of research. In this article we describe the enhancements and optimizations of a speech-based emotion recognizer jointly operating with automatic speech recognition. We argue that the knowledge about the textual content of an utterance can improve the recognition of the emotional content. Having outlined the experimental setup we present results and demonstrate the capability of a post-processing algorithm combining multiple speech-emotion recognizers. For the dialogue management we propose a stochastic approach comprising a dialogue model and an emotional model interfering with each other in a combined dialogue-emotion model. These models are trained from dialogue corpora and being assigned different weighting factors they determine the course of the dialogue." ] }
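The record above describes affect-based objective functions that augment the standard cross-entropy loss. Here is a toy, self-contained sketch of one such augmentation; the mini-vocabulary and affect scores are invented for illustration (the paper derives affect from cognitively engineered valence-arousal-dominance word embeddings, which this sketch does not reproduce).

```python
import math

# Hypothetical mini-vocabulary with made-up affect strengths in [0, 1].
AFFECT = {"great": 0.9, "awful": 0.8, "okay": 0.2, "the": 0.0}
VOCAB = list(AFFECT)

def affect_augmented_loss(probs, target, lam=0.5):
    """Cross-entropy minus lam * expected affect of the predicted distribution.

    Minimising this prefers distributions that both match the target and
    put mass on emotionally charged words, mimicking (in spirit) the
    affect-based objectives described in the abstract above.
    """
    ce = -math.log(probs[VOCAB.index(target)])
    expected_affect = sum(p * AFFECT[w] for p, w in zip(probs, VOCAB))
    return ce - lam * expected_affect

flat = [0.25, 0.25, 0.25, 0.25]            # uninformative distribution
peaked_affective = [0.7, 0.1, 0.1, 0.1]    # mass on the affective word "great"
```

A distribution concentrated on an emotionally charged correct word scores a strictly lower loss than a flat one, which is the intended training signal.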
1709.03919
2754480022
The recent development of CNN-based image dehazing has revealed the effectiveness of end-to-end modeling. However, extending the idea to end-to-end video dehazing has not been explored yet. In this paper, we propose an End-to-End Video Dehazing Network (EVD-Net) to exploit the temporal consistency between consecutive video frames. A thorough study has been conducted over a number of structure options to identify the best temporal fusion strategy. Furthermore, we build an End-to-End United Video Dehazing and Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with a video object detection model. The resulting augmented end-to-end pipeline has demonstrated much more stable and accurate detection results in hazy video.
A number of methods @cite_24 @cite_17 @cite_12 @cite_10 @cite_29 take advantage of natural image statistics as priors to predict @math and @math separately from the hazy image @math . Due to the often inaccurate estimation of either (or both), they tend to introduce artifacts such as non-smoothness and unnatural color tones or contrasts. Many CNN-based methods @cite_14 @cite_35 employ a CNN to regress @math from @math ; with @math estimated using some other empirical method, they are then able to estimate @math by ). Notably, @cite_7 @cite_5 design the first completely end-to-end CNN dehazing model based on re-formulating ), which directly generates @math from @math without any other intermediate step: both @math and @math are integrated into the new variable @math (there was a constant bias @math in @cite_7 @cite_5 , which is omitted here to simplify notation). As shown in Figure , the AOD-Net architecture is composed of two modules: one consisting of five convolutional layers that estimates @math from @math , followed by one that estimates @math from both @math and @math via ). All the above-mentioned methods are designed for single-image dehazing, without taking the temporal dynamics in video into account.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_7", "@cite_29", "@cite_24", "@cite_5", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2519481857", "2256362396", "", "2156936307", "2114867966", "2739097844", "2147318913", "", "2028990532" ], "abstract": [ "The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.", "Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints priors to get plausible dehazing solutions. The key to achieve haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet, for medium transmission estimation. DehazeNet takes a hazy image as input, and outputs its medium transmission map that is subsequently used to recover a haze-free image via atmospheric scattering model. DehazeNet adopts convolutional neural network-based deep architecture, whose layers are specially designed to embody the established assumptions priors in image dehazing. Specifically, the layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. 
We also propose a novel nonlinear activation function in DehazeNet, called bilateral rectified linear unit, which is able to improve the quality of recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet keeps efficient and easy to use.", "", "Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.", "Bad weather, such as fog and haze, can significantly degrade the visibility of a scene. Optically, this is due to the substantial presence of particles in the atmosphere that absorb and scatter light. In computer vision, the absorption and scattering processes are commonly modeled by a linear combination of the direct attenuation and the airlight. Based on this model, a few methods have been proposed, and most of them require multiple input images of a scene, which have either different degrees of polarization or different atmospheric conditions. This requirement is the main drawback of these methods, since in many situations, it is difficult to be fulfilled. To resolve the problem, we introduce an automated method that only requires a single input image. 
This method is based on two basic observations: first, images with enhanced visibility (or clear-day images) have more contrast than images plagued by bad weather; second, airlight whose variation mainly depends on the distance of objects to the viewer, tends to be smooth. Relying on these two observations, we develop a cost function in the framework of Markov random fields, which can be efficiently optimized by various techniques, such as graph-cuts or belief propagation. The method does not require the geometrical information of the input image, and is applicable for both color and gray images.", "This paper proposes an image dehazing model built with a convolutional neural network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, for improving high-level task performance on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate our superior performance than the state-of-the-art in terms of PSNR, SSIM and the subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN and training the joint pipeline from end to end, we witness a large improvement of the object detection performance on hazy images.", "Images captured in foggy weather conditions often suffer from bad visibility. In this paper, we propose an efficient regularization method to remove hazes from a single input image. Our method benefits much from an exploration on the inherent boundary constraint on the transmission function. 
This constraint, combined with a weighted L1-norm based contextual regularization, is modeled into an optimization problem to estimate the unknown scene transmission. A quite efficient algorithm based on variable splitting is also presented to solve the problem. The proposed method requires only a few general assumptions and can restore a high-quality haze-free image with faithful colors and fine image details. Experimental results on a variety of haze images demonstrate the effectiveness and efficiency of the proposed method.", "", "In this paper we present a new method for estimating the optical transmission in hazy scenes given a single input image. Based on this estimation, the scattered light is eliminated to increase scene visibility and recover haze-free scene contrasts. In this new approach we formulate a refined image formation model that accounts for surface shading in addition to the transmission function. This allows us to resolve ambiguities in the data by searching for a solution in which the resulting shading and transmission functions are locally statistically uncorrelated. A similar principle is used to estimate the color of the haze. Results demonstrate the new method abilities to remove the haze layer as well as provide a reliable transmission estimate which can be used for additional applications such as image refocusing and novel view synthesis." ] }
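The related-work paragraph in the record above describes AOD-Net folding the transmission and atmospheric light into a single variable K(x), so that the clean image is recovered as K(x)·I(x) − K(x) + b. Here is a scalar numeric check of that algebra, assuming the standard atmospheric scattering model; the particular values of t, A and b are arbitrary, and the closed form for K is a sketch consistent with that reformulation rather than a claim about the network itself.

```python
def hazy(J, t, A):
    """Atmospheric scattering model: I(x) = J(x) t(x) + A (1 - t(x))."""
    return J * t + A * (1.0 - t)

def K_of(I, t, A, b):
    """Unified variable folding t(x) and A together (valid for I != 1)."""
    return ((I - A) / t + (A - b)) / (I - 1.0)

def dehaze(I, K, b):
    """Output module of the reformulation: J(x) = K(x) I(x) - K(x) + b."""
    return K * I - K + b

# Arbitrary clean pixel value and haze parameters for the check
J_true, t, A, b = 0.35, 0.6, 0.85, 1.0
I = hazy(J_true, t, A)                     # synthesize the hazy observation
J_rec = dehaze(I, K_of(I, t, A, b), b)     # recover via the K reformulation
```

Expanding K·I − K + b gives (I − A)/t + A, which is exactly the inversion of the scattering model, so the recovered pixel matches the clean one.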
1709.03919
2754480022
The recent development of CNN-based image dehazing has revealed the effectiveness of end-to-end modeling. However, extending the idea to end-to-end video dehazing has not been explored yet. In this paper, we propose an End-to-End Video Dehazing Network (EVD-Net) to exploit the temporal consistency between consecutive video frames. A thorough study has been conducted over a number of structure options to identify the best temporal fusion strategy. Furthermore, we build an End-to-End United Video Dehazing and Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with a video object detection model. The resulting augmented end-to-end pipeline has demonstrated much more stable and accurate detection results in hazy video.
When it comes to video dehazing, a majority of existing approaches rely on post-processing to correct temporal inconsistencies after applying single-image dehazing algorithms frame-wise. @cite_28 proposes to inject temporal coherence into the cost function, with a clock filter for speed-up. @cite_6 jointly estimates the scene depth and recovers the clear latent images from a foggy video sequence. @cite_30 presents an image-guided, depth-edge-aware smoothing algorithm to refine the transmission matrix, and uses Gradient Residual Minimization to recover the haze-free images. @cite_31 designs a spatio-temporal optimization for real-time video dehazing. But as our experiments will show, those relatively simple and straightforward video dehazing approaches may not even be able to outperform sophisticated CNN-based single-image dehazing models. This observation suggests that temporal coherence must be coupled with more advanced model structures (such as CNNs) to further boost video dehazing performance.
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_30", "@cite_6" ], "mid": [ "2085744236", "2549850705", "2518979500", "1923499759" ], "abstract": [ "A fast and optimized dehazing algorithm for hazy images and videos is proposed in this work. Based on the observation that a hazy image exhibits low contrast in general, we restore the hazy image by enhancing its contrast. However, the overcompensation of the degraded contrast may truncate pixel values and cause information loss. Therefore, we formulate a cost function that consists of the contrast term and the information loss term. By minimizing the cost function, the proposed algorithm enhances the contrast and preserves the information optimally. Moreover, we extend the static image dehazing algorithm to real-time video dehazing. We reduce flickering artifacts in a dehazed video sequence by making transmission values temporally coherent. Experimental results show that the proposed algorithm effectively removes haze and is sufficiently fast for real-time dehazing applications.", "Video dehazing has a wide range of real-time applications, but the challenges mainly come from spatio-temporal coherence and computational efficiency. In this paper, a spatio-temporal optimization framework for real-time video dehazing is proposed, which reduces blocking and flickering artifacts and achieves high-quality enhanced results. We build a Markov Random Field (MRF) with an Intensity Value Prior (IVP) to handle spatial consistency and temporal coherence. By maximizing the MRF likelihood function, the proposed framework estimates the haze concentration and preserves the information optimally. Moreover, to facilitate real-time applications, integral image technique is approximated to reduce the main computational burden. 
Experimental results demonstrate that the proposed framework is effective at removing haze and flickering artifacts, and is sufficiently fast for real-time applications.", "Most existing image dehazing methods tend to boost local image contrast for regions with heavy haze. Without special treatment, these methods may significantly amplify existing image artifacts such as noise, color aliasing and blocking, which are mostly invisible in the input images but are visually intruding in the results. This is especially the case for low quality cellphone shots or compressed video frames. The recent work of (2014) addresses blocking artifacts for dehazing, but is insufficient to handle other artifacts. In this paper, we propose a new method for reliable suppression of different types of visual artifacts in image and video dehazing. Our method makes contributions in both the haze estimation step and the image recovery step. Firstly, an image-guided, depth-edge-aware smoothing algorithm is proposed to refine the initial atmosphere transmission map generated by local priors. In the image recovery process, we propose Gradient Residual Minimization (GRM) for jointly recovering the haze-free image while explicitly minimizing possible visual artifacts in it. Our evaluation suggests that the proposed method can generate results with much less visual artifacts than previous approaches for lower quality inputs such as compressed video clips.", "We present a method to jointly estimate scene depth and recover the clear latent image from a foggy video sequence. In our formulation, the depth cues from stereo matching and fog information reinforce each other, and produce superior results compared to conventional stereo or defogging algorithms. We first improve the photo-consistency term to explicitly model the appearance change due to the scattering effects. The prior matting Laplacian constraint on fog transmission imposes a detail-preserving smoothness constraint on the scene depth. 
We further enforce the ordering consistency between scene depth and fog transmission at neighboring points. These novel constraints are formulated together in an MRF framework, which is optimized iteratively by introducing auxiliary variables. The experiment results on real videos demonstrate the strength of our method." ] }
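Post-processing approaches such as @cite_28 in the record above enforce temporal coherence on per-frame transmission estimates to suppress flicker. Here is a crude stand-in for that idea using exponential smoothing; the smoothing factor and the scalar transmission values are illustrative assumptions, not the cited papers' actual formulations.

```python
def smooth_transmissions(per_frame_t, alpha=0.8):
    """Exponentially smooth per-frame transmission estimates over time.

    Each frame's estimate is blended with the running estimate, damping
    frame-to-frame flicker at the cost of slower response to real changes.
    """
    smoothed, running = [], None
    for t in per_frame_t:
        running = t if running is None else alpha * running + (1.0 - alpha) * t
        smoothed.append(running)
    return smoothed

# Noisy scalar transmission estimates for 6 consecutive frames
raw = [0.50, 0.70, 0.45, 0.72, 0.48, 0.71]
out = smooth_transmissions(raw)
```

The smoothed sequence has a much smaller frame-to-frame range than the raw one, which is the temporal-consistency effect these post-processing methods aim for.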
1709.03919
2754480022
The recent development of CNN-based image dehazing has revealed the effectiveness of end-to-end modeling. However, extending the idea to end-to-end video dehazing has not been explored yet. In this paper, we propose an End-to-End Video Dehazing Network (EVD-Net) to exploit the temporal consistency between consecutive video frames. A thorough study has been conducted over a number of structure options to identify the best temporal fusion strategy. Furthermore, we build an End-to-End United Video Dehazing and Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with a video object detection model. The resulting augmented end-to-end pipeline has demonstrated much more stable and accurate detection results in hazy video.
Recent years have witnessed a growing interest in modeling video using CNNs, for a wide range of tasks such as super-resolution (SR) @cite_16 , deblurring @cite_34 , classification @cite_32 @cite_22 , and style transfer @cite_20 . @cite_16 investigates a variety of structure configurations for video SR. Similar attempts are made by @cite_32 @cite_22 , both digging into different connectivity options for video classification. @cite_8 proposes a more flexible formulation by placing a spatial alignment network between frames. @cite_34 introduces a CNN trained end-to-end to learn to accumulate information across frames for video deblurring. For video style transfer, @cite_20 incorporates both short-term and long-term coherence and also indicates the superiority of multi-frame methods over single-frame ones.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_32", "@cite_34", "@cite_16", "@cite_20" ], "mid": [ "2950444898", "2781335552", "2308045930", "2558246008", "2320725294", "2952148331" ], "abstract": [ "Recent advances have enabled \"oracle\" classifiers that can classify across many classes and input distributions with high accuracy without retraining. However, these classifiers are relatively heavyweight, so that applying them to classify video is costly. We show that day-to-day video exhibits highly skewed class distributions over the short term, and that these distributions can be classified by much simpler models. We formulate the problem of detecting the short-term skews online and exploiting models based on it as a new sequential decision making problem dubbed the Online Bandit Problem, and present a new algorithm to solve it. When applied to recognizing faces in TV shows and movies, we realize end-to-end classification speedups of 2.4-7.8x / 2.6-11.2x (on GPU / CPU) relative to a state-of-the-art convolutional neural network, at competitive accuracy.", "Video super-resolution (SR) aims to generate a high-resolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. 
Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.", "", "Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.", "Convolutional neural networks (CNN) are a special type of deep neural networks (DNN). They have so far been successfully applied to image super-resolution (SR) as well as other image restoration tasks. In this paper, we consider the problem of video super-resolution. 
We propose a CNN that is trained on both the spatial and the temporal dimensions of videos to enhance their spatial resolution. Consecutive frames are motion compensated and used as input to a CNN that provides super-resolved video frames as output. We investigate different options of combining the video frames within one CNN architecture. While large image databases are available to train deep neural networks, it is more challenging to create a large video database of sufficient quality to train neural nets for video restoration. We show that by using images to pretrain our model, a relatively small video database is sufficient for the training of our model to achieve and even improve upon the current state-of-the-art. We compare our proposed approach to current video as well as image SR algorithms.", "Training a feed-forward network for fast neural style transfer of images is proven to be successful. However, the naive extension to process video frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real-time. Two key ideas include an efficient network by incorporating short-term coherence, and propagating short-term coherence to long-term, which ensures the consistency over larger period of time. Our network can incorporate different image stylization networks. We show that the proposed method clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitudes faster in runtime." ] }
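Several of the multi-frame CNNs surveyed in the record above fuse consecutive frames by concatenating them along the channel axis before the first convolution ("early fusion", one of the structure options these papers compare). Here is a shape-only sketch of that fusion step with placeholder frame data; the dimensions are arbitrary.

```python
def early_fusion(frames):
    """Early temporal fusion: concatenate T frames of shape (C, H, W)
    along the channel axis into one (T*C, H, W) input, so the first
    convolution sees all frames jointly (late fusion would instead merge
    per-frame features only after several layers)."""
    fused = []
    for frame in frames:          # each frame is a list of C channel planes
        fused.extend(frame)       # channel-wise concatenation
    return fused

# Three hypothetical 2-channel 4x4 frames filled with placeholder values
T, C, H, W = 3, 2, 4, 4
frames = [[[[0.0] * W for _ in range(H)] for _ in range(C)] for _ in range(T)]
fused = early_fusion(frames)      # resulting shape: (T*C, H, W)
```

In a real network the fused tensor would simply replace the single-frame input of the first convolutional layer; only that layer's input channel count changes.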
1709.03969
2754794180
We describe a method to use discrete human feedback to enhance the performance of deep learning agents in virtual three-dimensional environments by extending deep reinforcement learning to model the confidence and consistency of human feedback. This enables deep reinforcement learning algorithms to determine the most appropriate time to listen to the human feedback, exploit the current policy model, or explore the agent's environment. Managing the trade-off between these three strategies allows DRL agents to be robust to inconsistent or intermittent human feedback. Through experimentation using a synthetic oracle, we show that our technique improves the training speed and overall performance of deep reinforcement learning in navigating three-dimensional environments using Minecraft. We further show that our technique is robust to highly inaccurate human feedback and can also operate when no human feedback is given.
There has been much work done on incorporating demonstrations @cite_18 @cite_8 @cite_15 @cite_11 and critique @cite_17 @cite_8 @cite_12 @cite_15 @cite_16 into machine learning. These approaches have proven effective at speeding up learning in complex, sequential environments. Typically these methods assume the existence of a reward function and use human feedback to aid the agent in learning a policy that maximizes that reward. Inverse reinforcement learning @cite_14 @cite_13 , on the other hand, seeks to directly infer a reward function from examples of optimal behavior provided by human trainers.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_17", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2148112459", "1999874108", "2116671302", "", "2573393487", "2626804490", "2486334580", "1155403144", "" ], "abstract": [ "By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies how to approach a learning problem from instructions and or demonstrations of other humans. For learning control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30 second long demonstration of the human instructor.", "We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. 
We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.", "We consider the problem of learning control policies via trajectory preference queries to an expert. In particular, the agent presents an expert with short runs of a pair of policies originating from the same state and the expert indicates which trajectory is preferred. The agent's goal is to elicit a latent target policy from the expert with as few queries as possible. To tackle this problem we propose a novel Bayesian model of the querying process and introduce two methods that exploit this model to actively select expert queries. Experimental results on four benchmark problems indicate that our model can effectively learn policies from trajectory preference queries and that active query selection can be substantially more efficient than random selection.", "", "Specifying a numeric reward function for reinforcement learning typically requires a lot of hand-tuning from a human expert. In contrast, preference-based reinforcement learning (PBRL) utilizes only pairwise comparisons between trajectories as a feedback signal, which are often more intuitive to specify. Currently available approaches to PBRL for control problems with continuous state action spaces require a known or estimated model, which is often not available and hard to learn. 
In this paper, we integrate preference-based estimation of the reward function into a model-free reinforcement learning (RL) algorithm, resulting in a model-free PBRL algorithm. Our new algorithm is based on Relative Entropy Policy Search (REPS), enabling us to utilize stochastic policies and to directly control the greediness of the policy update. REPS decreases exploration of the policy slowly by limiting the relative entropy of the policy update, which ensures that the algorithm is provided with a versatile set of trajectories, and consequently with informative preferences. The preference-based estimation is computed using a sample-based Bayesian method, which can also estimate the uncertainty of the utility. Additionally, we also compare to a linear solvable approximation, based on inverse RL. We show that both approaches perform favourably to the current state-of-the-art. The overall result is an algorithm that can learn non-parametric continuous action policies from a small number of preferences.", "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. 
These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.", "This paper reports theoretical and empirical results obtained for the score-based Inverse Reinforcement Learning (IRL) algorithm. It relies on a non-standard setting for IRL consisting of learning a reward from a set of globally scored trajectories. This allows using any type of policy (optimal or not) to generate trajectories without prior knowledge during data collection. This way, any existing database (like logs of systems in use) can be scored a posteriori by an expert and used to learn a reward function. Thanks to this reward function, it is shown that a near-optimal policy can be computed. Being related to least-square regression, the algorithm (called SBIRL) comes with theoretical guarantees that are proven in this paper. SBIRL is compared to standard IRL algorithms on synthetic data showing that annotations do help under conditions on the quality of the trajectories. It is also shown to be suitable for real-world applications such as the optimisation of a spoken dialogue system.", "Reward functions are an essential component of many robot learning methods. Defining such functions, however, remains hard in many practical applications. For tasks such as grasping, there are no reliable success measures available. Defining reward functions by hand requires extensive task knowledge and often leads to undesired emergent behavior. We introduce a framework, wherein the robot simultaneously learns an action policy and a model of the reward function by actively querying a human expert for ratings. We represent the reward model using a Gaussian process and evaluate several classical acquisition functions (AFs) from the Bayesian optimization literature in this context. Furthermore, we present a novel AF, expected policy divergence. 
We demonstrate results of our method for a robot grasping task and show that the learned reward function generalizes to a similar task. Additionally, we evaluate the proposed novel AF on a real robot pendulum swing-up task.", "" ] }
1709.03969
2754794180
We describe a method to use discrete human feedback to enhance the performance of deep learning agents in virtual three-dimensional environments by extending deep reinforcement learning to model the confidence and consistency of human feedback. This enables deep reinforcement learning algorithms to determine the most appropriate time to listen to the human feedback, exploit the current policy model, or explore the agent's environment. Managing the trade-off between these three strategies allows DRL agents to be robust to inconsistent or intermittent human feedback. Through experimentation using a synthetic oracle, we show that our technique improves the training speed and overall performance of deep reinforcement learning in navigating three-dimensional environments using Minecraft. We further show that our technique is robust to highly inaccurate human feedback and can also operate when no human feedback is given.
Video games are complex virtual worlds that often emulate many of the complexities found in the real world. Thus, many machine learning researchers have taken an interest in using machine learning to train AI agents to play video games @cite_6 @cite_10 . So far, there have been successes in using machine learning in both 2-D @cite_6 @cite_5 and 3-D environments @cite_10 @cite_1 @cite_16 .
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_5", "@cite_16", "@cite_10" ], "mid": [ "2950560044", "1757796397", "", "2626804490", "2098441518" ], "abstract": [ "In this paper, we introduce a new set of reinforcement learning (RL) tasks in Minecraft (a flexible 3D world). We then use these tasks to systematically compare and contrast existing deep reinforcement learning (DRL) architectures with our new memory-based DRL architectures. These tasks are designed to emphasize, in a controllable manner, issues that pose challenges for RL methods including partial observability (due to first-person visual observations), delayed rewards, high-dimensional visual observations, and the need to use active perception in a correct manner so as to perform well in the tasks. While these tasks are conceptually simple to describe, by virtue of having all of these challenges simultaneously they are difficult for current DRL architectures. Additionally, we evaluate the generalization performance of the architectures on environments not used during training. The experimental results show that our new architectures generalize to unseen environments better than existing DRL architectures.", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.", "", "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. 
In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.", "A long term goal of Interactive Reinforcement Learning is to incorporate nonexpert human feedback to solve complex tasks. Some state-of-the-art methods have approached this problem by mapping human information to rewards and values and iterating over them to compute better control policies. In this paper we argue for an alternate, more effective characterization of human feedback: Policy Shaping. We introduce Advise, a Bayesian approach that attempts to maximize the information gained from human feedback by utilizing it as direct policy labels. We compare Advise to state-of-the-art approaches and show that it can outperform them and is robust to infrequent and inconsistent human feedback." ] }
1709.03969
2754794180
We describe a method to use discrete human feedback to enhance the performance of deep learning agents in virtual three-dimensional environments by extending deep reinforcement learning to model the confidence and consistency of human feedback. This enables deep reinforcement learning algorithms to determine the most appropriate time to listen to the human feedback, exploit the current policy model, or explore the agent's environment. Managing the trade-off between these three strategies allows DRL agents to be robust to inconsistent or intermittent human feedback. Through experimentation using a synthetic oracle, we show that our technique improves the training speed and overall performance of deep reinforcement learning in navigating three-dimensional environments using Minecraft. We further show that our technique is robust to highly inaccurate human feedback and can also operate when no human feedback is given.
There are studies focusing on combining reward learning methods with human input, such as @cite_3 @cite_10 . We seek to extend this line of work, especially the work performed in @cite_10 , whose method rescales the human feedback to generate a universal value. We extend it by utilizing probabilistic approaches and deep reinforcement learning techniques, enabling similar methods to be adapted to continuous state spaces.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2098441518", "1539975474" ], "abstract": [ "A long term goal of Interactive Reinforcement Learning is to incorporate nonexpert human feedback to solve complex tasks. Some state-of-the-art methods have approached this problem by mapping human information to rewards and values and iterating over them to compute better control policies. In this paper we argue for an alternate, more effective characterization of human feedback: Policy Shaping. We introduce Advise, a Bayesian approach that attempts to maximize the information gained from human feedback by utilizing it as direct policy labels. We compare Advise to state-of-the-art approaches and show that it can outperform them and is robust to infrequent and inconsistent human feedback.", "We consider the problem of incorporating end-user advice into reinforcement learning (RL). In our setting, the learner alternates between practicing, where learning is based on actual world experience, and end-user critique sessions where advice is gathered. During each critique session the end-user is allowed to analyze a trajectory of the current policy and then label an arbitrary subset of the available actions as good or bad. Our main contribution is an approach for integrating all of the information gathered during practice and critiques in order to effectively optimize a parametric policy. The approach optimizes a loss function that linearly combines losses measured against the world experience and the critique data. We evaluate our approach using a prototype system for teaching tactical battle behavior in a real-time strategy game engine. Results are given for a significant evaluation involving ten end-users showing the promise of this approach and also highlighting challenges involved in inserting end-users into the RL loop." ] }
1709.04304
2754068056
Spatially localized deformation components are very useful for shape analysis and synthesis in 3D geometry processing. Several methods have recently been developed, with an aim to extract intuitive and interpretable deformation components. However, these techniques suffer from fundamental limitations especially for meshes with noise or large-scale deformations, and may not always be able to identify important deformation components. In this paper we propose a novel mesh-based autoencoder architecture that is able to cope with meshes with irregular topology. We introduce sparse regularization in this framework, which along with convolutional operations, helps localize deformations. Our framework is capable of extracting localized deformation components from mesh data sets with large-scale deformations and is robust to noise. It also provides a nonlinear approach to reconstruction of meshes using the extracted basis, which is more effective than the current linear combination approach. Extensive experiments show that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
Traditional CNNs are defined on 2D images or 3D voxels with regular grids. Research has explored extending CNNs to irregular graphs by construction in the spectral domain @cite_24 @cite_15 or the spatial domain @cite_28 @cite_6 . Such representations are exploited in recent work @cite_26 @cite_10 for finding correspondences or performing part-based segmentation on 3D shapes. Our method is based on spatial construction and utilizes it to build an autoencoder for analyzing deformation components.
{ "cite_N": [ "@cite_26", "@cite_28", "@cite_6", "@cite_24", "@cite_15", "@cite_10" ], "mid": [ "2398467116", "2406128552", "", "1662382123", "", "2559902616" ], "abstract": [ "Spectral methods have recently gained popularity in many domains of computer graphics and geometry processing, especially shape processing, computation of shape descriptors, distances, and correspondence. Spectral geometric structures are intrinsic and thus invariant to isometric deformations, are efficiently computed, and can be constructed on shapes in different representations. A notable drawback of these constructions, however, is that they are isotropic, i.e., insensitive to direction. In this paper, we show how to construct direction-sensitive spectral feature descriptors using anisotropic diffusion on meshes and point clouds. The core of our construction are directed local kernels acting similarly to steerable filters, which are learned in a task-specific manner. Remarkably, while being intrinsic, our descriptors allow to disambiguate reflection symmetries. We show the application of anisotropic descriptors for problems of shape correspondence on meshes and point clouds, achieving results significantly better than state-of-the-art methods.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. 
Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.", "", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "", "In this paper, we study the problem of semantic annotation on 3D models that are represented as shape graphs. A functional view is taken to represent localized information on graphs, so that annotations such as part segment or keypoint are nothing but 0-1 indicator vertex functions. Compared with images that are 2D grids, shape graphs are irregular and non-isomorphic data structures. To enable the prediction of vertex functions on them by convolutional neural networks, we resort to spectral CNN method that enables weight sharing by parameterizing kernels in the spectral domain spanned by graph laplacian eigenbases. Under this setting, our network, named SyncSpecCNN, strive to overcome two key challenges: how to share coefficients and conduct multi-scale analysis in different parts of the graph for a single shape, and how to share information across related but different shapes that may be represented by very different graphs. 
Towards these goals, we introduce a spectral parameterization of dilated convolutional kernels and a spectral transformer network. Experimentally we tested our SyncSpecCNN on various tasks, including 3D shape part segmentation and 3D keypoint prediction. State-of-the-art performance has been achieved on all benchmark datasets." ] }
1709.04072
2754296883
In this paper, we consider the convergence of an abstract inexact nonconvex and nonsmooth algorithm. We posit a pseudo sufficient descent condition and a pseudo relative error condition, both related to an auxiliary sequence, for the algorithm; a continuity condition is also assumed to hold. In fact, many classical inexact nonconvex and nonsmooth algorithms satisfy these three conditions. Under a special kind of summable assumption on the auxiliary sequence, we prove that the sequence generated by the general algorithm converges to a critical point of the objective function, provided the function satisfies the Kurdyka-Łojasiewicz property. The core of the proofs lies in building a new Lyapunov function, whose successive difference provides a bound for the successive difference of the points generated by the algorithm. We then apply our findings to several classical nonconvex iterative algorithms and derive the corresponding convergence results.
Recently, convergence analysis in nonconvex optimization has paid increasing attention to exploiting the semi-algebraic property in proofs. In @cite_25 , the authors proved the convergence of a proximal algorithm for minimizing semi-algebraic functions, and established rates at which the iterates converge to a critical point. An alternating proximal algorithm was considered in @cite_19 , and its convergence was proved under a semi-algebraic assumption on the objective function. Later, a proximal linearized alternating minimization algorithm was proposed and studied in @cite_24 . A convergence framework covering various nonconvex algorithms was given in @cite_14 . In @cite_5 , the authors modified the framework to analyze splitting methods with variable metric, and proved general convergence rates. The nonconvex ADMM was studied under a semi-algebraic assumption by @cite_26 @cite_0 . A later paper @cite_30 proposed a nonconvex primal-dual algorithm and proved its convergence. This semi-algebraic convergence analysis was also applied to the reweighted algorithm by @cite_4 , and an extension to the reweighted nuclear norm version was developed in @cite_17 . Recently, the DC algorithm has also employed the semi-algebraic property in its convergence analysis @cite_35 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_26", "@cite_4", "@cite_24", "@cite_19", "@cite_0", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "2747285118", "", "1967138577", "2163232332", "2594413771", "2027982384", "2129732816", "2950547155", "2004160833", "", "2747056667" ], "abstract": [ "Abstract The Primal–Dual Hybrid Gradient (PDHG) algorithm is a powerful algorithm used quite frequently in recent years for solving saddle-point optimization problems. The classical application considers convex functions, and it is well studied in literature. In this paper, we consider the convergence of an alternative formulation of the PDHG algorithm in the nonconvex case under the precompact assumption. The proofs are based on the Kurdyka–Ł ojasiewic functions, that cover a wide range of problems. A simple numerical experiment illustrates the convergence properties.", "", "In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows to cover a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems, or iterative thresholding procedures for compressive sensing.", "We consider the problem of minimizing the sum of a smooth function @math with a bounded Hessian and a nonsmooth function. 
We assume that the latter function is a composition of a proper closed function @math and a surjective linear map @math , with the proximal mappings of @math , @math , simple to compute. This problem is nonconvex in general and encompasses many important applications in engineering and machine learning. In this paper, we examined two types of splitting methods for solving this nonconvex optimization problem: the alternating direction method of multipliers and the proximal gradient algorithm. For the direct adaptation of the alternating direction method of multipliers, we show that if the penalty parameter is chosen sufficiently large and the sequence generated has a cluster point, then it gives a stationary point of the nonconvex problem. We also establish convergence of the whole sequence under an additional assumption that the functions @math and @math are semialgebraic. Further...", "Abstract In this paper, we investigate the convergence of the proximal iteratively reweighted algorithm for a class of nonconvex and nonsmooth problems. Such problems actually include numerous models in the area of signal processing and machine learning research. Two extensions of the algorithm are also studied. We provide a unified scheme for these three algorithms. With the Kurdyka–Łojasiewicz property, we prove that the unified algorithm globally converges to a critical point of the objective function.", "We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka---?ojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. 
Our approach allows to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward---backward algorithms with semi-algebraic problem's data, the latter property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.", "We study the convergence properties of an alternating proximal minimization algorithm for nonconvex structured functions of the type: L(x,y)=f(x)+Q(x,y)+g(y), where f and g are proper lower semicontinuous functions, defined on Euclidean spaces, and Q is a smooth function that couples the variables x and y. The algorithm can be viewed as a proximal regularization of the usual Gauss-Seidel method to minimize L. We work in a nonconvex setting, just assuming that the function L satisfies the Kurdyka-Łojasiewicz inequality. An entire section illustrates the relevancy of such an assumption by giving examples ranging from semialgebraic geometry to “metrically regular” problems. Our main result can be stated as follows: If L has the Kurdyka-Łojasiewicz property, then each bounded sequence generated by the algorithm converges to a critical point of L. This result is completed by the study of the convergence rate of the algorithm, which depends on the geometrical properties of the function L around its critical points. When specialized to @math and to f, g indicator functions, the algorithm is an alternating projection method (a variant of von Neumann's) that converges for a wide class of sets including semialgebraic and tame sets, transverse smooth manifolds or sets with “regular” intersection. 
To illustrate our results with concrete problems, we provide a convergent proximal reweighted l1 algorithm for compressive sensing and an application to rank reduction problems.", "We propose a Generalized Dantzig Selector (GDS) for linear models, in which any norm encoding the parameter structure can be leveraged for estimation. We investigate both computational and statistical aspects of the GDS. Based on conjugate proximal operator, a flexible inexact ADMM framework is designed for solving GDS, and non-asymptotic high-probability bounds are established on the estimation error, which rely on Gaussian width of unit norm ball and suitable set encompassing estimation error. Further, we consider a non-trivial example of the GDS using @math -support norm. We derive an efficient method to compute the proximal operator for @math -support norm since existing methods are inapplicable in this setting. For statistical analysis, we provide upper bounds for the Gaussian widths needed in the GDS analysis, yielding the first statistical recovery guarantee for estimation with the @math -support norm. The experimental results confirm our theoretical analysis.", "We study the convergence of general descent methods applied to a lower semi-continuous and nonconvex function, which satisfies the Kurdyka---?ojasiewicz inequality in a Hilbert space. We prove that any precompact sequence converges to a critical point of the function, and obtain new convergence rates both for the values and the iterates. The analysis covers alternating versions of the forward---backward method with variable metric and relative errors. As an example, a nonsmooth and nonconvex version of the Levenberg---Marquardt algorithm is detailed.", "", "The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. 
A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka–Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings." ] }
1709.04303
2754613063
Reading text in the wild is a challenging task in the field of computer vision. Existing approaches mainly adopted Connectionist Temporal Classification (CTC) or Attention models based on Recurrent Neural Network (RNN), which are computationally expensive and hard to train. In this paper, we present an end-to-end Attention Convolutional Network for scene text recognition. Firstly, instead of RNN, we adopt the stacked convolutional layers to effectively capture the contextual dependencies of the input sequence, which is characterized by lower computational complexity and easier parallel computation. Compared to the chain structure of recurrent networks, the Convolutional Neural Network (CNN) provides a natural way to capture long-term dependencies between elements, which is 9 times faster than Bidirectional Long Short-Term Memory (BLSTM). Furthermore, in order to enhance the representation of foreground text and suppress the background noise, we incorporate the residual attention modules into a small densely connected network to improve the discriminability of CNN features. We validate the performance of our approach on the standard benchmarks, including the Street View Text, IIIT5K and ICDAR datasets. As a result, state-of-the-art or highly-competitive performance and efficiency show the superiority of the proposed approach.
Traditional methods of scene text recognition first performed detection to generate multiple candidates of character locations, then applied a character classifier for recognition. @cite_7 used Random Ferns and HOG features to detect characters and then found an optimal configuration of a particular word via a pictorial structure. @cite_15 detected character candidates using sliding windows and integrated both bottom-up and top-down cues in a unified Conditional Random Field (CRF) model. @cite_3 constructed a part-based tree-structured model to recognize characters in cropped images. @cite_26 proposed an alternative way for character representation, denoted as Strokelets, which was a combination of multi-scale mid-level features.
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_7", "@cite_26" ], "mid": [ "2049951199", "2069472161", "1998042868", "1978729128" ], "abstract": [ "Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15%) and ICDAR 2003 (nearly 10%).", "Scene text recognition has inspired great interests from the computer vision community in recent years. In this paper, we propose a novel scene text recognition method using part-based tree-structured character detection. Different from conventional multi-scale sliding window character detection strategy, which does not make use of the character-specific structure information, we use part-based tree-structure to model each type of character so as to detect and recognize the characters at the same time. While for word recognition, we build a Conditional Random Field model on the potential character locations to incorporate the detection scores, spatial constraints and linguistic knowledge into one framework. The final word recognition result is obtained by minimizing the cost function defined on the random field. 
Experimental results on a range of challenging public datasets (ICDAR 2003, ICDAR 2011, SVT) demonstrate that the proposed method outperforms state-of-the-art methods significantly both for character detection and word recognition.", "This paper focuses on the problem of word detection and recognition in natural images. The problem is significantly more challenging than reading text in scanned documents, and has only recently gained attention from the computer vision community. Sub-components of the problem, such as text detection and cropped image word recognition, have been studied in isolation [7, 4, 20]. However, what is unclear is how these recent approaches contribute to solving the end-to-end problem of word recognition. We fill this gap by constructing and evaluating two systems. The first, representing the de facto state-of-the-art, is a two stage pipeline consisting of text detection followed by a leading OCR engine. The second is a system rooted in generic object recognition, an extension of our previous work in [20]. We show that the latter approach achieves superior performance. While scene text recognition has generally been treated with highly domain-specific methods, our results demonstrate the suitability of applying generic computer vision methods. Adopting this approach opens the door for real world scene text recognition to benefit from the rapid advances that have been taking place in object recognition.", "Driven by the wide range of applications, scene text detection and recognition have become active research topics in computer vision. Though extensively studied, localizing and reading text in uncontrolled environments remain extremely challenging, due to various interference factors. In this paper, we propose a novel multi-scale representation for scene text recognition. This representation consists of a set of detectable primitives, termed as strokelets, which capture the essential substructures of characters at different granularities. 
Strokelets possess four distinctive advantages: (1) Usability: automatically learned from bounding box labels, (2) Robustness: insensitive to interference factors, (3) Generality: applicable to variant languages, and (4) Expressivity: effective at describing characters. Extensive experiments on standard benchmarks verify the advantages of strokelets and demonstrate the effectiveness of the proposed algorithm for text recognition." ] }
1709.04303
2754613063
Reading text in the wild is a challenging task in the field of computer vision. Existing approaches mainly adopted Connectionist Temporal Classification (CTC) or Attention models based on Recurrent Neural Network (RNN), which are computationally expensive and hard to train. In this paper, we present an end-to-end Attention Convolutional Network for scene text recognition. Firstly, instead of RNN, we adopt the stacked convolutional layers to effectively capture the contextual dependencies of the input sequence, which is characterized by lower computational complexity and easier parallel computation. Compared to the chain structure of recurrent networks, the Convolutional Neural Network (CNN) provides a natural way to capture long-term dependencies between elements, which is 9 times faster than Bidirectional Long Short-Term Memory (BLSTM). Furthermore, in order to enhance the representation of foreground text and suppress the background noise, we incorporate the residual attention modules into a small densely connected network to improve the discriminability of CNN features. We validate the performance of our approach on the standard benchmarks, including the Street View Text, IIIT5K and ICDAR datasets. As a result, state-of-the-art or highly-competitive performance and efficiency show the superiority of the proposed approach.
Afterwards, the explorations of scene text recognition focused on the mapping from the entire image to the word string directly. @cite_29 embedded word images and word labels into a common Euclidean space and the embedding vectors were used to match images and labels. @cite_25 constructed two CNNs to classify the character at each position in the word and detect the N-grams contained within the word separately, followed by a CRF model to combine their representations. Recently, there has been increasing research on treating scene text recognition as a sequence recognition problem. @cite_30 proposed Convolutional Recurrent Neural Network (CRNN) which combined convolutional network and recurrent network to model the spatial dependencies. In @cite_19 , a recurrent network with attention mechanism was constructed to decode feature sequence and predict labels recurrently. @cite_11 adopted a convolutional-recurrent structure in the encoder to learn the sequential dynamics.
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_19", "@cite_25", "@cite_11" ], "mid": [ "2153182373", "2053317383", "2294053032", "1812645736", "" ], "abstract": [ "Scene text recognition is a useful but very challenging task due to uncontrolled condition of text in natural scenes. This paper presents a novel approach to recognize text in scene images. In the proposed technique, a word image is first converted into a sequential column vectors based on Histogram of Oriented Gradient (HOG). The Recurrent Neural Network (RNN) is then adapted to classify the sequential feature vectors into the corresponding word. Compared with most of the existing methods that follow a bottom-up approach to form words by grouping the recognized characters, our proposed method is able to recognize the whole word images without character-level segmentation and recognition. Experiments on a number of publicly available datasets show that the proposed method outperforms the state-of-the-art techniques significantly. In addition, the recognition results on publicly available datasets provide a good benchmark for the future research in this area.", "This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. 
We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.", "We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction, (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams, and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k.", "We develop a representation suitable for the unconstrained recognition of words in natural images: the general case of no fixed lexicon and unknown length. To this end we propose a convolutional neural network (CNN) based architecture which incorporates a Conditional Random Field (CRF) graphical model, taking the whole word image as a single input. The unaries of the CRF are provided by a CNN that predicts characters at each position of the output, while higher order terms are provided by another CNN that detects the presence of N-grams. We show that this entire model (CRF, character predictor, N-gram predictor) can be jointly optimised by back-propagating the structured output loss, essentially requiring the system to perform multi-task learning, and training uses purely synthetically generated data. 
The resulting model is a more accurate system on standard real-world text recognition benchmarks than character prediction alone, setting a benchmark for systems that have not been trained on a particular lexicon. In addition, our model achieves state-of-the-art accuracy in lexicon-constrained scenarios, without being specifically modelled for constrained recognition. To test the generalisation of our model, we also perform experiments with random alpha-numeric strings to evaluate the method when no visual language model is applicable.", "" ] }
1709.04303
2754613063
Reading text in the wild is a challenging task in the field of computer vision. Existing approaches mainly adopted Connectionist Temporal Classification (CTC) or Attention models based on Recurrent Neural Network (RNN), which are computationally expensive and hard to train. In this paper, we present an end-to-end Attention Convolutional Network for scene text recognition. Firstly, instead of RNN, we adopt the stacked convolutional layers to effectively capture the contextual dependencies of the input sequence, which is characterized by lower computational complexity and easier parallel computation. Compared to the chain structure of recurrent networks, the Convolutional Neural Network (CNN) provides a natural way to capture long-term dependencies between elements, which is 9 times faster than Bidirectional Long Short-Term Memory (BLSTM). Furthermore, in order to enhance the representation of foreground text and suppress the background noise, we incorporate the residual attention modules into a small densely connected network to improve the discriminability of CNN features. We validate the performance of our approach on the standard benchmarks, including the Street View Text, IIIT5K and ICDAR datasets. As a result, state-of-the-art or highly-competitive performance and efficiency show the superiority of the proposed approach.
Owing to its lower computational complexity and greater parallelism, CNN is a more efficient structure for capturing sequential contextual information. Several attempts have been made to replace RNN with CNN for sequence modeling. @cite_24 introduced a new neural language model that replaced the recurrent connections typically used in RNN with gated temporal convolutions. @cite_18 proposed the Iterated Dilated Convolutional Neural Networks (ID-CNNs), a faster alternative to recurrent networks for obtaining per-token vector representations in Natural Language Processing (NLP). @cite_27 proposed an architecture based entirely on convolutional neural networks for machine translation. Although CNN has shown its superiority in parallelism and efficiency, as far as we know, there is no research on using CNN to perform sequence generation in the field of scene text recognition. Our method incorporates CNN and CTC into a unified framework without any recurrent connections, which improves efficiency while achieving good performance. Furthermore, the proposed algorithm is not limited by a predefined dictionary and is applicable in both lexicon-free and lexicon-based settings. Without individual character detection, the proposed network can be trained end-to-end with word-level annotations and can effectively deal with words of arbitrary length.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_18" ], "mid": [ "2567070169", "2613904329", "2740462959" ], "abstract": [ "The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.", "The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.", "Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. 
Recent advances in GPU hardware have led to the emergence of bi-directional LSTMs as a standard method for obtaining per-token vector representations serving as input to labeling tasks such as NER (often followed by prediction in a linear-chain CRF). Though expressive and accurate, these models fail to fully exploit GPU parallelism, limiting their computational efficiency. This paper proposes a faster alternative to Bi-LSTMs for NER: Iterated Dilated Convolutional Neural Networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. We describe a distinct combination of network structure, parameter sharing and training procedures that enable dramatic 14-20x test-time speedups while retaining accuracy comparable to the Bi-LSTM-CRF. Moreover, ID-CNNs trained to aggregate context from the entire document are even more accurate while maintaining 8x faster test time speeds." ] }
1709.04123
2754559645
In traditional models for word-of-mouth recommendations and viral marketing, the objective function has generally been based on reaching as many people as possible. However, a number of studies have shown that the indiscriminate spread of a product by word-of-mouth can result in overexposure, reaching people who evaluate it negatively. This can lead to an effect in which the over-promotion of a product can produce negative reputational effects, by reaching a part of the audience that is not receptive to it. How should one make use of social influence when there is a risk of overexposure? In this paper, we develop and analyze a theoretical model for this process; we show how it captures a number of the qualitative phenomena associated with overexposure, and for the main formulation of our model, we provide a polynomial-time algorithm to find the optimal marketing strategy. We also present simulations of the model on real network topologies, quantifying the extent to which our optimal strategies outperform natural baselines.
As noted in the introduction, our work --- through its focus on selecting a seed set of nodes with which to start a cascade --- follows the motivation underlying the line of theoretical work on influence maximization @cite_18 @cite_27 @cite_15 . There has been some theoretical work showing the counter-intuitive outcome where increased effort results in a less successful spread. An example is @cite_0 , where they show that due to the separation of the infection and viral stage, there are cases where an increased effort can result in a lower rate of spread. A related line of work has made use of rich datasets on digital friend-to-friend recommendations on e-commerce sites to analyze the flow of product recommendations through an underlying social network @cite_19 . Further work has experimentally explored influence strategies, with individuals either immediately broadcasting their product adoption to their social network, or selecting individuals to recommend the product to @cite_20 .
{ "cite_N": [ "@cite_18", "@cite_0", "@cite_19", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "2042123098", "2579376689", "1994473607", "", "2056609785", "1965291015" ], "abstract": [ "One of the major applications of data mining is in helping companies determine which potential customers to market to. If the expected profit from a customer is greater than the cost of marketing to her, the marketing action for that customer is executed. So far, work in this area has considered only the intrinsic value of the customer (i.e., the expected profit from sales to her). We propose to model also the customer's network value: the expected profit from sales to other customers she may influence to buy, the customers they may influence, and so on recursively. Instead of viewing a market as a set of independent entities, we view it as a social network and model it as a Markov random field. We show the advantages of this approach using a social network mined from a collaborative filtering database. Marketing that exploits the network value of customers---also known as viral marketing---can be extremely effective, but is still a black art. Our work can be viewed as a step towards providing a more solid foundation for it, taking advantage of the availability of large relevant databases.", "Many studies in the field of information spread through social networks focus on the detection of influencers. The spread dynamics in most of these studies assumes these influencers are first selected and \"infected\" with a message, and then this message spreads through the networks by a viral process. The following work presents some difficulties with this separation between the infection stage and the viral stage, and provides a case where an increased effort spent on the spread of an idea results in lower final rates of spread. Such results can be prevented by the Scheduling Seeding approach. 
This approach gradually plans the timing of infection for each particular node as the viral process progresses. It outperforms the initial seeding approach, and prevents the occurrence of the counter-intuitive (and unwanted) results where a greater effort results in a less successful spread. A simple but effective heuristic to detect what node to seed and where is provided.", "We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We analyze how user behavior varies within user communities defined by a recommendation network. Product purchases follow a ‘long tail’ where a significant share of purchases belongs to rarely sold items. We establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies communities, product, and pricing categories for which viral marketing seems to be very effective.", "", "Viral marketing takes advantage of networks of influence among customers to inexpensively achieve large changes in behavior. Our research seeks to put it on a firmer footing by mining these networks from data, building probabilistic models of them, and using these models to choose the best viral marketing plan. Knowledge-sharing sites, where customers review products and advise each other, are a fertile source for this type of data mining. In this paper we extend our previous techniques, achieving a large reduction in computational cost, and apply them to data from a knowledge-sharing site. 
We optimize the amount of marketing funds spent on each customer, rather than just making a binary decision on whether to market to him. We take into account the fact that knowledge of the network is partial, and that gathering that knowledge can itself have a cost. Our results show the robustness and utility of our approach.", "We examine how firms can create word-of-mouth peer influence and social contagion by designing viral features into their products and marketing campaigns. To econometrically identify the effectiveness of different viral features in creating social contagion, we designed and conducted a randomized field experiment involving the 1.4 million friends of 9,687 experimental users on Facebook.com. We find that viral features generate econometrically identifiable peer influence and social contagion effects. More surprisingly, we find that passive-broadcast viral features generate a 246% increase in peer influence and social contagion, whereas adding active-personalized viral features generate only an additional 98% increase. Although active-personalized viral messages are more effective in encouraging adoption per message and are correlated with more user engagement and sustained product use, passive-broadcast messaging is used more often, generating more total peer adoption in the network. Our work provides a model for how randomized trials can identify peer influence in social networks. This paper was accepted by Pradeep Chintagunta and Preyas Desai, special issue editors." ] }
1709.04271
2754596546
In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight-sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet's learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.
Neural networks have been used to learn policies for probabilistic planning problems. The Factored Policy Gradient (FPG) planner trains a multi-layer perceptron with reinforcement learning to solve a factored MDP @cite_15 , but it cannot generalise across problems and must thus be trained anew on each evaluation problem. Concurrent with this work, groshev2017learning propose generalising "reactive" policies and heuristics by applying a CNN to a 2D visual representation of the problem, and demonstrate an effective learnt heuristic for Sokoban. However, their approach requires the user to define an appropriate visual encoding of states, whereas ASNets are able to work directly from a PPDDL description.
{ "cite_N": [ "@cite_15" ], "mid": [ "2172261094" ], "abstract": [ "We present an any-time concurrent probabilistic temporal planner (CPTP) that includes continuous and discrete uncertainties and metric functions. Rather than relying on dynamic programming our approach builds on methods from stochastic local policy search. That is, we optimise a parameterised policy using gradient ascent. The flexibility of this policy-gradient approach, combined with its low memory use, the use of function approximation methods and factorisation of the policy, allow us to tackle complex domains. This factored policy gradient (FPG) planner can optimise steps to goal, the probability of success, or attempt a combination of both. We compare the FPG planner to other planners on CPTP domains, and on simpler but better studied non-concurrent non-temporal probabilistic planning (PP) domains. We present FPG-ipc, the PP version of the planner which has been successful in the probabilistic track of the fifth international planning competition." ] }
1709.04271
2754596546
In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight-sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet's learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.
The integration of planning and neural networks has also been investigated in the context of deep reinforcement learning. For instance, Value Iteration Networks (VINs) @cite_35 @cite_5 learn to formulate and solve a probabilistic planning problem within a larger deep neural network. A VIN's internal model can allow it to learn more robust policies than would be possible with ordinary feedforward neural networks. In contrast to VINs, ASNets are intended to learn reactive policies for known planning problems, and operate on factored problem representations instead of (exponentially larger) explicit representations like those used by VINs.
{ "cite_N": [ "@cite_35", "@cite_5" ], "mid": [ "2258731934", "2624142409" ], "abstract": [ "We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "In this paper, we introduce a generalized value iteration network (GVIN), which is an end-to-end neural network planning module. GVIN emulates the value iteration algorithm by using a novel graph convolution operator, which enables GVIN to learn and plan on irregular spatial graphs. We propose three novel differentiable kernels as graph convolution operators and show that the embedding based kernel achieves the best performance. We further propose episodic Q-learning, an improvement upon traditional n-step Q-learning that stabilizes training for networks that contain a planning module. Lastly, we evaluate GVIN on planning problems in 2D mazes, irregular graphs, and real-world street networks, showing that GVIN generalizes well for both arbitrary graphs and unseen graphs of larger scale and outperforms a naive generalization of VIN (discretizing a spatial graph into a 2D image)." ] }
1709.04271
2754596546
In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight-sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet's learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.
In a similar vein, Kansky et al. present a model-based RL technique known as schema networks @cite_27 . A schema network can learn a transition model for an environment which has been decomposed into entities, but where those entities' interactions are initially unknown. The entity--relation structure of schema networks is reminiscent of the action--proposition structure of an ASNet; however, the relations between ASNet modules are obtained through grounding, whereas schema networks learn which entities are related from scratch. As with VINs, schema networks tend to yield agents which generalise well across a class of similar environments. However, unlike VINs and ASNets---which both learn policies directly---schema networks only learn a model of an environment, and planning on that model must be performed separately.
{ "cite_N": [ "@cite_27" ], "mid": [ "2624780181" ], "abstract": [ "The recent adaptation of deep neural network-based methods to reinforcement learning and planning domains has yielded remarkable progress on individual tasks. Nonetheless, progress on task-to-task transfer remains limited. In pursuit of efficient and robust generalization, we introduce the Schema Network, an object-oriented generative physics simulator capable of disentangling multiple causes of events and reasoning backward through causes to achieve goals. The richly structured architecture of the Schema Network can learn the dynamics of an environment directly from data. We compare Schema Networks with Asynchronous Advantage Actor-Critic and Progressive Networks on a suite of Breakout variations, reporting results on training efficiency and zero-shot generalization, consistently demonstrating faster, more robust learning and better transfer. We argue that generalizing from limited data and learning causal relationships are essential abilities on the path toward generally intelligent systems." ] }
1709.03947
2755357481
This paper presents a preliminary conceptual investigation into an environment representation that has constant space complexity with respect to the camera image space. This type of representation allows the planning algorithms of a mobile agent to bypass what are often complex and noisy transformations between camera image space and Euclidean space. The approach is to compute per-pixel potential values directly from processed camera data, which results in a discrete potential field that has constant space complexity with respect to the image plane. This can enable planning and control algorithms, whose complexity often depends on the size of the environment representation, to be defined with constant run-time. This type of approach can be particularly useful for platforms with strict resource constraints, such as embedded and real-time systems.
In order to define values for the potential fields, this approach draws on a wealth of related work in optical flow and monocular collision avoidance, notably @cite_16 @cite_32 @cite_5 @cite_3 @cite_27 @cite_22 @cite_14 @cite_2 @cite_40 @cite_11 . The intuition of these approaches is that a sequence of monocular stills contains sufficient information to compute time-to-contact (Definition ), which informs an agent about the rate of change of object proximity. The primary contribution of this work is a sensor-inspired representation space and algebra for enabling planning and control algorithms to reason effectively and efficiently with the output of this class of perception algorithms.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_32", "@cite_3", "@cite_27", "@cite_40", "@cite_2", "@cite_5", "@cite_16", "@cite_11" ], "mid": [ "2165311980", "117604884", "1899791997", "2060791715", "", "", "1755205674", "2156498675", "1958281728", "" ], "abstract": [ "The lure of using motion vision as a fundamental element in the perception of space drives this effort to use flow features as the sole cues for robot mobility. Real-time estimates of image flow and flow divergence provide the robot's sense of space. The robot steers down a conceptual corridor comparing left and right peripheral flows. Large central flow divergence warns the robot of impending collisions at \"dead ends.\" When this occurs, the robot turns around and resumes wandering. Behavior is generated by directly using flow-based information in the 2D image sequence; no 3D reconstruction is attempted. Active mechanical gate stabilization simplifies the visual interpretation problems by reducing camera rotation. By combining corridor following and dead-end deflection, the robot has wandered around the lab at 30 cm s for as long as 20 minutes without collision. The ability to support this behavior in real-time with current equipment promises expanded capabilities as computational power increases in the future. >", "Computer-vision based collision risk assessment is important in collision detection and obstacle avoidance tasks. We present an approach to determine both time to collision (TTC) and collision risk for semi-rigid obstacles from videos obtained with an uncalibrated camera. TTC for a body moving relative to the camera can be calculated using the ratio of its image size and its time derivative. In order to compute this ratio, we utilize the local scale change and motion information obtained from detection and tracking of feature points, wherein lies the chief novelty of our approach. 
Using the same local scale change and motion information, we also propose a measure of collision risk for obstacles moving along different trajectories relative to the camera optical axis. Using videos of pedestrians captured in a controlled experimental setup, in which ground truth can be established, we demonstrate the accuracy of our TTC and collision risk estimation approach for different walking trajectories.", "Time to Contact (TTC) is a biologically inspired method for obstacle detection and reactive control of motion that does not require scene reconstruction or 3D depth estimation. TTC is a measure of distance expressed in time units. Our results show that TTC can be used to provide reactive obstacle avoidance for local navigation. In this paper we describe the principles of time to contact and show how time to contact can be measured from the rate of change of size of features. We show an algorithm for steering a vehicle using TTC to avoid obstacles while approaching a goal. We present the results of experiments for obstacle avoidance using TTC in static and dynamic environments.", "Obstacle avoidance is desirable for lightweight micro aerial vehicles and is a challenging problem since the payload constraints only permit monocular cameras and obstacles cannot be directly observed. Depth can however be inferred based on various cues in the image. Prior work has examined optical flow, and perspective cues, however these methods cannot handle frontal obstacles well. In this paper we examine the problem of detecting obstacles right in front of the vehicle. We developed a method to detect relative size changes of image patches that is able to detect size changes in the absence of optical flow. The method uses SURF feature matches in combination with template matching to compare relative obstacle sizes with different image spacing. We present results from our algorithm in autonomous flight tests on a small quadrotor. 
We are able to detect obstacles with a frame-to-frame enlargement of 120 with a high confidence and confirmed our algorithm in 20 successful flight experiments. In future work, we will improve the control algorithms to avoid more complicated obstacle configurations.", "", "", "This paper presents a novel two-frame motion estimation algorithm. The first step is to approximate each neighborhood of both frames by quadratic polynomials, which can be done efficiently using the polynomial expansion transform. From observing how an exact polynomial transforms under translation a method to estimate displacement fields from the polynomial expansion coefficients is derived and after a series of refinements leads to a robust algorithm. Evaluation on the Yosemite sequence shows good results.", "Collision detection and estimation from a monocular visual sensor is an important enabling technology for safe navigation of small or micro air vehicles in near earth flight. In this paper, we introduce a new approach called expansion segmentation, which simultaneously detects “collision danger regions” of significant positive divergence in inertial aided video, and estimates maximum likelihood time to collision (TTC) in a correspondenceless framework within the danger regions. This approach was motivated from a literature review which showed that existing approaches make strong assumptions about scene structure or camera motion, or pose collision detection without determining obstacle boundaries, both of which limit the operational envelope of a deployable system. Expansion segmentation is based on a new formulation of 6-DOF inertial aided TTC estimation, and a new derivation of a first order TTC uncertainty model due to subpixel quantization error and epipolar geometry uncertainty. 
Proof of concept results are shown in a custom designed urban flight simulator and on operational flight data from a small air vehicle.", "This paper proposes a new reactive method of avoiding dynamic obstacles by real-time optimization. A mobile robot is equipped with a monocular camera and dynamic obstacles are identified in the image sequence by clustering the optical flow. The respective epipoles of the clusters are determined and afterwards the relative epipole positions are evaluated to identify colliding and dangerous objects. In this work the correlation between the vehicle's velocities and the cluster epipoles is derived and utilized in the proposed cost function for shifting the epipoles. By optimizing this cost function the vehicle is able to avoid collisions, which means that the 3D motion is deduced from purely 2D image data. Finally, the validity of the concept is confirmed by hardware-in-the-loop simulations.", "" ] }
1906.01342
2948508429
Face parsing computes pixel-wise label maps for different semantic components (e.g., hair, mouth, eyes) from face images. Existing face parsing literature has illustrated significant advantages of focusing on individual regions of interest (RoIs) for faces and facial components. However, the traditional crop-and-resize focusing mechanism ignores all contextual area outside the RoIs, and thus is not suitable when the component area is unpredictable, e.g. hair. Inspired by the physiological vision system of humans, we propose a novel RoI Tanh-warping operator that combines central vision and peripheral vision. It addresses the dilemma between a limited-sized RoI for focusing and an unpredictable area of surrounding context for peripheral information. To this end, we propose a novel hybrid convolutional neural network for face parsing. It uses hierarchical local-based methods for inner facial components and global methods for outer facial components. The whole framework is simple and principled, and can be trained end-to-end. To facilitate future research on face parsing, we also manually relabel the training data of the HELEN dataset and will make it public. Experiments on both the HELEN and LFW-PL benchmarks demonstrate that our method surpasses state-of-the-art methods.
Semantic segmentation for generic images has become a fundamental topic in computer vision and has achieved significant progress, e.g. @cite_4 @cite_19 @cite_15 @cite_8 @cite_25 @cite_21 @cite_1 @cite_24 @cite_30 @cite_0 . FCN @cite_19 is a well-known baseline for generic images which employs full convolution on the entire image to extract per-pixel features. Following this work, CRFasRNN @cite_21 and DeepLab @cite_25 adopt dense CRF optimization to refine the predicted label map. @cite_9 represent the segmentation mask as a truncated distance transform to alleviate the information loss caused by erroneous box cropping. Recently, Mask R-CNN @cite_0 further advanced the state of the art in semantic segmentation by extending Faster R-CNN @cite_5 and integrating a novel RoIAlign operation. However, directly applying these generic methods to face parsing may fail to model the complex-yet-varying spatial layout across face parts, especially hair, leading to unsatisfactory results.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_1", "@cite_24", "@cite_19", "@cite_0", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "2605214291", "1507506748", "2962872526", "", "2124592697", "2471717510", "2605057961", "1903029394", "", "2613718673", "1745334888", "2412782625" ], "abstract": [ "", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.", "Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. 
Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.", "", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "Graph-based image segmentation organizes the image elements into graphs and partitions an image based on the graph. It has been widely used and many promising results are obtained. Since the segmentation performance highly depends on the graph, most of existing methods focus on obtaining a precise similarity graph or on designing efficient cutting merging strategies. 
However, these two components are often conducted in two separated steps, and thus the obtained graph similarity may not be the optimal one for segmentation and this may lead to suboptimal results. In this paper, we propose a novel framework, Graph-Without-Cut (GWC), for learning the similarity graph and image segmentations simultaneously. GWC learns the similarity graph by assigning adaptive and optimal neighbors to each vertex based on the spatial and visual information. Meanwhile, the new rank constraint is imposed to the Laplacian matrix of the similarity graph, such that the connected components in the resulted similarity graph are exactly equal to the region number. Extensive empirical results on three public data sets (i.e, BSDS300, BSDS500 and MSRC) show that our unsupervised GWC achieves state-of-the-art performance compared with supervised and unsupervised image segmentation approaches.", "", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. 
Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "We propose a novel semantic segmentation algorithm by learning a deep deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixelwise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. 
The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction, our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained without using Microsoft COCO dataset through ensemble with the fully convolutional network.", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. 
Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online." ] }
1906.01342
2948508429
Face parsing computes pixel-wise label maps for different semantic components (e.g., hair, mouth, eyes) from face images. Existing face parsing literature has illustrated significant advantages of focusing on individual regions of interest (RoIs) for faces and facial components. However, the traditional crop-and-resize focusing mechanism ignores all contextual area outside the RoIs, and thus is not suitable when the component area is unpredictable, e.g. hair. Inspired by the physiological vision system of humans, we propose a novel RoI Tanh-warping operator that combines central vision and peripheral vision. It addresses the dilemma between a limited-sized RoI for focusing and an unpredictable area of surrounding context for peripheral information. To this end, we propose a novel hybrid convolutional neural network for face parsing. It uses hierarchical local-based methods for inner facial components and global methods for outer facial components. The whole framework is simple and principled, and can be trained end-to-end. To facilitate future research on face parsing, we also manually relabel the training data of the HELEN dataset and will make it public. Experiments on both the HELEN and LFW-PL benchmarks demonstrate that our method surpasses state-of-the-art methods.
Global-based methods directly predict per-pixel semantic labels over the whole face image. Early works represent the spatial correlation between facial parts with various hand-designed models, such as the epitome model @cite_23 and the exemplar-based method @cite_7 . With the advance of deep learning techniques, a variety of CNN structures and loss functions have been proposed to encode the underlying layouts of the whole face. @cite_32 integrate the CNN into the CRF framework, and jointly model pixel-wise likelihoods and label dependencies through a multi-objective learning method. @cite_18 use facial landmarks as guidance, and integrate the boundary cue into the CNN to implicitly confine facial regions. @cite_11 design an architecture which jointly employs a fully convolutional network, super-pixel information, and a CRF model. @cite_31 propose automatically regulating receptive fields in a deep image parsing network, thereby obtaining better receptive fields for face parsing. Besides these works, @cite_10 try to reduce computation to achieve real-time performance.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_32", "@cite_23", "@cite_31", "@cite_10", "@cite_11" ], "mid": [ "2527787998", "1988224075", "1905033729", "2151971489", "2742023617", "2339268922", "2746881582" ], "abstract": [ "This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints). There is large amount of prior work on each of these tasks separately, yet, to the best of our knowledge, this is the first time in literature that the interplay between pose estimation and semantic part segmentation is investigated. To address this limitation of prior work, in this paper, we propose a CNN cascade of tasks that firstly performs landmark localisation and then uses this information as input for guiding semantic part segmentation. We applied our architecture to the problem of facial part segmentation and report large performance improvement over the standard unguided network on the most challenging face datasets. Testing code and models will be published online at http: cs.nott.ac.uk psxasj .", "", "This paper formulates face labeling as a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. 
Experiments on both the LFW and Helen datasets demonstrate state-of-the-art results of the proposed algorithm, and accurate labeling results on challenging images can be obtained by the proposed algorithm for real-world applications.", "We consider the problem of parsing facial features from an image labeling perspective. We learn a per-pixel unary classifier, and a prior over expected label configurations, allowing us to estimate a dense labeling of facial images by part (e.g. hair, mouth, moustache, hat). This approach deals naturally with large variations in shape and appearance characteristic of unconstrained facial images, and also the problem of detecting classes that may be present or absent. We use an Adaboost-based unary classifier, and develop a family of priors based on ‘epitomes’ which are shown to be particularly effective in capturing the non-stationary aspects of face label distributions.", "In this paper, we introduce a novel approach to regulate receptive field in deep image parsing network automatically. Unlike previous works which have stressed much importance on obtaining better receptive fields using manually selected dilated convolutional kernels, our approach uses two affine transformation layers in the networks backbone and operates on feature maps. Feature maps will be inflated shrinked by the new layer and therefore receptive fields in following layers are changed accordingly. By end-to-end training, the whole framework is data-driven without laborious manual intervention. The proposed method is generic across dataset and different tasks. We conduct extensive experiments on both general parsing task and face parsing task as concrete examples to demonstrate the methods superior regulation ability over manual designs.", "We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. 
To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real-time by repurposing convolutional neural networks designed originally for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework with segmented face images as training, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models on the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement.", "In this work, we address the face parsing task with a Fully-Convolutional continuous CRF Neural Network (FC-CNN) architecture. In contrast to previous face parsing methods that apply region-based subnetwork hundreds of times, our FC-CNN is fully convolutional with high segmentation accuracy. 
To achieve this goal, FC-CNN integrates three subnetworks, a unary network, a pairwise network and a continuous Conditional Random Field (C-CRF) network into a unified framework. The high-level semantic information and low-level details across different convolutional layers are captured by the convolutional and deconvolutional structures in the unary network. The semantic edge context is learnt by the pairwise network branch to construct pixel-wise affinity. Based on a differentiable superpixel pooling layer and a differentiable C-CRF layer, the unary network and pairwise network are combined via a novel continuous CRF network to achieve spatial consistency in both training and test procedure of a deep neural network. Comprehensive evaluations on LFW-PL and HELEN datasets demonstrate that FC-CNN achieves better performance over the other state-of-arts for accurate face labeling on challenging images." ] }
1906.01342
2948508429
Face parsing computes pixel-wise label maps for different semantic components (e.g., hair, mouth, eyes) from face images. Existing face parsing literature has illustrated significant advantages of focusing on individual regions of interest (RoIs) for faces and facial components. However, the traditional crop-and-resize focusing mechanism ignores all contextual area outside the RoIs, and thus is not suitable when the component area is unpredictable, e.g. hair. Inspired by the physiological vision system of humans, we propose a novel RoI Tanh-warping operator that combines central vision and peripheral vision. It addresses the dilemma between a limited-sized RoI for focusing and an unpredictable area of surrounding context for peripheral information. To this end, we propose a novel hybrid convolutional neural network for face parsing. It uses hierarchical local-based methods for inner facial components and global methods for outer facial components. The whole framework is simple and principled, and can be trained end-to-end. To facilitate future research on face parsing, we also manually relabel the training data of the HELEN dataset and will make it public. Experiments on both the HELEN and LFW-PL benchmarks demonstrate that our method surpasses state-of-the-art methods.
Local-based methods train separate models for various facial components (eyes, nose) to predict masks for each part individually. @cite_2 propose a hierarchical method which segments each detected facial part separately. @cite_29 design an interlinked CNN-based pipeline which predicts pixel labels after facial localization. Benefiting from its intricate design, the interlinked CNN structure is able to pass information between coarse and fine levels bidirectionally, thus achieving good performance at the expense of large memory and computation costs. @cite_3 achieve state-of-the-art accuracy with very fast running speed by combining a shallow CNN and a spatially variant RNN in two successive stages.
{ "cite_N": [ "@cite_29", "@cite_3", "@cite_2" ], "mid": [ "2295744361", "2744147870", "1980163762" ], "abstract": [ "Face parsing is a basic task in face image analysis. It amounts to labeling each pixel with appropriate facial parts such as eyes and nose. In the paper, we present a interlinked convolutional neural network iCNN for solving this problem in an end-to-end fashion. It consists of multiple convolutional neural networks CNNs taking input in different scales. A special interlinking layer is designed to allow the CNNs to exchange information, enabling them to integrate local and contextual information efficiently. The hallmark of iCNN is the extensive use of downsampling and upsampling in the interlinking layers, while traditional CNNs usually uses downsampling only. A two-stage pipeline is proposed for face parsing and both stages use iCNN. The first stage localizes facial parts in the size-reduced image and the second stage labels the pixels in the identified facial parts in the original image. On a benchmark dataset we have obtained better results than the state-of-the-art methods.", "Face parsing is an important problem in computer vision that finds numerous applications including recognition and editing. Recently, deep convolutional neural networks (CNNs) have been applied to image parsing and segmentation with the state-of-the-art performance. In this paper, we propose a face parsing algorithm that combines hierarchical representations learned by a CNN, and accurate label propagations achieved by a spatially variant recurrent neural network (RNN). The RNN-based propagation approach enables efficient inference over a global space with the guidance of semantic edges generated by a local convolutional model. Since the convolutional architecture can be shallow and the spatial RNN can have few parameters, the framework is much faster and more light-weighted than the state-of-the-art CNNs for the same task. 
We apply the proposed model to coarse-grained and fine-grained face parsing. For fine-grained face parsing, we develop a two-stage approach by first identifying the main regions and then segmenting the detail components, which achieves better performance in terms of accuracy and efficiency. With a single GPU, the proposed algorithm parses face images accurately at 300 frames per second, which facilitates real-time applications.", "This paper investigates how to parse (segment) facial components from face images which may be partially occluded. We propose a novel face parser, which recasts segmentation of face components as a cross-modality data transformation problem, i.e., transforming an image patch to a label map. Specifically, a face is represented hierarchically by parts, components, and pixel-wise labels. With this representation, our approach first detects faces at both the part- and component-levels, and then computes the pixel-wise label maps (Fig.1). Our part-based and component-based detectors are generatively trained with the deep belief network (DBN), and are discriminatively tuned by logistic regression. The segmentators transform the detected face components to label maps, which are obtained by learning a highly nonlinear mapping with the deep autoencoder. The proposed hierarchical face parsing is not only robust to partial occlusions but also provide richer information for face analysis and face synthesis compared with face keypoint detection and face alignment. The effectiveness of our algorithm is shown through several tasks on 2, 239 images selected from three datasets (e.g., LFW [12], BioID [13] and CUFSF [29])." ] }
1906.01342
2948508429
Face parsing computes pixel-wise label maps for different semantic components (e.g., hair, mouth, eyes) from face images. The existing face parsing literature has illustrated significant advantages of focusing on individual regions of interest (RoIs) for faces and facial components. However, the traditional crop-and-resize focusing mechanism ignores all contextual area outside the RoIs, and thus is not suitable when the component area is unpredictable, e.g., hair. Inspired by the physiological vision system of humans, we propose a novel RoI Tanh-warping operator that combines the central vision and the peripheral vision together. It addresses the dilemma between a limited-size RoI for focusing and an unpredictable area of surrounding context for peripheral information. To this end, we propose a novel hybrid convolutional neural network for face parsing. It uses hierarchical local-based methods for inner facial components and global methods for outer facial components. The whole framework is simple and principled, and can be trained end-to-end. To facilitate future research on face parsing, we also manually relabel the training data of the HELEN dataset and will make it public. Experiments on both HELEN and LFW-PL benchmarks demonstrate that our method surpasses state-of-the-art methods.
Portrait segmentation and hair segmentation, such as the works of @cite_34 @cite_27 and @cite_20 , to name a few, are closely related to the face parsing literature. Recent approaches to these two tasks incorporate domain-specific knowledge into DCNNs and achieve practical results for follow-up applications. Nevertheless, they only tackle a sub-problem of face parsing, without addressing the task of segmenting all parts of the face, which is more general and challenging.
{ "cite_N": [ "@cite_27", "@cite_34", "@cite_20" ], "mid": [ "", "2400000673", "2468764576" ], "abstract": [ "", "Portraiture is a major art form in both photography and painting. In most instances, artists seek to make the subject stand out from its surrounding, for instance, by making it brighter or sharper. In the digital world, similar effects can be achieved by processing a portrait image with photographic or painterly filters that adapt to the semantics of the image. While many successful user-guided methods exist to delineate the subject, fully automatic techniques are lacking and yield unsatisfactory results. Our paper first addresses this problem by introducing a new automatic segmentation algorithm dedicated to portraits. We then build upon this result and describe several portrait filters that exploit our automatic segmentation algorithm to generate high-quality portraits.", "We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval." ] }
1906.01340
2948086500
In this paper, we study the importance of pre-training for the generalization capability in the color constancy problem. We propose two novel approaches based on convolutional autoencoders: an unsupervised pre-training algorithm using a fine-tuned encoder and a semi-supervised pre-training algorithm using a novel composite-loss function. This enables us to solve the data scarcity problem and achieve competitive, to the state-of-the-art, results while requiring much fewer parameters on ColorChecker RECommended dataset. We further study the over-fitting phenomenon on the recently introduced version of INTEL-TUT Dataset for Camera Invariant Color Constancy Research, which has both field and non-field scenes acquired by three different camera models.
Typically, color constancy algorithms are divided into two main categories, namely unsupervised methods and supervised methods. The former involve methods with static parameter settings which are based on low-level statistics @cite_31 @cite_27 @cite_5 @cite_23 @cite_2 and methods using the physics-based dichromatic reflection model @cite_21 @cite_1 @cite_18 @cite_24 , while the latter involve data-driven approaches that learn to estimate the illuminant in a supervised manner using labeled data. Supervised methods can be further divided into two main categories: characterization-based methods and training-based methods. The former involve characterization of the camera response in one way or another, such as Gamut Mapping @cite_22 , which assumes that in a real-world scenario, for a given illuminant, only a limited number of colors can be observed. The latter involve methods that try to learn the illuminant directly from the scene @cite_7 @cite_9 @cite_11 @cite_26 . One group of training-based methods considers different illumination estimation approaches and learns a model that uses the best-performing method or a combination of methods to estimate the illuminant of each input based on certain scene characteristics @cite_26 . Another group of learning-based methods uses deep learning approaches to solve the illumination estimation problem.
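The Gray-World assumption behind the simplest static methods above can be made concrete in a few lines: if the average reflectance of a scene is achromatic, any deviation of the per-channel means from their overall gray level is attributed to the illuminant, which can then be divided out with a diagonal (von Kries-style) correction. A minimal sketch, assuming RGB pixels stored as plain lists; the function names are illustrative.

```python
def gray_world_illuminant(pixels):
    """Estimate the illuminant as the deviation of the per-channel means
    from the overall gray level (Gray-World assumption)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [m / gray for m in means]  # (1, 1, 1) for a balanced scene

def diagonal_correct(pixels, illuminant):
    """Von Kries-style correction: divide each channel by its estimated gain."""
    return [[p[c] / illuminant[c] for c in range(3)] for p in pixels]

# A gray scene viewed under a reddish light: R is doubled everywhere.
scene = [[1.2, 0.6, 0.6], [0.8, 0.4, 0.4], [0.4, 0.2, 0.2]]
estimate = gray_world_illuminant(scene)
balanced = diagonal_correct(scene, estimate)  # every pixel becomes achromatic
```

The Gray-Edge framework cited above generalizes exactly this recipe by replacing the per-channel mean of pixel values with Minkowski norms of image derivatives.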
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_22", "@cite_7", "@cite_9", "@cite_21", "@cite_1", "@cite_24", "@cite_27", "@cite_23", "@cite_2", "@cite_5", "@cite_31", "@cite_11" ], "mid": [ "", "2090412934", "2334082317", "", "2107210265", "2100001370", "2495048672", "1965230747", "1423380280", "2111810242", "2157005738", "1966855663", "2021501646", "1970141893" ], "abstract": [ "", "In this work, we investigate how illuminant estimation techniques can be improved taking into account intrinsic, low level properties of the images. We show how these properties can be used to drive, given a set of illuminant estimation algorithms, the selection of the best algorithm for a given image. The algorithm selection is made by a decision forest composed of several trees on the basis of the values of a set of heterogeneous features. The features represent the image content in terms of low-level visual properties. The trees are trained to select the algorithm that minimizes the expected error in illuminant estimation. We also designed a combination strategy that estimates the illuminant as a weighted sum of the different algorithms' estimations. Experimental results on the widely used Ciurea and Funt dataset demonstrate the effectiveness of our approach.", "", "", "We present a novel image transform called Scale Manipulation (SMT). The transform can be used for object pose estimation and registration of affine transformed images in the presence of non homogenous illumination changes. The transform calculates affine invariant features of objects in a global manner and avoids using any sort of edge detection. The computational load of the method is relatively low since it is linear in the data size. In this paper we introduce the transform and demonstrate its applications for pose estimation in the presence of non uniform varying illumination.", "Computational color constancy is a fundamental prerequisite for many computer vision applications. 
This paper presents a survey of many recent developments and state-of-the-art methods. Several criteria are proposed that are used to assess the approaches. A taxonomy of existing algorithms is proposed and methods are separated in three groups: static methods, gamut-based methods, and learning-based methods. Further, the experimental setup is discussed including an overview of publicly available datasets. Finally, various freely available methods, of which some are considered to be state of the art, are evaluated on two datasets.", "", "Most video processing applications require object tracking as it is the base operation for real-time implementations such as surveillance, monitoring and video compression. Therefore, accurate tracking of an object under varying scene conditions is crucial for robustness. It is well known that illumination variations on the observed scene and target are an obstacle against robust object tracking causing the tracker lose the target. In this paper, a 2D-cepstrum based approach is proposed to overcome this problem. Cepstral domain features extracted from the target region are introduced into the covari-ance tracking algorithm and it is experimentally observed that 2D-cepstrum analysis of the target object provides robustness to varying illumination conditions. Another contribution of the paper is the development of the co-difference matrix based object tracking instead of the recently introduced covariance matrix based method.", "In this paper, the estimation of the illuminant in color constancy issues is analysed in two perceptual color spaces, and a variation of a well-known methodology is presented. Such approach is based on the Gray-World assumption, here particularly applied on the chromatic components in the CIELAB and CIELUV color spaces. A comparison is made between the outcomes on imagery for each color model considered. Reference images from the Gray-Ball dataset are considered for experimental tests. 
The performance of the approach is evaluated with the angular error, a metric well accepted in this field. The experimental results show that operating on perceptual color spaces improves the illuminant estimation, in comparison with the results obtained using the standard approach in RGB.", "There are several models for lightness computation, as many are the retinex based theories about color constancy. This paper provides a new hybrid technique for color recovery, which combines one of the most effective retinex algorithm (McCann99) and the gray world transformation applied at multiple resolutions.. A novel post processing step is also presented that improves the final color balance and range. The experimental tests on synthetic and real images, confirm the algorithm robustness and the improved color recovery capability with respect to other on the edge algorithms.", "In many multi-media applications it is desirable to separate the influence of the illumination sources and imaging equipment from the properties of the depicted scene. The ability of the human visual system to solve this task in many situations is known as color constancy. Technical applications of these methods include automatic color correction and illumination independent search in image databases. Many conventional computational color constancy methods assume that the effect of an illumination change can be described by a matrix multiplication with a diagonal matrix. In this paper we introduce a color normalization algorithm which computes the unique color transformation matrix which normalizes a given set of moments computed from the color distribution of an image. This normalization procedure is a generalization of the channel independent color constancy methods since general matrix transformations are considered. We compare the performance of this new normalization method with conventional color constancy methods. 
The experiments show that diagonal transformation matrices provide a better illumination compensation. This shows that the color moments also contain significant information about the color distributions of the objects in the image which is independent of the illumination characteristics. In another set of experiments we use the unique transformation matrix as a descriptor of the set of moments which describe the global color distribution in the image. Combining the matrices computed from two such images describes the color differences between them. We then use this as a tool for color dependent search in image databases. This matrix based color search is computationally less demanding than histogram based color search tools.", "Color constancy is the ability to measure colors of objects independent of the color of the light source. A well-known color constancy method is based on the gray-world assumption which assumes that the average reflectance of surfaces in the world is achromatic. In this paper, we propose a new hypothesis for color constancy namely the gray-edge hypothesis, which assumes that the average edge difference in a scene is achromatic. Based on this hypothesis, we propose an algorithm for color constancy. Contrary to existing color constancy algorithms, which are computed from the zero-order structure of images, our method is based on the derivative structure of images. Furthermore, we propose a framework which unifies a variety of known (gray-world, max-RGB, Minkowski norm) and the newly proposed gray-edge and higher order gray-edge algorithms. The quality of the various instantiations of the framework is tested and compared to the state-of-the-art color constancy methods on two large data sets of images recording objects under a large number of different light sources. 
The experiments show that the proposed color constancy algorithms obtain comparable results as the state-of-the-art color constancy methods with the merit of being computationally more efficient.", "Color equalization algorithms exhibit a variety of behaviors described in two differing types of models: Gray World and White Patch. These two models are considered alternatives to each other in methods of color correction. They are the basis for two human visual adaptation mechanisms: Lightness Constancy and Color Constancy. The Gray World approach is typical of the Lightness Constancy adaptation because it centers the histogram dynamic, working the same way as the exposure control on a camera. Alternatively, the White Patch approach is typical of the Color Constancy adaptation, searching for the lightest patch to use as a white reference similar to how the human visual system does. The Retinex algorithm basically belongs to the White Patch family due to its reset mechanism. Searching for a way to merge these two approaches, we have developed a new chromatic correction algorithm, called Automatic Color Equalization (ACE), which is able to perform Color Constancy even if based on Gray World approach. It maintains the main Retinex idea that the color sensation derives from the comparison of the spectral lightness values across the image. We tested different performance measures on ACE, Retinex and other equalization algorithms. The results of this comparison are presented.© (2002) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.", "A neural network can learn color constancy, defined here as the ability to estimate the chromaticity of a scene’s overall illumination. We describe a multilayer neural network that is able to recover the illumination chromaticity given only an image of the scene. 
The network is previously trained by being presented with a set of images of scenes and the chromaticities of the corresponding scene illuminants. Experiments with real images show that the network performs better than previous color constancy methods. In particular, the performance is better for images with a relatively small number of distinct colors. The method has application to machine vision problems such as object recognition, where illumination-independent color descriptors are required, and in digital photography, where uncontrolled scene illumination can create an unwanted color cast in a photograph." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing additional opportunities for cooperation. However, the communication and utilization of potentially outdated information are also a concern. We present an explicit, non-directional, goal-based communication model and a message accept/reject scheme, and test our model in a set of object-gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
Communication strategies in swarm robotics are often inspired by ethology, the study of animal behavior, since many creatures in the animal kingdom are social and operate collectively to achieve their goals. Several strategies have roots in studies of bees and ants @cite_6 @cite_9 @cite_21 , as bees and ants exemplify the two main methods of communication: explicit and implicit, respectively.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_6" ], "mid": [ "2077935068", "1972218541", "2089077607" ], "abstract": [ "For robot swarms to operate outside of the laboratory in complex real-world environments, they require the kind of error tolerance, flexibility, and scalability seen in living systems. While robot swarms are often designed to mimic some aspect of the behavior of social insects or other organisms, no systems have yet addressed all of these capabilities in a single framework. We describe a swarm robotics system that emulates ant behaviors, which govern memory, communication, and movement, as well as an evolutionary process that tailors those behaviors into foraging strategies that maximize performance under varied and complex conditions. The system evolves appropriate solutions to different environmental challenges. Solutions include the following: (1) increased communication when sensed information is reliable and resources to be collected are highly clustered, (2) less communication and more individual memory when cluster sizes are variable, and (3) greater dispersal with increasing swarm size. Analysis of the evolved behaviors reveals the importance of interactions among behaviors, and of the interdependencies between behaviors and environments. The effectiveness of interacting behaviors depends on the uncertainty of sensed information, the resource distribution, and the swarm size. Such interactions could not be manually specified, but are effectively evolved in simulation and transferred to physical robots. This work is the first to demonstrate high-level robot swarm behaviors that can be automatically tuned to produce efficient collective foraging strategies in varied and complex environments.", "We study self-organized cooperation between heterogeneous robotic swarms. The robots of each swarm play distinct roles based on their different characteristics. 
We investigate how the use of simple local interactions between the robots of the different swarms can let the swarms cooperate in order to solve complex tasks. We focus on an indoor navigation task, in which we use a swarm of wheeled robots, called foot-bots, and a swarm of flying robots that can attach to the ceiling, called eye-bots. The task of the foot-bots is to move back and forth between a source and a target location. The role of the eye-bots is to guide foot-bots: they choose positions at the ceiling and from there give local directional instructions to foot-bots passing by. To obtain efficient paths for foot-bot navigation, eye-bots need on the one hand to choose good positions and on the other hand learn the right instructions to give. We investigate each of these aspects. Our solution is based on a process of mutual adaptation, in which foot-bots execute instructions given by eye-bots, and eye-bots observe the behavior of foot-bots to adapt their position and the instructions they give. Our approach is inspired by pheromone mediated navigation of ants, as eye-bots serve as stigmergic markers for foot-bot navigation. Through simulation, we show how this system is able to find efficient paths in complex environments, and to display different kinds of complex and scalable self-organized behaviors, such as shortest path finding and automatic traffic spreading.", "In this article, we analyze the behavior of a group of robots involved in an object retrieval task. The robots' control system is inspired by a model of ants' foraging. This model emphasizes the role of learning in the individual. Individuals adapt to the environment using only locally available information. We show that a simple parameter adaptation is an effective way to improve the efficiency of the group and that it brings forth division of labor between the members of the group. Moreover, robots that are best at retrieving have a higher probability of becoming active retrievers. 
This selection of the best members does not use any explicit representation of individual capabilities. We analyze this system and point out its strengths and its weaknesses." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing additional opportunities for cooperation. However, the communication and utilization of potentially outdated information are also a concern. We present an explicit, non-directional, goal-based communication model and a message accept/reject scheme, and test our model in a set of object-gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
Implicit communication is the use of the environment to share information with other individuals. In the case of ants, pheromone trails mark the paths traversed, a mechanism that has been replicated in prior swarm robotics research @cite_6 @cite_9 @cite_0 @cite_25 @cite_3 . Pheromone is deposited along the path an ant takes and decays over time, so repeated use of a trail strengthens it. The stronger the pheromone level, the more ants are attracted to that pathway; in this way, ants converge on the shortest paths.
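The deposit-and-evaporate dynamic described above is easy to simulate: at each time step every trail loses a fixed fraction of its pheromone, and each traversal adds a fixed deposit, so frequently used (short) paths equilibrate at higher levels than rarely used (long) ones. A minimal sketch; the evaporation rate and deposit amount are arbitrary illustrative values.

```python
def evaporate(trails, rho=0.1):
    """One time step of evaporation: every trail loses a fraction rho."""
    return {path: (1.0 - rho) * level for path, level in trails.items()}

def deposit(trails, path, amount=1.0):
    """An ant traversing `path` lays down `amount` of pheromone."""
    updated = dict(trails)
    updated[path] = updated.get(path, 0.0) + amount
    return updated

# The short path is traversed every step; the long path takes three
# times as long to walk, so its trail is reinforced only every third
# step and evaporation erodes it in between.
trails = {}
for t in range(60):
    trails = evaporate(trails)
    trails = deposit(trails, "short")
    if t % 3 == 0:
        trails = deposit(trails, "long")
```

With these parameters the short path's level approaches amount/rho = 10 while the long path settles well below it; this positive feedback is what lets a colony concentrate on the shortest route without any central coordination.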
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_25" ], "mid": [ "2077935068", "2136473787", "2089077607", "2099680234", "2107472392" ], "abstract": [ "For robot swarms to operate outside of the laboratory in complex real-world environments, they require the kind of error tolerance, flexibility, and scalability seen in living systems. While robot swarms are often designed to mimic some aspect of the behavior of social insects or other organisms, no systems have yet addressed all of these capabilities in a single framework. We describe a swarm robotics system that emulates ant behaviors, which govern memory, communication, and movement, as well as an evolutionary process that tailors those behaviors into foraging strategies that maximize performance under varied and complex conditions. The system evolves appropriate solutions to different environmental challenges. Solutions include the following: (1) increased communication when sensed information is reliable and resources to be collected are highly clustered, (2) less communication and more individual memory when cluster sizes are variable, and (3) greater dispersal with increasing swarm size. Analysis of the evolved behaviors reveals the importance of interactions among behaviors, and of the interdependencies between behaviors and environments. The effectiveness of interacting behaviors depends on the uncertainty of sensed information, the resource distribution, and the swarm size. Such interactions could not be manually specified, but are effectively evolved in simulation and transferred to physical robots. This work is the first to demonstrate high-level robot swarm behaviors that can be automatically tuned to produce efficient collective foraging strategies in varied and complex environments.", "The present study investigated the trail-following behavior of the subterranean termite Coptotermes gestroi (Wasmann Rhinotermitidae) under laboratory conditions. 
The results showed that workers were the first to initiate the exploration to the food source. When food was discovered they returned to the nest laying a trail for recruiting nestmates to the food source. In this situation, workers always traveled significantly faster when returning from the arenas. Both workers and soldiers were recruited to the food source; however, the soldier worker proportion was higher during the first phase of the recruitment. When no food was available, the number of recruited nestmates and the speed on their way back to the nest were significantly lower. The results also showed that scout foragers always laid trail pheromones when entering into unknown territories, and that chemical signals found in the food could induce workers of C. gestroi to increase their travel speed.", "In this article, we analyze the behavior of a group of robots involved in an object retrieval task. The robots' control system is inspired by a model of ants' foraging. This model emphasizes the role of learning in the individual. Individuals adapt to the environment using only locally available information. We show that a simple parameter adaptation is an effective way to improve the efficiency of the group and that it brings forth division of labor between the members of the group. Moreover, robots that are best at retrieving have a higher probability of becoming active retrievers. This selection of the best members does not use any explicit representation of individual capabilities. We analyze this system and point out its strengths and its weaknesses.", "Abstract We are pursuing techniques for coordinating the actions of large numbers of small-scale robots to achieve useful large-scale results in surveillance, reconnaissance, hazard detection, and path finding. Using the biologically inspired notion of “virtual pheromone” messaging, we describe how many coordinated activities can be accomplished without centralized control. 
By virtue of this simple messaging scheme, a robot swarm can become a distributed computing mesh embedded within the environment, while simultaneously acting as a physical embodiment of the user interface. We further describe a set of logical primitives for controlling the flow of virtual pheromone messages throughout the robot swarm. These enable the design of complex group behaviors mediated by messages exchanged between neighboring robots.", "Pheromone trails laid by foraging ants serve as a positive feedback mechanism for the sharing of information about food sources. This feedback is nonlinear, in that ants do not react in a proportionate manner to the amount of pheromone deposited. Instead, strong trails elicit disproportionately stronger responses than weak trails. Such nonlinearity has important implications for how a colony distributes its workforce, when confronted with a choice of food sources. We investigated how colonies of the Pharaoh's ant, Monomorium pharaonis, distribute their workforce when offered a choice of two food sources of differing energetic value. By developing a nonlinear differential equation model of trail foraging, and comparing model with experiments, we examined how the ants allocate their workforce between the two food sources. In this allocation, the most profitable feeder (i.e. the feeder with the highest concentration of sugar syrup) was usually exploited by the majority of ants. The particular form of the nonlinear feedback in trail foraging means that when we offered the ants a choice between two feeders of equal profitability, foraging was biased to the feeder with the highest initial number of visitors. Taken together, our experiments illuminate how pheromones provide a mechanism whereby ants can efficiently allocate their workforce among the available food sources without centralized control." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing for additional opportunities for cooperation. However, communication and utilization of potentially outdated information is also a concern. We present an explicit non-directional goal-based communication model and message accept reject scheme, and test our model in a set of object gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
Conversely, explicit communication is the act of communicating directly with other entities @cite_22 @cite_14 . This can be done in many ways. In the case of bees, the medium is a form of dance, known as the waggle dance @cite_26 @cite_19 . This dance may need to be repeated if the bees fail to find the location encoded within it. Using this method, robots have danced in order to communicate source locations to the rest of the swarm @cite_20 .
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_19", "@cite_20" ], "mid": [ "1989359103", "2023789955", "1500120067", "2027403759", "2265279503" ], "abstract": [ "This paper proposes several designs for a reliable infra-red based communication techniques for swarm robotic applications. The communication system was deployed on an autonomous miniature mobile robot (AMiR), a swarm robotic platform developed earlier. In swarm applications, all participating robots must be able to communicate and share data. Hence a suitable communication medium and a reliable technique are required. This work uses infrared radiation for transmission of swarm robots messages. Infrared transmission methods such as amplitude and frequency modulations will be presented along with experimental results. Finally the effects of the modulation techniques and other parameters on collective behavior of swarm robots will be analyzed.", "Until his death in 1982, Karl von Frisch was the world's most renowned authority on bees. \"The Dance Language and Orientation of Bees\" is his masterwork - the culmination of more than fifty years of research. Now available for the first time in paperback, it describes in non-technical language what he discovered in a lifetime of study about honeybees - their methods of orientation, their sensory faculties, and their remarkable ability to communicate with one another. Thomas Seeley's new foreword traces the revolutionary effects of von Frisch's work, not just for the study of bees, but for all subsequent research in animal behaviour. This new paperback edition also includes a \"Personal Appreciation\" of von Frisch by the distinguished biologist Martin Lindauer, who was von Frisch's protege and later his colleague and friend.", "Communication is often required for coordination of collective behaviours. 
Social insects like ants, termites or bees make use of different forms of communication, which can be roughly classified in three classes: indirect (stigmergic) communication, direct interaction and direct communication. The use of stigmergic communication is predominant in social insects (e.g., the pheromone trails in ants), but also direct interactions (e.g., antennation in ants) and direct communication can be observed (e.g., the waggle dance of honey bee workers). Direct communication may be beneficial when a fast reaction is expected, as for instance, when a danger is detected and countermeasures must be taken. This is the case of hole avoidance, the task studied in this paper: a group of self-assembled robots – called swarm-bot – coordinately explores an arena containing holes, avoiding to fall into them. In particular, we study the use of direct communication in order to achieve a reaction to the detection of a hole faster than with the sole use of direct interactions through physical links. We rely on artificial evolution for the synthesis of neural network controllers, showing that evolving behaviours that make use of direct communication is more effective than exploiting direct interactions only.", "Karl von Frisch won a Nobel prize for discovering that when honeybee foragers return to the hive after discovering a new food source, they perform a ‘waggle dance’ conveying coded information about the range and bearing of the food. He hypothesized that ‘recruits’ attending the dance read the code, and use the information to get to the food. Sceptics suggested that the watching bees simply picked up food odours from the dancer and then searched for the food by smell. Though most biologists are inclined to von Frisch's view of the dance as a source of information, the quantitative description of how the ‘code’ is translated into a flightplan has been lacking. 
Now with the advent of a radar tracking system capable of following the flight paths of individual recruits, show that the bees not only read the dance, but allow for wind drift on their way to the target.", "An important characteristic of a robot swarm that must operate in the real world is the ability to cope with changeable environments by exhibiting behavioural plasticity at the collective level. For example, a swarm of foraging robots should be able to repeatedly reorganise in order to exploit resource deposits that appear intermittently in different locations throughout their environment. In this paper, we report on simulation experiments with homogeneous foraging robot teams and show that analysing swarm behaviour in terms of information flow can help us to identify whether a particular behavioural strategy is likely to exhibit useful swarm plasticity in response to dynamic environments. While it is beneficial to maximise the rate at which robots share information when they make collective decisions in a static environment, plastic swarm behaviour in changeable environments requires regulated information transfer in order to achieve a balance between the exploitation of existing information and exploration leading to acquisition of new information. We give examples of how information flow analysis can help designers to decide on robot control strategies with relevance to a number of applications explored in the swarm robotics literature." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing for additional opportunities for cooperation. However, communication and utilization of potentially outdated information is also a concern. We present an explicit non-directional goal-based communication model and message accept reject scheme, and test our model in a set of object gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
Regardless of the medium, the purpose is clear: to recruit other members of the swarm for cooperative task completion. There have been many implementation variants, each intended to increase swarm effectiveness in its specific situation @cite_18 . In all cases, communication helps the swarm accomplish its task more effectively.
{ "cite_N": [ "@cite_18" ], "mid": [ "2107914600" ], "abstract": [ "Swarm robotics is a new approach to the coordination of multi-robot systems which consist of large numbers of relatively simple robots which takes its inspiration from social insects. The most remarkable characteristic of swarm robots are the ability to work cooperatively to achieve a common goal. In this paper, classification of existing researches, problems and algorithms aroused in the study of swarm robotics are presented. The existing studies are classified into major areas and relevant sub-categories in the major areas." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing for additional opportunities for cooperation. However, communication and utilization of potentially outdated information is also a concern. We present an explicit non-directional goal-based communication model and message accept reject scheme, and test our model in a set of object gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
However, while Arkin explored the use of state-based communication, Balch studied the effects of goal-based and state-based communication against no communication @cite_5 . He noted that goal-based communication, the communication of the locations of a goal object or place, demonstrated a notable improvement over non-communicating swarms in a foraging scenario, but only a small improvement over state-based communication. To give our communication schema ideal conditions, we follow the principles of goal-based communication, enabling robots to transmit source locations to others within the swarm.
{ "cite_N": [ "@cite_5" ], "mid": [ "1533885008" ], "abstract": [ "This paper reviews research in three important areas concerning robot swarms: communication, diversity, and learning. Communication (or the lack of it) is a key design consideration for robot teams. Communication can enable certain types of coordination that would be impossible otherwise. However communication can also add unnecessary cost and complexity. Important research issues regarding communication concern what should be communicated, over what range, and when the communication should occur. We also consider how diverse behaviors might help or hinder a team, and how to measure diversity in the first place. Finally, we show how learning can provide a powerful means for enabling a team to master a task or adapt to changing conditions. We hypothesize that these three topics are critically interrelated in the context of learning swarms, and we suggest research directions to explore them." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing for additional opportunities for cooperation. However, communication and utilization of potentially outdated information is also a concern. We present an explicit non-directional goal-based communication model and message accept reject scheme, and test our model in a set of object gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
This has been explored further by Pugh @cite_1 . Entities are able to traverse the environment in search of a single food source. However, the food source requires three individual robots to lift it and move it to a nest location. Pugh states that communication is promoted by this need for several entities to lift and transport the food. By communicating, the robots are able to gain more food over the course of the experiment, and spend less time exploring.
{ "cite_N": [ "@cite_1" ], "mid": [ "1988200152" ], "abstract": [ "The question of how to best design a communication architecture is becoming increasingly important for evolving autonomous multiagent systems. Directional reception of signals, a design feature of communication that appears in most animals, is present in only some existing artificial communication systems. This paper hypothesizes that such directional reception benefits the evolution of communicating autonomous agents because it simplifies the language required to express positional information, which is critical to solving many group coordination tasks. This hypothesis is tested by comparing the evolutionary performance of several alternative communication architectures (both directional and non-directional) in a multiagent foraging domain designed to require a basic \"come here\" type of signal for the optimal solution. Results confirm that directional reception is a key ingredient in the evolutionary tractability of effective communication. Furthermore, the real world viability of directional reception is demonstrated through the successful transfer of the best evolved controllers to real robots. The conclusion is that directional reception is important to consider when designing communication architectures for more complicated tasks in the future." ] }
1906.01108
2948759162
Localized communication in swarms has been shown to increase swarm effectiveness in some situations by allowing for additional opportunities for cooperation. However, communication and utilization of potentially outdated information is also a concern. We present an explicit non-directional goal-based communication model and message accept reject scheme, and test our model in a set of object gathering experiments with a swarm of robots. The results of the experiments indicate that even low levels of communication regarding the swarm's goal outperform high levels of random information communication.
Arkin and Pugh are not alone in their studies. Many researchers have utilized communication in order to increase their swarms' effectiveness and ability to cooperate (e.g., @cite_6 @cite_9 @cite_0 @cite_25 @cite_3 @cite_20 @cite_16 ). However, what is missing from all these studies is the ability of the robots to reject communication. As given in our description of swarm robotics, robots are individuals and as such can make decisions about their environment and the information available to them. This should include the information shared with them.
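A robot-side accept/reject decision can be illustrated with a small sketch. The rule below is hypothetical (it filters goal messages by age and by the distance of the reported source, with the robot assumed at the origin); it is not the scheme evaluated in this work, only an example of how an individual robot might vet shared information.

```python
import math

# Hypothetical accept/reject rule for incoming goal messages:
# reject stale information, and reject goals farther than one
# the robot already knows. The robot is assumed to sit at (0, 0).

def accept_message(msg, own_goal, now, max_age=30.0):
    if now - msg["time"] > max_age:           # stale information
        return False
    if own_goal is None:                       # nothing better known
        return True
    def dist(p):
        return math.hypot(p[0], p[1])
    return dist(msg["goal"]) < dist(own_goal)  # prefer the nearer source

fresh_near = {"goal": (1.0, 1.0), "time": 95.0}
stale      = {"goal": (0.5, 0.5), "time": 10.0}
print(accept_message(fresh_near, own_goal=(5.0, 5.0), now=100.0))  # True
print(accept_message(stale, own_goal=None, now=100.0))             # False
```

Even a rule this simple lets a robot discard outdated or unhelpful messages instead of acting on everything it hears.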
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "2077935068", "2136473787", "2089077607", "2099680234", "2097774761", "2107472392", "2265279503" ], "abstract": [ "For robot swarms to operate outside of the laboratory in complex real-world environments, they require the kind of error tolerance, flexibility, and scalability seen in living systems. While robot swarms are often designed to mimic some aspect of the behavior of social insects or other organisms, no systems have yet addressed all of these capabilities in a single framework. We describe a swarm robotics system that emulates ant behaviors, which govern memory, communication, and movement, as well as an evolutionary process that tailors those behaviors into foraging strategies that maximize performance under varied and complex conditions. The system evolves appropriate solutions to different environmental challenges. Solutions include the following: (1) increased communication when sensed information is reliable and resources to be collected are highly clustered, (2) less communication and more individual memory when cluster sizes are variable, and (3) greater dispersal with increasing swarm size. Analysis of the evolved behaviors reveals the importance of interactions among behaviors, and of the interdependencies between behaviors and environments. The effectiveness of interacting behaviors depends on the uncertainty of sensed information, the resource distribution, and the swarm size. Such interactions could not be manually specified, but are effectively evolved in simulation and transferred to physical robots. 
This work is the first to demonstrate high-level robot swarm behaviors that can be automatically tuned to produce efficient collective foraging strategies in varied and complex environments.", "The present study investigated the trail-following behavior of the subterranean termite Coptotermes gestroi (Wasmann Rhinotermitidae) under laboratory conditions. The results showed that workers were the first to initiate the exploration to the food source. When food was discovered they returned to the nest laying a trail for recruiting nestmates to the food source. In this situation, workers always traveled significantly faster when returning from the arenas. Both workers and soldiers were recruited to the food source; however, the soldier worker proportion was higher during the first phase of the recruitment. When no food was available, the number of recruited nestmates and the speed on their way back to the nest were significantly lower. The results also showed that scout foragers always laid trail pheromones when entering into unknown territories, and that chemical signals found in the food could induce workers of C. gestroi to increase their travel speed.", "In this article, we analyze the behavior of a group of robots involved in an object retrieval task. The robots' control system is inspired by a model of ants' foraging. This model emphasizes the role of learning in the individual. Individuals adapt to the environment using only locally available information. We show that a simple parameter adaptation is an effective way to improve the efficiency of the group and that it brings forth division of labor between the members of the group. Moreover, robots that are best at retrieving have a higher probability of becoming active retrievers. This selection of the best members does not use any explicit representation of individual capabilities. 
We analyze this system and point out its strengths and its weaknesses.", "Abstract We are pursuing techniques for coordinating the actions of large numbers of small-scale robots to achieve useful large-scale results in surveillance, reconnaissance, hazard detection, and path finding. Using the biologically inspired notion of “virtual pheromone” messaging, we describe how many coordinated activities can be accomplished without centralized control. By virtue of this simple messaging scheme, a robot swarm can become a distributed computing mesh embedded within the environment, while simultaneously acting as a physical embodiment of the user interface. We further describe a set of logical primitives for controlling the flow of virtual pheromone messages throughout the robot swarm. These enable the design of complex group behaviors mediated by messages exchanged between neighboring robots.", "Task partitioning consists in dividing a task into sub-tasks that can be tackled separately. Partitioning a task might have both positive and negative effects: On the one hand, partitioning might reduce physical interference between workers, enhance exploitation of specialization, and increase efficiency. On the other hand, partitioning may introduce overheads due to coordination requirements. As a result, whether partitioning is advantageous or not has to be evaluated on a case-by-case basis. In this paper we consider the case in which a swarm of robots must decide whether to complete a given task as an unpartitioned task, or utilize task partitioning and tackle it as a sequence of two sub-tasks. We show that the problem of selecting between the two options can be formulated as a multi-armed bandit problem and tackled with algorithms that have been proposed in the reinforcement learning literature. Additionally, we study the implications of using explicit communication between the robots to tackle the studied task partitioning problem. 
We consider a foraging scenario as a testbed and we perform simulation-based experiments to evaluate the behavior of the system. The results confirm that existing multi-armed bandit algorithms can be employed in the context of task partitioning. The use of communication can result in better performance, but in may also hinder the flexibility of the system.", "Pheromone trails laid by foraging ants serve as a positive feedback mechanism for the sharing of information about food sources. This feedback is nonlinear, in that ants do not react in a proportionate manner to the amount of pheromone deposited. Instead, strong trails elicit disproportionately stronger responses than weak trails. Such nonlinearity has important implications for how a colony distributes its workforce, when confronted with a choice of food sources. We investigated how colonies of the Pharaoh's ant, Monomorium pharaonis, distribute their workforce when offered a choice of two food sources of differing energetic value. By developing a nonlinear differential equation model of trail foraging, and comparing model with experiments, we examined how the ants allocate their workforce between the two food sources. In this allocation, the most profitable feeder (i.e. the feeder with the highest concentration of sugar syrup) was usually exploited by the majority of ants. The particular form of the nonlinear feedback in trail foraging means that when we offered the ants a choice between two feeders of equal profitability, foraging was biased to the feeder with the highest initial number of visitors. Taken together, our experiments illuminate how pheromones provide a mechanism whereby ants can efficiently allocate their workforce among the available food sources without centralized control.", "An important characteristic of a robot swarm that must operate in the real world is the ability to cope with changeable environments by exhibiting behavioural plasticity at the collective level. 
For example, a swarm of foraging robots should be able to repeatedly reorganise in order to exploit resource deposits that appear intermittently in different locations throughout their environment. In this paper, we report on simulation experiments with homogeneous foraging robot teams and show that analysing swarm behaviour in terms of information flow can help us to identify whether a particular behavioural strategy is likely to exhibit useful swarm plasticity in response to dynamic environments. While it is beneficial to maximise the rate at which robots share information when they make collective decisions in a static environment, plastic swarm behaviour in changeable environments requires regulated information transfer in order to achieve a balance between the exploitation of existing information and exploration leading to acquisition of new information. We give examples of how information flow analysis can help designers to decide on robot control strategies with relevance to a number of applications explored in the swarm robotics literature." ] }
1906.01030
2948180798
We present the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, then comparing the ground truth and network output bounds delivers an upper bound on the network output error for any input of interest. Results from a case study highlight the ability of our technique to deliver tight error bounds for all inputs of interest and show how the error bounds vary over the state and input spaces.
Motivated by the vulnerability of neural networks to adversarial attacks , researchers have developed a range of techniques for verifying robustness --- they aim to verify whether the neural network prediction is stable in some neighborhood around a selected input point. @cite_11 provides an overview of the field. A range of approaches have been explored, including layer-by-layer reachability analysis with abstract interpretation or bounding the local Lipschitz constant , formulating the network as constraints and solving the resulting optimization problem , solving the dual problem , and formulating and solving using SMT/SAT solvers . In the context of control systems, @cite_17 introduces an approach to verify state reachability and region stability of closed-loop systems with neural network controllers. @cite_8 verifies safety by computing the reachable set of states and checking whether it overlaps with the unsafe states. Unlike the research presented in this paper, none of this prior research formalizes or attempts to verify that the neural network computes correct outputs within a specified tolerance for all inputs of interest.
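The layer-by-layer reachability idea can be sketched with interval bound propagation: push an input box through each affine layer and ReLU, keeping sound elementwise bounds. The two-layer network and its weights below are illustrative toy values; real verifiers use much tighter abstractions than plain intervals.

```python
# Interval bound propagation: sound output bounds for a box of inputs
# pushed through affine + ReLU layers. Weights are illustrative only.

def affine_bounds(W, b, lo, hi):
    """Sound interval of x -> W x + b for x in the box [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy two-layer network, input box x in [0, 1] x [0, 1].
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = affine_bounds(W1, b1, [0.0, 0.0], [1.0, 1.0])
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # [0.0] [1.75]
```

Checking a robustness or safety property then reduces to checking whether the resulting output interval stays inside the allowed region.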
{ "cite_N": [ "@cite_8", "@cite_17", "@cite_11" ], "mid": [ "2963334980", "2889490035", "2922172279" ], "abstract": [ "In this work, the reachable set estimation and safety verification problems for a class of piecewise linear systems equipped with neural network controllers are addressed. The neural network is considered to consist of Rectified Linear Unit (ReLU) activation functions. A layer-by-layer approach is developed for the output reachable set computation of ReLU neural networks. The computation is formulated in the form of a set of manipulations for a union of polytopes. Based on the output reachable set for neural network controllers, the output reachable set for a piecewise linear feedback control system can be estimated iteratively for a given finite-time interval. With the estimated output reachable set, the safety verification for piecewise linear systems with neural network controllers can be performed by checking the existence of intersections of unsafe regions and output reach set. A numerical example is presented to illustrate the effectiveness of our approach.", "Abstract We present an approach to learn and formally verify feedback laws for data-driven models of neural networks. Neural networks are emerging as powerful and general data-driven representations for functions. This has led to their increased use in data-driven plant models and the representation of feedback laws in control systems. However, it is hard to formally verify properties of such feedback control systems. The proposed learning approach uses a receding horizon formulation that samples from the initial states and disturbances to enforce properties such as reachability, safety and stability. Next, our verification approach uses an over-approximate reachability analysis over the system, supported by range analysis for feedforward neural networks. 
We report promising results obtained by applying our techniques on several challenging nonlinear dynamical systems.", "Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fundamental differences and connections between existing algorithms. In addition, we provide pedagogical implementations of existing methods and compare them on a set of benchmark problems." ] }
1906.01030
2948180798
We present the first verification that a neural network produces a correct output within a specified tolerance for every input of interest. We define correctness relative to a specification which identifies 1) a state space consisting of all relevant states of the world and 2) an observation process that produces neural network inputs from the states of the world. Tiling the state and input spaces with a finite number of tiles, obtaining ground truth bounds from the state tiles and network output bounds from the input tiles, then comparing the ground truth and network output bounds delivers an upper bound on the network output error for any input of interest. Results from a case study highlight the ability of our technique to deliver tight error bounds for all inputs of interest and show how the error bounds vary over the state and input spaces.
Prior work on neural network testing focuses on constructing better test cases to expose problematic network behaviors. Researchers have developed approaches to build test cases that improve coverage of possible states of the neural network, for example neuron coverage and its generalizations to multi-granular coverage and MC/DC-inspired coverage . @cite_9 presents coverage-guided fuzzing methods for testing neural networks using the above coverage criteria. @cite_14 generates realistic test cases by applying natural transformations (e.g., brightness changes, rotation, added rain) to seed images. Unlike this prior research, which tests the neural network on only a set of input points, the research presented in this paper verifies correctness for all inputs of interest.
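Neuron coverage, the simplest of these criteria, can be sketched as the fraction of hidden units driven above a threshold by at least one test input. The one-layer toy network and test inputs below are illustrative only, not taken from any of the cited tools.

```python
# Neuron coverage (sketch): fraction of hidden units whose activation
# exceeds a threshold on at least one input in the test set.

def relu(xs):
    return [max(0.0, v) for v in xs]

def hidden_activations(W, b, x):
    return relu([bi + sum(w * xj for w, xj in zip(row, x))
                 for row, bi in zip(W, b)])

def neuron_coverage(W, b, inputs, threshold=0.0):
    covered = set()
    for x in inputs:
        for i, a in enumerate(hidden_activations(W, b, x)):
            if a > threshold:
                covered.add(i)
    return len(covered) / len(b)

W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.0, 0.0]
tests = [[1.0, 0.0], [0.0, 1.0]]   # neuron 2 is never activated
print(neuron_coverage(W, b, tests))  # 2 of 3 neurons covered
```

A fuzzer guided by this metric would keep mutating inputs until the uncovered neuron (here, the third one) fires, which is exactly the search the coverage-guided approaches above automate.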
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2949346385", "2963327228" ], "abstract": [ "Machine learning models are notoriously difficult to interpret and debug. This is particularly true of neural networks. In this work, we introduce automated software testing techniques for neural networks that are well-suited to discovering errors which occur only for rare inputs. Specifically, we develop coverage-guided fuzzing (CGF) methods for neural networks. In CGF, random mutations of inputs to a neural network are guided by a coverage metric toward the goal of satisfying user-specified constraints. We describe how fast approximate nearest neighbor algorithms can provide this coverage metric. We then discuss the application of CGF to the following goals: finding numerical errors in trained neural networks, generating disagreements between neural networks and quantized versions of those networks, and surfacing undesirable behavior in character level language models. Finally, we release an open source library called TensorFuzz that implements the described techniques.", "Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo Google are working on building and testing different types of autonomous vehicles. The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened including one which resulted in a fatality. 
Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions which become prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generated test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explore different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.) many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge." ] }
1906.01188
2948163029
The global Electronic Health Record (EHR) market is growing dramatically and expected to reach $39.7 billions by 2022. To safe-guard security and privacy of EHR, access control is an essential mechanism for managing EHR data. This paper proposes a hybrid architecture to facilitate access control of EHR data by using both blockchain and edge node. Within the architecture, a blockchain-based controller manages identity and access control policies and serves as a tamper-proof log of access events. In addition, off-chain edge nodes store the EHR data and apply policies specified in Abbreviated Language For Authorization (ALFA) to enforce attribute-based access control on EHR data in collaboration with the blockchain-based access control logs. We evaluate the proposed hybrid architecture by utilizing Hyperledger Composer Fabric blockchain to measure the performance of executing smart contracts and ACL policies in terms of transaction processing time and response time against unauthorized data retrieval.
There have been various attempts to address proper access control in data management using blockchain. @cite_15 described a decentralized data management system that ensures users own and control their data, and proposed a protocol to enable an automated access-control manager using multi-party computation. @cite_0 described a blockchain-based framework for data sharing in decentralized storage systems, combining the Ethereum blockchain with attribute-based encryption technology. @cite_2 proposed a multi-authority attribute-based access control mechanism utilizing Ethereum’s smart contracts. @cite_6 proposed the FairAccess framework, which includes transactions used to grant, get, delegate, and revoke access. As a proof of concept, FairAccess was implemented on a Raspberry Pi device using a local blockchain.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_6", "@cite_2" ], "mid": [ "2810011639", "1559136758", "2588585573", "2921784726" ], "abstract": [ "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so a single point of failure may lead to the collapse of the system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret keys to data users and encrypt shared data by specifying an access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contracts on the Ethereum blockchain, the keyword search function on the ciphertext of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. 
Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.", "The recent increase in reported incidents of surveillance and security breaches compromising users' privacy calls into question the current model, in which third parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society.", "Security and privacy are huge challenges in Internet of Things (IoT) environments, but unfortunately, the harmonization of the IoT-related standards and protocols is hardly and slowly widespread. In this paper, we propose a new framework for access control in IoT based on the blockchain technology. Our first contribution consists in providing a reference model for our proposed framework within the Objectives, Models, Architecture and Mechanism specification in IoT. In addition, we introduce FairAccess as a fully decentralized pseudonymous and privacy preserving authorization management framework that enables users to own and control their data. To implement our model, we use and adapt the blockchain into a decentralized access control manager. 
Unlike financial Bitcoin transactions, FairAccess introduces new types of transactions that are used to grant, get, delegate, and revoke access. As a proof of concept, we establish an initial implementation with a Raspberry Pi device and a local blockchain. Finally, we discuss some limitations and propose further opportunities. Copyright © 2017 John Wiley & Sons, Ltd.", "" ] }
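The attribute-based access control that the hybrid EHR architecture enforces can be sketched as a deny-by-default decision over request attributes. Note that ALFA is an XACML-based policy language; the snippet below is not ALFA but a minimal Python illustration of the matching logic such policies encode, and all attribute names (role, department, action) are hypothetical.

```python
def abac_decide(policies, request):
    """Deny-by-default attribute-based decision: permit if any policy's
    attribute conditions are all satisfied by the request attributes."""
    for policy in policies:
        if all(request.get(attr) == value for attr, value in policy.items()):
            return "Permit"
    return "Deny"

# Hypothetical policies: cardiology physicians may read; patients may read their own records.
policies = [
    {"role": "physician", "department": "cardiology", "action": "read"},
    {"role": "patient", "owner": True, "action": "read"},
]
print(abac_decide(policies, {"role": "physician", "department": "cardiology", "action": "read"}))  # → Permit
print(abac_decide(policies, {"role": "nurse", "department": "cardiology", "action": "read"}))      # → Deny
```

In the paper's architecture, each such decision would additionally be recorded as an access event on the blockchain-based log.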
1906.01308
2948562462
Person re-identification aims to establish the correct identity correspondences of a person moving through a non-overlapping multi-camera installation. Recent advances based on deep learning models for this task mainly focus on supervised learning scenarios where accurate annotations are assumed to be available for each setup. Annotating large-scale datasets for person re-identification is demanding and burdensome, which renders the deployment of such supervised approaches to real-world applications infeasible. Therefore, it is necessary to train models without explicit supervision in an autonomous manner. In this paper, we propose an elegant and practical clustering approach for unsupervised person re-identification based on cluster validity considerations. Concretely, we explore a fundamental concept in statistics, namely dispersion, to achieve a robust clustering criterion. Dispersion reflects the compactness of a cluster when employed at the intra-cluster level and reveals the separation when measured at the inter-cluster level. With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data. This approach considers a wider context of sample-level pairwise relationships to achieve a robust cluster affinity assessment which handles the complications that may arise due to prevalent imbalanced data distributions. Additionally, our solution can automatically prioritize standalone data points and prevent inferior clustering. Our extensive experimental analysis on image and video re-identification benchmarks demonstrates that our method outperforms the state-of-the-art unsupervised methods by a significant margin. Code is available at this https URL.
Some unsupervised methods with hand-crafted features have been proposed in recent years @cite_39 @cite_33 @cite_24 @cite_45 @cite_28 @cite_40 @cite_0 @cite_42 @cite_3 @cite_21 @cite_29 @cite_49 . However, they achieve inadequate re-ID performance compared to supervised learning methods. Specifically, Farenzena et al. @cite_39 exploited the property of symmetry in person images to deal with view variances. To handle illumination changes and cluttered backgrounds, Ma et al. @cite_30 proposed to combine Gabor filters with the covariance descriptor. Fisher Vectors are explored in @cite_34 to encode higher-order statistics of local features. Kodirov et al. @cite_45 proposed to combine a Laplacian regularization term with a conventional dictionary learning formulation to encode cross-view correspondences.
{ "cite_N": [ "@cite_30", "@cite_33", "@cite_28", "@cite_29", "@cite_42", "@cite_21", "@cite_3", "@cite_39", "@cite_24", "@cite_0", "@cite_40", "@cite_45", "@cite_49", "@cite_34" ], "mid": [ "2062677035", "2963152148", "2009907187", "1941498359", "1963702692", "", "2511556322", "1979260620", "", "2550580161", "2778652957", "", "", "2113609219" ], "abstract": [ "This paper proposes a novel image representation which can properly handle both background and illumination variations. It is therefore adapted to the person/face re-identification tasks, avoiding the use of any additional pre-processing steps such as foreground-background separation or face and body part segmentation. This novel representation relies on the combination of Biologically Inspired Features (BIF) and covariance descriptors used to compute the similarity of the BIF features at neighboring scales. Hence, we will refer to it as the BiCov representation. To show the effectiveness of BiCov, this paper conducts experiments on two person re-identification tasks (VIPeR and ETHZ) and one face verification task (LFW), on which it improves the current state-of-the-art performance.", "", "In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft- and hard- re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. 
An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves by 17 percentage points the state-of-the-art on i-LIDS and by 72 on CAVIAR4REID at rank-1. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second.", "Human eyes can recognize person identities based on small salient regions, i.e., person saliency is distinctive and reliable in pedestrian matching across disjoint camera views. However, such valuable information is often hidden when computing similarities of pedestrian images with existing approaches. Inspired by our user study result of human perception on person saliency, we propose a novel perspective for person re-identification based on learning person saliency and matching saliency distribution. The proposed saliency learning and matching framework consists of four steps: (1) To handle misalignment caused by drastic viewpoint change and pose variations, we apply adjacency constrained patch matching to build dense correspondence between image pairs. (2) We propose two alternative methods, i.e., K-Nearest Neighbors and One-class SVM, to estimate a saliency score for each image patch, through which distinctive features stand out without using identity labels in the training procedure. (3) saliency matching is proposed based on patch matching. 
Matching patches with inconsistent saliency brings penalty, and images of the same identity are recognized by minimizing the saliency matching cost. (4) Furthermore, saliency matching is tightly integrated with patch matching in a unified structural RankSVM learning framework. The effectiveness of our approach is validated on the four public datasets. Our approach outperforms the state-of-the-art person re-identification methods on all these datasets.", "(c) 2014. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.", "", "Most existing person re-identification (ReID) methods assume the availability of extensively labelled cross-view person pairs and a closed-set scenario (i.e. all the probe people exist in the gallery set). These two assumptions significantly limit their usefulness and scalability in real-world applications, particularly with large scale camera networks. To overcome the limitations, we introduce a more challenging yet realistic ReID setting termed OneShot-OpenSet-RelD, and propose a novel Regularised Kernel Subspace Learning model for ReID under this setting. Our model differs significantly from existing ReID methods due to its ability of effectively learning cross-view identity-specific information from unlabelled data alone, and its flexibility of naturally accommodating pairwise labels if available.", "In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. 
In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "", "Most existing person re-identification (ReID) methods rely only on the spatial appearance information from either one or multiple person images, whilst ignore the space-time cues readily available in video or image-sequence data. Moreover, they often assume the availability of exhaustively labelled cross-view pairwise data for every camera pair, making them non-scalable to ReID applications in real-world large scale camera networks. In this work, we introduce a novel video based person ReID method capable of accurately matching people across views from arbitrary unaligned image-sequences without any labelled pairwise data. Specifically, we introduce a new space-time person representation by encoding multiple granularities of spatio-temporal dynamics in form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW) model is derived for performing automatically alignment whilst achieving data selection and matching between inherently inaccurate and incomplete sequences in a unified way. We further extend the TS-DTW model for accommodating multiple feature-sequences of an image-sequence in order to fuse information from different descriptions. Crucially, this model does not require pairwise labelled training data (i.e. unsupervised) therefore readily scalable to large scale camera networks of arbitrary camera pairs without the need for exhaustive data annotation for every camera pair. 
We show the effectiveness and advantages of the proposed method by extensive comparisons with related state-of-the-art approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID. Highlights: We propose an unsupervised approach to person re-identification based on typical surveillance image-sequences. We present a new video representation particularly tailored for person ReID. Specifically, this representation is built up on existing action space-time features. We introduce an effective video matching algorithm, Time Shift Dynamic Time Warping (TS-DTW) and its Multi-Dimension variant MDTS-DTW, for data selective based sequence matching.", "The intensive annotation cost and the rich but unlabeled data contained in videos motivate us to propose an unsupervised video-based person re-identification (re-ID) method. We start from two assumptions: 1) different video tracklets typically contain different persons, given that the tracklets are taken at distinct places or with long intervals; 2) within each tracklet, the frames are mostly of the same person. Based on these assumptions, this paper proposes a stepwise metric promotion approach to estimate the identities of training tracklets, which iterates between cross-camera tracklet association and feature learning. Specifically, we use each training tracklet as a query, and perform retrieval in the cross-camera training set. Our method is built on reciprocal nearest neighbor search and can eliminate the hard negative label matches, i.e., the cross-camera nearest neighbors of the false matches in the initial rank list. The tracklet that passes the reciprocal nearest neighbor check is considered to have the same ID as the query. 
Experimental results on the PRID 2011, ILIDS-VID, and MARS datasets show that the proposed method achieves very competitive re-ID accuracy compared with its supervised counterparts.", "", "", "This paper proposes a new descriptor for person re-identification building on the recent advances of Fisher Vectors. Specifically, a simple vector of attributes consisting in the pixel coordinates, its intensity as well as the first and second-order derivatives is computed for each pixel of the image. These local descriptors are turned into Fisher Vectors before being pooled to produce a global representation of the image. The so-obtained Local Descriptors encoded by Fisher Vector (LDFV) have been validated through experiments on two person re-identification benchmarks (VIPeR and ETHZ), achieving state-of-the-art performance on both datasets." ] }
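Among the hand-crafted descriptors surveyed above, the covariance descriptor used by the BiCov representation is easy to sketch in isolation: a region covariance descriptor is just the d×d covariance of per-pixel feature vectors (e.g., position, intensity, gradients). This is a generic sketch under simplified assumptions, not the paper's implementation, and the particular feature columns below are illustrative.

```python
import numpy as np

def region_covariance(features):
    """Region covariance descriptor: d x d covariance of per-pixel
    feature vectors (rows = pixels, columns = features such as x, y,
    intensity, gradients)."""
    f = np.asarray(features, dtype=float)
    centered = f - f.mean(axis=0)
    return centered.T @ centered / (len(f) - 1)  # unbiased sample covariance

# Four pixels of a toy region described by (x, y, intensity).
feats = np.array([[0., 0., 10.], [1., 0., 12.], [0., 1., 11.], [1., 1., 13.]])
cov = region_covariance(feats)
print(cov.shape)  # → (3, 3)
```

Because covariance matrices live on a Riemannian manifold, such descriptors are typically compared with a geodesic rather than Euclidean distance, which is part of what makes them robust to illumination changes.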
1906.01308
2948562462
Person re-identification aims to establish the correct identity correspondences of a person moving through a non-overlapping multi-camera installation. Recent advances based on deep learning models for this task mainly focus on supervised learning scenarios where accurate annotations are assumed to be available for each setup. Annotating large-scale datasets for person re-identification is demanding and burdensome, which renders the deployment of such supervised approaches to real-world applications infeasible. Therefore, it is necessary to train models without explicit supervision in an autonomous manner. In this paper, we propose an elegant and practical clustering approach for unsupervised person re-identification based on cluster validity considerations. Concretely, we explore a fundamental concept in statistics, namely dispersion, to achieve a robust clustering criterion. Dispersion reflects the compactness of a cluster when employed at the intra-cluster level and reveals the separation when measured at the inter-cluster level. With this insight, we design a novel Dispersion-based Clustering (DBC) approach which can discover the underlying patterns in data. This approach considers a wider context of sample-level pairwise relationships to achieve a robust cluster affinity assessment which handles the complications that may arise due to prevalent imbalanced data distributions. Additionally, our solution can automatically prioritize standalone data points and prevent inferior clustering. Our extensive experimental analysis on image and video re-identification benchmarks demonstrates that our method outperforms the state-of-the-art unsupervised methods by a significant margin. Code is available at this https URL.
Clustering analysis is a long-standing approach to unsupervised machine learning. With the surge of deep learning techniques, recent studies have attempted to optimize clustering analysis and representation learning jointly to maximize their complementary benefits @cite_36 @cite_13 @cite_46 @cite_12 . Fan et al. @cite_50 combine domain transfer and clustering for the unsupervised re-ID task. They first train the model on an external labeled dataset, which serves as a good model initialization. After that, unlabeled data samples are progressively selected for training according to their credibility, defined as their distance to cluster centroids. However, this work relies on a strong assumption about the total number of identities. Aside from these methods that require auxiliary datasets or assumptions, Lin et al. @cite_18 proposed to apply a bottom-up framework for clustering, which hierarchically combines clusters according to a predefined criterion. The merging in @cite_18 is based on a very simple minimum-distance criterion with a cluster size regularization term. Different from their work, our dispersion criterion exploits feature affinities within and between clusters, and also interacts with the CNN model training process so that the clustering and the model mutually reinforce each other.
{ "cite_N": [ "@cite_18", "@cite_36", "@cite_50", "@cite_46", "@cite_13", "@cite_12" ], "mid": [ "2904427185", "2883725317", "2963975998", "2964074409", "2608862709", "2533545350" ], "abstract": [ "Most person re-identification (re-ID) approaches are based on supervised learning, which requires intensive manual annotation for training data. However, it is not only resource-intensive to acquire identity annotation but also impractical to label the large-scale real-world data. To relieve this problem, we propose a bottom-up clustering (BUC) approach to jointly optimize a convolutional neural network (CNN) and the relationship among the individual samples. Our algorithm considers two fundamental facts in the re-ID task, i.e., diversity across different identities and similarity within the same identity. Specifically, our algorithm starts with regarding each individual sample as a different identity, which maximizes the diversity over each identity. Then it gradually groups similar samples into one identity, which increases the similarity within each identity. We utilize a diversity regularization term in the bottom-up clustering procedure to balance the data volume of each cluster. Finally, the model achieves an effective trade-off between the diversity and similarity. We conduct extensive experiments on the large-scale image and video re-ID datasets, including Market-1501, DukeMTMCreID, MARS and DukeMTMC-VideoReID. The experimental results demonstrate that our algorithm is not only superior to state-of-the-art unsupervised re-ID approaches, but also performs favorably compared with competing transfer learning and semi-supervised learning methods.", "Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. 
In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.", "The superiority of deeply learned pedestrian representations has been reported in very recent literature of person re-identification (re-ID). In this article, we consider the more pragmatic issue of learning a deep feature with no or only a few labels. We propose a progressive unsupervised learning (PUL) method to transfer pretrained deep representations to unseen domains. Our method is easy to implement and can be viewed as an effective baseline for unsupervised re-ID feature learning. Specifically, PUL iterates between (1) pedestrian clustering and (2) fine-tuning of the convolutional neural network (CNN) to improve the initialization model trained on the irrelevant labeled dataset. Since the clustering results can be very noisy, we add a selection operation between the clustering and fine-tuning. At the beginning, when the model is weak, CNN is fine-tuned on a small amount of reliable examples that locate near to cluster centroids in the feature space. As the model becomes stronger, in subsequent iterations, more images are being adaptively selected as CNN training samples. Progressively, pedestrian clustering and the CNN model are improved simultaneously until algorithm convergence. This process is naturally formulated as self-paced learning. We then point out promising directions that may lead to further improvement. 
Extensive experiments on three large-scale re-ID datasets demonstrate that PUL outputs discriminative features that improve the re-ID accuracy. Our code has been released at https://github.com/hehefan/Unsupervised-Person-Re-identification-Clustering-and-Fine-tuning.", "Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.", "In this paper, we propose a new clustering model, called DEeP Embedded Regularized ClusTering (DEPICT), which efficiently maps data into a discriminative embedding subspace and precisely predicts cluster assignments. DEPICT generally consists of a multinomial logistic regression function stacked on top of a multi-layer convolutional autoencoder. We define a clustering objective function using relative entropy (KL divergence) minimization, regularized by a prior for the frequency of cluster assignments. An alternating strategy is then derived to optimize the objective by updating parameters and estimating cluster assignments. Furthermore, we employ the reconstruction loss functions in our autoencoder, as a data-dependent regularization term, to prevent the deep embedding function from overfitting. In order to benefit from end-to-end optimization and eliminate the necessity for layer-wise pre-training, we introduce a joint learning framework to minimize the unified clustering and reconstruction loss functions together and train all network layers simultaneously. 
Experimental results indicate the superiority and faster running time of DEPICT in real-world clustering tasks, where no labeled data is available for hyper-parameter tuning.", "Most learning approaches treat dimensionality reduction (DR) and clustering separately (i.e., sequentially), but recent research has shown that optimizing the two tasks jointly can substantially improve the performance of both. The premise behind the latter genre is that the data samples are obtained via linear transformation of latent representations that are easy to cluster; but in practice, the transformation from the latent space to the data can be more complicated. In this work, we assume that this transformation is an unknown and possibly nonlinear function. To recover the 'clustering-friendly' latent representations and to better cluster the data, we propose a joint DR and K-means clustering approach in which DR is accomplished via learning a deep neural network (DNN). The motivation is to keep the advantages of jointly optimizing the two tasks, while exploiting the deep neural network's ability to approximate any nonlinear function. This way, the proposed approach can work well for a broad class of generative models. Towards this end, we carefully design the DNN structure and the associated joint optimization criterion, and propose an effective and scalable algorithm to handle the formulated optimization problem. Experiments using different real datasets are employed to showcase the effectiveness of the proposed approach." ] }
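The intra-/inter-cluster dispersion notion at the heart of the DBC abstract can be illustrated with mean pairwise distances: measured within a cluster it reflects compactness, measured between clusters it reflects separation. This is a hypothetical simplification for intuition, not the authors' exact formulation, and the toy 2-D points are invented.

```python
import numpy as np
from itertools import combinations

def dispersion(points_a, points_b=None):
    """Mean pairwise Euclidean distance: within one set (intra-cluster
    compactness) or between two sets (inter-cluster separation)."""
    a = np.asarray(points_a, dtype=float)
    if points_b is None:
        pairs = [np.linalg.norm(a[i] - a[j])
                 for i, j in combinations(range(len(a)), 2)]
        return float(np.mean(pairs)) if pairs else 0.0
    b = np.asarray(points_b, dtype=float)
    return float(np.mean([np.linalg.norm(x - y) for x in a for y in b]))

# Two toy clusters on a line: tight internally, well separated from each other.
c1 = [[0, 0], [0, 1]]
c2 = [[0, 3], [0, 4]]
print(dispersion(c1))      # intra-cluster → 1.0
print(dispersion(c1, c2))  # inter-cluster → 3.0
```

A bottom-up scheme in this spirit would repeatedly merge the cluster pair with the smallest inter-cluster dispersion relative to the resulting intra-cluster dispersion.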
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting the traditional methods of (2016) and (2016) with a regularizer forcing preservation of the depth map of the content image. However, these traditional methods are either computationally inefficient or require training a separate neural network for each new style. The AdaIN method of (2017) allows efficient transfer of an arbitrary style without training a separate model, but is not able to reproduce the depth map of the content image. We propose an extension to this method that allows depth map preservation. Qualitative analysis and the results of a user evaluation study indicate that the proposed method provides better stylizations compared to the original style transfer methods of (2016) and (2017).
Image stylization is a long-studied problem in computer vision. It takes its origin from non-photorealistic rendering @cite_18 and texture generation @cite_10 @cite_3 @cite_12 tasks. Early approaches @cite_13 @cite_21 @cite_14 relied on low-level hand-crafted features and often failed to capture semantic structures. The later work of Gatys et al. @cite_15 proposed a new style transfer algorithm that was flexible enough to stylize any content image using an arbitrary style extracted from any image, and showed impressive results. It performed stylization by matching Gram matrices of image features taken from layers of the deep convolutional classification network VGG-19 @cite_5 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_10", "@cite_21", "@cite_3", "@cite_5", "@cite_15", "@cite_13", "@cite_12" ], "mid": [ "2109253138", "", "2116013899", "1594772103", "", "1686810756", "2475287302", "", "" ], "abstract": [ "This paper surveys the field of nonphotorealistic rendering (NPR), focusing on techniques for transforming 2D input (images and video) into artistically stylized renderings. We first present a taxonomy of the 2D NPR algorithms developed over the past two decades, structured according to the design characteristics and behavior of each technique. We then describe a chronology of development from the semiautomatic paint systems of the early nineties, through to the automated painterly rendering systems of the late nineties driven by image gradient analysis. Two complementary trends in the NPR literature are then addressed, with reference to our taxonomy. First, the fusion of higher level computer vision and NPR, illustrating the trends toward scene analysis to drive artistic abstraction and diversity of style. Second, the evolution of local processing approaches toward edge-aware filtering for real-time stylization of images and video. The survey then concludes with a discussion of open challenges for 2D NPR identified in recent NPR symposia, including topics such as user and aesthetic evaluation.", "", "A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. 
The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.", "In the past decade, the field of non-photorealistic computer graphics (NPR) has developed as the product of research marked by diverse and sometimes divergent assumptions, approaches, and aims. This book is the first to offer a systematic assessment of this work, identifying and exploring the underlying principles that have given the field its cohesion. In the course of this assessment, the authors provide detailed accounts of today's major non-photorealistic algorithms, along with the background information and implementation advice you need to put them to productive use. As NPR finds new applications in a broadening array of fields, Non-Photorealistic Computer Graphics is destined to be the standard reference for researchers and practitioners alike.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Rendering the semantic content of an image in different styles is a difficult image processing task. 
Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.", "", "" ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
However, the method of Gatys is computationally demanding: generating a single image of moderate resolution takes several minutes even on modern GPUs. Later works by Ulyanov @cite_7 and Johnson @cite_8 therefore proposed to impose style by passing the content image through a transformation network, trained over a large dataset of content images to impose one fixed style. The transformation network is trained to solve the optimization task introduced by Gatys. The trained model is thus tied to one specific style, and imposing another style requires training a separate model.
{ "cite_N": [ "@cite_7", "@cite_8" ], "mid": [ "2502312327", "2331128040" ], "abstract": [ "It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at this https URL. Full paper can be found at arXiv:1701.02096.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
In later works, Dumoulin @cite_11 and Chen @cite_1 proposed to reduce the number of parameters needed for multi-style rendering by letting only a small, style-specific portion of the network control the style, while all other parameters are shared across styles. Chen proposed to use convolutional layers in the middle of an end-to-end transformation network as style banks, and Dumoulin used the parameters of instance normalization @cite_7 as style representations. This approach allows several different styles to be used for fast style transfer, but it still requires training a subset of parameters for each new style. In @cite_6 it was proposed to use a separate model that predicts the instance normalization parameters from a given style image, rather than training them.
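The style-specific parameter idea described above can be sketched as instance normalization whose scale and shift are looked up per style, with everything else in the network shared. The function name, shapes, and parameter layout below are illustrative assumptions, not the papers' exact implementation.

```python
import numpy as np

def conditional_instance_norm(x, gammas, betas, style_id, eps=1e-5):
    """Instance-normalize a (C, H, W) feature map over its spatial
    dimensions, then scale/shift with the parameters of one chosen style.
    gammas, betas: (n_styles, C) banks of per-style parameters."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_hat = (x - mean) / (std + eps)
    gamma = gammas[style_id].reshape(-1, 1, 1)
    beta = betas[style_id].reshape(-1, 1, 1)
    return gamma * x_hat + beta
```

Switching `style_id` changes only which (gamma, beta) row is used, which is why adding a style requires training just these small banks.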
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_7", "@cite_11" ], "mid": [ "2604737827", "2620076854", "2502312327", "2545656684" ], "abstract": [ "We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility introduced by the explicit filter bank representation. It also enables us to conduct incremental learning to add a new image style by learning a new filter bank while holding the auto-encoder fixed. The explicit style representation along with the flexible network design enables us to fuse styles at not only the image level, but also the region level. Our method is the first style transfer network that links back to traditional texton mapping methods, and hence provides new understanding on neural style transfer. Our method is easy to train, runs in real-time, and produces results that qualitatively better or at least comparable to existing methods.", "In this paper, we present a method which combines the flexibility of the neural algorithm of artistic style with the speed of fast style transfer networks to allow real-time stylization using any content style image pair. We build upon recent work leveraging conditional instance normalization for multi-style transfer networks by learning to predict the conditional instance normalization parameters directly from a style image. The model is successfully trained on a corpus of roughly 80,000 paintings and is able to generalize to paintings previously unobserved. 
We demonstrate that the learned embedding space is smooth and contains a rich structure and organizes semantic information associated with paintings in an entirely unsupervised manner.", "It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at this https URL. Full paper can be found at arXiv:1701.02096.", "The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style." ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
An alternative approach was used in the AdaIN style transfer method @cite_9. The main idea of the paper was to replace instance normalization layers with adaptive instance normalization (AdaIN), which first normalizes the content features to zero mean and unit standard deviation, and then rescales them with the means and standard deviations obtained from the style image representation. The AdaIN method has the advantage of fast stylization: the content image is simply passed through the transformation network. It can also be applied to any style without retraining, because only the means and standard deviations of the style image's inner representation are needed for the transformation. Since the style has such a straightforward representation, it becomes possible to control the global stylization strength and to mix different styles.
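The AdaIN operation described above reduces to a few lines of array arithmetic on precomputed feature maps; this is a minimal numpy sketch (shapes and the epsilon are illustrative), not the paper's full network.

```python
import numpy as np

def adain(content_feats, style_feats, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps:
    normalize each content channel over its spatial dimensions, then
    rescale with the per-channel mean/std of the style features."""
    c_mean = content_feats.mean(axis=(1, 2), keepdims=True)
    c_std = content_feats.std(axis=(1, 2), keepdims=True)
    s_mean = style_feats.mean(axis=(1, 2), keepdims=True)
    s_std = style_feats.std(axis=(1, 2), keepdims=True)
    normalized = (content_feats - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean
```

Because the style enters only through `s_mean` and `s_std`, stylization strength can be controlled by interpolating between the content and AdaIN outputs, and styles can be mixed by averaging their statistics.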
{ "cite_N": [ "@cite_9" ], "mid": [ "2603777577" ], "abstract": [ "recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network." ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
Another arbitrary-style real-time method, so-called universal style transfer, was proposed by Li @cite_16. This approach uses a pretrained deep convolutional encoder (they also used the VGG-19 architecture) for high-level feature extraction, together with decoders that reconstruct images from the hidden representations of different VGG layers. For style transfer, they applied a whitening transform followed by a linear coloring transform that imposes the style, represented by the mean vector and covariance matrix of the style features. Stylization results are obtained by passing the transformed hidden representations through the trained decoders.
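The whitening-coloring step at the heart of this method can be sketched on flattened feature maps: remove the content features' covariance, then impose the style features' covariance and mean. This is a numerical illustration of the transform only (the encoder/decoder pair is assumed precomputed); the epsilon regularizer is an assumption for stability.

```python
import numpy as np

def wct(content, style, eps=1e-5):
    """Whitening-coloring transform on flattened (C, N) feature maps."""
    c = content - content.mean(axis=1, keepdims=True)
    s = style - style.mean(axis=1, keepdims=True)
    # Whitening: remove the content feature covariance.
    wc, Uc = np.linalg.eigh(c @ c.T / c.shape[1] + eps * np.eye(c.shape[0]))
    whitened = Uc @ np.diag(wc ** -0.5) @ Uc.T @ c
    # Coloring: impose the style feature covariance.
    ws, Us = np.linalg.eigh(s @ s.T / s.shape[1] + eps * np.eye(s.shape[0]))
    colored = Us @ np.diag(ws ** 0.5) @ Us.T @ whitened
    # Re-center on the style mean.
    return colored + style.mean(axis=1, keepdims=True)
```

After the transform, the features carry the style's second-order statistics, which is what the decoder turns back into a stylized image.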
{ "cite_N": [ "@cite_16" ], "mid": [ "2962772087" ], "abstract": [ "Universal style transfer aims to transfer arbitrary visual styles to content images. Existing feed-forward based methods, while enjoying the inference efficiency, are mainly limited by inability of generalizing to unseen styles or compromised visual quality. In this paper, we present a simple yet effective method that tackles these limitations without training on any pre-defined styles. The key ingredient of our method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network. The whitening and coloring transforms reflect direct matching of feature covariance of the content image to a given style image, which shares similar spirits with the optimization of Gram matrix based cost in neural style transfer. We demonstrate the effectiveness of our algorithm by generating high-quality stylized images with comparisons to a number of recent methods. We also analyze our method by visualizing the whitened features and synthesizing textures by simple feature coloring." ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
The task of depth-preserving style transfer was first addressed by Liu @cite_0. In their work, the transformation network from Johnson's paper @cite_8 learned to generate images whose depth maps also stay close to that of the content image, with image depth estimated by the network proposed by Chen @cite_2.
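The combined objective described above adds a depth-reconstruction term to the usual perceptual content and style losses. The sketch below assumes the feature maps and depth maps are precomputed (in the paper they come from VGG and from Chen's depth network respectively); the loss weights are illustrative, not the paper's values.

```python
import numpy as np

def gram(F):
    """Gram matrix of a flattened (C, N) feature map -- the standard
    style representation in perceptual-loss style transfer."""
    return F @ F.T / F.shape[1]

def depth_preserving_loss(feat_out, feat_content, feat_style,
                          depth_out, depth_content,
                          w_content=1.0, w_style=10.0, w_depth=1.0):
    """Content + style perceptual losses plus a depth term that penalizes
    deviation of the stylized image's depth map from the content's."""
    l_content = np.mean((feat_out - feat_content) ** 2)
    l_style = np.mean((gram(feat_out) - gram(feat_style)) ** 2)
    l_depth = np.mean((depth_out - depth_content) ** 2)
    return w_content * l_content + w_style * l_style + w_depth * l_depth
```

During training, the transformation network is optimized to minimize this sum, so image layout (as captured by depth) survives stylization.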
{ "cite_N": [ "@cite_0", "@cite_2", "@cite_8" ], "mid": [ "2740729727", "2963825193", "2331128040" ], "abstract": [ "Neural style transfer has recently received significant attention and demonstrated amazing results. An efficient solution proposed by trains feed-forward convolutional neural networks by defining and optimizing perceptual loss functions. Such methods are typically based on high-level features extracted from pre-trained neural networks, where the loss functions contain two components: style loss and content loss. However, such pre-trained networks are originally designed for object recognition, and hence the high-level features often focus on the primary target and neglect other details. As a result, when input images contain multiple objects potentially at different depths, the resulting images are often unsatisfactory because image layout is destroyed and the boundary between the foreground and background as well as different objects becomes obscured. We observe that the depth map effectively reflects the spatial distribution in an image and preserving the depth map of the content image after stylization helps produce an image that preserves its semantic content. In this paper, we introduce a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer.", "This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. 
Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results." ] }
1906.01123
2948358421
Style transfer is the process of rendering one image with some content in the style of another image, representing the style. Recent studies of (2017) have shown significant improvement of style transfer rendering quality by adjusting traditional methods of (2016) and (2016) with regularizer, forcing preservation of the depth map of the content image. However these traditional methods are either computationally inefficient or require training a separate neural network for new style. AdaIN method of (2017) allows efficient transferring of arbitrary style without training a separate model but is not able to reproduce the depth map of the content image. We propose an extension to this method, allowing depth map preservation. Qualitative analysis and results of user evaluation study indicate that the proposed method provides better stylizations, compared to the original style transfer methods of (2016) and (2017).
Style can also be imposed using generative adversarial networks (GANs). For example, Zhu @cite_17 used cycle-consistent GANs to transfer images between two different domains, and Zhang @cite_19 used GANs to colorize sketches. The difference, compared to style transfer, is that GANs require many style images to learn the style.
{ "cite_N": [ "@cite_19", "@cite_17" ], "mid": [ "2963784525", "2962793481" ], "abstract": [ "Recently, with the revolutionary neural style transferring methods, creditable paintings can be synthesized automatically from content images and style images. However, when it comes to the task of applying a painting's style to an anime sketch, these methods will just randomly colorize sketch lines as outputs and fail in the main task: specific style transfer. In this paper, we integrated residual U-net to apply the style to the gray-scale sketch with auxiliary classifier generative adversarial network (AC-GAN). The whole process is automatic and fast. Generated results are creditable in the quality of art style as well as colorization.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach." ] }
1906.01202
2948342290
While multi-agent interactions can be naturally modeled as a graph, the environment has traditionally been considered as a black box. We propose to create a shared agent-entity graph, where agents and environmental entities form vertices, and edges exist between the vertices which can communicate with each other. Agents learn to cooperate by exchanging messages along the edges of this graph. Our proposed multi-agent reinforcement learning framework is invariant to the number of agents or entities present in the system as well as permutation invariance, both of which are desirable properties for any multi-agent system representation. We present state-of-the-art results on coverage, formation and line control tasks for multi-agent teams in a fully decentralized framework and further show that the learned policies quickly transfer to scenarios with different team sizes along with strong zero-shot generalization performance. This is an important step towards developing multi-agent teams which can be realistically deployed in the real world without assuming complete prior knowledge or instantaneous communication at unbounded distances.
CommNet is one of the earliest works to learn a differentiable communication protocol between multiple agents in a fully cooperative centralized setting. However, it did not explicitly model interactions between agents; instead, each agent receives the averaged states of all its neighbors. VAIN improves upon this mean aggregation by using an exponential kernel-based attention to selectively attend to the messages received from other agents, and showed predictive modeling of multi-agent systems using supervised learning. In this work, we use the scaled dot-product attention mechanism proposed by @cite_0 for inter-agent communication, which can easily be substituted with the mechanisms used in CommNet and VAIN.
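The scaled dot-product attention used for inter-agent message aggregation is straightforward to write down; this is a single-head numpy sketch (batching and learned projections omitted).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q: (n_q, d_k) queries, K: (n_k, d_k) keys, V: (n_k, d_v) values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

In the multi-agent setting, each agent's query attends over the keys/values of the entities it shares an edge with, so messages from more relevant neighbors receive higher weight than a plain average would give them.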
{ "cite_N": [ "@cite_0" ], "mid": [ "2963403868" ], "abstract": [ "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature." ] }
1906.01204
2948233355
The sample mean is often used to aggregate different unbiased estimates of a parameter, producing a final estimate that is unbiased but possibly high-variance. This paper introduces the Bayesian median of means, an aggregation rule that roughly interpolates between the sample mean and median, resulting in estimates with much smaller variance at the expense of bias. While the procedure is non-parametric, its squared bias is asymptotically negligible relative to the variance, similar to maximum likelihood estimators. The Bayesian median of means is consistent, and concentration bounds for the estimator's bias and @math error are derived, as well as a fast non-randomized approximating algorithm. The performances of both the exact and the approximate procedures match that of the sample mean in low-variance settings, and exhibit much better results in high-variance scenarios. The empirical performances are examined in real and simulated data, and in applications such as importance sampling, cross-validation and bagging.
The idea of combining mean and median estimators has been visited several times in the statistical robustness literature, particularly for estimating location parameters of symmetric distributions. For instance, @cite_24 propose an adaptive estimator that picks either the sample mean or the sample median to estimate the center of a symmetric distribution, while @cite_33 and @cite_5 investigate linear combinations of the mean and median, with weights chosen according to asymptotic criteria.
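The linear-combination estimators mentioned above amount to a convex mix of the two classical location estimates. In the sketch below the weight is a free parameter, whereas the cited works choose it from asymptotic efficiency criteria.

```python
import numpy as np

def mean_median_estimate(x, alpha):
    """Convex combination alpha * mean + (1 - alpha) * median.
    alpha = 1 recovers the sample mean, alpha = 0 the sample median."""
    return alpha * np.mean(x) + (1.0 - alpha) * np.median(x)
```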
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_33" ], "mid": [ "2021680704", "2011147699", "2064940455" ], "abstract": [ "Abstract An adaptive choice of the sample mean ¯xn or the sample median mn is proposed for estimating the center of a symmetric distribution. This choice becomes correct as n → ∞, and in simulation results for finite n it is almost as good as the better of ¯xn and mn.", "We propose a location estimator based on a convex linear combination of the sample mean and median. The main attraction is the conceptual simplicity and transparency, but it remains very competitive in performance for a wide range of distributions. The estimator aims at minimizing the asymptotic variance in the class of all linear combinations of mean and median. Comparisons with some of the best location estimators, the maximum likelihood, Huber's and the Hodges-Lehmann M-estimators, are given based on asymptotic relative efficiency and Monte Carlo simulations. Computationally, the new estimator has an explicit expression and requires no iteration. Robustness is assessed by calculation of breakdown point.", "We characterize all symmetric location models for which a linear combination of the median and the sample mean is an asymptotically efficient estimator of the location parameter. The resulting model can be understood as a symmetrized or double truncated normal distribution. A simple algorithm to estimate the parameters is given and an application is presented. Copyright 2004 Board of the Foundation of the Scandinavian Journal of Statistics.." ] }
1906.01204
2948233355
The sample mean is often used to aggregate different unbiased estimates of a parameter, producing a final estimate that is unbiased but possibly high-variance. This paper introduces the Bayesian median of means, an aggregation rule that roughly interpolates between the sample mean and median, resulting in estimates with much smaller variance at the expense of bias. While the procedure is non-parametric, its squared bias is asymptotically negligible relative to the variance, similar to maximum likelihood estimators. The Bayesian median of means is consistent, and concentration bounds for the estimator's bias and @math error are derived, as well as a fast non-randomized approximating algorithm. The performances of both the exact and the approximate procedures match that of the sample mean in low-variance settings, and exhibit much better results in high-variance scenarios. The empirical performances are examined in real and simulated data, and in applications such as importance sampling, cross-validation and bagging.
Such concentration cannot be achieved by the sample mean in general, unless stronger hypotheses, such as sub-Gaussianity of @math , are assumed. Variants of this estimator are further analyzed in the heavy-tailed settings of @cite_34 , @cite_19 and @cite_8 . The estimator was also used to combine Bayesian posterior updates across split datasets in @cite_29 .
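The classical median of means referred to here splits the sample into disjoint equal-sized blocks, averages each block, and takes the median of the block means. A minimal sketch (remainder points are simply dropped, one of several possible conventions):

```python
import numpy as np

def median_of_means(x, k):
    """Median-of-means estimator with k disjoint equal-sized blocks."""
    x = np.asarray(x)
    n = (len(x) // k) * k  # drop the remainder so blocks are equal-sized
    block_means = x[:n].reshape(k, -1).mean(axis=1)
    return np.median(block_means)
```

With k = 1 it reduces to the sample mean; larger k trades bias for robustness, which is exactly the tuning difficulty discussed next.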
{ "cite_N": [ "@cite_19", "@cite_29", "@cite_34", "@cite_8" ], "mid": [ "2963572540", "62820921", "1850992575", "1984332158" ], "abstract": [ "This work studies applications and generalizations of a simple estimation technique that provides exponential concentration under heavy-tailed distributions, assuming only bounded low-order moments. We show that the technique can be used for approximate minimization of smooth and strongly convex losses, and specifically for least squares linear regression. For instance, our d-dimensional estimator requires just O(d log(1 δ)) random samples to obtain a constant factor approximation to the optimal least squares loss with probability 1-δ, without requiring the covariates or noise to be bounded or subgaussian. We provide further applications to sparse linear regression and low-rank covariance matrix estimation with similar allowances on the noise and covariate distributions. The core technique is a generalization of the median-of-means estimator to arbitrary metric spaces.", "Many Bayesian learning methods for massive data benefit from working with small subsets of observations. In particular, significant progress has been made in scalable Bayesian learning via stochastic approximation. However, Bayesian learning methods in distributed computing environments are often problem- or distributionspecific and use ad hoc techniques. We propose a novel general approach to Bayesian inference that is scalable and robust to corruption in the data. Our technique is based on the idea of splitting the data into several non-overlapping subgroups, evaluating the posterior distribution given each independent subgroup, and then combining the results. Our main contribution is the proposed aggregation step which is based on finding the geometric median of subset posterior distributions. 
Presented theoretical and numerical results confirm the advantages of our approach.", "The purpose of this paper is to discuss empirical risk minimization when the losses are not necessarily bounded and may have a distribution with heavy tails. In such situations, usual empirical averages may fail to provide reliable estimates and empirical risk minimization may provide large excess risk. However, some robust mean estimators proposed in the literature may be used to replace empirical means. In this paper, we investigate empirical risk minimization based on a robust estimate proposed by Catoni. We develop performance bounds based on chaining arguments tailored to Catoni's mean estimator.", "The stochastic multiarmed bandit problem is well understood when the reward distributions are sub-Gaussian. In this paper, we examine the bandit problem under the weaker assumption that the distributions have moments of order 1 + e, for some e ∈ (0,1]. Surprisingly, moments of order 2 (i.e., finite variance) are sufficient to obtain regret bounds of the same order as under sub-Gaussian reward distributions. In order to achieve such regret, we define sampling strategies based on refined estimators of the mean such as the truncated empirical mean, Catoni's M-estimator, and the median-of-means estimator. We also derive matching lower bounds that also show that the best achievable regret deteriorates when e <; 1." ] }
1906.01204
2948233355
The sample mean is often used to aggregate different unbiased estimates of a parameter, producing a final estimate that is unbiased but possibly high-variance. This paper introduces the Bayesian median of means, an aggregation rule that roughly interpolates between the sample mean and median, resulting in estimates with much smaller variance at the expense of bias. While the procedure is non-parametric, its squared bias is asymptotically negligible relative to the variance, similar to maximum likelihood estimators. The Bayesian median of means is consistent, and concentration bounds for the estimator's bias and @math error are derived, as well as a fast non-randomized approximating algorithm. The performances of both the exact and the approximate procedures match that of the sample mean in low-variance settings, and exhibit much better results in high-variance scenarios. The empirical performances are examined in real and simulated data, and in applications such as importance sampling, cross-validation and bagging.
Unfortunately, however, there are practical challenges in using the median of means estimator. First, there is little guidance on how to pick the number of groups, which essentially amounts to the estimator's willingness to trade off bias for variance. Furthermore, in spite of its theoretical properties, the median of means underutilizes the available data by using each datapoint only once. This guarantees independence between blocks, but limits the number of means one can obtain in a given dataset. This requirement can be relaxed to a degree, but not completely (see @cite_21 ). The estimator considered here has no such restrictions, and can be viewed as a smoothed version of the median of means. Moreover, the randomness introduced in sampling the blocks allows for probabilistic analyses and parameter choices conditional on the realized datapoints. A further benefit is that, unlike the median of means, its smoothed counterpart does not depend on the order of the datapoints while still being computationally tractable.
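The trade-offs described above can be made concrete with a small sketch. Both functions below are illustrative assumptions rather than the paper's exact procedure: the first is the classical median of means over disjoint blocks, the second a smoothed variant that draws blocks with replacement, so datapoints may be reused and the estimate no longer depends on the order of the data.

```python
import random
import statistics

def median_of_means(xs, k):
    """Classical median of means: split the data into k disjoint
    blocks, average each block, and take the median of the block
    means. Each datapoint is used exactly once."""
    size = len(xs) // k
    means = [sum(xs[i * size:(i + 1) * size]) / size for i in range(k)]
    return statistics.median(means)

def smoothed_median_of_means(xs, block_size, n_blocks, seed=0):
    """Smoothed variant in the spirit of the text: blocks are drawn
    with replacement, so datapoints can be reused, more block means
    are available, and the result is order-independent."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(xs, k=block_size))
             for _ in range(n_blocks)]
    return statistics.median(means)
```

Choosing the number of blocks is exactly the bias-variance knob the paragraph describes: many small blocks make the estimate more median-like and robust, few large blocks make it more mean-like.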
{ "cite_N": [ "@cite_21" ], "mid": [ "2964027658" ], "abstract": [ "An important part of the legacy of Evarist Gine is his fundamental contributions to our understanding of U-statistics and U-processes. In this paper we discuss the estimation of the mean of multivariate functions in case of possibly heavy-tailed distributions. In such situations, reliable estimates of the mean cannot be obtained by usual U-statistics. We introduce a new estimator, based on the so-called median-of-means technique. We develop performance bounds for this new estimator that generalizes an estimate of Arcones and Gine (1993), showing that the new estimator performs, under minimal moment conditions, as well as classical U-statistics for bounded random variables. We discuss an application of this estimator to clustering." ] }
1906.01204
2948233355
The sample mean is often used to aggregate different unbiased estimates of a parameter, producing a final estimate that is unbiased but possibly high-variance. This paper introduces the Bayesian median of means, an aggregation rule that roughly interpolates between the sample mean and median, resulting in estimates with much smaller variance at the expense of bias. While the procedure is non-parametric, its squared bias is asymptotically negligible relative to the variance, similar to maximum likelihood estimators. The Bayesian median of means is consistent, and concentration bounds for the estimator's bias and @math error are derived, as well as a fast non-randomized approximating algorithm. The performances of both the exact and the approximate procedures match that of the sample mean in low-variance settings, and exhibit much better results in high-variance scenarios. The empirical performances are examined in real and simulated data, and in applications such as importance sampling, cross-validation and bagging.
In fact, the median of means itself can be cast as a computational compromise on the celebrated Hodges-Lehmann estimator proposed in @cite_12 : many theoretical properties are known about it; for instance, it has an asymptotic breakdown point of @math , meaning a contamination of up to @math of the datapoints @math _1, , _n @math can be tolerated. The estimator is also connected to the family of @math -means ( @cite_18 ).
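As a concrete reference point, the Hodges-Lehmann estimator is simply the median of all pairwise (Walsh) averages; the sketch below shows the naive quadratic-cost computation that the median of means compromises on:

```python
import statistics

def hodges_lehmann(xs):
    """Hodges-Lehmann estimator: the median of all Walsh averages
    (x_i + x_j) / 2 over pairs i <= j. Robust to gross errors, at
    a quadratic cost in the sample size."""
    walsh = [(xs[i] + xs[j]) / 2
             for i in range(len(xs))
             for j in range(i, len(xs))]
    return statistics.median(walsh)
```

Even one wild outlier barely moves it: hodges_lehmann([1, 2, 3, 100]) is 2.75, where the sample mean would be 26.5.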
{ "cite_N": [ "@cite_18", "@cite_12" ], "mid": [ "2798600727", "2148347826" ], "abstract": [ "This article offers a simplified approach to the distribution theory of randomly weighted averages or @math -means @math , for a sequence of i.i.d.random variables @math , and independent random weights @math with @math and @math . The collection of distributions of @math , indexed by distributions of @math , is shown to encode Kingman's partition structure derived from @math . For instance, if @math has Bernoulli @math distribution on @math , the @math th moment of @math is a polynomial function of @math which equals the probability generating function of the number @math of distinct values in a sample of size @math from @math : @math . This elementary identity illustrates a general moment formula for @math -means in terms of the partition structure associated with random samples from @math , first developed by Diaconis and Kemperman (1996) and Kerov (1998) in terms of random permutations. As shown by Tsilevich (1997) if the partition probabilities factorize in a way characteristic of the generalized Ewens sampling formula with two parameters @math , found by Pitman (1992), then the moment formula yields the Cauchy-Stieltjes transform of an @math mean. The analysis of these random means includes the characterization of @math -means, known as Dirichlet means, due to Von Neumann (1941), Watson (1956) and Cifarelli and Regazzini (1990) and generalizations of L 'evy's arcsine law for the time spent positive by a Brownian motion, due to Darling (1949) Lamperti (1958) and Barlow, Pitman and Yor (1989).", "A serious objection to many of the classical statistical methods based on linear models or normality assumptions is their vulnerability to gross errors. For certain testing problems this difficulty is suc-cessfully overcome by rank tests such as the two Wilcoxon tests or the Kruskal- Wallis H-test. 
Their power is more robust against gross errors than that of the t- and F-tests, and their efficiency loss is quite small even in the rare case in which the suspicion of the possibility of gross errors is unfounded." ] }
1906.01161
2949044118
Pronoun resolution is part of coreference resolution, the task of pairing an expression to its referring entity. This is an important task for natural language understanding and a necessary component of machine translation systems, chat bots and assistants. Neural machine learning systems perform far from ideally in this task, reaching as low as 73 F1 scores on modern benchmark datasets. Moreover, they tend to perform better for masculine pronouns than for feminine ones. Thus, the problem is both challenging and important for NLP researchers and practitioners. In this project, we describe our BERT-based approach to solving the problem of gender-balanced pronoun resolution. We are able to reach 92 F1 score and a much lower gender bias on the benchmark dataset shared by Google AI Language team.
Popular approaches to coreference resolution include rule-based, mention-pair, mention-ranking, and clustering methods ( https: bit.ly 2JbKxv1 ). Among rule-based approaches, the naïve Hobbs algorithm @cite_12 , in spite of its simplicity, showed state-of-the-art performance on the OntoNotes dataset ( https: catalog.ldc.upenn.edu LDC2013T19 ) up to 2010.
{ "cite_N": [ "@cite_12" ], "mid": [ "1588806355" ], "abstract": [ "The book presents papers on natural language processing, focusing on the central issues of representation, reasoning, and recognition. The introduction discusses theoretical issues, historical developments, and current problems and approaches. The book presents work in syntactic models (parsing and grammars), semantic interpretation, discourse interpretation, language action and intentions, language generation, and systems." ] }
1906.01161
2949044118
Pronoun resolution is part of coreference resolution, the task of pairing an expression to its referring entity. This is an important task for natural language understanding and a necessary component of machine translation systems, chat bots and assistants. Neural machine learning systems perform far from ideally in this task, reaching as low as 73 F1 scores on modern benchmark datasets. Moreover, they tend to perform better for masculine pronouns than for feminine ones. Thus, the problem is both challenging and important for NLP researchers and practitioners. In this project, we describe our BERT-based approach to solving the problem of gender-balanced pronoun resolution. We are able to reach 92 F1 score and a much lower gender bias on the benchmark dataset shared by Google AI Language team.
Further state-of-the-art coreference resolution systems are reviewed in @cite_3 , as well as popular datasets with ambiguous pronouns: Winograd schemas @cite_5 , WikiCoref @cite_15 , and the Definite Pronoun Resolution Dataset @cite_6 . We also refer to the GAP paper for a brief review of gender bias in machine learning.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_6", "@cite_3" ], "mid": [ "1599016936", "2574762171", "2163794943", "2920114910" ], "abstract": [ "In this paper, we present an alternative to the Turing Test that has some conceptual and practical advantages. A Wino-grad schema is a pair of sentences that differ only in one or two words and that contain a referential ambiguity that is resolved in opposite directions in the two sentences. We have compiled a collection of Winograd schemas, designed so that the correct answer is obvious to the human reader, but cannot easily be found using selectional restrictions or statistical techniques over text corpora. A contestant in the Winograd Schema Challenge is presented with a collection of one sentence from each pair, and required to achieve human-level accuracy in choosing the correct disambiguation.", "", "Most research in the field of anaphora or coreference detection has been limited to noun phrase coreference, usually on a restricted set of entities, such as ACE entities. In part, this has been due to the lack of corpus resources tagged with general anaphoric coreference. The OntoNotes project is creating a large-scale, accurate corpus for general anaphoric coreference that covers entities and events not limited to noun phrases or a limited set of entity types. The coreference layer in OntoNotes constitutes one part of a multi-layer, integrated annotation of shallow semantic structure in text. This paper presents an initial model for unrestricted coreference based on this data that uses a machine learning architecture with state-of-the-art features. Significant improvements can be expected from using such cross-layer information for training predictive models. 
This paper describes the coreference annotation in OntoNotes, presents the baseline model, and provides an analysis of the contribution of this new resource in the context of recent MUC and ACE results.", "Coreference resolution is an important task for natural language understanding, and the resolution of ambiguous pronouns a longstanding challenge. Nonetheless, existing corpora do not capture ambiguous pronouns in sufficient volume or diversity to accurately indicate the practical utility of models. Furthermore, we find gender bias in existing corpora and systems favoring masculine entities. To address this, we present and release GAP, a gender-balanced labeled corpus of 8,908 ambiguous pronoun-name pairs sampled to provide diverse coverage of challenges posed by real-world text. We explore a range of baselines which demonstrate the complexity of the challenge, the best achieving just 66.9 F1. We show that syntactic structure and continuous neural models provide promising, complementary cues for approaching the challenge." ] }
1906.01282
2948852532
Neural machine translation (NMT) takes deterministic sequences for source representations. However, either word-level or subword-level segmentations have multiple choices to split a source sequence with different word segmentors or different subword vocabulary sizes. We hypothesize that the diversity in segmentations may affect the NMT performance. To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training. We propose two methods: 1) lattice positional encoding and 2) lattice-aware self-attention. These two methods can be used together and show complementary to each other to further improve translation performance. Experiment results show superiorities of lattice-based encoders in word-level and subword-level representations over conventional Transformer encoder.
As the models mentioned above only use the 1-best segmentation as input, lattices, which can pack many different segmentations into a compact form, have been widely used in statistical machine translation (SMT) @cite_30 @cite_24 and RNN-based NMT @cite_2 @cite_11 . To enhance the representations of the input, lattices have also been applied to many other NLP tasks such as named entity recognition @cite_42 , Chinese word segmentation @cite_20 , and part-of-speech tagging @cite_7 @cite_28 .
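To make the lattice idea concrete, a word lattice can be sketched as a DAG whose nodes are character offsets and whose edges carry tokens; this toy builder is a hypothetical illustration, not the paper's implementation:

```python
def build_lattice(segmentations):
    """Pack multiple segmentations of the same sentence into one
    lattice: nodes are character offsets, edges are (start, end,
    token) triples. Every path from offset 0 to the final offset
    spells out the sentence under one segmentation."""
    edges = set()
    for seg in segmentations:
        pos = 0
        for token in seg:
            edges.add((pos, pos + len(token), token))
            pos += len(token)
    return sorted(edges)

# Two segmentations of the same Chinese string share nodes 0 and 7:
lattice = build_lattice([["南京市", "长江大桥"],
                         ["南京", "市长", "江大桥"]])
```

A lattice positional encoding could then, for example, index each edge by its start offset rather than by token position, which is one simple way to keep positions consistent across alternative segmentations.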
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_28", "@cite_42", "@cite_24", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "1412698887", "2102461220", "2104747875", "2962904552", "2157435188", "2527133236", "2963997155", "2950448199" ], "abstract": [ "A Chinese sentence is represented as a sequence of characters, and words are not separated from each other. In statistical machine translation, the conventional approach is to segment the Chinese character sequence into words during the pre-processing. The training and translation are performed afterwards. However, this method is not optimal for two reasons: 1. The segmentations may be erroneous. 2. For a given character sequence, the best segmentation depends on its context and translation. In order to minimize the translation errors, we take different segmentation alternatives instead of a single segmentation into account and integrate the segmentation process with the search for the best translation. The segmentation decision is only taken during the generation of the translation. With this method we are able to translate Chinese text at the character level. The experiments on the IWSLT 2005 task showed improvements in the translation performance using two translation systems: a phrase-based system and a finite state transducer based system. For the phrase-based system, the improvement of the BLEU score is 1.5 absolute.", "In this paper, we describe a new reranking strategy named word lattice reranking, for the task of joint Chinese word segmentation and part-of-speech (POS) tagging. As a derivation of the forest reranking for parsing (Huang, 2008), this strategy reranks on the pruned word lattice, which potentially contains much more candidates while using less storage, compared with the traditional n-best list reranking. 
With a perceptron classifier trained with local features as the baseline, word lattice reranking performs reranking with non-local features that can't be easily incorporated into the perceptron baseline. Experimental results show that, this strategy achieves improvement on both segmentation and POS tagging, above the perceptron baseline and the n-best list reranking.", "For the cascaded task of Chinese word segmentation, POS tagging and parsing, the pipeline approach suffers from error propagation while the joint learning approach suffers from inefficient decoding due to the large combined search space. In this paper, we present a novel lattice-based framework in which a Chinese sentence is first segmented into a word lattice, and then a lattice-based POS tagger and a lattice-based parser are used to process the lattice from two different viewpoints: sequential POS tagging and hierarchical tree building. A strategy is designed to exploit the complementary strengths of the tagger and parser, and encourage them to predict agreed structures. Experimental results on Chinese Treebank show that our lattice-based framework significantly improves the accuracy of the three sub-tasks.", "", "Word lattice decoding has proven useful in spoken language translation; we argue that it provides a compelling model for translation of text genres, as well. We show that prior work in translating lattices using finite state techniques can be naturally extended to more expressive synchronous context-free grammarbased models. Additionally, we resolve a significant complication that non-linear word lattice inputs introduce in reordering models. Our experiments evaluating the approach demonstrate substantial gains for ChineseEnglish and Arabic-English translation.", "Neural machine translation (NMT) heavily relies on word-level modelling to learn semantic representations of input sentences. 
However, for languages without natural word delimiters (e.g., Chinese) where input sentences have to be tokenized first, conventional NMT is confronted with two issues: 1) it is difficult to find an optimal tokenization granularity for source sentence modelling, and 2) errors in 1-best tokenizations may propagate to the encoder of NMT. To handle these issues, we propose word-lattice based Recurrent Neural Network (RNN) encoders for NMT, which generalize the standard RNN to word lattice topology. The proposed encoders take as input a word lattice that compactly encodes multiple tokenizations, and learn to generate new hidden states from arbitrarily many inputs and hidden states in preceding time steps. As such, the word-lattice based encoders not only alleviate the negative impact of tokenization errors but also are more expressive and flexible to embed input sentences. Experiment results on Chinese-English translation demonstrate the superiorities of the proposed encoders over the conventional encoder.", "", "The input to a neural sequence-to-sequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (, 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoder-decoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM's child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores." ] }
1906.01288
2948545623
Learning representations with diversified information remains an open problem. Towards learning diversified representations, a new approach, termed Information Competing Process (ICP), is proposed in this paper. Aiming to enrich the information carried by feature representations, ICP separates a representation into two parts with different mutual information constraints. The separated parts are forced to accomplish the downstream task independently in a competitive environment which prevents the two parts from learning what each other learned for the downstream task. Such competing parts are then combined synergistically to complete the task. By fusing representation parts learned competitively under different conditions, ICP facilitates obtaining diversified representations which contain complementary information. Experiments on image classification and image reconstruction tasks demonstrate the great potential of ICP to learn discriminative and disentangled representations in both supervised and self-supervised learning settings.
Mutual information has long been a powerful tool in representation learning. In the unsupervised setting, mutual information maximization is typically studied, which aims to add specific information to the representation and force it to be discriminative. For instance, the InfoMax principle @cite_32 @cite_3 advocates maximizing the mutual information between the inputs and the representations, which forms the basis of independent component analysis @cite_31 . Contrastive Predictive Coding @cite_29 and Deep InfoMax @cite_19 maximize the mutual information between global and local representation pairs, or between the input and the global local representations.
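The maximization objective used by Contrastive Predictive Coding can be sketched via the InfoNCE lower bound on mutual information. The dot-product critic below is an illustrative stand-in for a learned score function, and rows of the two arrays are assumed to be paired positive samples:

```python
import numpy as np

def infonce_bound(z_a, z_b):
    """InfoNCE-style lower bound on I(A; B): each positive pair
    (z_a[i], z_b[i]) is scored against the n - 1 negatives in the
    same batch, giving log(n) + mean log-softmax of the positives."""
    scores = z_a @ z_b.T                           # (n, n) critic scores
    log_norm = np.log(np.exp(scores).sum(axis=1))  # softmax denominators
    n = len(z_a)
    return np.log(n) + np.mean(np.diag(scores) - log_norm)
```

Perfectly aligned views saturate the bound at log n, which is one reason large batch sizes matter for these contrastive estimators.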
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_3", "@cite_19", "@cite_31" ], "mid": [ "2842511635", "2122925692", "2108384452", "2887997457", "2123649031" ], "abstract": [ "While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.", "The emergence of a feature-analyzing function from the development rules of simple, multilayered networks is explored. It is shown that even a single developing cell of a layered network exhibits a remarkable set of optimization properties that are closely related to issues in statistics, theoretical physics, adaptive signal processing, the formation of knowledge representation in artificial intelligence, and information theory. The network studied is based on the visual system. These results are used to infer an information-theoretic principle that can be applied to the network as a whole, rather than a single cell. 
The organizing principle proposed is that the network connections develop in such a way as to maximize the amount of information that is preserved when signals are transformed at each processing stage, subject to certain constraints. The operation of this principle is illustrated for some simple cases. >", "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing.", "", "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. 
Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject." ] }
1906.01288
2948545623
Learning representations with diversified information remains an open problem. Towards learning diversified representations, a new approach, termed Information Competing Process (ICP), is proposed in this paper. Aiming to enrich the information carried by feature representations, ICP separates a representation into two parts with different mutual information constraints. The separated parts are forced to accomplish the downstream task independently in a competitive environment which prevents the two parts from learning what each other learned for the downstream task. Such competing parts are then combined synergistically to complete the task. By fusing representation parts learned competitively under different conditions, ICP facilitates obtaining diversified representations which contain complementary information. Experiments on image classification and image reconstruction tasks demonstrate the great potential of ICP to learn discriminative and disentangled representations in both supervised and self-supervised learning settings.
In the supervised or self-supervised settings, mutual information minimization is commonly utilized. For instance, the Information Bottleneck (IB) theory @cite_22 uses an information-theoretic objective to constrain the mutual information between the input and the representation. IB was then introduced to deep neural networks @cite_5 @cite_9 @cite_10 , and Deep Variational Information Bottleneck (VIB) @cite_11 was recently proposed to refine IB with a variational approximation. Another group of works in the self-supervised setting adopts generative models to learn representations @cite_26 @cite_20 , in which mutual information plays an important role in learning disentangled representations. For instance, @math -VAE @cite_24 is a variant of the Variational Auto-Encoder @cite_26 that attempts to learn a disentangled representation by optimizing a heavily penalized objective with mutual information minimization. Recent works in @cite_4 @cite_17 @cite_15 revise the objective of @math -VAE by applying various constraints. One special case is InfoGAN @cite_35 , which maximizes the mutual information between the representation and a factored Gaussian distribution. Differing from the above schemes, the proposed ICP leverages both mutual information maximization and minimization to create a competitive environment for learning diversified representations.
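The minimization side can be sketched with the @math -VAE objective. This is a minimal sketch assuming a diagonal-Gaussian posterior, for which the KL term to a standard normal prior has a closed form; recon_err stands in for a per-sample reconstruction error and is left abstract here:

```python
import numpy as np

def beta_vae_loss(recon_err, mu, log_var, beta=4.0):
    """beta-VAE objective per batch: reconstruction error plus a
    beta-weighted KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    beta = 1 recovers the plain VAE ELBO; beta > 1 penalizes the
    latent capacity more heavily, encouraging disentanglement."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)
    return float(np.mean(recon_err + beta * kl))
```

With mu = 0 and log_var = 0 the KL term vanishes and the loss reduces to the mean reconstruction error, so the beta knob only bites once the posterior departs from the prior.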
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_4", "@cite_22", "@cite_9", "@cite_17", "@cite_24", "@cite_5", "@cite_15", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2963226019", "", "2796704765", "1686946872", "2593634001", "2963104724", "2753738274", "2964184826", "2787273002", "2785885194", "2962897886", "2964160479" ], "abstract": [ "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.", "", "We present new intuitions and theoretical assessments of the emergence of disentangled representation in variational autoencoders. Taking a rate-distortion theory perspective, we show the circumstances under which representations aligned with the underlying generative factors of variation of data emerge when optimising the modified ELBO bound in @math -VAE, as training progresses. From these insights, we propose a modification to the training regime of @math -VAE, that progressively increases the information capacity of the latent code during training. 
This modification facilitates the robust learning of disentangled representations in @math -VAE, without the previous trade-off in reconstruction accuracy.", "We define the relevant information in a signal @math as being the information that this signal provides about another signal @math . Examples include the information that face images provide about the names of the people portrayed, or the information that speech sounds provide about the words spoken. Understanding the signal @math requires more than just predicting @math , it also requires specifying which features of @math play a role in the prediction. We formalize this problem as that of finding a short code for @math that preserves the maximum information about @math . That is, we squeeze the information that @math provides about @math through a bottleneck' formed by a limited set of codewords @math . This constrained optimization problem can be seen as a generalization of rate distortion theory in which the distortion measure @math emerges from the joint statistics of @math and @math . This approach yields an exact set of self consistent equations for the coding rules @math and @math . Solutions to these equations can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Our variational principle provides a surprisingly rich framework for discussing a variety of problems in signal processing and learning, as will be described in detail elsewhere.", "Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work proposed to analyze DNNs in the ; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of the network is to optimize the Information Bottleneck (IB) tradeoff between compression and prediction, successively, for each layer. 
In this work we follow up on this idea and demonstrate the effectiveness of the Information-Plane visualization of DNNs. Our main results are: (i) most of the training epochs in standard DL are spent on ph compression of the input to efficient representation and not on fitting the training labels. (ii) The representation compression phase begins when the training errors becomes small and the Stochastic Gradient Decent (SGD) epochs change from a fast drift to smaller training error into a stochastic relaxation, or random diffusion, constrained by the training error value. (iii) The converged layers lie on or very close to the Information Bottleneck (IB) theoretical bound, and the maps from the input to any hidden layer and from this hidden layer to the output satisfy the IB self-consistent equations. This generalization through noise mechanism is unique to Deep Neural Networks and absent in one layer networks. (iv) The training time is dramatically reduced when adding more hidden layers. Thus the main advantage of the hidden layers is computational. This can be explained by the reduced relaxation time, as this it scales super-linearly (exponentially for simple diffusion) with the information compression from the previous layer.", "", "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. 
We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.", "Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.", "We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. 
We use this to motivate our @math -TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art @math -VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We perform extensive quantitative and qualitative experiments, in both restricted and non-restricted settings, and show a strong relation between total correlation and disentanglement, when the latent variables model is trained using our framework.", "The practical successes of deep neural networks have not been matched by theoretical progress that satisfyingly explains their behavior. In this work, we study the information bottleneck (IB) theory of deep learning, which makes three specific claims: first, that deep networks undergo two distinct phases consisting of an initial fitting phase and a subsequent compression phase; second, that the compression phase is causally related to the excellent generalization performance of deep networks; and third, that the compression phase occurs due to the diffusion-like behavior of stochastic gradient descent. Here we show that none of these claims hold true in the general case. Through a combination of analytical results and simulation, we demonstrate that the information plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities like tanh yield a compression phase as neural activations enter the saturation regime, but linear activation functions and single-sided saturating nonlinearities like the widely used ReLU in fact do not. Moreover, we find that there is no evident causal connection between compression and generalization: networks that do not compress are still capable of generalization, and vice versa. 
Next, we show that the compression phase, when it exists, does not arise from stochasticity in training by demonstrating that we can replicate the IB findings using full batch gradient descent rather than stochastic gradient descent. Finally, we show that when an input domain consists of a subset of task-relevant and task-irrelevant information, hidden representations do compress the task-irrelevant information, although the overall information about the input may monotonically increase with training time, and that this compression happens concurrently with the fitting process rather than during a subsequent compression period.", "We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent an approximate posterior distribution and uses this for optimisation of a variational lower bound. We develop stochastic backpropagation - rules for gradient backpropagation through stochastic variables - and derive an algorithm that allows for joint optimisation of the parameters of both the generative and recognition models. We demonstrate on several real-world data sets that by using stochastic backpropagation and variational inference, we obtain models that are able to generate realistic samples of data, allow for accurate imputations of missing data, and provide a useful tool for high-dimensional data visualisation.", "We present a variational approximation to the information bottleneck of (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method “Deep Variational Information Bottleneck”, or Deep VIB. 
We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack." ] }
1906.01288
2948545623
Learning representations with diversified information remains an open problem. Towards learning diversified representations, a new approach, termed Information Competing Process (ICP), is proposed in this paper. Aiming to enrich the information carried by feature representations, ICP separates a representation into two parts with different mutual information constraints. The separated parts are forced to accomplish the downstream task independently in a competitive environment which prevents the two parts from learning what the other has learned for the downstream task. Such competing parts are then combined synergistically to complete the task. By fusing representation parts learned competitively under different conditions, ICP facilitates obtaining diversified representations which contain complementary information. Experiments on image classification and image reconstruction tasks demonstrate the great potential of ICP to learn discriminative and disentangled representations in both supervised and self-supervised learning settings.
The idea of collaborating neural representations can be found in Neural Expectation Maximization @cite_21 and Tagger @cite_13 , which use different representations to group and represent individual entities. The Competitive Collaboration @cite_28 method is the most relevant to our work. It defines a three-player game with two competitors and a moderator, where the moderator takes the role of a critic and the two competitors collaborate to train the moderator. Unlike Competitive Collaboration, the proposed ICP enforces two (or more) parts to be complementary for the same downstream task through a competitive environment, which endows it with the capability of learning more discriminative and disentangled representations.
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_13" ], "mid": [ "2964968086", "2964213104", "2962889261" ], "abstract": [ "", "Many real world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction.", "We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. We enable a neural network to group the representations of different objects in an iterative manner through a differentiable mechanism. We achieve very fast convergence by allowing the system to amortize the joint iterative inference of the groupings and their representations. In contrast to many other recently proposed methods for addressing multi-object scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. We evaluate our method on multi-digit classification of very cluttered images that require texture segmentation. Remarkably our method achieves improved classification performance over convolutional networks despite being fully connected, by making use of the grouping mechanism. 
Furthermore, we observe that our system greatly improves upon the semi-supervised result of a baseline Ladder network on our dataset. These results are evidence that grouping is a powerful tool that can help to improve sample efficiency." ] }
1906.01102
2948682407
We posit that hippocampal place cells encode information about future locations under a transition distribution observed as an agent explores a given (physical or conceptual) space. The encoding of information about the current location, usually associated with place cells, then emerges as a necessary step to achieve this broader goal. We formally derive a biologically-inspired neural network from Nystrom kernel approximations and empirically demonstrate that the network successfully approximates transition distributions. The proposed network yields representations that, just like place cells, soft-tile the input space with highly sparse and localized receptive fields. Additionally, we show that the proposed computational motif can be extended to handle supervised problems, creating class-specific place cells while exhibiting low sample complexity.
Our work takes inspiration from convolutional kernel networks (CKN) @cite_15 @cite_19 . CKNs replace the matrix-vector multiplications used in convolutional networks by kernel feature maps and are used for supervised classification. Their feature maps use Nyström-like approximations. CKNs are also related to radial basis function networks @cite_31 and self-organizing maps @cite_27 .
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_31", "@cite_27" ], "mid": [ "2963766931", "2123872146", "94523489", "2725470024" ], "abstract": [ "In this paper, we introduce a new image representation based on a multilayer kernel machine. Unlike traditional kernel methods where data representation is decoupled from the prediction task, we learn how to shape the kernel with supervision. We proceed by first proposing improvements of the recently-introduced convolutional kernel networks (CKNs) in the context of unsupervised learning; then, we derive backpropagation rules to take advantage of labeled training data. The resulting model is a new type of convolutional neural network, where optimizing the filters at each layer is equivalent to learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We show that our method achieves reasonably competitive performance for image classification on some standard \" deep learning \" datasets such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating the applicability of our approach to a large variety of image-related tasks.", "An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. 
We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.", "Abstract : The relationship between 'learning' in adaptive layered networks and the fitting of data with high dimensional surfaces is discussed. This leads naturally to a picture of 'generalization in terms of interpolation between known data points and suggests a rational approach to the theory of such networks. A class of adaptive networks is identified which makes the interpolation scheme explicit. This class has the property that learning is equivalent to the solution of a set of linear equations. These networks thus represent nonlinear relationships while having a guaranteed learning rule. Great Britain.", "" ] }
1906.01102
2948682407
We posit that hippocampal place cells encode information about future locations under a transition distribution observed as an agent explores a given (physical or conceptual) space. The encoding of information about the current location, usually associated with place cells, then emerges as a necessary step to achieve this broader goal. We formally derive a biologically-inspired neural network from Nystrom kernel approximations and empirically demonstrate that the network successfully approximates transition distributions. The proposed network yields representations that, just like place cells, soft-tile the input space with highly sparse and localized receptive fields. Additionally, we show that the proposed computational motif can be extended to handle supervised problems, creating class-specific place cells while exhibiting low sample complexity.
Our model also shares some similarities with the bag-of-features (BoF) approach, as place cells can be interpreted as soft-quantizers of their input space. @cite_23 proposed a convolutional neural network that incorporates a BoF layer composed of @math -normalized neurons with RBF receptive fields. They use this model for supervised classification.
{ "cite_N": [ "@cite_23" ], "mid": [ "2963828468" ], "abstract": [ "Convolutional Neural Networks (CNNs) are well established models capable of achieving state-of-the-art classification accuracy for various computer vision tasks. However, they are becoming increasingly larger, using millions of parameters, while they are restricted to handling images of fixed size. In this paper, a quantization-based approach, inspired from the well-known Bag-of-Features model, is proposed to overcome these limitations. The proposed approach, called Convolutional BoF (CBoF), uses RBF neurons to quantize the information extracted from the convolutional layers and it is able to natively classify images of various sizes as well as to significantly reduce the number of parameters in the network. In contrast to other global pooling operators and CNN compression techniques the proposed method utilizes a trainable pooling layer that it is end-to-end differentiable, allowing the network to be trained using regular back-propagation and to achieve greater distribution shift invariance than competitive methods. The ability of the proposed method to reduce the parameters of the network and increase the classification accuracy over other state-of-the-art techniques is demonstrated using three image datasets." ] }