Dataset columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1706.00672
2963835147
Abstract We propose a new framework that extends the standard Probability Hypothesis Density (PHD) filter for multiple targets having N ⩾ 2 different types based on Random Finite Set theory, taking into account not only background clutter, but also confusions among detections of different target types, which are in general different in character from background clutter. Under Gaussianity and linearity assumptions, our framework extends the existing Gaussian mixture (GM) implementation of the standard PHD filter to create a N-type GM-PHD filter. The methodology is applied to real video sequences by integrating object detectors’ information into this filter for two scenarios. For both cases, Munkres’s variant of the Hungarian assignment algorithm is used to associate tracked target identities between frames. This approach is evaluated and compared to both raw detection and independent GM-PHD filters using the Optimal Sub-pattern Assignment metric and discrimination rate. This shows the improved performance of our strategy on real video sequences.
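The abstract above uses Munkres's variant of the Hungarian algorithm to associate tracked target identities between frames. As an illustrative sketch only (not the paper's implementation): the Hungarian algorithm finds the minimum-cost one-to-one assignment in O(n^3), and for a small cost matrix the same optimum can be checked by brute force over permutations. The cost values below are made up (e.g. distances between tracked targets and new detections).

```python
from itertools import permutations

def best_assignment(cost):
    """Return (column-for-each-row, total cost) of the minimum-cost assignment."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(best_assignment(cost))   # → ((1, 0, 2), 5)
```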
To address the problem of increasing complexity, Mahler @cite_42 developed a unified framework that directly extends single-target to multi-target tracking by representing multi-target states and observations as RFSs. It estimates the states and cardinality of an unknown and time-varying number of targets in the scene, allows for target birth and death, and handles clutter (false alarms) and missed detections. Rather than propagating the full multi-target posterior, Mahler propagated its first-order moment, called the Probability Hypothesis Density (PHD).
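The defining property of the PHD (intensity) is that its integral over any region of state space equals the expected number of targets in that region. A minimal sketch with toy values (not from the cited work): for a Gaussian-mixture intensity, integrating over the whole state space reduces to summing the component weights.

```python
import math

def expected_targets(weights):
    """Expected cardinality under a GM intensity: the sum of mixture weights."""
    return sum(weights)

def gm_intensity_1d(x, weights, means, stds):
    """Evaluate a 1-D Gaussian-mixture intensity at point x."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, stds))

weights = [0.9, 0.8, 0.3]                      # weights need not sum to 1
means, stds = [0.0, 5.0, 9.0], [1.0, 1.0, 2.0]
print(round(expected_targets(weights), 2))     # → 2.0
```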
{ "cite_N": [ "@cite_42" ], "mid": [ "2014787937" ], "abstract": [ "The theoretically optimal approach to multisensor-multitarget detection, tracking, and identification is a suitable generalization of the recursive Bayes nonlinear filter. Even in single-target problems, this optimal filter is so computationally challenging that it must usually be approximated. Consequently, multitarget Bayes filtering will never be of practical interest without the development of drastic but principled approximation strategies. In single-target problems, the computationally fastest approximate filtering approach is the constant-gain Kalman filter. This filter propagates a first-order statistical moment - the posterior expectation - in the place of the posterior distribution. The purpose of this paper is to propose an analogous strategy for multitarget systems: propagation of a first-order statistical moment of the multitarget posterior. This moment, the probability hypothesis density (PHD), is the function whose integral in any region of state space is the expected number of targets in that region. We derive recursive Bayes filter equations for the PHD that account for multiple sensors, nonconstant probability of detection, Poisson false alarms, and appearance, spawning, and disappearance of targets. We also show that the PHD is a best-fit approximation of the multitarget posterior in an information-theoretic sense." ] }
1706.00672
2963835147
Abstract We propose a new framework that extends the standard Probability Hypothesis Density (PHD) filter for multiple targets having N ⩾ 2 different types based on Random Finite Set theory, taking into account not only background clutter, but also confusions among detections of different target types, which are in general different in character from background clutter. Under Gaussianity and linearity assumptions, our framework extends the existing Gaussian mixture (GM) implementation of the standard PHD filter to create a N-type GM-PHD filter. The methodology is applied to real video sequences by integrating object detectors’ information into this filter for two scenarios. For both cases, Munkres’s variant of the Hungarian assignment algorithm is used to associate tracked target identities between frames. This approach is evaluated and compared to both raw detection and independent GM-PHD filters using the Optimal Sub-pattern Assignment metric and discrimination rate. This shows the improved performance of our strategy on real video sequences.
There are two popular implementation schemes for the PHD filter: the Gaussian mixture (GM-PHD) filter @cite_18 and the Sequential Monte Carlo (SMC) or particle-PHD filter @cite_23 . The GM-PHD filter is preferred for linear (and, by extension, mildly non-linear) dynamic and observation models, and assumes Gaussian stochastic processes @cite_18 . For highly non-linear dynamic and observation models and non-Gaussian stochastic processes, however, the SMC-PHD filter is the better implementation scheme @cite_23 . For example, the GM-PHD filter is used in @cite_15 for tracking pedestrians in video sequences, although with only one target type and a fixed motion model, and in @cite_20 @cite_40 for selective tracking in sparse and dense environments. As an extension, a GM-PHD filter was developed in @cite_8 for maneuvering targets, employing a Jump Markov System (JMS) that switches between several motion models. In contrast, a particle-PHD filter was applied in @cite_9 to allow for more complex motion models and to cope with variation of scale, which has significant effects not just on object motion but also on the detection process. It was also used in @cite_43 , treating high-confidence (strong) and low-confidence (weak) detections separately for better performance.
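Under the linear-Gaussian assumptions of @cite_18 , each Gaussian component of the intensity is propagated with Kalman-style predict/update equations. A minimal illustrative sketch with a scalar state; the model parameters F, Q, H, R below are made-up values, not taken from the cited papers.

```python
def kf_predict(m, P, F=1.0, Q=0.1):
    """Predict a component's mean and variance one step ahead."""
    return F * m, F * P * F + Q

def kf_update(m, P, z, H=1.0, R=0.2):
    """Correct a component with a measurement z."""
    S = H * P * H + R        # innovation covariance
    K = P * H / S            # Kalman gain
    return m + K * (z - H * m), (1.0 - K * H) * P

m, P = kf_predict(0.0, 1.0)          # prior component: mean 0, variance 1
m, P = kf_update(m, P, z=1.0)        # incorporate one measurement at z = 1
print(round(m, 3), round(P, 3))      # → 0.846 0.169
```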
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_9", "@cite_43", "@cite_40", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "2137585588", "2123185183", "2071625890", "2547098537", "2962822995", "2161435744", "1972619391", "" ], "abstract": [ "A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters", "The probability hypothesis density (PHD) filter is an attractive approach to tracking an unknown and time-varying number of targets in the presence of data association uncertainty, clutter, noise, and detection uncertainty. The PHD filter admits a closed-form solution for a linear Gaussian multi-target model. However, this model is not general enough to accommodate maneuvering targets that switch between several models. 
In this paper, we generalize the notion of linear jump Markov systems to the multiple target case to accommodate births, deaths, and switching dynamics. We then derive a closed-form solution to the PHD recursion for the proposed linear Gaussian jump Markov multi-target model. Based on this an efficient method for tracking multiple maneuvering targets that switch between a set of linear Gaussian models is developed. An analytic implementation of the PHD filter using statistical linear regression technique is also proposed for targets that switch between a set of nonlinear models. We demonstrate through simulations that the proposed PHD filters are effective in tracking multiple maneuvering targets.", "We propose a filtering framework for multitarget tracking that is based on the probability hypothesis density (PHD) filter and data association using graph matching. This framework can be combined with any object detectors that generate positional and dimensional information of objects of interest. The PHD filter compensates for missing detections and removes noise and clutter. Moreover, this filter reduces the growth in complexity with the number of targets from exponential to linear by propagating the first-order moment of the multitarget posterior, instead of the full posterior. In order to account for the nature of the PHD propagation, we propose a novel particle resampling strategy and we adapt dynamic and observation models to cope with varying object scales. The proposed resampling strategy allows us to use the PHD filter when a priori knowledge of the scene is not available. Moreover, the dynamic and observation models are not limited to the PHD filter and can be applied to any Bayesian tracker that can handle state-dependent variances. 
Extensive experimental results on a large video surveillance dataset using a standard evaluation protocol show that the proposed filtering framework improves the accuracy of the tracker, especially in cluttered scenes.", "We propose an online multi-target tracker that exploits both high- and low-confidence target detections in a Probability Hypothesis Density Particle Filter framework. High-confidence (strong) detections are used for label propagation and target initialization. Low-confidence (weak) detections only support the propagation of labels, i.e. tracking existing targets. Moreover, we perform data association just after the prediction stage, thus avoiding the need for computationally expensive labeling procedures such as clustering. Finally, we perform sampling by considering the perspective distortion in the target observations. The tracker runs on average at 12 frames per second. Results show that our method outperforms alternative online trackers on the Multiple Object Tracking 2016 and 2015 benchmark datasets in terms of tracking accuracy, false negatives and speed.", "Abstract Tracking a target of interest in both sparse and crowded environments is a challenging problem, not yet successfully addressed in the literature. In this paper, we propose a new long-term visual tracking algorithm, learning discriminative correlation filters and using an online classifier, to track a target of interest in both sparse and crowded video sequences. First, we learn a translation correlation filter using a multi-layer hybrid of convolutional neural networks (CNN) and traditional hand-crafted features. Second, we include a re-detection module for overcoming tracking failures due to long-term occlusions using online SVM and Gaussian mixture probability hypothesis density (GM-PHD) filter. Finally, we learn a scale correlation filter for estimating the scale of a target by constructing a target pyramid around the estimated or re-detected position using the HOG features.
We carry out extensive experiments on both sparse and dense data sets which show that our method significantly outperforms state-of-the-art methods.", "Random finite sets (RFSs) are natural representations of multitarget states and observations that allow multisensor multitarget filtering to fit in the unifying random set framework for data fusion. Although the foundation has been established in the form of finite set statistics (FISST), its relationship to conventional probability is not clear. Furthermore, optimal Bayesian multitarget filtering is not yet practical due to the inherent computational hurdle. Even the probability hypothesis density (PHD) filter, which propagates only the first moment (or PHD) instead of the full multitarget posterior, still involves multiple integrals with no closed forms in general. This article establishes the relationship between FISST and conventional probability that leads to the development of a sequential Monte Carlo (SMC) multitarget filter. In addition, an SMC implementation of the PHD filter is proposed and demonstrated on a number of simulated scenarios. Both of the proposed filters are suitable for problems involving nonlinear nonGaussian dynamics. Convergence results for these filters are also established.", "Tracking multiple moving targets in a video is a challenge because of several factors, including noisy video data, varying number of targets, and mutual occlusion problems. The Gaussian mixture probability hypothesis density (GM-PHD) filter, which aims to recursively propagate the intensity associated with the multi-target posterior density, can overcome the difficulty caused by the data association. This paper develops a multi-target visual tracking system that combines the GM-PHD filter with object detection. First, a new birth intensity estimation algorithm based on entropy distribution and coverage rate is proposed to automatically and accurately track the newborn targets in a noisy video. 
Then, a robust game-theoretical mutual occlusion handling algorithm with an improved spatial color appearance model is proposed to effectively track the targets in mutual occlusion. The spatial color appearance model is improved by incorporating interferences of other targets within the occlusion region. Finally, the experiments conducted on publicly available videos demonstrate the good performance of the proposed visual tracking system.", "" ] }
1706.00637
2621106696
While several matrix factorization (MF) and tensor factorization (TF) models have been proposed for knowledge base (KB) inference, they have rarely been compared across various datasets. Is there a single model that performs well across datasets? If not, what characteristics of a dataset determine the performance of MF and TF models? Is there a joint TF+MF model that performs robustly on all datasets? We perform an extensive evaluation to compare popular KB inference models across popular datasets in the literature. In addition to answering the questions above, we remove a limitation in the standard evaluation protocol for MF models, propose an extension to MF models so that they can better handle out-of-vocabulary (OOV) entity pairs, and develop a novel combination of TF and MF models. We also analyze and explain the results based on models and dataset characteristics. Our best model is robust, and obtains strong results across all datasets.
Traditional methods for inference over KBs include random walks over knowledge graphs @cite_9 , natural logic inference @cite_28 , and statistical relational learning models such as Markov Logic Networks, Bayesian Logic Programs, and Probabilistic Soft Logic @cite_3 @cite_30 @cite_36 . These need (or at least benefit from) background knowledge of inference rules, predominantly generated via extended distributional similarity @cite_34 @cite_37 @cite_24 @cite_31 @cite_6 @cite_27 @cite_33 .
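A hypothetical toy sketch in the spirit of random-walk inference over knowledge graphs (@cite_9): follow a sequence of typed edges from a source entity, and treat the entities reached by a given relation path as evidence for a target relation. All entities, relations, and triples below are made up for illustration.

```python
from collections import defaultdict

# toy KB of (head, relation, tail) triples
triples = [("alice", "born_in", "paris"), ("paris", "city_of", "france"),
           ("bob", "born_in", "lyon"), ("lyon", "city_of", "france")]

graph = defaultdict(list)
for h, r, t in triples:
    graph[h].append((r, t))

def follow(entity, path):
    """Entities reachable from `entity` via the relation sequence `path`."""
    frontier = {entity}
    for rel in path:
        frontier = {t for e in frontier for (r, t) in graph[e] if r == rel}
    return frontier

# the path (born_in, city_of) behaves like a feature for a "nationality" relation
print(follow("alice", ["born_in", "city_of"]))   # → {'france'}
```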
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_33", "@cite_28", "@cite_36", "@cite_9", "@cite_3", "@cite_6", "@cite_24", "@cite_27", "@cite_31", "@cite_34" ], "mid": [ "", "", "", "", "2251616524", "1756422141", "1984626158", "2251074656", "1483236033", "", "2151502664", "" ], "abstract": [ "", "", "", "", "A standard pipeline for statistical relational learning involves two steps: one first constructs the knowledge base (KB) from text, and then performs the learning and reasoning tasks using probabilistic first-order logics. However, a key issue is that information extraction (IE) errors from text affect the quality of the KB, and propagate to the reasoning task. In this paper, we propose a statistical relational learning model for joint information extraction and reasoning. More specifically, we incorporate context-based entity extraction with structure learning (SL) in a scalable probabilistic logic framework. We then propose a latent context invention (LCI) approach to improve the performance. In experiments, we show that our approach outperforms state-of-the-art baselines over three real-world Wikipedia datasets from multiple domains; that joint learning and inference for IE and SL significantly improve both tasks; that latent context invention further improves the results.", "We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). 
We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks.", "Most Web-based Q A systems work by finding pages that contain an explicit answer to a question. These systems are helpless if the answer has to be inferred from multiple sentences, possibly on different pages. To solve this problem, we introduce the Holmes system, which utilizes textual inference (TI) over tuples extracted from text. Whereas previous work on TI (e.g., the literature on textual entailment) has been applied to paragraph-sized texts, Holmes utilizes knowledge-based model construction to scale TI to a corpus of 117 million Web pages. Given only a few minutes, Holmes doubles recall for example queries in three disparate domains (geography, business, and nutrition). Importantly, Holmes's runtime is linear in the size of its input corpus due to a surprising property of many textual relations in the Web corpus---they are \"approximately\" functional in a well-defined sense.", "Relational phrases (e.g., “got married to”) and their hypernyms (e.g., “is a relative of”) are central for many tasks including question answering, open information extraction, paraphrasing, and entailment detection. This has motivated the development of several linguistic resources (e.g. DIRT, PATTY, and WiseNet) which systematically collect and organize relational phrases. These resources have demonstrable practical benefits, but are each limited due to noise, sparsity, or size. We present a new general-purpose method, RELLY, for constructing a large hypernymy graph of relational phrases with high-quality subsumptions using collective probabilistic programming techniques. 
Our graph induction approach integrates small high-precision knowledge bases together with large automatically curated resources, and reasons collectively to combine these resources into a consistent graph. Using RELLY, we construct a high-coverage, high-precision hypernymy graph consisting of 20K relational phrases and 35K hypernymy links. Our evaluation indicates a hypernymy link precision of 78%, and demonstrates the value of this resource for a document-relevance ranking task.", "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.", "", "Recent advances in information extraction have led to huge knowledge bases (KBs), which capture knowledge in a machine-readable format. Inductive Logic Programming (ILP) can be used to mine logical rules from the KB. These rules can help deduce and add missing knowledge to the KB. While ILP is a mature field, mining logical rules from KBs is different in two aspects: First, current rule mining systems are easily overwhelmed by the amount of data (state-of-the art systems cannot even run on today's KBs). Second, ILP usually requires counterexamples. KBs, however, implement the open world assumption (OWA), meaning that absent data cannot be used as counterexamples. In this paper, we develop a rule mining model that is explicitly tailored to support the OWA scenario.
It is inspired by association rule mining and introduces a novel measure for confidence. Our extensive experiments show that our approach outperforms state-of-the-art approaches in terms of precision and coverage. Furthermore, our system, AMIE, mines rules orders of magnitude faster than state-of-the-art approaches.", "" ] }
1706.00637
2621106696
While several matrix factorization (MF) and tensor factorization (TF) models have been proposed for knowledge base (KB) inference, they have rarely been compared across various datasets. Is there a single model that performs well across datasets? If not, what characteristics of a dataset determine the performance of MF and TF models? Is there a joint TF+MF model that performs robustly on all datasets? We perform an extensive evaluation to compare popular KB inference models across popular datasets in the literature. In addition to answering the questions above, we remove a limitation in the standard evaluation protocol for MF models, propose an extension to MF models so that they can better handle out-of-vocabulary (OOV) entity pairs, and develop a novel combination of TF and MF models. We also analyze and explain the results based on models and dataset characteristics. Our best model is robust, and obtains strong results across all datasets.
Similarly, other TF models exist, for example Parafac @cite_26 , Rescal @cite_25 and NTN @cite_17 . These are older models that have been shown to be outperformed by the models evaluated in this paper. More recent models have also been introduced, such as one using holographic embeddings @cite_13 and another with asymmetric embeddings using complex vectors @cite_10 ; it would be valuable to compare these rigorously as well. The learned embeddings can exploit additional information such as entity types @cite_2 , and have been used to mine logical rules @cite_11 and for schema induction @cite_15 .
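As an illustration of the asymmetric complex-vector embeddings of @cite_10 (ComplEx-style), the score of a triple is the real part of a trilinear product, score(s, r, o) = Re(sum_k e_s[k] * w_r[k] * conj(e_o[k])); swapping subject and object can change the score, so asymmetric relations can be represented. The embeddings below are made-up toy values, not learned parameters.

```python
def complex_score(e_s, w_r, e_o):
    """Re(<e_s, w_r, conj(e_o)>) over component-wise complex products."""
    return sum((a * b * c.conjugate()).real for a, b, c in zip(e_s, w_r, e_o))

e_s = [1 + 1j, 0.5 - 0.5j]   # subject entity embedding (toy)
w_r = [0.5 + 0j, 1 + 1j]     # relation embedding (toy)
e_o = [1 - 1j, 0.5 + 0.5j]   # object entity embedding (toy)

# swapping subject and object flips the score here: the relation is asymmetric
print(complex_score(e_s, w_r, e_o))   # → 0.5
print(complex_score(e_o, w_r, e_s))   # → -0.5
```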
{ "cite_N": [ "@cite_13", "@cite_26", "@cite_11", "@cite_2", "@cite_15", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2949972983", "2121739212", "", "", "2369012442", "2432356473", "", "2127426251" ], "abstract": [ "Learning embeddings of entities and relations is an efficient and versatile method to perform machine learning on relational data such as knowledge graphs. In this work, we propose holographic embeddings (HolE) to learn compositional vector space representations of entire knowledge graphs. The proposed method is related to holographic models of associative memory in that it employs circular correlation to create compositional representations. By using correlation as the compositional operator HolE can capture rich interactions but simultaneously remains efficient to compute, easy to train, and scalable to very large datasets. In extensive experiments we show that holographic embeddings are able to outperform state-of-the-art methods for link prediction in knowledge graphs and relational learning benchmark datasets.", "Simple structure and other common principles of factor rotation do not in general provide strong grounds for attributing explanatory significance to the factors which they select. In contrast, it is shown that an extension of Cattell's principle of rotation to Proportional Profiles (PP) offers a basis for determining explanatory factors for three-way or higher order multi-mode data. Conceptual models are developed for two basic patterns of multi-mode data variation, system- and object-variation, and PP analysis is found to apply in the system-variation case. Although PP was originally formulated as a principle of rotation to be used with classic two-way factor analysis, it is shown to embody a latent three-mode factor model, which is here made explicit and generalized from two to N \"parallel occasions\". As originally formulated, PP rotation was restricted to orthogonal factors.
The generalized PP model is demonstrated to give unique \"correct\" solutions with oblique, non-simple structure, and even non-linear factor structures. A series of tests, conducted with synthetic data of known factor composition, demonstrate the capabilities of linear and non-linear versions of the model, provide data on the minimal necessary conditions of uniqueness, and reveal the properties of the analysis procedures when these minimal conditions are not fulfilled. In addition, a mathematical proof is presented for the uniqueness of the solution given certain conditions on the data. Three-mode PP factor analysis is applied to a three-way set of real data consisting of the fundamental and first three formant frequencies of 11 persons saying 8 vowels. A unique solution is extracted, consisting of three factors which are highly meaningful and consistent with prior knowledge and theory concerning vowel quality. The relationships between the three-mode PP model and Tucker's multi-modal model, McDonald's non-linear model and Carroll and Chang's multi-dimensional scaling model are explored.", "", "", "Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. 
Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).", "In statistical relational learning, the link prediction problem is key to automatically understand the structure of large knowledge bases. As in previous studies, we propose to solve this problem through latent factorization. However, here we make use of complex valued embeddings. The composition of complex embeddings can handle a large variety of binary relations, among them symmetric and antisymmetric relations. Compared to state-of-the-art models such as Neural Tensor Network and Holographic Embeddings, our approach based on complex embeddings is arguably simpler, as it only uses the Hermitian dot product, the complex counterpart of the standard dot product between real vectors. Our approach is scalable to large datasets as it remains linear in both space and time, while consistently outperforming alternative approaches on standard link prediction benchmarks.", "", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. 
We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively." ] }
1706.00139
2620635248
Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.
More recently, RNN encoder-decoder models with an attention mechanism @cite_3 have shown improved performance on various tasks. One line of work proposed a review network for image captioning, which reviews all the information encoded by the encoder and produces a compact thought vector. Another proposed an RNN encoder-decoder model with two attention layers to jointly train content selection and surface realization. Closest to our work is an attentive encoder-decoder generator that computes the attention mechanism over slot-value pairs; it showed domain scalability when only a very limited amount of data is available.
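A toy sketch of the soft attention step of @cite_3 : score each encoder state against the current decoder state, softmax-normalise the scores into weights, and form the context vector as the weighted sum of encoder states. Dot-product scoring is an assumption here; the cited model uses a small feed-forward network as the scorer.

```python
import math

def attend(decoder_state, encoder_states):
    """Return (attention weights, context vector) for one decoding step."""
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    mx = max(scores)                                  # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]               # softmax over positions
    context = [sum(w * h[k] for w, h in zip(weights, encoder_states))
               for k in range(len(encoder_states[0]))]
    return weights, context

weights, context = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print([round(w, 3) for w in weights])   # → [0.731, 0.269]
```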
{ "cite_N": [ "@cite_3" ], "mid": [ "2133564696" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition." ] }
1706.00139
2620635248
Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets.
Moving from a limited-domain dialogue system to an open-domain dialogue system raises several issues. It is therefore important to build an open-domain dialogue system that can reuse as much as possible of the capabilities already acquired in other domains. There have been several works tackling this problem, such as @cite_4 using RNN-based networks for multi-domain dialogue state tracking, @cite_25 training multi-domain generators via multiple adaptation steps, or @cite_20 @cite_11 adapting SDS components to new domains.
{ "cite_N": [ "@cite_4", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2115090890", "2951718774", "1492935830", "2157098039" ], "abstract": [ "Dialog state tracking is a key component of many modern dialog systems, most of which are designed with a single, well-defined domain in mind. This paper shows that dialog data drawn from different dialog domains can be used to train a general belief tracking model which can operate across all of these domains, exhibiting superior performance to each of the domain-specific models. We propose a training procedure which uses out-of-domain data to initialise belief tracking models for entirely new domains. This procedure leads to improvements in belief tracking performance regardless of the amount of in-domain data available for training the model.", "Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.", "Statistical dialogue systems offer the potential to reduce costs by learning policies automatically on-line, but are not designed to scale to large open-domains. This paper proposes a hierarchical distributed dialogue architecture in which policies are organised in a class hierarchy aligned to an underlying knowledge graph. This allows a system to be deployed using a modest amount of data to train a small set of generic policies. As further data is collected, generic policies can be adapted to give in-domain performance. Using Gaussian process-based reinforcement learning, it is shown that within this framework generic policies can be constructed which provide acceptable user performance, and better performance than can be obtained using under-trained domain specific policies. It is also shown that as sufficient in-domain data becomes available, it is possible to seamlessly improve performance, without subjecting users to unacceptable behaviour during the adaptation period and without limiting the final performance compared to policies trained from scratch.", "Statistical approaches to dialog state tracking synthesize information across multiple turns in the dialog, overcoming some speech recognition errors. When training a dialog state tracker, there is typically only a small corpus of well-matched dialog data available. However, often there is a large corpus of mis-matched but related data ‐ perhaps pertaining to different semantic concepts, or from a different dialog system. It would be desirable to use this related dialog data to supplement the small corpus of well-matched dialog data. This paper addresses this task as multi-domain learning, presenting 3 methods which synthesize data from different slots and different dialog systems. Since deploying a new dialog state tracker often changes the resulting dialogs in ways that are difficult to predict, we study how well each method generalizes to unseen distributions of dialog data. Our main result is the finding that a simple method for multi-domain learning substantially improves performance in highly mis-matched conditions." ] }
1706.00290
2620869433
End-to-end training of automated speech recognition (ASR) systems requires massive data and compute resources. We explore transfer learning based on model adaptation as an approach for training ASR models under constrained GPU memory, throughput and training data. We conduct several systematic experiments adapting a Wav2Letter convolutional neural network originally trained for English ASR to the German language. We show that this technique allows faster training on consumer-grade resources while requiring less training data in order to achieve the same accuracy, thereby lowering the cost of training ASR models in other languages. Model introspection revealed that small adaptations to the network's weights were sufficient for good performance, especially for inner layers.
One such method, known as transfer learning, is a machine learning technique for enhancing a model's performance in a data-scarce domain by cross-training on data from other domains or tasks. There are several kinds of transfer learning. The predominant one applied to ASR is multi-task learning @cite_7 , which involves training a base model on multiple languages (and tasks) simultaneously. While this achieves some competitive results @cite_4 @cite_3 , it still requires large amounts of data to yield robust improvements @cite_16 .
{ "cite_N": [ "@cite_3", "@cite_4", "@cite_7", "@cite_16" ], "mid": [ "2286443923", "2025401819", "2278264165", "1978660892" ], "abstract": [ "Copyright © 2014 ISCA. Developing high-performance speech processing systems for low-resource languages is very challenging. One approach to address the lack of resources is to make use of data from multiple languages. A popular direction in recent years is to train a multi-language bottleneck DNN. Language dependent and or multi-language (all training languages) Tandem acoustic models (AM) are then trained. This work considers a particular scenario where the target language is unseen in multi-language training and has limited language model training data, a limited lexicon, and acoustic training data without transcriptions. A zero acoustic resources case is first described where a multilanguage AM is directly applied, as a language independent AM (LIAM), to an unseen language. Secondly, in an unsupervised approach a LIAM is used to obtain hypotheses for the target language acoustic data transcriptions which are then used in training a language dependent AM. 3 languages from the IARPA Babel project are used for assessment: Vietnamese, Haitian Creole and Bengali. Performance of the zero acoustic resources system is found to be poor, with keyword spotting at best 60% of language dependent performance. Unsupervised language dependent training yields performance gains. For one language (Haitian Creole) the Babel target is achieved on the in-vocabulary data.", "We propose a multitask learning (MTL) approach to improve low-resource automatic speech recognition using deep neural networks (DNNs) without requiring additional language resources. We first demonstrate that the performance of the phone models of a single low-resource language can be improved by training its grapheme models in parallel under the MTL framework. If multiple low-resource languages are trained together, we investigate learning a set of universal phones (UPS) as an additional task again in the MTL framework to improve the performance of the phone models of all the involved languages. In both cases, the heuristic guideline is to select a task that may exploit extra information from the training data of the primary task(s). In the first method, the extra information is the phone-to-grapheme mappings, whereas in the second method, the UPS helps to implicitly map the phones of the multiple languages among each other. In a series of experiments using three low-resource South African languages in the Lwazi corpus, the proposed MTL methods obtain significant word recognition gains when compared with single-task learning (STL) of the corresponding DNNs or ROVER that combines results from several STL-trained DNNs.", "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of 'model adaptation'. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.", "Today's speech recognition technology is mature enough to be useful for many practical applications. In this context, it is of paramount importance to train accurate acoustic models for many languages within given resource constraints such as data, processing power, and time. Multilingual training has the potential to solve the data issue and close the performance gap between resource-rich and resource-scarce languages. Neural networks lend themselves naturally to parameter sharing across languages, and distributed implementations have made it feasible to train large networks. In this paper, we present experimental results for cross- and multi-lingual network training of eleven Romance languages on 10k hours of data in total. The average relative gains over the monolingual baselines are 4%/2% (data-scarce/data-rich languages) for cross- and 7%/2% for multi-lingual training. However, the additional gain from jointly training the languages on all data comes at an increased training time of roughly four weeks, compared to two weeks (monolingual) and one week (crosslingual)." ] }
1706.00290
2620869433
End-to-end training of automated speech recognition (ASR) systems requires massive data and compute resources. We explore transfer learning based on model adaptation as an approach for training ASR models under constrained GPU memory, throughput and training data. We conduct several systematic experiments adapting a Wav2Letter convolutional neural network originally trained for English ASR to the German language. We show that this technique allows faster training on consumer-grade resources while requiring less training data in order to achieve the same accuracy, thereby lowering the cost of training ASR models in other languages. Model introspection revealed that small adaptations to the network's weights were sufficient for good performance, especially for inner layers.
In terms of how much data is needed for effective retraining, a much more promising type of transfer learning is model adaptation @cite_7 . With this technique, we first train a model on one (or more) languages, then retrain all or parts of it on another language that was unseen during the first training round. The parameters learned from the first language serve as a starting point, similar in effect to pre-training. This technique has been applied by first learning an MLP on multiple languages with relatively abundant data, such as English, and then obtaining competitive results on languages like Czech and Vietnamese, for which far less data is available.
{ "cite_N": [ "@cite_7" ], "mid": [ "2278264165" ], "abstract": [ "Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language, with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual), and is traditionally studied in the name of 'model adaptation'. Recent advance in deep learning shows that transfer learning becomes much easier and more effective with high-level abstract features learned by deep models, and the 'transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes some recent prominent research towards this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field." ] }
1706.00350
2621114650
This paper considers the slotted ALOHA protocol in a communication channel shared by N users. It is assumed that the channel has the multiple-packet reception (MPR) capability that allows the correct reception of up to M ( @math ) time-overlapping packets. To evaluate the reliability in the scenario that a packet needs to be transmitted within a strict delivery deadline D ( @math ) (in unit of slot) since its arrival at the head of queue, we consider the successful delivery probability (SDP) of a packet as performance metric of interest. We derive the optimal transmission probability that maximizes the SDP for any @math and any @math , and show it can be computed by a fixed-point iteration. In particular, the case for D = 1 (i.e., throughput maximization) is first completely addressed in this paper. Based on these theoretical results, for real-life scenarios where N may be unknown and changing, we develop a distributed algorithm that enables each user to tune its transmission probability at runtime according to the estimate of N. Simulation results show that the proposed algorithm is effective in dynamic scenarios, with near-optimal performance.
The first attempt to study slotted ALOHA under MPR was made in @cite_27 @cite_10 , which proposed the symmetric MPR channel model and analyzed its stability properties under an infinite-user assumption. @cite_24 extended the stability study to finite-user systems without imposing any limitation on the MPR model, and additionally investigated the average delay in capture channels. @cite_20 further established the throughput and stability regions for a finite population over a standard MPR channel, in which simultaneous packet transmissions are not helpful for the reception of any particular group of packets.
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_10", "@cite_20" ], "mid": [ "", "1977809285", "2070391234", "2105747968" ], "abstract": [ "", "The stability of the Aloha random-access algorithm in an infinite-user slotted channel with multipacket-reception capability is considered. This channel is a generalization of the usual collision channel, in that it allows the correct reception of one or more packets involved in a collision. The number of successfully received packets in each slot is modeled as a random variable which depends exclusively on the number of simultaneously attempted transmissions. This general model includes as special cases channels with capture, noise, and code-division multiplexing. It is shown by drift analysis that the channel backlog Markov chain is ergodic if the packet-arrival rate is less than the expected number of packets successfully received in a collision of n as n goes to infinity. The properties of the backlog in the nonergodicity region are examined.", "A decentralized control algorithm is sought that maximizes the stability region of the infinite-user slotted multipacket channel and is easily implementable. To this end, the perfect state information case in which the stations can use the instantaneous value of the backlog to compute the retransmission probability is studied first. The best throughput possible for a decentralized control protocol is obtained, as well as an algorithm that achieves it. These results are then applied to derive a control scheme when the backlog is unknown, which is the case of practical relevance. This scheme, based on a binary feedback, is shown to be optimal, given some restrictions on the channel multipacket reception capability.", "This paper studies finite-terminal random multiple access over the standard multipacket reception (MPR) channel. We characterize the relations among the throughput region of random multiple access, the capacity region of multiple access without code synchronization, and the stability region of ALOHA protocol. In the first part of the paper, we show that if the MPR channel is standard, the throughput region of random multiple access is coordinate convex. We then study the information capacity region of multiple access without code synchronization and feedback. Inner and outer bounds to the capacity region are derived. We show that both the inner and the outer bounds converge asymptotically to the throughput region. In the second part of the paper, we study the stability region of finite-terminal ALOHA multiple access. For a class of packet arrival distributions, we demonstrate that the stationary distribution of the queues possesses positive and strong positive correlation properties, which consequently yield an outer bound to the stability region. We also show the major challenge in obtaining the closure of the stability region is due to the lack of sensitivity analysis results with respect to the transmission probabilities. Particularly, if a conjectured \"sensitivity monotonicity\" property held for the stationary distribution of the queues, then equivalence between the closure of the stability region and the throughput region follows as a direct consequence, irrespective of the packet arrival distributions." ] }
1706.00350
2621114650
This paper considers the slotted ALOHA protocol in a communication channel shared by N users. It is assumed that the channel has the multiple-packet reception (MPR) capability that allows the correct reception of up to M ( @math ) time-overlapping packets. To evaluate the reliability in the scenario that a packet needs to be transmitted within a strict delivery deadline D ( @math ) (in unit of slot) since its arrival at the head of queue, we consider the successful delivery probability (SDP) of a packet as performance metric of interest. We derive the optimal transmission probability that maximizes the SDP for any @math and any @math , and show it can be computed by a fixed-point iteration. In particular, the case for D = 1 (i.e., throughput maximization) is first completely addressed in this paper. Based on these theoretical results, for real-life scenarios where N may be unknown and changing, we develop a distributed algorithm that enables each user to tune its transmission probability at runtime according to the estimate of N. Simulation results show that the proposed algorithm is effective in dynamic scenarios, with near-optimal performance.
After the aforementioned studies of various generalized MPR channels, the throughput performance of slotted ALOHA over an @math -user MPR channel has received much attention recently. Gau @cite_14 @cite_16 derived the saturation and non-saturation throughput for finite-user cases. To demonstrate the capacity enhancement, @cite_17 proved that the maximum achievable throughput increases superlinearly with @math for both the finite-user case with saturated traffic and the infinite-user case with random traffic, and @cite_23 further showed that superlinear scaling also holds under bounded delay-moment requirements. Following @cite_17 , to fully utilize the @math -user MPR channel, @cite_6 derived the optimal transmission probability that maximizes the saturation throughput in the finite-user case under some unproved technical conditions. To the best of our knowledge, little work has been done to investigate the reliability issue over an @math -user MPR channel; our paper is an attempt in this direction.
{ "cite_N": [ "@cite_14", "@cite_6", "@cite_23", "@cite_16", "@cite_17" ], "mid": [ "", "2163569758", "2155547821", "2164902512", "2133114791" ], "abstract": [ "", "This paper considers random access protocols with multipacket reception (MPR), which include both slotted-Aloha and slotted τ-persistent CSMA protocols. For both protocols, each node makes a transmission attempt in a slot with a given probability. The goals of this paper are to derive the optimal transmission probability maximizing a system throughput for both protocols and to develop a simple random access protocol with MPR, which achieves a system throughput close to the maximum value. To this end, we first obtain the optimal transmission probability of a node in the slotted-Aloha protocol. The result provides a useful guideline to help us develop a simple distributed algorithm for estimating the number of active nodes. We then obtain the optimal transmission probability in the τ-persistent CSMA protocol. An in-depth study on the relation between the optimal transmission probabilities in both protocols shows that under certain conditions the optimal transmission probability in the slotted-Aloha protocol is a good approximation for the τ-persistent CSMA protocol. Based on this result, we propose a simple τ-persistent CSMA protocol with MPR which dynamically adjusts the transmission probability τ depending on the estimated number of active nodes, and thus can achieve a system throughput close to the maximum value.", "With the rapid proliferation of broadband wireless services, it is of paramount importance to understand how fast data can be sent through a wireless local area network (WLAN). Thanks to a large body of research following the seminal work of Bianchi, WLAN throughput under saturated traffic condition has been well understood. By contrast, prior investigations on throughput performance under unsaturated traffic condition was largely based on phenomenological observations, which lead to a common misconception that WLAN can support a traffic load as high as saturation throughput, if not higher, under nonsaturation condition. In this paper, we show through rigorous analysis that this misconception may result in unacceptable quality of service: mean packet delay and delay jitter may approach infinity even when the traffic load is far below the saturation throughput. Hence, saturation throughput is not a sound measure of WLAN capacity under nonsaturation condition. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require finite mean delay and delay jitter, respectively. Our earlier work proved that in a WLAN with multi-packet reception (MPR) capability, saturation throughput scales superlinearly with the MPR capability of the network. This paper extends the investigation to the nonsaturation case and shows that superlinear scaling also holds for SBMD and SBDJ throughputs. Our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for WLAN under both saturation and nonsaturation conditions.", "In this paper a new multiple access scheme dubbed contention resolution diversity slotted ALOHA (CRDSA) is introduced and its performance and implementation are thoroughly analyzed. The scheme combines diversity transmission of data bursts with efficient interference cancellation techniques. It is shown that CRDSA largely outperforms the classical slotted ALOHA (SA) technique in terms of throughput under equal packet loss ratio conditions (e.g. 17-fold improvement at packet loss ratio = 2 · 10^-2). CRDSA allows to boost the performance of random access (RA) channels in the return link of interactive satellite networks, making RA very efficient and providing low latency for the transmission of small size sparse packets. Implementation-wise it is shown that the CRDSA technique can be easily integrated in systems equipped with digital burst demodulators.", "Due to its simplicity and cost efficiency, wireless local area network (WLAN) enjoys unique advantages in providing high-speed and low-cost wireless services in hot spots and indoor environments. Traditional WLAN medium-access-control (MAC) protocols assume that only one station can transmit at a time: simultaneous transmissions of more than one station cause the destruction of all packets involved. By exploiting recent advances in PHY-layer multiuser detection (MUD) techniques, it is possible for a receiver to receive multiple packets simultaneously. This paper argues that such multipacket reception (MPR) capability can greatly enhance the capacity of future WLANs. In addition, the paper provides the MAC-layer and PHY-layer designs needed to achieve the improved capacity. First, to demonstrate MPR as a powerful capacity-enhancement technique, we prove a \"superlinearity\" result, which states that the system throughput per unit cost increases as the MPR capability increases. Second, we show that the commonly deployed binary exponential backoff (BEB) algorithm in today's WLAN MAC may not be optimal in an MPR system, and the optimal backoff factor increases with the MPR capability, the number of packets that can be received simultaneously. Third, based on the above insights, we design a joint MAC-PHY layer protocol for an IEEE 802.11-like WLAN that incorporates advanced PHY-layer signal processing techniques to implement MPR." ] }
1706.00046
2621220797
We propose to focus on the problem of discovering neural network architectures efficient in terms of both prediction quality and cost. For instance, our approach is able to solve the following tasks: learn a neural network able to predict well in less than 100 milliseconds or learn an efficient model that fits in a 50 Mb memory. Our contribution is a novel family of models called Budgeted Super Networks (BSN). They are learned using gradient descent techniques applied on a budgeted learning objective function which integrates a maximum authorized cost, while making no assumption on the nature of this cost. We present a set of experiments on computer vision problems and analyze the ability of our technique to deal with three different costs: the computation cost, the memory consumption cost and a distributed computation cost. We particularly show that our model can discover neural network architectures that have a better accuracy than the ResNet and Convolutional Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
One of the first approaches to learning efficient models is to compress a trained network, typically by pruning some of its connections. The earliest such work is certainly the Optimal Brain Surgeon @cite_32 , which removes weights from a classical neural network. Network compression can also be seen as a way to speed up a particular architecture, for example by quantizing the weights of the network @cite_23 , or by combining pruning and quantization @cite_27 . Other algorithms rely on hardware-efficient operations that allow a high speedup @cite_4 .
{ "cite_N": [ "@cite_27", "@cite_4", "@cite_32", "@cite_23" ], "mid": [ "2119144962", "2613065256", "2125389748", "587794757" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "Attentional sequence-to-sequence models have become the new standard for machine translation, but one challenge of such models is a significant increase in training and decoding cost compared to phrase-based systems. Here, we focus on efficient decoding, with a goal of achieving accuracy close to the state-of-the-art in neural machine translation (NMT), while achieving CPU decoding speed throughput close to that of a phrasal decoder. We approach this problem from two angles: First, we describe several techniques for speeding up an NMT beam search decoder, which obtain a 4.4x speedup over a very efficient baseline decoder without changing the decoder output. Second, we propose a simple but powerful network architecture which uses an RNN (GRU/LSTM) layer at bottom, followed by a series of stacked fully-connected layers applied at every timestep. This architecture achieves similar accuracy to a deep recurrent model, at a small fraction of the training and decoding cost. By combining these techniques, our best system achieves a very competitive accuracy of 38.3 BLEU on WMT English-French NewsTest2014, while decoding at 100 words/sec on single-threaded CPU. We believe this is the best published accuracy/speed trade-off of an NMT system.", "We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization, simplify networks, reduce hardware or storage requirements, increase the speed of further training, and in some cases enable rule extraction. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Solla, 1990], which often remove the wrong weights. OBS permits the pruning of more weights than other methods (for the same error on the training set), and thus yields better generalization on test data. Crucial to OBS is a recursion relation for calculating the inverse Hessian matrix H-1 from training data and structural information of the net. OBS permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weight decay on three benchmark MONK's problems [, 1991]. Of OBS, Optimal Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987] used 18,000 weights in their NETtalk network, we used OBS to prune a network to just 1560 weights, yielding better generalization.", "Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware." ] }
1706.00046
2621220797
We propose to focus on the problem of discovering neural network architectures efficient in terms of both prediction quality and cost. For instance, our approach is able to solve the following tasks: learn a neural network able to predict well in less than 100 milliseconds or learn an efficient model that fits in a 50 Mb memory. Our contribution is a novel family of models called Budgeted Super Networks (BSN). They are learned using gradient descent techniques applied on a budgeted learning objective function which integrates a maximum authorized cost, while making no assumption on the nature of this cost. We present a set of experiments on computer vision problems and analyze the ability of our technique to deal with three different costs: the computation cost, the memory consumption cost and a distributed computation cost. We particularly show that our model can discover neural network architectures that have a better accuracy than the ResNet and Convolutional Neural Fabrics architectures on CIFAR-10 and CIFAR-100, at a lower cost.
Architecture improvements have been widely used in CNNs to improve the cost efficiency of network components; examples include the bottleneck units in the ResNet model @cite_1 , the use of depthwise separable convolutions in Xception @cite_26 and the lightweight MobileNets @cite_2 , or the combination of pointwise group convolution and channel shuffle in ShuffleNet @cite_21 . A first example of end-to-end approaches is the use of quantization at training time: several authors trained models using binary weight quantization coupled with full-precision arithmetic operations @cite_19 , @cite_25 . Recently, @cite_7 proposed a method using half-precision floating-point numbers during training. Another technique, proposed by @cite_6 , @cite_0 and used in @cite_3 @cite_29 , is knowledge distillation, which consists of training a smaller network to imitate the outputs of a larger network. Other approaches are dynamic networks, which conditionally select which modules to execute in order to respect a budget objective @cite_30 @cite_17 @cite_10 @cite_13 @cite_28 .
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_26", "@cite_7", "@cite_28", "@cite_29", "@cite_21", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_19", "@cite_2", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2592790041", "2179423374", "2951583185", "2763421725", "", "", "", "2949650786", "1821462560", "", "1690739335", "2963114950", "2612445135", "2598097916", "2729503159", "" ], "abstract": [ "", "Deep learning has become the state-of-art tool in many applications, but the evaluation and training of deep models can be time-consuming and computationally expensive. The conditional computation approach has been proposed to tackle this problem (, 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. In this paper, we use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as a reinforcement learning problem. We propose a learning scheme motivated by computation speed, capturing the idea of wanting to have parsimonious activations while maintaining prediction accuracy. We apply a policy gradient algorithm for learning policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. 
This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "Deep neural networks have enabled progress in a wide variety of applications. Growing the size of the neural network typically results in improved accuracy. As model sizes grow, the memory and compute requirements for training these models also increases. We introduce a technique to train deep neural networks using half precision floating point numbers. In our technique, weights, activations and gradients are stored in IEEE half-precision format. Half-precision floating numbers have limited numerical range compared to single-precision numbers. We propose two techniques to handle this loss of information. Firstly, we recommend maintaining a single-precision copy of the weights that accumulates the gradients after each optimizer step. This single-precision copy is rounded to half-precision format during training. Secondly, we propose scaling the loss appropriately to handle the loss of information with half-precision gradients. We demonstrate that this approach works for a wide variety of models including convolution neural networks, recurrent neural networks and generative adversarial networks. This technique works for large scale models with more than 100 million parameters trained on large datasets. Using this approach, we can reduce the memory consumption of deep learning models by nearly 2x. 
In future processors, we can also expect a significant computation speedup using half-precision hardware units.", "", "", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. 
Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "", "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. 
For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. 
We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.", "", "Neural network acoustic models have significantly advanced state of the art speech recognition over the past few years. However, they are usually computationally expensive due to the large number of matrix-vector multiplications and nonlinearity operations. Neural network models also require significant amounts of memory for inference because of the large model size. For these two reasons, it is challenging to deploy neural network based speech recognizers on resource-constrained platforms such as embedded devices. This paper investigates the use of binary weights and activations for computation and memory efficient neural network acoustic models. Compared to real-valued weight matrices, binary weights require much fewer bits for storage, thereby cutting down the memory footprint. Furthermore, with binary weights or activations, the matrix-vector multiplications are turned into addition and subtraction operations, which are computationally much faster and more energy efficient for hardware platforms. In this paper, we study the applications of binary weights and activations for neural network acoustic modeling, reporting encouraging results on the WSJ and AMI corpora.", "" ] }
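The depthwise separable convolutions used by Xception and MobileNets in the record above trade one dense convolution for a depthwise plus pointwise pair; the cost reduction can be verified with simple multiply-add counts. The layer dimensions below are illustrative, not taken from either paper:

```python
def conv_mult_adds(dk, m, n, df):
    """Multiply-adds of a standard dk x dk convolution with m input
    channels and n output channels on a df x df feature map."""
    return dk * dk * m * n * df * df

def sep_conv_mult_adds(dk, m, n, df):
    """Depthwise separable version: a dk x dk depthwise convolution
    followed by a 1x1 pointwise convolution."""
    depthwise = dk * dk * m * df * df
    pointwise = m * n * df * df
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 512 -> 512 channels, 14x14 feature map.
std = conv_mult_adds(3, 512, 512, 14)
sep = sep_conv_mult_adds(3, 512, 512, 14)
ratio = sep / std  # analytically equal to 1/n + 1/dk^2
```

For this layer the separable form needs roughly 11% of the multiply-adds of the dense convolution, matching the closed-form ratio 1/n + 1/dk^2.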
1706.00134
2950405500
Natural language generation (NLG) plays a critical role in spoken dialogue systems. This paper presents a new approach to NLG by using recurrent neural networks (RNN), in which a gating mechanism is applied before RNN computation. This allows the proposed model to generate appropriate sentences. The RNN-based generator can be learned from unaligned data by jointly training sentence planning and surface realization to produce natural language responses. The model was extensively evaluated on four different NLG domains. The results show that the proposed generator achieved better performance on all the NLG domains compared to previous generators.
Conventional approaches to NLG traditionally split the task into two subtasks: sentence planning and surface realization. Sentence planning deals with mapping the input semantic symbols onto a linguistic structure, e.g., a tree-like or a template structure. Surface realization then converts the structure into an appropriate sentence @cite_10 . Despite their success and wide use in solving NLG problems, these traditional methods still rely on handcrafted rule-based generators or rerankers. The authors in @cite_7 proposed a class-based n-gram language model (LM) generator which can learn to generate sentences for a given dialogue act (DA) and then select the best sentences using a rule-based reranker. Some of the limitations of class-based LMs were addressed in @cite_20 by proposing a method based on a syntactic dependency tree. A phrase-based generator based on factored LMs was introduced in @cite_19 , which can learn from a semantically aligned corpus.
{ "cite_N": [ "@cite_19", "@cite_20", "@cite_10", "@cite_7" ], "mid": [ "2122514299", "2949465329", "2139079654", "2004637830" ], "abstract": [ "Most previous work on trainable language generation has focused on two paradigms: (a) using a generation decisions of an existing generator. Both approaches rely on the existence of a handcrafted generation component, which is likely to limit their scalability to new domains. The first contribution of this article is to present Bagel, a fully data-driven generation method that treats the language generation task as a search for the most likely sequence of semantic concepts and realization phrases, according to Factored Language Models (FLMs). As domain utterances are not readily available for most natural language generation tasks, a large creative effort is required to produce the data necessary to represent human linguistic variation for nontrivial domains. This article is based on the assumption that learning to produce paraphrases can be facilitated by collecting data from a large sample of untrained annotators using crowdsourcing—rather than a few domain experts—by relying on a coarse meaning representation. A second contribution of this article is to use crowdsourced data to show how dialogue naturalness can be improved by learning to vary the output utterances generated for a given semantic input. Two data-driven methods for generating paraphrases in dialogue are presented: (a) by sampling from the n-best list of realizations produced by Bagel's FLM reranker; and (b) by learning a structured perceptron predicting whether candidate realizations are valid paraphrases. We train Bagel on a set of 1,956 utterances produced by 137 annotators, which covers 10 types of dialogue acts and 128 semantic concepts in a tourist information system for Cambridge. An automated evaluation shows that Bagel outperforms utterance class LM baselines on this domain. 
A human evaluation of 600 resynthesized dialogue extracts shows that Bagel's FLM output produces utterances comparable to a handcrafted baseline, whereas the perceptron classifier performs worse. Interestingly, human judges find the system sampling from the n-best list to be more natural than a system always returning the first-best utterance. The judges are also more willing to interact with the n-best system in the future. These results suggest that capturing the large variation found in human language using data-driven methods is beneficial for dialogue interaction.", "We present three systems for surface natural language generation that are trainable from annotated corpora. The first two systems, called NLG1 and NLG2, require a corpus marked only with domain-specific semantic attributes, while the last system, called NLG3, requires a corpus marked with both semantic attributes and syntactic dependency information. All systems attempt to produce a grammatical natural language phrase from a domain-specific semantic representation. NLG1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step, while NLG2 and NLG3 use maximum entropy probability models to individually generate each word in the phrase. The systems NLG2 and NLG3 learn to determine both the word choice and the word order of the phrase. We present experiments in which we generate phrases to describe flights in the air travel domain.", "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. 
We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.", "The two current approaches to language generation, template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems." ] }
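The class-based LM generator with a rule-based reranker described in the related-work paragraph above follows an overgenerate-and-rerank pattern: produce several candidate realizations for a dialogue act, then keep the one the language model scores highest. A toy sketch with an add-one-smoothed bigram LM; the corpus, dialogue act, and candidate sentences are invented for illustration:

```python
import math
from collections import Counter

# Tiny corpus used to estimate a bigram LM with add-one smoothing.
corpus = [
    "the restaurant serves cheap food".split(),
    "the restaurant serves italian food".split(),
    "the hotel is cheap".split(),
]
unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent + ["</s>"]
    unigrams.update(toks[:-1])              # history-token counts
    bigrams.update(zip(toks[:-1], toks[1:]))
vocab = {w for s in corpus for w in s} | {"</s>"}

def logprob(sentence):
    """Add-one-smoothed bigram log-probability of a sentence."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    lp = 0.0
    for a, b in zip(toks[:-1], toks[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab)))
    return lp

# Overgenerated candidates for inform(type=restaurant, pricerange=cheap);
# a rule-based reranker would additionally check slot coverage.
candidates = [
    "the restaurant serves cheap food",
    "cheap serves the restaurant food",
]
best = max(candidates, key=logprob)
```

The fluent candidate wins because every one of its bigrams was observed in the corpus, while the scrambled one is penalized by smoothing.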
1706.00134
2950405500
Natural language generation (NLG) plays a critical role in spoken dialogue systems. This paper presents a new approach to NLG by using recurrent neural networks (RNN), in which a gating mechanism is applied before RNN computation. This allows the proposed model to generate appropriate sentences. The RNN-based generator can be learned from unaligned data by jointly training sentence planning and surface realization to produce natural language responses. The model was extensively evaluated on four different NLG domains. The results show that the proposed generator achieved better performance on all the NLG domains compared to previous generators.
Recently, RNN-based approaches have shown promising performance in the NLG domain. The authors in @cite_8 @cite_17 used RNNs in a multi-modal setting to generate captions for images, while a generator using RNNs to create Chinese poetry was also proposed in @cite_2 . The authors in @cite_15 encoded an unstructured textual knowledge source along with previous responses and context to produce a response for technical support queries. For task-oriented dialogue systems, a combination of a forward RNN generator, a CNN reranker, and a backward RNN reranker was proposed in @cite_18 to generate utterances. A semantically conditioned Long Short-Term Memory (LSTM) generator was introduced in @cite_1 , which adds a control gate to the traditional LSTM cell and can learn the gating mechanism and language model jointly. A recurring problem in such systems is the lack of sufficient domain-specific annotated data.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_1", "@cite_2", "@cite_15", "@cite_17" ], "mid": [ "", "2951805548", "2952013107", "2115221470", "", "2951912364" ], "abstract": [ "", "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. 
With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.", "We propose a model for Chinese poem generation based on recurrent neural networks which we argue is ideally suited to capturing poetic content and form. Our generator jointly performs content selection (“what to say”) and surface realization (“how to say”) by learning representations of individual characters, and their combinations into one or more lines as well as how these mutually reinforce and constrain each other. Poem lines are generated incrementally by taking into account the entire history of what has been generated so far rather than the limited horizon imposed by the previous line or lexical n-grams. Experimental results show that our model outperforms competitive Chinese poetry generation systems using both automatic and manual evaluation methods.", "", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. 
We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art." ] }
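The control gate that the semantically conditioned LSTM adds to the cell maintains a dialogue-act vector that is multiplicatively consumed as words are emitted, so each slot tends to be expressed only once. A minimal sketch of such a gate in isolation; the random weights and stand-in word embeddings are illustrative placeholders, not trained parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy semantic control gate: the DA vector d holds the features still
# to be realised; a reading gate in (0, 1) shrinks it at every step.
d_dim, x_dim = 4, 5
W_r = rng.normal(scale=0.5, size=(d_dim, x_dim))  # placeholder weights

d = np.ones(d_dim)               # all DA features initially unexpressed
norms = [np.linalg.norm(d)]
for t in range(6):               # pretend we emit 6 words
    x_t = rng.normal(size=x_dim)          # stand-in word embedding
    r_t = sigmoid(W_r @ x_t)              # reading gate, values in (0, 1)
    d = r_t * d                           # gate progressively consumes d
    norms.append(np.linalg.norm(d))
```

Because every gate value lies strictly between 0 and 1, the norm of the DA vector can only shrink, mirroring the idea that the generator gradually "uses up" the semantic input.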
1706.00280
2621119169
We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bits integers (where n<8 is normally sufficient for a satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified with typical tasks in reservoir computing: memorizing of a sequence of inputs; classifying time-series; learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss.
There are many practical tasks that require the history of inputs to be solved. In the area of artificial neural networks (ANNs), such tasks require working memory, which can be implemented by recurrent connections between neurons of a recurrent neural network (RNN). Training RNNs is, however, much harder than training feed-forward ANNs (FFNNs) due to the vanishing gradient problem @cite_41 .
{ "cite_N": [ "@cite_41" ], "mid": [ "2950894517" ], "abstract": [ "We introduce a method to train Quantized Neural Networks (QNNs) --- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves @math top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online." ] }
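The intESN update summarized in the abstract above replaces the reservoir's recurrent matrix multiplication with a cyclic shift over a small-integer state, to which an encoded input is added before clipping back to the n-bit range. A sketch under assumed parameters; the reservoir size, clipping range, and random bipolar codebook are illustrative choices in the spirit of hyperdimensional computing, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

N, kappa = 1000, 7   # reservoir size; states clipped to [-kappa, kappa]
# Each input symbol gets a random bipolar (+1/-1) hypervector (assumed
# codebook, as in hyperdimensional computing).
codebook = {s: rng.choice([-1, 1], size=N) for s in "abc"}

def intesn_step(x, symbol):
    """One integer-ESN reservoir update: the recurrent matrix multiply
    is replaced by a cyclic shift, then the encoded input is added and
    the state is clipped back to small integers."""
    x = np.roll(x, 1)                 # cheap stand-in for recurrence
    x = x + codebook[symbol]          # inject the encoded input
    return np.clip(x, -kappa, kappa)  # keep the state in n-bit range

x = np.zeros(N, dtype=np.int64)
for s in "abcab":                     # feed a short symbol sequence
    x = intesn_step(x, s)
```

The state never leaves the small-integer range and stays integer-typed throughout, which is what makes the scheme attractive for digital hardware.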
1706.00280
2621119169
We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bits integers (where n<8 is normally sufficient for a satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified with typical tasks in reservoir computing: memorizing of a sequence of inputs; classifying time-series; learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss.
Another recent research area is binary RC with cellular automata (CARC), which started as interdisciplinary research spanning three areas: cellular automata, RC, and HDC. CARC was initially explored in @cite_47 for projecting binarized features into a high-dimensional space. Further, in @cite_12 , it was applied to modality classification of medical images. The usage of CARC for symbolic reasoning is explored in @cite_23 . The memory characteristics of a reservoir formed by CARC are presented in @cite_13 . The work in @cite_33 proposed the usage of coupled cellular automata in CARC. Examples of recent RC developments also include advanced architectures such as the Laplacian ESN @cite_32 , learning of a reservoir's size and topology @cite_46 , new tools for investigating reservoir dynamics @cite_24 , and determining its edge of criticality @cite_25 .
{ "cite_N": [ "@cite_33", "@cite_46", "@cite_32", "@cite_24", "@cite_23", "@cite_47", "@cite_13", "@cite_25", "@cite_12" ], "mid": [ "2950820837", "2314582146", "2779129326", "2294916050", "", "2408592118", "2604511101", "2295142501", "2522828268" ], "abstract": [ "A framework for implementing reservoir computing (RC) and extreme learning machines (ELMs), two types of artificial neural networks, based on 1D elementary Cellular Automata (CA) is presented, in which two separate CA rules explicitly implement the minimum computational requirements of the reservoir layer: hyperdimensional projection and short-term memory. CAs are cell-based state machines, which evolve in time in accordance with local rules based on a cells current state and those of its neighbors. Notably, simple single cell shift rules as the memory rule in a fixed edge CA afforded reasonable success in conjunction with a variety of projection rules, potentially significantly reducing the optimal solution search space. Optimal iteration counts for the CA rule pairs can be estimated for some tasks based upon the category of the projection rule. Initial results support future hardware realization, where CAs potentially afford orders of magnitude reduction in size, weight, and power (SWaP) requirements compared with floating point RC implementations.", "An echo-state network (ESN) is an effective alternative to gradient methods for training recurrent neural network. However, it is difficult to determine the structure (mainly the reservoir) of the ESN to match with the given application. In this paper, a growing ESN (GESN) is proposed to design the size and topology of the reservoir automatically. First, the GESN makes use of the block matrix theory to add hidden units to the existing reservoir group by group, which leads to a GESN with multiple subreservoirs. 
Second, every subreservoir weight matrix in the GESN is created with a predefined singular value spectrum, which ensures the echo-state property of the ESN without posterior scaling of the weights. Third, during the growth of the network, the output weights of the GESN are updated in an incremental way. Moreover, the convergence of the GESN is proved. Finally, the GESN is tested on some artificial and real-world time-series benchmarks. Simulation results show that the proposed GESN has better prediction performance and faster learning speed than some ESNs with fixed sizes and topologies.", "Echo state network is a novel kind of recurrent neural networks, with a trainable linear readout layer and a large fixed recurrent connected hidden layer, which can be used to map the rich dynamics of complex real-world data sets. It has been extensively studied in time series prediction. However, there may be an ill-posed problem caused by the number of real-world training samples less than the size of the hidden layer. In this brief, a Laplacian echo state network (LAESN), is proposed to overcome the ill-posed problem and obtain low-dimensional output weights. First, an echo state network is used to map the multivariate time series into a large reservoir. Then, assuming that an unknown underlying manifold is inside the reservoir, we employ the Laplacian eigenmaps to estimate the manifold by constructing an adjacency graph associated with the reservoir states. Finally, the output weights are calculated by the low-dimensional manifold. In addition, some criteria of transient stability, local controllability, and local observability are given. Experimental results based on two real-world data sets substantiate the effectiveness and characteristics of the proposed LAESN model.", "In this paper, we elaborate over the well-known interpretability issue in echo-state networks (ESNs). 
The idea is to investigate the dynamics of reservoir neurons with time-series analysis techniques developed in complex systems research. Notably, we analyze time series of neuron activations with recurrence plots (RPs) and recurrence quantification analysis (RQA), which permit to visualize and characterize high-dimensional dynamical systems. We show that this approach is useful in a number of ways. First, the 2-D representation offered by RPs provides a visualization of the high-dimensional reservoir dynamics. Our results suggest that, if the network is stable, reservoir and input generate similar line patterns in the respective RPs. Conversely, as the ESN becomes unstable, the patterns in the RP of the reservoir change. As a second result, we show that an RQA measure, called @math , is highly correlated with the well-established maximal local Lyapunov exponent. This suggests that complexity measures based on RP diagonal lines distribution can quantify network stability. Finally, our analysis shows that all RQA measures fluctuate on the proximity of the so-called edge of stability, where an ESN typically achieves maximum computational capability. We leverage on this property to determine the edge of stability and show that our criterion is more accurate than two well-known counterparts, both based on the Jacobian matrix of the reservoir. Therefore, we claim that RPs and RQA-based analyses are valuable tools to design an ESN, given a specific problem.", "", "", "Recurrent Neural Networks (RNNs) have been a prominent concept within artificial intelligence. They are inspired by Biological Neural Networks (BNNs) and provide an intuitive and abstract representation of how BNNs work. Derived from the more generic Artificial Neural Networks (ANNs), the recurrent ones are meant to be used for temporal tasks, such as speech recognition, because they are capable of memorizing historic input. 
However, such networks are very time consuming to train as a result of their inherent nature. Recently, Echo State Networks and Liquid State Machines have been proposed as possible RNN alternatives, under the name of Reservoir Computing (RC). RCs are far more easy to train. In this paper, Cellular Automata are used as reservoir, and are tested on the 5-bit memory task (a well known benchmark within the RC community). The work herein provides a method of mapping binary inputs from the task onto the automata, and a recurrent architecture for handling the sequential aspects of it. Furthermore, a layered (deep) reservoir architecture is proposed. Performances are compared towards earlier work, in addition to its single-layer version. Results show that the single CA reservoir system yields similar results to state-of-the-art work. The system comprised of two layered reservoirs do show a noticeable improvement compared to a single CA reservoir. This indicates potential for further research and provides valuable insight on how to design CA reservoir systems.", "It is a widely accepted fact that the computational capability of recurrent neural networks (RNNs) is maximized on the so-called “edge of criticality.” Once the network operates in this configuration, it performs efficiently on a specific application both in terms of: 1) low prediction error and 2) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by the particular input signal driving the dynamics, a universal, application-independent method for determining the edge of criticality is still missing. In this paper, we aim at addressing this issue by proposing a theoretically motivated, unsupervised method based on Fisher information for determining the edge of criticality in RNNs. It is proved that Fisher information is maximized for (finite-size) systems operating in such critical regions. 
However, Fisher information is notoriously difficult to compute and requires the analytic form of the probability density function ruling the system behavior. This paper takes advantage of a recently developed nonparametric estimator of the Fisher information matrix and provides a method to determine the critical region of echo state networks (ESNs), a particular class of recurrent networks. The considered control parameters, which indirectly affect the ESN performance, are explored to identify those configurations lying on the edge of criticality and, as such, maximizing Fisher information and computational performance. Experimental results on benchmarks and real-world data demonstrate the effectiveness of the proposed method.", "Positional binding specifies feature positions for an image (or for text). We show how to incorporate position into a fully distributed vector formed from Vector Quantization, or add position to a vector formed from a Vector Symbolic Architecture. The method guarantees that small shifts in position result in small changes to the representation vector, and does not require an increase in vector size. The incorporation of positional binding improves performance on CIFAR-10 and on a new database of noisy abstract face images, which we hereby make public. For Deep Learning approaches, we emphasize the importance of positional binding, and this sheds light on why multiple layers and pooling are beneficial." ] }
1706.00280
2621119169
We propose an approximation of Echo State Networks (ESN) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n<8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified on typical reservoir computing tasks: memorizing a sequence of inputs; classifying time series; learning dynamic processes. Such an architecture yields dramatic improvements in memory footprint and computational efficiency, with minimal performance loss.
The approach ideologically closest to our intESN was presented in @cite_3 . The authors argued that a simple cycle reservoir can achieve performance similar to that of a conventional ESN. While this work explored reservoir update solutions similar to one of our optimizations, its technical side is very different from our approach, as the intESN strives to use only integers as neuron activation values.
{ "cite_N": [ "@cite_3" ], "mid": [ "2159682675" ], "abstract": [ "Reservoir computing (RC) refers to a new class of state-space models with a fixed state transition structure (the reservoir) and an adaptable readout form the state space. The reservoir is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be exploited by the reservoir-to-output readout mapping. The field of RC has been growing rapidly with many successful applications. However, RC has been criticized for not being principled enough. Reservoir construction is largely driven by a series of randomized model-building stages, with both researchers and practitioners having to rely on a series of trials and errors. To initialize a systematic study of the field, we concentrate on one of the most popular classes of RC methods, namely echo state network, and ask: What is the minimal complexity of reservoir construction for obtaining competitive models and what is the memory capacity (MC) of such simplified reservoirs? On a number of widely used time series benchmarks of different origin and characteristics, as well as by conducting a theoretical analysis we show that a simple deterministically constructed cycle reservoir is comparable to the standard echo state network methodology. The (short-term) of linear cyclic reservoirs can be made arbitrarily close to the proved optimal value." ] }
1705.11035
2617996878
We revisit the following problem: Given a convex polygon @math , find the largest-area inscribed triangle. We show by example that the linear-time algorithm presented in 1979 by Dobkin and Snyder for solving this problem fails. We then proceed to show that, with a small adaptation, their approach does lead to a quadratic-time algorithm. We also present a more involved @math time divide-and-conquer algorithm. We further show by example that the algorithm presented in 1979 by Dobkin and Snyder for finding the largest-area @math -gon that is inscribed in a convex polygon fails to find the optimal solution for @math . Finally, we discuss the implications of our discoveries for the literature.
Cabello et al. @cite_0 studied the problem of finding the largest-area or largest-perimeter rectangle that is inscribed in a convex polygon. They presented an exact algorithm that runs in @math time, and a @math -approximation algorithm that runs in @math time; see also @cite_20 @cite_25 @cite_8 @cite_18 . DePano, Ke and O'Rourke @cite_16 studied the problem of computing the largest-area square and equilateral triangle contained in a convex polygon. Jin and Matulef @cite_1 studied the problem of computing the largest-area parallelogram that is inscribed in a convex polygon @math , and presented an @math time algorithm, based on the fact that the largest-area parallelogram must have all of its corners on the perimeter of @math .
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_1", "@cite_0", "@cite_16", "@cite_25", "@cite_20" ], "mid": [ "1977740129", "2004838871", "2186258632", "1688647404", "", "2045509593", "245723479" ], "abstract": [ "We consider approximation algorithms for the problem of computing an inscribed rectangle having largest area in a convex polygon on n vertices. If the order of the vertices of the polygon is given, we present a randomized algorithm that computes an inscribed rectangle with area at least ([email protected]) times the optimum with probability t in time O([email protected]) for any constant t<1. We further give a deterministic approximation algorithm that computes an inscribed rectangle of area at least ([email protected]) times the optimum in running time O([email protected]^2logn) and show how this running time can be slightly improved.", "We study a class of optimization problems in polygons that seek to compute the \"largest\" subset of a prescribed type, e.g., a longest line segment (\"stick\") or a maximum-area triangle or convex body (\"potato\"). Exact polynomial-time algorithms are known for some of these problems, but their time bounds are high (e.g., O(n7) for the largest convex polygon in a simple n-gon). We devise efficient approximation algorithms for these problems. In particular, we give near-linear time algorithms for a (1 - ∈)-approximation of the biggest stick, an O(1)-approximation of the maximum-area convex body, and a (1 - ∈)-approximation of the maximum-area fat triangle or rectangle. In addition, we give efficient methods for computing large ellipses inside a polygon (whose vertices are a dense sampling of a closed smooth curve). Our algorithms include both deterministic and randomized methods, one of which has been implemented (for computing large area ellipses in a well sampled closed smooth curve).", "We consider the problem of finding the maximum area parallelogram (MAP) inside a given convex polygon. 
Our main result is an algorithm for computing the MAP in an n-sided polygon in O(n 2 ) time. Achieving this running time requires proving several new structural properties of the MAP, and combining them with a rotating technique of Toussaint [10]. We also discuss applications of our result to the problem of computing the maximum area centrallysymmetric convex body (MAC) inside a given convex polygon, and to a “fault tolerant area maximization” problem which we define.", "We consider the following geometric optimization problem: find a maximum-area rectangle and a maximum-perimeter rectangle contained in a given convex polygon with n vertices. We give exact algorithms that solve these problems in time O ( n 3 ) . We also give ( 1 - e ) -approximation algorithms that take time O ( e - 1 2 log ? n + e - 3 2 ) .", "", "Abstract This paper considers the geometric optimization problem of finding the Largest area axis-parallel Rectangle (LR) in an n-vertex general polygon. We characterize the LR for general polygons by considering different cases based on the types of contacts between the rectangle and the polygon. A general framework is presented for solving a key subproblem of the LR problem which dominates the running time for a variety of polygon types. This framework permits us to transform an algorithm for orthogonal polygons into an algorithm for non-orthogonal polygons. Using this framework, we show that the LR in a general polygon (allowing holes) can be found in O(n log2 n) time. This matches the running time of the best known algorithm for orthogonal polygons. References are given for the application of the framework to other types of polygons. For each type, the running time of the resulting algorithm matches the running time of the best known algorithm for orthogonal polygons of that type. A lower bound of time in Ω(n log n) is established for finding the LR in both self-intersecting polygons and general polygons with holes. 
The latter result gives us both a lower bound of Ω(n log n) and an upper bound of O(n log2 n) for general polygons.", "This paper describes an algorithm to compute, in Theta(log n) time, a rectangle that is contained in a convex n-gon, has sides parallel to the coordinate axes, and has maximum area. With a slight modification it will compute the smallest perimeter. The algorithm uses a tentative prune-and-search approach, even though this problem does not appear to fit into the functional framework of Kirkpatrick and Snoeyink." ] }
1705.10960
2737427031
In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) cannot constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target reprojection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC): this implicitly allows the multirotor to satisfy both of the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
Several IBVS @cite_8 @cite_18 @cite_17 and PBVS @cite_0 @cite_20 approaches have been applied to the control of aerial vehicles over the last decades. In those standard solutions, the controller uses the visual information as the main source for the target pose computation, taking into account where the target is re-projected onto the image plane along the trajectory. A possible solution that usually mitigates this weakness is to kinematically limit the multirotor's roll and pitch angles, but this penalizes the vehicle's maneuverability. To this end, Ozawa @cite_6 presents an approach that takes advantage of the rotational dynamics of the vehicle, where a virtual spring penalizes large rotations with respect to a gravity-aligned frame. Some recent approaches map the target features' dynamics into a "virtual image plane" used to compensate for the current roll and pitch angles, in order to keep them close to zero @cite_15 @cite_7 @cite_19 . Since the re-projection error is obtained from the rotation-compensated frame, it is still possible that the target, due to significant rotations, completely leaves the camera's field of view.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_6", "@cite_0", "@cite_19", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "2122147410", "2089801374", "2139241879", "2137740690", "1574945970", "2104116558", "1995067233", "2009727770", "2141300598" ], "abstract": [ "A new image-based control strategy for visual servoing of a class of under-actuated rigid body systems is presented. The proposed control design applies to \"eye-in-hand\" systems where the camera is fixed to a rigid body with actuated dynamics. The control design is motivated by a theoretical analysis of the dynamic equations of motion of a rigid body and exploits passivity-like properties of these dynamics to derive a Lyapunov control algorithm using robust backstepping techniques. The proposed control is novel in considering the full dynamic system incorporating all degrees of freedom (albeit for a restricted class of dynamics) and in not requiring measurement of the relative depths of the observed image points. A motivating application is the stabilization of a scale model autonomous helicopter over a marked landing pad.", "This paper addresses the dynamics, control, planning, and visual servoing for micro aerial vehicles to perform high-speed aerial grasping tasks. We draw inspiration from agile, fast-moving birds, such as raptors, that detect, locate, and execute high-speed swoop maneuvers to capture prey. Since these grasping maneuvers are predominantly in the sagittal plane, we consider the planar system and present mathematical models and algorithms for motion planning and control, required to incorporate similar capabilities in quadrotors equipped with a monocular camera. 
In particular, we develop a dynamical model directly in the image space, show that this is a differentially-flat system with the image features serving as flat outputs, outline a method for generating trajectories directly in the image feature space, develop a geometric visual controller that considers the second order dynamics (in contrast to most visual servoing controllers that assume first order dynamics), and present validation of our methods through both simulations and experiments.", "The motivation of this research is to show that visual based object tracking and following is reliable using a cheap GPS-denied multirotor platform such as the AR Drone 2.0. Our architecture allows the user to specify an object in the image that the robot has to follow from an approximate constant distance. At the current stage of our development, in the event of image tracking loss the system starts to hover and waits for the image tracking recovery or second detection, which requires the usage of odometry measurements for self stabilization. During the following task, our software utilizes the forward-facing camera images and part of the IMU data to calculate the references for the four on-board low-level control loops. To obtain a stronger wind disturbance rejection and an improved navigation performance, a yaw heading reference based on the IMU data is internally kept and updated by our control algorithm. We validate the architecture using an AR Drone 2.0 and the OpenTLD tracker in outdoor suburban areas. 
The experimental tests have shown robustness against wind perturbations, target occlusion and illumination changes, and the system's capability to track a great variety of objects present on suburban areas, for instance: walking or running people, windows, AC machines, static and moving cars and plants.", "This paper presents an image-based visual servoing for controlling the position and orientation of a quadrotor using a fixed downward camera observing landmarks on the level ground. In the proposed method, the negative feedback to the image moments is used to control the vertical motion and rotation around the roll axis. On the other hand, the negative feedback cannot be used to control the horizontal motion due to under-actuation of a quadrotor. Thus, a novel control method is introduced to control the horizontal motion. Simulations are presented to validate the proposed method.", "We present control methods for an autonomous four-rotor helicopter, called a quadrotor, using visual feedback as the primary sensor. The vision system uses aground camera to estimate the pose (position and orientation) of the helicopter. Two methods of control are studied - one using a series of mode-based, feedback linearizing controllers, and the other using a backstepping-like control law. Various simulations of the model demonstrate the implementation of feedback linearization and the backstepping controllers. Finally,. we present initial flight experiments. where the helicopter is restricted to vertical and yaw motions.", "In this article, image-based visual servoing control of an underactuated unmanned aerial vehicle is considered for the three-dimensional translational motion. Taking into account the low quality of accelerometers’ data, the main objective of this article is to only use information of rate gyroscopes and a camera, as the sensor suite, in order to design an image-based visual servoing controller. 
Kinematics and dynamics of the unmanned aerial vehicle are expressed in terms of visual information, which make it possible to design dynamic image-based visual servoing controllers without using linear velocity information obtained from accelerometers. Image features are selected through perspective image moments of a flat target plane in which no geometric information is required, and therefore, the approach can be applied in unknown environments. Two output feedback controllers that deal with uncertainties in dynamics of the system related to the motion of the target and also unknown depth information of the image are proposed using a linear observer. Stability analysis guarantees that the errors of the system remain uniformly ultimately bounded during a tracking mission and converge to 0 when the target is stationary. Simulation results are presented to validate the designed controllers.", "In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and generate a velocity reference command used as the input to an adaptive sliding mode controller. Compared with other vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper since it is less sensitive to the depth estimation allowing for a faster method to obtain this estimate. To enhance velocity tracking of the sliding mode controller, an adaptive rule is described to account for the ground effect experienced during the maneuver. 
Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor.", "We present the design and implementation of a vision-based feature tracking system for an autonomous helicopter. Visual sensing is used for estimating the position and velocity of features in the image plane (urban features like windows) in order to generate velocity references for the flight control. These visual-based references are then combined with GPS-positioning references to navigate towards these features and then track them. We present results from experimental flight trials, performed in two UAV systems and under different conditions, that show the feasibility and robustness of our approach. © 2006 Wiley Periodicals, Inc.", "An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight. The proposed control design addresses visual servo of 'eye-in-hand' type systems. The control of the position and orientation dynamics are decoupled using a visual error based on a spherical centroid data, along with estimation of the gravitational inertial direction. The error used compensates for the poor conditioning of the Jacobian matrix seen in earlier work in this area by introducing a non-homogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller is derived for the full dynamics of the system. Experimental results on an experimental UAV known as an X4-flyer made by the French Atomic Energy Commission (CEA) demonstrate the robustness and performances of the proposed control strategy." ] }
1705.10960
2737427031
In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) cannot constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target reprojection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC): this implicitly allows the multirotor to satisfy both of the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
In @cite_4 @cite_22 the authors present two approaches based on a spherical camera geometry, allowing the control law to be designed as a function of the position while neglecting the angular velocity. Being solely position-based, these kinds of methods suffer from the problems discussed above, since the system is still vulnerable to large rotations.
{ "cite_N": [ "@cite_4", "@cite_22" ], "mid": [ "2143017962", "2323652230" ], "abstract": [ "In this paper, we investigate a range of image-based visual servo control algorithms for regulation of the position of a quadrotor aerial vehicle. The most promising control algorithms have been successfully implemented on an autonomous aerial vehicle and demonstrate excellent performance.", "In this paper we propose a new control method for quadrotor autonomous landing on a visual target without linear velocity measurements. Only onboard sensing is exploited, such that only the images of the landing pad from a down-looking camera, along with data from an Inertial Measurement Unit's gyro, are used. The control system consists of an image-based nonlinear observer that estimates online the linear velocity of the vehicle and a backstepping image-based controller that generates attitude, and thrust setpoints to the quadrotor autopilot. Both observer and controller share the same feedback information: spherical visual features. Therefore no further image elaboration is needed for the estimation. This, along with the fact that only simple computations on low- and constant-dimension arrays are involved, makes the proposed solution computationally cheap. Real-hardware experiments on a quadrotor are carried out to verify the validity of the proposed control system." ] }
1705.10960
2737427031
In this paper we propose an effective vision-based navigation method that allows a multirotor vehicle to simultaneously reach a desired goal pose in the environment while constantly facing a target object or landmark. Standard techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast maneuvers) cannot constantly maintain the line of sight with a target of interest. Instead, we compute the optimal trajectory by solving a non-linear optimization problem that minimizes the target reprojection error while meeting the UAV's dynamic constraints. The desired trajectory is then tracked by means of a real-time Non-linear Model Predictive Controller (NMPC): this implicitly allows the multirotor to satisfy both of the required constraints. We successfully evaluate the proposed approach in many real and simulated experiments, making an exhaustive comparison with a standard approach.
In all the above-mentioned works, except for @cite_14 , there is no guarantee that the target is constantly kept in the camera's field of view, because they do not directly take the vehicle dynamics into account.
{ "cite_N": [ "@cite_14" ], "mid": [ "2561121121" ], "abstract": [ "This work introduces a hybrid visual servoing technique for differentially flat, underactuated systems that is well suited for aggressive dynamics. Standard Position-Based Visual Servoing (PBVS) and Image-Based Visual Servoing (IBVS) approaches for underactuated systems, such as quadrotors, oftentimes do not explicitly ensure that the relevant image features stay in the camera's field of view, especially while the system is performing agile maneuvers. We present a control technique that is designed to mitigate this issue and that results in increased robustness. Given a goal image, we first solve a constrained Perspective-n-Point (PnP) problem to find an equilibrium pose which aligns the camera with the goal. We then formulate the task of navigating to the goal pose as an optimal control problem, where a cost over the resulting image feature tracks along the trajectory is minimized which implicitly keeps features in the field of view over the course of the trajectory. The optimization is performed over a polynomial parametrization of the flat outputs of the system to decrease the dimensionality of the optimization. Simulations and physical experiments are performed with a quadrotor system to benchmark the algorithm's performance against a typical PBVS approach." ] }
1705.10998
2619240030
Recent progress in Reinforcement Learning (RL), fueled by its combination with Deep Learning, has enabled impressive results in learning to interact with complex virtual environments, yet real-world applications of RL are still scarce. A key limitation is data efficiency, with current state-of-the-art approaches requiring millions of training samples. A promising way to tackle this problem is to augment RL with learning from human demonstrations. However, human demonstration data is not yet readily available, which hinders progress in this direction. The present work addresses this problem as follows. We (i) collect and describe a large dataset of human Atari 2600 replays -- the largest and most diverse such dataset publicly released to date, (ii) illustrate an example use of this dataset by analyzing the relation between demonstration quality and imitation learning performance, and (iii) outline possible research directions opened up by our work.
There are two directions of RL research that work on leveraging demonstration data for training an autonomous agent: Inverse Reinforcement Learning (IRL) and Imitation Learning. The former addresses scenarios where there is no access to the reward function. Indeed, in RL tasks the goal is often underspecified, and it is sometimes hard to provide a reward that captures all the useful information in an expert's demonstration. The general idea is to approximate the reward function and learn a policy using this approximation @cite_12 @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_12" ], "mid": [ "1999874108", "2061562262" ], "abstract": [ "We consider learning in a Markov decision process where we are not explicitly given a reward function, but where instead we can observe an expert demonstrating the task that we want to learn to perform. This setting is useful in applications (such as the task of driving) where it may be difficult to write down an explicit reward function specifying exactly how different desiderata should be traded off. We think of the expert as trying to maximize a reward function that is expressible as a linear combination of known features, and give an algorithm for learning the task demonstrated by the expert. Our algorithm is based on using \"inverse reinforcement learning\" to try to recover the unknown reward function. We show that our algorithm terminates in a small number of iterations, and that even though we may never recover the expert's reward function, the policy output by the algorithm will attain performance close to that of the expert, where here performance is measured with respect to the expert's unknown reward function.", "Objective—To evaluate the pharmacokinetics of a novel commercial formulation of ivermectin after administration to goats. Animals—6 healthy adult goats. Procedure—Ivermectin (200 μg kg) was initially administered IV to each goat, and plasma samples were obtained for 36 days. After a washout period of 3 weeks, each goat received a novel commercial formulation of ivermectin (200 μg kg) by SC injection. Plasma samples were then obtained for 42 days. Drug concentrations were quantified by use of high-performance liquid chromatography with fluorescence detection. 
Results—Pharmacokinetics of ivermectin after IV administration were best described by a 2-compartment open model; values for main compartmental variables included volume of distribution at a steady state (9.94 L kg), clearance (1.54 L kg d), and area under the plasma concentration-time curve (AUC; 143 [ng•d] mL). Values for the noncompartmental variables included mean residence time (7.37 days), AUC (153 [ng•d] mL), and clearance (1.43 L kg d). After ..." ] }
1705.10998
2619240030
Recent progress in Reinforcement Learning (RL), fueled by its combination with Deep Learning, has enabled impressive results in learning to interact with complex virtual environments, yet real-world applications of RL are still scarce. A key limitation is data efficiency, with current state-of-the-art approaches requiring millions of training samples. A promising way to tackle this problem is to augment RL with learning from human demonstrations. However, human demonstration data is not yet readily available. This hinders progress in this direction. The present work addresses this problem as follows. We (i) collect and describe a large dataset of human Atari 2600 replays -- the largest and most diverse such data set publicly released to date, (ii) illustrate an example use of this dataset by analyzing the relation between demonstration quality and imitation learning performance, and (iii) outline possible research directions that are opened up by our work.
Whilst IRL can benefit from the Atari Grand Challenge dataset by ignoring the reward information, Imitation Learning is the direct beneficiary of our dataset. Imitation learning exploits the reward information to learn an action-value function, or directly a policy. @cite_19 uses a pre-trained model to speed up training, and offers an interesting comparison of the influence of pre-training on model-free and model-based RL, noting that model-based learning benefits more from demonstration data. The latest work on Learning from Demonstration shows that model-free RL can also greatly benefit from using human player data @cite_5 @cite_11 @cite_18 .
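For instance, the supervised component of Deep Q-learning from Demonstrations can be sketched as a large-margin loss that forces the demonstrated action's Q-value above all others (a minimal illustration; the margin value and the toy Q-values are made up, not taken from the cited paper):

```python
import numpy as np

def dqfd_margin_loss(q_values, expert_action, margin=0.8):
    """Large-margin supervised loss in the style of DQfD:
    max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E), where l(a_E, a) = margin
    for a != a_E and 0 otherwise. The loss is zero when the expert's
    action already beats every other action by at least the margin."""
    l = np.full_like(q_values, margin)
    l[expert_action] = 0.0
    return float(np.max(q_values + l) - q_values[expert_action])

q = np.array([1.0, 3.0, 2.0])
print(dqfd_margin_loss(q, expert_action=1))  # 0.0: expert action already dominates
print(dqfd_margin_loss(q, expert_action=0))  # positive: Q(s, a_E) gets pushed up
```

In DQfD this term is combined with the usual temporal-difference loss, so demonstration data shapes the Q-function without replacing environment interaction.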
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_18", "@cite_11" ], "mid": [ "2148112459", "2788862220", "2481567506", "2415726935" ], "abstract": [ "By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies how to approach a learning problem from instructions and or demonstrations of other humans. For teaming control, this paper investigates how learning from demonstration can be applied in the context of reinforcement learning. We consider priming the Q-function, the value function, the policy, and the model of the task dynamics as possible areas where demonstrations can speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speed-up after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing on a complex anthropomorphic robot arm, we demonstrate that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems. Using the suggested methods, the robot learns pole balancing in just a single trial after a 30 second long demonstration of the human instructor.", "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. 
We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.", "This paper introduces a novel method for learning how to play the most difficult Atari 2600 games from the Arcade Learning Environment using deep reinforcement learning. The proposed method, human checkpoint replay, consists in using checkpoints sampled from human gameplay as starting points for the learning process. This is meant to compensate for the difficulties of current exploration strategies, such as epsilon-greedy, to find successful control policies in games with sparse rewards. Like other deep reinforcement learning architectures, our model uses a convolutional neural network that receives only raw pixel inputs to estimate the state value function. We tested our method on Montezuma's Revenge and Private Eye, two of the most challenging games from the Atari platform. The results we obtained show a substantial improvement compared to previous learning approaches, as well as over a random player. 
We also propose a method for training deep reinforcement learning agents using human gameplay experience, which we call human experience replay.", "Reinforcement Learning (RL) has been effectively used to solve complex problems given careful design of the problem and algorithm parameters. However standard RL approaches do not scale particularly well with the size of the problem and often require extensive engineering on the part of the designer to minimize the search space. To alleviate this problem, we present a model-free policy-based approach called Exploration from Demonstration (EfD) that uses human demonstrations to guide search space exploration. We use statistical measures of RL algorithms to provide feedback to the user about the agent's uncertainty and use this to solicit targeted demonstrations useful from the agent's perspective. The demonstrations are used to learn an exploration policy that actively guides the agent towards important aspects of the problem. We instantiate our approach in a gridworld and a popular arcade game and validate its performance under different experimental conditions. We show how EfD scales to large problems and provides convergence speed-ups over traditional exploration and interactive learning methods." ] }
1705.10924
2618428124
Deep neural network (DNN) based approaches hold significant potential for reinforcement learning (RL) and have already shown remarkable gains over state-of-art methods in a number of applications. The effectiveness of DNN methods can be attributed to leveraging the abundance of supervised data to learn value functions, Q-functions, and policy function approximations without the need for feature engineering. Nevertheless, the deployment of DNN-based predictors with very deep architectures can pose an issue due to computational and other resource constraints at test-time in a number of applications. We propose a novel approach for reducing the average latency by learning a computationally efficient gating function that is capable of recognizing states in a sequential decision process for which policy prescriptions of a shallow network suffices and deeper layers of the DNN have little marginal utility. The overall system is adaptive in that it dynamically switches control actions based on state-estimates in order to reduce average latency without sacrificing terminal performance. We experiment with a number of alternative loss-functions to train gating functions and shallow policies and show that in a number of applications a speed-up of up to almost 5X can be obtained with little loss in performance.
Resource-constrained machine learning has become an active area of research. Many algorithms have been proposed to reduce test-time costs (in terms of computation or feature acquisition) for classification and regression @cite_0 @cite_26 @cite_21 @cite_5 @cite_10 @cite_27 @cite_20 and for structured prediction @cite_18 , by learning adaptive feature acquisition and evaluation rules via generative models or empirical risk minimization, with no direct relation to RL.
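A minimal sketch of such an adaptive test-time rule (illustrative only; the stopping threshold, feature costs, and `toy_classifier` are assumptions, not from the cited works): features are bought cheapest-first until the classifier is confident enough to predict.

```python
import numpy as np

def adaptive_predict(x, costs, classify, threshold=0.9):
    """Acquire features cheapest-first; stop as soon as the classifier's
    confidence on the partially observed input exceeds `threshold`."""
    acquired = np.full(len(x), np.nan)   # NaN marks unobserved features
    spent = 0.0
    for i in np.argsort(costs):
        acquired[i] = x[i]               # pay to observe feature i
        spent += costs[i]
        probs = classify(acquired)
        if probs.max() >= threshold:
            break
    return int(probs.argmax()), spent

def toy_classifier(x):
    # Hypothetical model: confidence grows with the observed feature sum.
    p1 = 1.0 / (1.0 + np.exp(-np.nansum(x)))
    return np.array([1.0 - p1, p1])

label, cost = adaptive_predict(np.array([0.5, 4.0, 3.0]),
                               costs=np.array([1.0, 5.0, 2.0]),
                               classify=toy_classifier)
# Here the rule stops after the two cheap features; the expensive one
# (cost 5.0) is never bought.
```

The learned systems in the cited works replace this fixed cheapest-first order and threshold with decision rules trained to optimize the accuracy-cost trade-off.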
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_21", "@cite_0", "@cite_27", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2415062102", "118463236", "", "", "2435989427", "2949654875", "", "1870740349" ], "abstract": [ "We study the problem of structured prediction under test-time budget constraints. We propose a novel approach applicable to a wide range of structured prediction problems in computer vision and natural language processing. Our approach seeks to adaptively generate computationally costly features during test-time in order to reduce the computational cost of prediction while maintaining prediction performance. We show that training the adaptive feature generation system can be reduced to a series of structured learning problems, resulting in efficient training using existing structured learning algorithms. This framework provides theoretical justification for several existing heuristic approaches found in literature. We evaluate our proposed adaptive system on two structured prediction tasks, optical character recognition (OCR) and dependency parsing and show strong performance in reduction of the feature costs without degrading accuracy.", "We present a convex framework to learn sequential decisions and apply it to the problem of learning under a budget. We consider the structure proposed in [1], where sensor measurements are acquired in a sequence. The goal after acquiring each new measurement is to make a decision whether to stop and classify or to pay the cost of using the next sensor in the sequence. We introduce a novel formulation of an empirical risk objective for the multi stage sequential decision problem. This objective naturally lends itself to a non-convex multilinear formulation. Nevertheless, we derive a novel perspective that leads to a tight convex objective. This is accomplished by expressing the empirical risk in terms of linear superposition of indicator functions. We then derive an LP formulation by utilizing hinge loss surrogates. 
Our LP achieves or exceeds the empirical performance of the nonconvex alternating algorithm that requires a large number of random initializations. Consequently, the LP has the advantage of guaranteed convergence, global optimality, repeatability and computation eciency.", "", "", "We propose to prune a random forest (RF) for resource-constrained prediction. We first construct a RF and then prune it to optimize expected feature cost & accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from good RF initialization, conventional methods are top-down acquiring features based on their utility value and is generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.", "As machine learning algorithms enter applications in industrial settings, there is increased interest in controlling their cpu-time during testing. The cpu-time consists of the running time of the algorithm and the extraction time of the features. The latter can vary drastically when the feature set is diverse. In this paper, we propose an algorithm, the Greedy Miser, that incorporates the feature extraction cost during training to explicitly minimize the cpu-time during testing. The algorithm is a straightforward extension of stage-wise regression and is equally suitable for regression or multi-class classification. Compared to prior work, it is significantly more cost-effective and scales to larger data sets.", "", "We study the problem of reducing test-time acquisition costs in classification systems. 
Our goal is to learn decision rules that adaptively select sensors for each example as necessary to make a confident prediction. We model our system as a directed acyclic graph (DAG) where internal nodes correspond to sensor subsets and decision functions at each node choose whether to acquire a new sensor or classify using the available measurements. This problem can be posed as an empirical risk minimization over training data. Rather than jointly optimizing such a highly coupled and non-convex problem over all decision nodes, we propose an efficient algorithm motivated by dynamic programming. We learn node policies in the DAG by reducing the global objective to a series of cost sensitive learning problems. Our approach is computationally efficient and has proven guarantees of convergence to the optimal system for a fixed architecture. In addition, we present an extension to map other budgeted learning problems with large number of sensors to our DAG architecture and demonstrate empirical performance exceeding state-of-the-art algorithms for data composed of both few and many sensors." ] }
1705.10924
2618428124
Deep neural network (DNN) based approaches hold significant potential for reinforcement learning (RL) and have already shown remarkable gains over state-of-art methods in a number of applications. The effectiveness of DNN methods can be attributed to leveraging the abundance of supervised data to learn value functions, Q-functions, and policy function approximations without the need for feature engineering. Nevertheless, the deployment of DNN-based predictors with very deep architectures can pose an issue due to computational and other resource constraints at test-time in a number of applications. We propose a novel approach for reducing the average latency by learning a computationally efficient gating function that is capable of recognizing states in a sequential decision process for which policy prescriptions of a shallow network suffices and deeper layers of the DNN have little marginal utility. The overall system is adaptive in that it dynamically switches control actions based on state-estimates in order to reduce average latency without sacrificing terminal performance. We experiment with a number of alternative loss-functions to train gating functions and shallow policies and show that in a number of applications a speed-up of up to almost 5X can be obtained with little loss in performance.
Others have formulated the adaptive feature acquisition and evaluation problem as an MDP @cite_24 @cite_7 @cite_17 @cite_28 . They encode the observations acquired so far as the state and the unused features or base classifiers as the action space, and formulate various reward functions to account for classification error and feature costs. Such methods have been successfully applied in NLP @cite_29 , to adaptively select features for dependency parsing, as well as in computer vision @cite_25 , for human pose tracking. These methods use RL as a tool to learn decision rules that reduce test-time cost in the static environment of classification or structured prediction problems.
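To make that encoding concrete, here is a tiny tabular version of such an MDP (an illustrative sketch, not any of the cited systems; all costs and rewards are made up): the state is the tuple of feature values observed so far, actions either buy a feature or commit to a class, and the reward charges acquisition cost and scores the final prediction. Feature 1 is constructed to reveal the label, so Q-learning should learn to buy it before classifying.

```python
import random
from collections import defaultdict

# Actions: 0 = acquire feature 0, 1 = acquire feature 1,
#          2 = predict class 0,   3 = predict class 1.
ACQUIRE_COST, GAMMA, ALPHA, EPS = 0.1, 0.99, 0.05, 0.2

def step(state, action, features, label):
    """One MDP transition: pay to reveal a feature, or classify and stop."""
    if action < 2:                          # acquisition action
        s = list(state)
        s[action] = features[action]
        return tuple(s), -ACQUIRE_COST, False
    correct = (action - 2) == label         # classification action (terminal)
    return state, (1.0 if correct else -1.0), True

random.seed(0)
Q = defaultdict(lambda: [0.0] * 4)
for _ in range(20000):
    label = random.randint(0, 1)
    features = (random.randint(0, 1), label)  # feature 0 is noise,
    state, done = (None, None), False         # feature 1 equals the label
    while not done:
        a = (random.randrange(4) if random.random() < EPS
             else max(range(4), key=lambda i: Q[state][i]))
        nxt, r, done = step(state, a, features, label)
        target = r if done else r + GAMMA * max(Q[nxt])
        Q[state][a] += ALPHA * (target - Q[state][a])
        state = nxt

best_first_action = max(range(4), key=lambda a: Q[(None, None)][a])
```

With nothing observed, the learned greedy rule buys the informative feature (action 1) rather than guessing or buying the noise feature, mirroring how the cited methods trade feature cost against classification error.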
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_29", "@cite_24", "@cite_25", "@cite_17" ], "mid": [ "", "2152588577", "2117598836", "1821462560", "2108599331", "1814353042" ], "abstract": [ "", "Imitation Learning has been shown to be successful in solving many challenging real-world problems. Some recent approaches give strong performance guarantees by training the policy iteratively. However, it is important to note that these guarantees depend on how well the policy we found can imitate the oracle on the training data. When there is a substantial difference between the oracle's ability and the learner's policy space, we may fail to find a policy that has low error on the training set. In such cases, we propose to use a coach that demonstrates easy-to-learn actions for the learner and gradually approaches the oracle. By a reduction of learning by demonstration to online learning, we prove that coaching can yield a lower regret bound than using the oracle. We apply our algorithm to cost-sensitive dynamic feature selection, a hard decision problem that considers a user-specified accuracy-cost trade-off. Experimental results on UCI datasets show that our method outperforms state-of-the-art imitation learning methods in dynamic feature selection and two static feature selection methods.", "Feature computation and exhaustive search have significantly restricted the speed of graph-based dependency parsing. We propose a faster framework of dynamic feature selection, where features are added sequentially as needed, edges are pruned early, and decisions are made online for each sentence. We model this as a sequential decision-making problem and solve it by imitation learning techniques. We test our method on 7 languages. 
Our dynamic parser can achieve accuracies comparable or even superior to parsers using a full set of features, while computing fewer than 30 of the feature templates.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.", "Discriminative methods for learning structured models have enabled wide-spread use of very rich feature representations. However, the computational cost of feature extraction is prohibitive for large-scale or time-sensitive applications, often dominating the cost of inference in the models. Significant efforts have been devoted to sparsity-based model selection to decrease this cost. Such feature selection methods control computation statically and miss the opportunity to fine-tune feature extraction to each input at run-time. We address the key challenge of learning to control fine-grained feature extraction adaptively, exploiting non-homogeneity of the data. 
We propose an architecture that uses a rich feedback loop between extraction and prediction. The run-time control policy is learned using efficient value-function approximation, which adaptively determines the value of information of features at the level of individual variables for each input. We demonstrate significant speedups over state-of-the-art methods on two challenging datasets. For articulated pose estimation in video, we achieve a more accurate state-of-the-art model that is also faster, with similar results on an OCR task.", "We seek decision rules for prediction-time cost reduction, where complete data is available for training, but during prediction-time, each feature can only be acquired for an additional cost. We propose a novel random forest algorithm to minimize prediction error for a user-specified average feature acquisition budget. While random forests yield strong generalization performance, they do not explicitly account for feature costs and furthermore require low correlation among trees, which amplifies costs. Our random forest grows trees with low acquisition cost and high strength based on greedy minimax cost-weighted-impurity splits. Theoretically, we establish near-optimal acquisition cost guarantees for our algorithm. Empirically, on a number of benchmark datasets we demonstrate superior accuracy-cost curves against state-of-the-art prediction-time algorithms." ] }
1705.10924
2618428124
Deep neural network (DNN) based approaches hold significant potential for reinforcement learning (RL) and have already shown remarkable gains over state-of-art methods in a number of applications. The effectiveness of DNN methods can be attributed to leveraging the abundance of supervised data to learn value functions, Q-functions, and policy function approximations without the need for feature engineering. Nevertheless, the deployment of DNN-based predictors with very deep architectures can pose an issue due to computational and other resource constraints at test-time in a number of applications. We propose a novel approach for reducing the average latency by learning a computationally efficient gating function that is capable of recognizing states in a sequential decision process for which policy prescriptions of a shallow network suffices and deeper layers of the DNN have little marginal utility. The overall system is adaptive in that it dynamically switches control actions based on state-estimates in order to reduce average latency without sacrificing terminal performance. We experiment with a number of alternative loss-functions to train gating functions and shallow policies and show that in a number of applications a speed-up of up to almost 5X can be obtained with little loss in performance.
The algorithms proposed in this paper use techniques from imitation learning or Learning from Demonstration (LfD), where an expert's demonstrations or guidance are used to help learn the MDP. One such method is DAGGER @cite_30 , which trains a policy that directly imitates the expert's behavior. Recent works have exploited this idea by training a multi-task reinforcement learning agent @cite_12 or by performing deep Q-learning from demonstrations @cite_9 . These can be viewed as complementary to our work, as we use these tools to reduce latency in policy evaluation. The teacher-student or distillation framework @cite_23 @cite_3 @cite_8 is also related to our approach: a low-cost student policy model learns to approximate the teacher policy model so as to meet a test-time budget. However, the goal there is to learn a better stand-alone student model. In contrast, we use both the low-cost (student) and high-accuracy (teacher) policy models during prediction via a gating function, which learns the limitations of the low-cost (student) model and consults the high-accuracy (teacher) model only when necessary, thereby avoiding accuracy loss.
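A minimal sketch of that gating combination (illustrative only; the toy policies and gate below are assumptions, and in the paper's setting the gate would itself be learned, e.g. from states where the shallow policy's actions disagree with the deep one's):

```python
def gated_policy(state, gate, shallow, deep, threshold=0.5):
    """Route each state at prediction time: use the cheap shallow policy
    when the gate's estimated chance that it errs is low, otherwise pay
    for the deep policy."""
    if gate(state) < threshold:
        return shallow(state), "shallow"
    return deep(state), "deep"

# Hypothetical stand-ins: the shallow policy is only trusted near the
# origin, and the (assumed pre-trained) gate encodes exactly that.
shallow = lambda s: int(s > 0)
deep = lambda s: int(s > 0.2)
gate = lambda s: 0.0 if abs(s) < 1.0 else 1.0

print(gated_policy(0.5, gate, shallow, deep))   # (1, 'shallow'): cheap model suffices
print(gated_policy(-3.0, gate, shallow, deep))  # (0, 'deep'): gate defers to teacher
```

Average latency then depends on how often the gate routes to the shallow model, while terminal performance is protected by the fallback to the deep model.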
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_9", "@cite_3", "@cite_23", "@cite_12" ], "mid": [ "1931877416", "", "", "1690739335", "2253986341", "2174786457" ], "abstract": [ "Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.", "", "", "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. 
Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "Distillation (, 2015) and privileged information (Vapnik & Izmailov, 2015) are two techniques that enable machines to learn from other machines. This paper unifies these two techniques into generalized distillation, a framework to learn from multiple machines and data representations. We provide theoretical and causal insight about the inner workings of generalized distillation, extend it to unsupervised, semisupervised and multitask learning scenarios, and illustrate its efficacy on a variety of numerical simulations on both synthetic and real-world data.", "The ability to act in multiple environments and transfer previous knowledge to new situations can be considered a critical aspect of any intelligent agent. Towards this goal, we define a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains. This method, termed \"Actor-Mimic\", exploits the use of deep reinforcement learning and model compression techniques to train a single policy network that learns how to act in a set of distinct tasks by using the guidance of several expert teachers. We then show that the representations learnt by the deep policy network are capable of generalizing to new tasks with no prior expert guidance, speeding up learning in novel environments. 
Although our method can in general be applied to a wide range of problems, we use Atari games as a testing environment to demonstrate these methods." ] }
1705.10924
2618428124
Deep neural network (DNN) based approaches hold significant potential for reinforcement learning (RL) and have already shown remarkable gains over state-of-art methods in a number of applications. The effectiveness of DNN methods can be attributed to leveraging the abundance of supervised data to learn value functions, Q-functions, and policy function approximations without the need for feature engineering. Nevertheless, the deployment of DNN-based predictors with very deep architectures can pose an issue due to computational and other resource constraints at test-time in a number of applications. We propose a novel approach for reducing the average latency by learning a computationally efficient gating function that is capable of recognizing states in a sequential decision process for which policy prescriptions of a shallow network suffices and deeper layers of the DNN have little marginal utility. The overall system is adaptive in that it dynamically switches control actions based on state-estimates in order to reduce average latency without sacrificing terminal performance. We experiment with a number of alternative loss-functions to train gating functions and shallow policies and show that in a number of applications a speed-up of up to almost 5X can be obtained with little loss in performance.
Finally, our approach draws inspiration from cognitive science and neuroscience. @cite_4 suggests that there are two decision-making systems in the brain, coordinated according to the uncertainty of the decision under a competition rule. @cite_19 implies that switching between two systems with different levels of accuracy, response time, and metabolic cost saves the human energy budget while maintaining moderately good decision-making ability. This is reminiscent of our combination of gating, SNN and DNN to meet budget constraints.
{ "cite_N": [ "@cite_19", "@cite_4" ], "mid": [ "2752099845", "2167362547" ], "abstract": [ "Daniel Kahneman, recipient of the Nobel Prize in Economic Sciences for his seminal work in psychology challenging the rational model of judgment and decision making, is one of the world's most important thinkers. His ideas have had a profound impact on many fields - including business, medicine, and politics - but until now, he has never brought together his many years of research in one book. In \"Thinking, Fast and Slow\", Kahneman takes us on a groundbreaking tour of the mind and explains the two systems that drive the way we think and make choices. One system is fast, intuitive, and emotional; the other is slower, more deliberative, and more logical. Kahneman exposes the extraordinary capabilities - and also the faults and biases - of fast thinking, and reveals the pervasive influence of intuitive impressions on our thoughts and behaviour. The importance of properly framing risks, the effects of cognitive biases on how we view others, the dangers of prediction, the right ways to develop skills, the pros and cons of fear and optimism, the difference between our experience and memory of events, the real components of happiness - each of these can be understood only by knowing how the two systems work together to shape our judgments and decisions. Drawing on a lifetime's experimental experience, Kahneman reveals where we can and cannot trust our intuitions and how we can tap into the benefits of slow thinking. He offers practical and enlightening insights into how choices are made in both our professional and our personal lives-and how we can use different techniques to guard against the mental glitches that often get us into trouble. 
\"Thinking, Fast and Slow\" will transform the way you take decisions and experience the world.", "A broad range of neural and behavioral data suggests that the brain contains multiple systems for behavioral choice, including one associated with prefrontal cortex and another with dorsolateral striatum. However, such a surfeit of control raises an additional choice problem: how to arbitrate between the systems when they disagree. Here, we consider dual-action choice systems from a normative perspective, using the computational theory of reinforcement learning. We identify a key trade-off pitting computational simplicity against the flexible and statistically efficient use of experience. The trade-off is realized in a competition between the dorsolateral striatal and prefrontal systems. We suggest a Bayesian principle of arbitration between them according to uncertainty, so each controller is deployed when it should be most accurate. This provides a unifying account of a wealth of experimental evidence about the factors favoring dominance by either system." ] }
1705.10633
2617739603
Abstract Using the matrix factorization technique in machine learning is very common, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting of the data, the Bayesian Probabilistic Matrix Factorization algorithm (BPMF) has not been widely used on large-scale data because of the prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of the BPMF using Gibbs sampling on shared and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
Apart from Bayesian Probabilistic Matrix Factorization (BPMF) @cite_18 , the most popular algorithms for low-rank matrix factorization are probably alternating least-squares (ALS) @cite_16 and stochastic gradient descent (SGD) @cite_0 .
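The plain SGD scheme mentioned above can be sketched in a few lines. This is an illustrative minimal implementation, not code from any of the cited systems; the function and parameter names are assumptions.

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, rank=2, lr=0.05, reg=0.05, epochs=500, seed=0):
    """Minimize sum (r - u.v)^2 + reg*(|u|^2 + |v|^2) by plain SGD.
    `ratings` is a list of (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            u_old = U[u].copy()
            # One gradient step per observed rating: cheap per iteration,
            # but sensitive to the choice of `lr`, as noted in the text.
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_old - reg * V[i])
    return U, V

# Tiny fully observed 2x2 example.
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 1.0)]
U, V = sgd_mf(data, n_users=2, n_items=2)
rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in data]))
```

On this toy problem the reconstruction error drops well below one rating point after a few hundred epochs, while the regularization keeps the factors from overfitting exactly.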
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_16" ], "mid": [ "2054141820", "2085040216", "1511814458" ], "abstract": [ "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation.", "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering(CF) based on historical records of items that the users have viewed, purchased, or rated. Two major problems that most CF approaches have to contend with are scalability and sparseness of the user profiles. To tackle these issues, in this paper, we describe a CF algorithm alternating-least-squares with weighted-λ-regularization(ALS-WR), which is implemented on a parallel Matlab platform.
We show empirically that the performance of ALS-WR (in terms of root mean squared error(RMSE)) monotonically improves with both the number of features and the number of ALS iterations. We applied the ALS-WR algorithm on a large-scale CF problem, the Netflix Challenge, with 1000 hidden features and obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. In addition, combining with the parallel version of other known methods, we achieved a performance improvement of 5.91 over Netflix's own CineMatch recommendation system. Our method is simple and scales well to very large datasets." ] }
1705.10633
2617739603
Abstract Using the matrix factorization technique in machine learning is very common, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting of the data, the Bayesian Probabilistic Matrix Factorization algorithm (BPMF) has not been widely used on large-scale data because of the prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of the BPMF using Gibbs sampling on shared and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
While a growing number of works have studied parallel implementations of SGD @cite_0 @cite_19 and ALS @cite_16 , less research has dealt with parallelizing the BPMF @cite_14 @cite_15 . Indeed, computing the posterior inference, whose time complexity per iteration is cubic in the rank of the factor matrix ( @math @math @math ), may become exorbitant when the number of users and movies runs into millions. SGD, on the other hand, is computationally less expensive, even if it needs more iterations to reach a good enough prediction, and its performance is sensitive to the choice of the learning rate. For ALS, despite its time complexity per iteration, previous related work @cite_16 showed that it is well suited for parallelization.
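The cubic-in-rank cost mentioned above comes from the per-user (and per-item) conditional update in BPMF's Gibbs sampler, which assembles and factorizes a K x K posterior precision matrix. The sketch below is a simplified illustration: it assumes an isotropic zero-mean prior (lam0 * I), whereas the full model places hyperpriors on the factor mean and precision.

```python
import numpy as np

def sample_user_factor(V_rated, r_user, alpha=2.0, lam0=2.0, rng=None):
    """One simplified Gibbs step for a single user's factor.

    V_rated: (n_rated, K) factors of the items this user rated.
    r_user:  (n_rated,) the corresponding ratings.
    The K x K solve and Cholesky factorization below are the source of the
    O(K^3) per-user, per-iteration cost discussed in the text."""
    rng = rng or np.random.default_rng(0)
    K = V_rated.shape[1]
    precision = lam0 * np.eye(K) + alpha * V_rated.T @ V_rated       # K x K
    mean = np.linalg.solve(precision, alpha * V_rated.T @ r_user)    # O(K^3)
    chol = np.linalg.cholesky(np.linalg.inv(precision))              # O(K^3)
    return mean + chol @ rng.standard_normal(K)

# Draw one posterior sample for a user with 10 rated items and rank K = 4.
V = np.random.default_rng(1).standard_normal((10, 4))
r = V @ np.array([1.0, -0.5, 0.2, 0.0])
u = sample_user_factor(V, r)
```

Since every user and item requires such a solve in every iteration, the total cost scales with (n_users + n_items) * K^3 per sweep, which explains why BPMF becomes expensive when users and items run into millions.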
{ "cite_N": [ "@cite_14", "@cite_0", "@cite_19", "@cite_15", "@cite_16" ], "mid": [ "2139762415", "2054141820", "1980147176", "2101893470", "1511814458" ], "abstract": [ "Abstract Recommendation systems have received great attention for their commercial value in today's online business world. However, most recommendation systems encounter the data sparsity problem and the cold-start problem. To improve recommendation accuracy in this circumstance, additional sources of information about the users and items should be incorporated in recommendation systems. In this paper, we modify the model in Bayesian Probabilistic Matrix Factorization, and propose two recommendation approaches fusing social relations and item contents with user ratings in a novel way. The proposed approach is computationally efficient and can be applied to trust-aware or content-aware recommendation systems with very large dataset. Experimental results on three real world datasets show that our method gets more accurate recommendation results with faster converging speed than other matrix factorization based methods. We also verify our method in cold-start settings, and our method gets more accurate recommendation results than the compared approaches.", "As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.", "We provide a novel algorithm to approximately factor large matrices with millions of rows, millions of columns, and billions of nonzero elements. Our approach rests on stochastic gradient descent (SGD), an iterative stochastic optimization algorithm. 
We first develop a novel \"stratified\" SGD variant (SSGD) that applies to general loss-minimization problems in which the loss function can be expressed as a weighted sum of \"stratum losses.\" We establish sufficient conditions for convergence of SSGD using results from stochastic approximation theory and regenerative process theory. We then specialize SSGD to obtain a new matrix-factorization algorithm, called DSGD, that can be fully distributed and run on web-scale datasets using, e.g., MapReduce. DSGD can handle a wide variety of matrix factorizations. We describe the practical techniques used to optimize performance in our DSGD implementation. Experiments suggest that DSGD converges significantly faster and has better scalability properties than alternative algorithms.", "Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian matrix factorization algorithm using stochastic gradient MCMC. Our algorithm, based on Distributed Stochastic Gradient Langevin Dynamics, can not only match the prediction accuracy of standard MCMC methods like Gibbs sampling, but at the same time is as fast and simple as stochastic gradient descent. In our experiments, we show that our algorithm can achieve the same level of prediction accuracy as Gibbs sampling an order of magnitude faster. We also show that our method reduces the prediction error as fast as distributed stochastic gradient descent, achieving a 4.1 improvement in RMSE for the Netflix dataset and an 1.8 for the Yahoo music dataset.", "Many recommendation systems suggest items to users by utilizing the techniques of collaborative filtering(CF) based on historical records of items that the users have viewed, purchased, or rated. 
Two major problems that most CF approaches have to contend with are scalability and sparseness of the user profiles. To tackle these issues, in this paper, we describe a CF algorithm alternating-least-squares with weighted-λ-regularization(ALS-WR), which is implemented on a parallel Matlab platform. We show empirically that the performance of ALS-WR (in terms of root mean squared error(RMSE)) monotonically improves with both the number of features and the number of ALS iterations. We applied the ALS-WR algorithm on a large-scale CF problem, the Netflix Challenge, with 1000 hidden features and obtained a RMSE score of 0.8985, which is one of the best results based on a pure method. In addition, combining with the parallel version of other known methods, we achieved a performance improvement of 5.91 over Netflix's own CineMatch recommendation system. Our method is simple and scales well to very large datasets." ] }
1705.10633
2617739603
Abstract Using the matrix factorization technique in machine learning is very common, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting of the data, the Bayesian Probabilistic Matrix Factorization algorithm (BPMF) has not been widely used on large-scale data because of the prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of the BPMF using Gibbs sampling on shared and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
In @cite_15 , a distributed Bayesian matrix factorization algorithm using stochastic gradient Markov Chain Monte Carlo (MCMC) is proposed. That work is much closer to ours than the aforementioned ALS and SGD approaches. In the paper, the authors extended Distributed Stochastic Gradient Langevin Dynamics (DSGLD) for more efficient learning. To increase prediction accuracy, they use multiple parallel chains in order to collect samples at a much faster rate and to explore different modes of the parameter space. In this work, the Gibbs sampler is used because it is known for producing the highest-quality samples, even though it is more difficult to parallelize.
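The benefit of running multiple chains can be illustrated with a stand-in for the sampler: C chains running in parallel collect C times the samples in the wall time of one chain, shrinking the Monte Carlo error of the pooled predictive estimate. This sketch is not DSGLD itself; a Gaussian draw stands in for one posterior predictive sample.

```python
import numpy as np

def run_chain(n_samples, mu=3.0, rng=None):
    """Stand-in for one MCMC chain: in BPMF each draw would be a predicted
    rating u_i . v_j under one posterior sample; a Gaussian stands in here."""
    rng = rng or np.random.default_rng()
    return rng.normal(mu, 1.0, size=n_samples)

# Four chains running in parallel collect 4x the samples in the wall time
# of a single chain; pooling them tightens the predictive estimate.
chains = [run_chain(200, rng=np.random.default_rng(s)) for s in range(4)]
pooled = np.concatenate(chains)
pooled_err = abs(pooled.mean() - 3.0)
```

Independently seeded chains also start from different points, which is what lets them explore different posterior modes, as the authors of @cite_15 exploit.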
{ "cite_N": [ "@cite_15" ], "mid": [ "2101893470" ], "abstract": [ "Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian matrix factorization algorithm using stochastic gradient MCMC. Our algorithm, based on Distributed Stochastic Gradient Langevin Dynamics, can not only match the prediction accuracy of standard MCMC methods like Gibbs sampling, but at the same time is as fast and simple as stochastic gradient descent. In our experiments, we show that our algorithm can achieve the same level of prediction accuracy as Gibbs sampling an order of magnitude faster. We also show that our method reduces the prediction error as fast as distributed stochastic gradient descent, achieving a 4.1 improvement in RMSE for the Netflix dataset and an 1.8 for the Yahoo music dataset." ] }
1705.10633
2617739603
Abstract Using the matrix factorization technique in machine learning is very common, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting of the data, the Bayesian Probabilistic Matrix Factorization algorithm (BPMF) has not been widely used on large-scale data because of the prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of the BPMF using Gibbs sampling on shared and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
From a parallel-programming perspective, a master-slave model is considered in @cite_15 . The initial matrix @math is partitioned into as many independent blocks as there are workers. At each iteration, the master picks a block using a block scheduler and sends the corresponding chunks of @math and @math to the block's worker. Upon reception, the worker updates these chunks by running DSGLD using its local block of ratings. Afterwards, the worker sends the chunks back to the master, which then updates its global copy of the matrices @math and @math . Two levels of parallelism are used by the authors as a way of compensating for the low mixing rate of SGLD: a parallel execution of the same sampling step (chain) and different samples in parallel.
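The blocking scheme described above can be sketched as a simple grid partition of the rating triples; the function name and block layout below are illustrative, not taken from the cited implementation.

```python
def grid_blocks(ratings, n_users, n_items, p):
    """Partition (user, item, rating) triples into a p x p grid of blocks.
    Block (a, b) holds the ratings whose user falls in user-stripe a and
    whose item falls in item-stripe b; a worker updating block (a, b) only
    needs the corresponding chunks of U and V."""
    blocks = {(a, b): [] for a in range(p) for b in range(p)}
    for u, i, r in ratings:
        blocks[(u * p // n_users, i * p // n_items)].append((u, i, r))
    return blocks

data = [(0, 0, 5.0), (1, 3, 2.0), (3, 1, 4.0), (2, 2, 1.0)]
blocks = grid_blocks(data, n_users=4, n_items=4, p=2)
# Blocks that share neither a user-stripe nor an item-stripe, such as
# (0, 0) and (1, 1), touch disjoint chunks of U and V, so different
# workers can process them concurrently without conflicting updates.
```

This disjointness is what the block scheduler exploits: it hands out mutually independent blocks so that workers never write to the same chunk of the factor matrices at the same time.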
{ "cite_N": [ "@cite_15" ], "mid": [ "2101893470" ], "abstract": [ "Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian matrix factorization algorithm using stochastic gradient MCMC. Our algorithm, based on Distributed Stochastic Gradient Langevin Dynamics, can not only match the prediction accuracy of standard MCMC methods like Gibbs sampling, but at the same time is as fast and simple as stochastic gradient descent. In our experiments, we show that our algorithm can achieve the same level of prediction accuracy as Gibbs sampling an order of magnitude faster. We also show that our method reduces the prediction error as fast as distributed stochastic gradient descent, achieving a 4.1 improvement in RMSE for the Netflix dataset and an 1.8 for the Yahoo music dataset." ] }
1705.10633
2617739603
Abstract Using the matrix factorization technique in machine learning is very common, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting of the data, the Bayesian Probabilistic Matrix Factorization algorithm (BPMF) has not been widely used on large-scale data because of the prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of the BPMF using Gibbs sampling on shared and distributed architectures. We show that, by using efficient load balancing based on work stealing on a single node, and asynchronous communication in the distributed version, we beat state-of-the-art implementations.
In this work, a PGAS approach is used where the computation is totally decentralized and the matrices are defined as global arrays. In such a decentralized model, no global barrier is needed to update the matrices, nor to synchronize the block distribution scheduling as in @cite_15 . Moreover, no bottleneck is created when the updates of the matrices are exchanged.
{ "cite_N": [ "@cite_15" ], "mid": [ "2101893470" ], "abstract": [ "Despite having various attractive qualities such as high prediction accuracy and the ability to quantify uncertainty and avoid over-fitting, Bayesian Matrix Factorization has not been widely adopted because of the prohibitive cost of inference. In this paper, we propose a scalable distributed Bayesian matrix factorization algorithm using stochastic gradient MCMC. Our algorithm, based on Distributed Stochastic Gradient Langevin Dynamics, can not only match the prediction accuracy of standard MCMC methods like Gibbs sampling, but at the same time is as fast and simple as stochastic gradient descent. In our experiments, we show that our algorithm can achieve the same level of prediction accuracy as Gibbs sampling an order of magnitude faster. We also show that our method reduces the prediction error as fast as distributed stochastic gradient descent, achieving a 4.1 improvement in RMSE for the Netflix dataset and an 1.8 for the Yahoo music dataset." ] }
1705.10861
2618435598
We develop a novel framework for action localization in videos. We propose the Tube Proposal Network (TPN), which can generate generic, class-independent, video-level tubelet proposals in videos. The generated tubelet proposals can be utilized in various video analysis tasks, including recognizing and localizing actions in videos. In particular, we integrate these generic tubelet proposals into a unified temporal deep network for action classification. Compared with other methods, our generic tubelet proposal method is accurate, general, and is fully differentiable under a smoothL1 loss function. We demonstrate the performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101 datasets. Our class-independent TPN outperforms other tubelet generation methods, and our unified temporal deep network achieves state-of-the-art localization results on all three datasets.
Extensive research effort has been devoted to action recognition. Recent surveys @cite_14 @cite_26 summarize methods based on hand-crafted features.
{ "cite_N": [ "@cite_14", "@cite_26" ], "mid": [ "1983705368", "2098339052" ], "abstract": [ "Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.", "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human-computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. 
The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions." ] }
1705.10861
2618435598
We develop a novel framework for action localization in videos. We propose the Tube Proposal Network (TPN), which can generate generic, class-independent, video-level tubelet proposals in videos. The generated tubelet proposals can be utilized in various video analysis tasks, including recognizing and localizing actions in videos. In particular, we integrate these generic tubelet proposals into a unified temporal deep network for action classification. Compared with other methods, our generic tubelet proposal method is accurate, general, and is fully differentiable under a smoothL1 loss function. We demonstrate the performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101 datasets. Our class-independent TPN outperforms other tubelet generation methods, and our unified temporal deep network achieves state-of-the-art localization results on all three datasets.
Seminal work includes @cite_5 , which proposed a template-based method to build models for human action localization in crowded areas. These hand-labeled templates are matched, based on shape and motion features, against over-segmented spatio-temporal clips. Shechtman and Irani @cite_3 proposed a space-time correlation method for actions in video segments, with an action template based on enforced consistency constraints on the local intensity patterns of spatio-temporal tubes. @cite_17 used latent SVM learning to jointly detect and recognize actions in videos based on a figure-centric visual word representation. Van @cite_19 utilize dense trajectory features for region proposals and classification.
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_3", "@cite_17" ], "mid": [ "2175354415", "2137981002", "2154346517", "2131311058" ], "abstract": [ "This paper is on action localization in video with the aid of spatio-temporal proposals. To alleviate the computational expensive segmentation step of existing proposals, we propose bypassing the segmentations completely by generating proposals directly from the dense trajectories used to represent videos during classification. Our Action localization Proposals from dense Trajectories (APT) use an efficient proposal generation algorithm to handle the high number of trajectories in a video. Our spatio-temporal proposals are faster than current methods and outperform the localization and classification accuracy of current proposals on the UCF Sports, UCF 101, and MSR-II video datasets. Corrected version: we fixed a mistake in our UCF-101 ground truth. Numbers are different; conclusions are unchanged", "Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. 
Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.", "We introduce a behavior-based similarity measure which tells us whether two different space-time intensity patterns of two different video segments could have resulted from a similar underlying motion field. This is done directly from the intensity information, without explicitly computing the underlying motions. Such a measure allows us to detect similarity between video segments of differently dressed people performing the same type of activity. It requires no foreground background segmentation, no prior learning of activities, and no motion estimation or tracking. Using this behavior-based similarity measure, we extend the notion of 2-dimensional image correlation into the 3-dimensional space-time volume, thus allowing to correlate dynamic behaviors and actions. Small space-time video segments (small video clips) are \"correlated\" against entire video sequences in all three dimensions (x,y, and t). Peak correlation values correspond to video locations with similar dynamic behaviors. Our approach can detect very complex behaviors in video sequences (e.g., ballet movements, pool dives, running water), even when multiple complex activities occur simultaneously within the field-of-view of the camera.", "In this paper we develop an algorithm for action recognition and localization in videos. The algorithm uses a figure-centric visual word representation. Different from previous approaches it does not require reliable human detection and tracking as input. Instead, the person location is treated as a latent variable that is inferred simultaneously with action recognition. A spatial model for an action is learned in a discriminative fashion under a figure-centric representation. Temporal smoothness over video sequences is also enforced. 
We present results on the UCF-Sports dataset, verifying the effectiveness of our model in situations where detection and tracking of individuals is challenging." ] }
1705.10861
2618435598
We develop a novel framework for action localization in videos. We propose the Tube Proposal Network (TPN), which can generate generic, class-independent, video-level tubelet proposals in videos. The generated tubelet proposals can be utilized in various video analysis tasks, including recognizing and localizing actions in videos. In particular, we integrate these generic tubelet proposals into a unified temporal deep network for action classification. Compared with other methods, our generic tubelet proposal method is accurate, general, and is fully differentiable under a smoothL1 loss function. We demonstrate the performance of our algorithm on the standard UCF-Sports, J-HMDB21, and UCF-101 datasets. Our class-independent TPN outperforms other tubelet generation methods, and our unified temporal deep network achieves state-of-the-art localization results on all three datasets.
Several approaches @cite_10 @cite_0 @cite_16 present interesting improvements in this direction. @cite_10 uses a linking approach based on tracking, with region candidates generated using EdgeBoxes @cite_21 . @cite_0 proposes a better method to fuse the two-stream detections in each frame and smooth path labeling. Peng and Schmid @cite_16 further expand the two-stream structure into four streams by dividing the proposal region in each frame into upper and lower regions. These two methods improve performance significantly compared with @cite_15 , by updating the CNN from AlexNet (used in @cite_15 ) to VGGNet, and by replacing selective search with the region proposal network for more accurate 2-D region proposal generation. Further improvements are possible with class-specific approaches, e.g. @cite_0 obtains a $6 . Although the recent state-of-the-art methods @cite_15 @cite_0 @cite_16 are very effective, they all treat each frame independently in many classification stages, underusing the temporal information present in a video sequence. Different from these methods, we first generate a set of generic class-independent tubelet proposals for each video, which are then classified using a temporal model. Our model can thereby build detailed person-centric models, exploiting both spatial and temporal information for localizing and classifying actions.
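The linking step these pipelines share, chaining per-frame detection boxes into a tube using class scores and spatial overlap, can be illustrated with a greedy sketch; the published methods solve the same linking energy globally with dynamic programming, so the code below is a simplified illustration rather than any cited algorithm.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_tube(frames, lam=1.0):
    """Greedily pick, in each frame, the detection maximizing
    score + lam * IoU with the previous pick. `frames` is a list of
    per-frame lists of (box, score) pairs."""
    tube = [max(frames[0], key=lambda d: d[1])]
    for dets in frames[1:]:
        prev_box = tube[-1][0]
        tube.append(max(dets, key=lambda d: d[1] + lam * iou(d[0], prev_box)))
    return tube

# The overlap term keeps the tube on the same actor even when a far-away
# detection scores higher in isolation.
frames = [
    [((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.5)],
    [((1, 1, 11, 11), 0.4), ((50, 50, 60, 60), 0.6)],
]
tube = link_tube(frames)
```

In the second frame the far box has the higher class score (0.6 vs 0.4), but the overlap bonus with the first pick steers the link to the nearby box, which is exactly the role of the spatial-overlap term in the linking energy.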
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_15", "@cite_16", "@cite_10" ], "mid": [ "7746136", "2484328966", "1923332106", "", "2950966695" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. 
In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. While in the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap, in the second pass, temporal trimming is performed by ensuring label consistency for all constituting detection boxes. We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time. We achieve a huge leap forward in action detection performance and report a 20 and 11 gain in mAP (mean average precision) on UCF-101 and J-HMDB-21 datasets respectively when compared to the state-of-the-art.", "We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.", "", "We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. 
It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15 , 7 and 12 respectively in mAP." ] }
1705.10667
2949383447
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation. Existing adversarial domain adaptation methods may not effectively align different domains of multimodal distributions native in classification problems. In this paper, we present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning that captures the cross-covariance between feature representations and classifier predictions to improve the discriminability, and entropy conditioning that controls the uncertainty of classifier predictions to guarantee the transferability. With theoretical guarantees and a few lines of codes, the approach has exceeded state-of-the-art results on five datasets.
Pioneered by Generative Adversarial Networks (GANs) @cite_1 , adversarial learning has been successfully explored for generative modeling. GANs comprise two networks in a two-player game: a generator that captures the data distribution and a discriminator that distinguishes generated samples from real data. The networks are trained in a minimax paradigm: the generator learns to fool the discriminator, while the discriminator strives not to be fooled. Several difficulties of GANs have been addressed, e.g. improved training @cite_7 @cite_26 and mode collapse @cite_35 @cite_53 @cite_42 , but others remain, e.g. the failure to match two distributions @cite_9 . For adversarial learning in domain adaptation, unconditional variants have been leveraged while conditional ones remain underexplored.
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_7", "@cite_53", "@cite_42", "@cite_1", "@cite_9" ], "mid": [ "2125389028", "2964201867", "", "2963865839", "2548275288", "", "2593729559" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of gen- erative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first sec- tion introduces the problem at hand. The second section is dedicated to studying and proving rigorously the problems including instability and saturation that arize when training generative adversarial networks. The third section examines a prac- tical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.", "", "Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. 
We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.", "Synthesizing high resolution photorealistic images has been a long-standing challenge in machine learning. In this paper we introduce new methods for the improved training of generative adversarial networks (GANs) for image synthesis. We construct a variant of GANs employing label conditioning that results in 128x128 resolution image samples exhibiting global coherence. We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models. These analyses demonstrate that high resolution samples provide class information not present in low resolution samples. Across 1000 ImageNet classes, 128x128 samples are more than twice as discriminable as artificially resized 32x32 samples. In addition, 84.7 of the classes have samples exhibiting diversity comparable to real ImageNet data.", "", "It is shown that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. 
It is also shown that an approximate pure equilibrium exists in the discriminator generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them." ] }
1705.10667
2949383447
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation. Existing adversarial domain adaptation methods may not effectively align different domains of multimodal distributions native in classification problems. In this paper, we present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning that captures the cross-covariance between feature representations and classifier predictions to improve the discriminability, and entropy conditioning that controls the uncertainty of classifier predictions to guarantee the transferability. With theoretical guarantees and a few lines of codes, the approach has exceeded state-of-the-art results on five datasets.
Sharing some of the spirit of conditional GANs @cite_9 , another line of work matches features and classes using separate domain discriminators. Hoffman et al. @cite_2 perform global domain alignment by learning features that deceive the domain discriminator, and category-specific adaptation by minimizing a constrained multiple-instance loss. In particular, the adversarial module for the feature representation is not conditioned on the class information of the class-adaptation module. Chen et al. @cite_41 perform class-wise alignment at the classifier layer; i.e., multiple domain discriminators take as input only the softmax probabilities of the source classifier rather than being conditioned on the class information. Tsai et al. @cite_0 impose two independent domain discriminators on the feature and class layers. These methods do not explore the dependency between features and classes in a unified conditional domain discriminator, which is important for capturing the multimodal structures underlying the data distributions.
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_0", "@cite_2" ], "mid": [ "2963120918", "2593729559", "2963107255", "2562192638" ], "abstract": [ "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.", "It is shown that training of generative adversarial network (GAN) may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. However, generalization does occur for a weaker metric called neural net distance. It is also shown that an approximate pure equilibrium exists in the discriminator generator game for a natural training objective (Wasserstein) when generator capacity and training set sizes are moderate. 
This existence of equilibrium inspires MIX+GAN protocol, which can be combined with any existing GAN training, and empirically shown to improve some of them.", "Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.", "Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. 
Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset." ] }
1705.10667
2949383447
Adversarial learning has been embedded into deep networks to learn disentangled and transferable representations for domain adaptation. Existing adversarial domain adaptation methods may not effectively align different domains of multimodal distributions native in classification problems. In this paper, we present conditional adversarial domain adaptation, a principled framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks (CDANs) are designed with two novel conditioning strategies: multilinear conditioning that captures the cross-covariance between feature representations and classifier predictions to improve the discriminability, and entropy conditioning that controls the uncertainty of classifier predictions to guarantee the transferability. With theoretical guarantees and a few lines of codes, the approach has exceeded state-of-the-art results on five datasets.
This paper extends the conditional adversarial mechanism to enable discriminative and transferable domain adaptation by defining the domain discriminator on the features while conditioning it on the class information. Two novel conditioning strategies are designed to capture the cross-covariance dependency between feature representations and class predictions while controlling the uncertainty of classifier predictions. This differs from aligning the features and classes separately @cite_2 @cite_41 @cite_0 .
{ "cite_N": [ "@cite_41", "@cite_0", "@cite_2" ], "mid": [ "2963120918", "2963107255", "2562192638" ], "abstract": [ "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its timemachine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data.", "Convolutional neural network-based approaches for semantic segmentation rely on supervision with pixel-level ground truth, but may not generalize well to unseen image domains. As the labeling process is tedious and labor intensive, developing algorithms that can adapt source ground truth labels to the target domain is of great interest. In this paper, we propose an adversarial learning method for domain adaptation in the context of semantic segmentation. Considering semantic segmentations as structured outputs that contain spatial similarities between the source and target domains, we adopt adversarial learning in the output space. 
To further enhance the adapted model, we construct a multi-level adversarial network to effectively perform output space domain adaptation at different feature levels. Extensive experiments and ablation study are conducted under various domain adaptation settings, including synthetic-to-real and cross-city scenarios. We show that the proposed method performs favorably against the state-of-the-art methods in terms of accuracy and visual quality.", "Fully convolutional models for dense prediction have proven successful for a wide range of visual tasks. Such models perform well in a supervised setting, but performance can be surprisingly poor under domain shifts that appear mild to a human observer. For example, training on one city and testing on another in a different geographic region and or weather condition may result in significantly degraded performance due to pixel-level distribution shift. In this paper, we introduce the first domain adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems. Our method consists of both global and category specific adaptation techniques. Global domain alignment is performed using a novel semantic segmentation network with fully convolutional domain adversarial learning. This initially adapted space then enables category specific adaptation through a generalization of constrained weak learning, with explicit transfer of the spatial layout from the source to the target domains. Our approach outperforms baselines across different settings on multiple large-scale datasets, including adapting across various real city environments, different synthetic sub-domains, from simulated to real environments, and on a novel large-scale dash-cam dataset." ] }
1705.10586
2618023758
Despite the success of deep learning on many fronts especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation especially for smaller datasets. Deep learning tends to emphasize on sentence level semantics when learning a representation with models like recurrent neural network or recursive neural network, however from the success of TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics by linearly combining the words with attention weights and the sentence-level semantics with BiLSTM and use it on text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by zhang15 across seven different datasets with only 1% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news article in which typically deep learning models surrender.
Similarly, Zhang et al. zhang15 also employ convolutional networks, but on characters instead of words, for text classification. They design two networks for the task: one large and one small. Both have nine layers, including six convolutional layers and three fully-connected layers. Between the three fully-connected layers they insert two dropout layers for regularization. For both the convolution and max-pooling layers, they employ 1-D filters @cite_6 . After each convolution, they apply 1-D max-pooling. Notably, they claim that 1-D max-pooling enables them to train a relatively deep network.
{ "cite_N": [ "@cite_6" ], "mid": [ "2090042335" ], "abstract": [ "Many successful models for scene or object recognition transform low-level descriptors (such as Gabor filter responses, or SIFT descriptors) into richer representations of intermediate complexity. This process can often be broken down into two steps: (1) a coding step, which performs a pointwise transformation of the descriptors into a representation better adapted to the task, and (2) a pooling step, which summarizes the coded features over larger neighborhoods. Several combinations of coding and pooling schemes have been proposed in the literature. The goal of this paper is threefold. We seek to establish the relative importance of each step of mid-level feature extraction through a comprehensive cross evaluation of several types of coding modules (hard and soft vector quantization, sparse coding) and pooling schemes (by taking the average, or the maximum), which obtains state-of-the-art performance or better on several recognition benchmarks. We show how to improve the best performing coding scheme by learning a supervised discriminative dictionary for sparse coding. We provide theoretical and empirical insight into the remarkable performance of max pooling. By teasing apart components shared by modern mid-level feature extractors, our approach aims to facilitate the design of better recognition architectures." ] }
1705.10586
2618023758
Despite the success of deep learning on many fronts especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation especially for smaller datasets. Deep learning tends to emphasize on sentence level semantics when learning a representation with models like recurrent neural network or recursive neural network, however from the success of TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics by linearly combining the words with attention weights and the sentence-level semantics with BiLSTM and use it on text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by zhang15 across seven different datasets with only 1% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news article in which typically deep learning models surrender.
An attention model is also utilized in our model to assign a weight to each word. Usually, attention is used in sequential models @cite_17 @cite_4 @cite_2 @cite_20 . The attention mechanism includes a sensor, an internal state, actions and a reward. At each time step, the sensor captures a glimpse of the input, which is a small part of the entire input. The internal state summarizes the extracted information. Actions decide the glimpse location for the next step, and the reward indicates the benefit of taking an action. In our network, we adopt a simplified attention network as in @cite_16 @cite_21 : we learn the weights over the words directly instead of through a sequence of actions and rewards.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_2", "@cite_16", "@cite_20", "@cite_17" ], "mid": [ "2951527505", "2397291310", "2952020226", "2201092681", "2288995089", "" ], "abstract": [ "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Searching a large database to find a sequence that is most similar to a query can be prohibitively expensive, particularly if individual sequence comparisons involve complex operations such as warping. To achieve scalability, \"pruning\" heuristics are typically employed to minimize the portion of the database that must be searched with more complex matching. We present an approximate pruning technique which involves embedding sequences in a Euclidean space. Sequences are embedded using a convolutional network with a form of attention that integrates over time, trained on matching and non-matching pairs of sequences. 
By using fixed-length embeddings, our pruning method effectively runs in constant time, making it many orders of magnitude faster than full dynamic time warping-based matching for large datasets. We demonstrate our approach on a large-scale musical score-to-audio recording retrieval task.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.", "We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic \"addition\" and \"multiplication\" long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks.", "Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches. 
We present a new, simple model that uses attention to directly pick the answer from the context as opposed to computing the answer using a blended representation of words in the document as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. Ensemble of our models sets new state of the art on all evaluated datasets.", "" ] }
1705.10899
2619069813
Representing symbolic knowledge into a connectionist network is the key element for the integration of scalable learning and sound reasoning. Most of the previous studies focus on discriminative neural networks which unnecessarily require a separation of input output variables. Recent development of generative neural networks such as restricted Boltzmann machines (RBMs) has shown a capability of learning semantic abstractions directly from data, posing a promise for general symbolic learning and reasoning. Previous work on Penalty logic show a link between propositional logic and symmetric connectionist networks, however it is not applicable to RBMs. This paper proposes a novel method to represent propositional formulas into RBMs stack of RBMs where Gibbs sampling can be seen as maximising satisfiability. It also shows a promising use of RBMs to learn symbolic knowledge through maximum likelihood estimation.
In artificial neural networks, symbolic knowledge representation is based on the equivalence between the feed-forward inference of the networks and the modus ponens of logical rules. One of the earliest works is the knowledge-based artificial neural network @cite_20 , which encodes if-then rules in a hierarchy of perceptrons. In another approach @cite_0 , a one-hidden-layer neural network with recurrent connections is proposed to support more complex rules. An extension of this system, called CILP++, uses the bottom clause propositionalisation technique to work with first-order logic @cite_7 . The logic tensor network @cite_15 employs neural embedding to transform symbols into a vector space where logical inference is carried out through matrix and tensor computation.
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_7", "@cite_20" ], "mid": [ "2065204741", "2547729384", "1986767090", "" ], "abstract": [ "Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The “explanation capability” of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method whereby we are able to keep the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. 
The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved.", "The paper introduces real logic: a framework that seamlessly integrates logical deductive reasoning with efficient, data-driven relational learning. Real logic is based on full first order language. Terms are interpreted in n-dimensional feature vectors, while predicates are interpreted in fuzzy sets. In real logic it is possible to formally define the following two tasks: i learning from data in presence of logical constraints, and ii reasoning on formulas exploiting concrete data. We implement real logic in an deep learning architecture, called logic tensor networks, based on Google's @math primitives. The paper concludes with experiments on a simple but representative example of knowledge completion.", "Relational learning can be described as the task of learning first-order logic rules from examples. It has enabled a number of new machine learning applications, e.g. graph mining and link analysis. Inductive Logic Programming (ILP) performs relational learning either directly by manipulating first-order rules or through propositionalization, which translates the relational task into an attribute-value learning task by representing subsets of relations as features. In this paper, we introduce a fast method and system for relational learning based on a novel propositionalization called Bottom Clause Propositionalization (BCP). Bottom clauses are boundaries in the hypothesis search space used by ILP systems Progol and Aleph. Bottom clauses carry semantic meaning and can be mapped directly onto numerical vectors, simplifying the feature extraction process. We have integrated BCP with a well-known neural-symbolic system, C-IL2P, to perform learning from numerical vectors. C-IL2P uses background knowledge in the form of propositional logic programs to build a neural network. 
The integrated system, which we call CILP++, handles first-order logic knowledge and is available for download from Sourceforge. We have evaluated CILP++ on seven ILP datasets, comparing results with Aleph and a well-known propositionalization method, RSD. The results show that CILP++ can achieve accuracy comparable to Aleph, while being generally faster, BCP achieved statistically significant improvement in accuracy in comparison with RSD when running with a neural network, but BCP and RSD perform similarly when running with C4.5. We have also extended CILP++ to include a statistical feature selection method, mRMR, with preliminary results indicating that a reduction of more than 90 of features can be achieved with a small loss of accuracy.", "" ] }
1705.10899
2619069813
Representing symbolic knowledge in a connectionist network is the key element for the integration of scalable learning and sound reasoning. Most previous studies focus on discriminative neural networks, which unnecessarily require a separation of input/output variables. Recent developments in generative neural networks such as restricted Boltzmann machines (RBMs) have shown a capability of learning semantic abstractions directly from data, posing a promise for general symbolic learning and reasoning. Previous work on Penalty logic shows a link between propositional logic and symmetric connectionist networks; however, it is not applicable to RBMs. This paper proposes a novel method to represent propositional formulas in RBMs and stacks of RBMs, where Gibbs sampling can be seen as maximising satisfiability. It also shows a promising use of RBMs to learn symbolic knowledge through maximum likelihood estimation.
Symbolic representation in graphical models has also been widely studied. For example, in a notable work @cite_3 , Markov networks are employed to generalise first-order logic. That work differs from ours in that it combines statistical and logical inference, whereas we show the relation between the former and the latter. Besides the logical knowledge studied here, other works also show the advantage of learning structural knowledge in graphical models, especially for tractable inference @cite_21 @cite_25 .
{ "cite_N": [ "@cite_21", "@cite_25", "@cite_3" ], "mid": [ "2949869425", "2153074847", "1977970897" ], "abstract": [ "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs). SPNs are directed acyclic graphs with variables as leaves, sums and products as internal nodes, and weighted edges. We show that if an SPN is complete and consistent it represents the partition function and all marginals of some graphical model, and give semantics to its nodes. Essentially all tractable graphical models can be cast as SPNs, but SPNs are also strictly more general. We then propose learning algorithms for SPNs, based on backpropagation and EM. Experiments show that inference and learning with SPNs can be both faster and more accurate than with standard deep networks. For example, SPNs perform image completion better than state-of-the-art deep networks for this task. SPNs also have intriguing potential connections to the architecture of the cortex.", "We present a new approach to inference in Bayesian networks, which is based on representing the network using a polynomial and then retrieving answers to probabilistic queries by evaluating and differentiating the polynomial. The network polynomial itself is exponential in size, but we show how it can be computed efficiently using an arithmetic circuit that can be evaluated and differentiated in time and space linear in the circuit size. The proposed framework for inference subsumes one of the most influential methods for inference in Bayesian networks, known as the tree-clustering or jointree method, which provides a deeper understanding of this classical method and lifts its desirable characteristics to a much more general setting. 
We discuss some theoretical and practical implications of this subsumption.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach." ] }
1705.10899
2619069813
Representing symbolic knowledge in a connectionist network is the key element for the integration of scalable learning and sound reasoning. Most previous studies focus on discriminative neural networks, which unnecessarily require a separation of input/output variables. Recent developments in generative neural networks such as restricted Boltzmann machines (RBMs) have shown a capability of learning semantic abstractions directly from data, posing a promise for general symbolic learning and reasoning. Previous work on Penalty logic shows a link between propositional logic and symmetric connectionist networks; however, it is not applicable to RBMs. This paper proposes a novel method to represent propositional formulas in RBMs and stacks of RBMs, where Gibbs sampling can be seen as maximising satisfiability. It also shows a promising use of RBMs to learn symbolic knowledge through maximum likelihood estimation.
Penalty logic is among the earliest attempts to show that arbitrary propositional formulas can be represented in symmetric connectionist networks (SCNs), an approach that can also be applied to RBMs @cite_9 . Penalty logic explains the relation between propositional formulas and SCNs. Penalty logic formulas are defined as a finite set of pairs @math , in which each propositional well-formed formula (WFF) @math is associated with a real value @math called its penalty . A violation rank function @math is defined as the sum of the penalties of the violated formulas. A preferred model is a truth-value assignment @math with minimum total penalty. Applied to classification, for example, to decide the truth-value of a target proposition @math given an assignment @math of the other propositions, one chooses the value of @math that minimises @math . Reasoning with Penalty logic is shown to be equivalent to minimising an energy function in an SCN @cite_9 . This forms the fundamental link between propositional logic programs and the network.
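The violation-rank machinery described above can be made concrete with a minimal brute-force sketch. The function names and the tiny two-formula knowledge base below are illustrative, not taken from the cited work:

```python
from itertools import product

def violation_rank(assignment, weighted_formulas):
    """Sum the penalties c of all formulas f violated by the assignment."""
    return sum(c for c, f in weighted_formulas if not f(assignment))

def preferred_models(variables, weighted_formulas):
    """Brute-force search for the assignments of minimum total penalty."""
    best, models = float("inf"), []
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        r = violation_rank(a, weighted_formulas)
        if r < best:
            best, models = r, [a]
        elif r == best:
            models.append(a)
    return best, models

# Tiny knowledge base {(2, x -> y), (1, x)}: the unique preferred model
# sets both x and y to True, with violation rank 0.
formulas = [(2.0, lambda a: (not a["x"]) or a["y"]),
            (1.0, lambda a: a["x"])]
rank, models = preferred_models(["x", "y"], formulas)
```

Minimising this same quantity is what the equivalent energy minimisation in an SCN performs, which is exactly the link between the logic and the network.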
{ "cite_N": [ "@cite_9" ], "mid": [ "2036368940" ], "abstract": [ "The paper presents a connectionist framework that is capable of representing and learning propositional knowledge. An extended version of propositional calculus is developed and is demonstrated to be useful for nonmonotonic reasoning, dealing with conflicting beliefs and for coping with inconsistency generated by unreliable knowledge sources. Formulas of the extended calculus are proved to be equivalent in a very strong sense to symmetric networks (like Hopfield networks and Boltzmann machines), and efficient algorithms are given for translating back and forth between the two forms of knowledge representation. A fast learning procedure is presented that allows symmetric networks to learn representations of unknown logic formulas by looking at examples. A connectionist inference engine is then sketched whose knowledge is either compiled from a symbolic representation or learned inductively from training examples. Experiments with large scale randomly generated formulas suggest that the parallel local search that is executed by the networks is extremely fast on average. Finally, it is shown that the extended logic can be used as a high-level specification language for connectionist networks, into which several recent symbolic systems may be mapped. The paper demonstrates how a rigorous bridge can be constructed that ties together the (sometimes opposing) connectionist and symbolic approaches." ] }
1705.10899
2619069813
Representing symbolic knowledge in a connectionist network is the key element for the integration of scalable learning and sound reasoning. Most previous studies focus on discriminative neural networks, which unnecessarily require a separation of input/output variables. Recent developments in generative neural networks such as restricted Boltzmann machines (RBMs) have shown a capability of learning semantic abstractions directly from data, posing a promise for general symbolic learning and reasoning. Previous work on Penalty logic shows a link between propositional logic and symmetric connectionist networks; however, it is not applicable to RBMs. This paper proposes a novel method to represent propositional formulas in RBMs and stacks of RBMs, where Gibbs sampling can be seen as maximising satisfiability. It also shows a promising use of RBMs to learn symbolic knowledge through maximum likelihood estimation.
The Penalty logic idea works straightforwardly with dense structures such as higher-order Boltzmann machines, but representing a formula in RBMs is computationally expensive, even though learning in RBMs is easier than in general BMs thanks to their efficient inference mechanism. More importantly, it has been shown that by stacking several RBMs, one on top of another, we can not only extract different levels of abstraction from a domain's data but also achieve better performance @cite_4 @cite_26 . Recently, several attempts have been made to extract and encode symbolic knowledge in RBMs @cite_8 @cite_16 . However, it is not theoretically clear how such knowledge is formally represented in the RBMs.
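For concreteness, the RBM energy function and the block-Gibbs inference underlying the models discussed here can be sketched as follows. This is the generic textbook formulation with illustrative names, not the knowledge encoding of any specific cited method:

```python
import numpy as np

def energy(v, h, W, a, b):
    """RBM energy E(v, h) = -a.v - b.h - v^T W h."""
    return -(a @ v) - (b @ h) - v @ W @ h

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, a, b, rng):
    """One block-Gibbs sweep: sample h given v, then v given h."""
    h = (rng.random(b.size) < sigmoid(b + v @ W)).astype(float)
    v = (rng.random(a.size) < sigmoid(a + W @ h)).astype(float)
    return v, h

# One visible and one hidden unit: E(1, 1) = -(0.5 + 0.5 + 2.0) = -3.0.
W, a, b = np.array([[2.0]]), np.array([0.5]), np.array([0.5])
E = energy(np.array([1.0]), np.array([1.0]), W, a, b)
v1, h1 = gibbs_step(np.array([1.0]), W, a, b, np.random.default_rng(0))
```

Repeated `gibbs_step` sweeps draw samples whose probability concentrates on low-energy configurations, which is why Gibbs sampling can be read as (approximately) maximising satisfiability once formulas are mapped into the energy.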
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_16", "@cite_8" ], "mid": [ "2133257461", "2136922672", "2556838012", "2154125551" ], "abstract": [ "Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \"deep,\" structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\"contour\") features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex \"corner\" features matches well with the results from the Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.", "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. 
The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.", "Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful to the insertion of background knowledge into deep networks, whether it can improve learning performance when it is available, and to the extraction of knowledge from trained deep networks, and whether it can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language—a set of logical rules that we call confidence rules —and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. 
With the use of this method, a deep neural–symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.", "In real-world applications, the effective integration of learning and reasoning in a cognitive agent model is a difficult task. However, such integration may lead to a better understanding, use and construction of more realistic models. Unfortunately, existing models are either oversimplified or require much processing time, which is unsuitable for online learning and reasoning. Currently, controlled environments like training simulators do not effectively integrate learning and reasoning. In particular, higher-order concepts and cognitive abilities have many unknown temporal relations with the data, making it impossible to represent such relationships by hand. We introduce a novel cognitive agent model and architecture for online learning and reasoning that seeks to effectively represent, learn and reason in complex training environments. The agent architecture of the model combines neural learning with symbolic knowledge representation. It is capable of learning new hypotheses from observed data, and infer new beliefs based on these hypotheses. Furthermore, it deals with uncertainty and errors in the data using a Bayesian inference model. The validation of the model on real-time simulations and the results presented here indicate the promise of the approach when performing online learning and reasoning in real-world scenarios, with possible applications in a range of areas." ] }
1705.10899
2619069813
Representing symbolic knowledge in a connectionist network is the key element for the integration of scalable learning and sound reasoning. Most previous studies focus on discriminative neural networks, which unnecessarily require a separation of input/output variables. Recent developments in generative neural networks such as restricted Boltzmann machines (RBMs) have shown a capability of learning semantic abstractions directly from data, posing a promise for general symbolic learning and reasoning. Previous work on Penalty logic shows a link between propositional logic and symmetric connectionist networks; however, it is not applicable to RBMs. This paper proposes a novel method to represent propositional formulas in RBMs and stacks of RBMs, where Gibbs sampling can be seen as maximising satisfiability. It also shows a promising use of RBMs to learn symbolic knowledge through maximum likelihood estimation.
From a statistical perspective, representing a propositional formula is similar to representing a uniform distribution over all assignments that satisfy the formula, i.e. that make the formula hold. In this paper such assignments are referred to as preferred models . Since RBMs are universal approximators, "any discrete distribution can be approximated arbitrarily well" @cite_27 , and therefore we can always find an RBM that represents a uniform distribution over the preferred models. However, while that work applies statistical methods over the preferred models of formulas, resulting in a very large network, our work focuses on logical calculus to transform and convert formulas directly into the energy function of a more compact RBM.
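The target object described above — the uniform distribution over a formula's preferred models — can be tabulated explicitly for small formulas. The sketch below enumerates it directly (function names are illustrative; an actual RBM construction replaces this explicit table with an energy function):

```python
from itertools import product

def satisfying_assignments(variables, formula):
    """All truth assignments (the preferred models) satisfying the predicate."""
    return [dict(zip(variables, v))
            for v in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, v)))]

def uniform_over(models):
    """The uniform distribution that a representing RBM should match."""
    p = 1.0 / len(models)
    return {tuple(sorted(m.items())): p for m in models}

# (x OR y) has three preferred models, each receiving probability 1/3.
models = satisfying_assignments(["x", "y"], lambda a: a["x"] or a["y"])
dist = uniform_over(models)
```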
{ "cite_N": [ "@cite_27" ], "mid": [ "2064630666" ], "abstract": [ "Deep belief networks (DBN) are generative neural network models with many layers of hidden explanatory factors, recently introduced by Hinton, Osindero, and Teh (2006) along with a greedy layer-wise unsupervised learning algorithm. The building block of a DBN is a probabilistic model called a restricted Boltzmann machine (RBM), used to represent one layer of the model. Restricted Boltzmann machines are interesting because inference is easy in them and because they have been successfully used as building blocks for training deeper models. We first prove that adding hidden units yields strictly improved modeling power, while a second theorem shows that RBMs are universal approximators of discrete distributions. We then study the question of whether DBNs with more layers are strictly more powerful in terms of representational power. This suggests a new and less greedy criterion for training RBMs within DBNs." ] }
1705.10659
2950484205
Automatically discovering the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need for jointly learning the underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations, in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, finally providing favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
Tagged visual data structure analysis : Compared with low-level visual features, textual information provides high-level semantic meaning that can help bridge the gap between video features and human cognition. Textual tags have been widely employed alongside visual features to solve a variety of challenging computer vision problems, such as visual recognition @cite_42 , retrieval @cite_54 , and image annotation @cite_51 . In contrast to these supervised methods, we focus on a structurally-constrained learning approach without the need for specific human labelling. Whilst a simple combination of visual features and textual tags may give rise to the difficult heteroscedasticity problem, @cite_64 alternatively seek an optimal combination of similarity measures derived from different data modalities. The fused pairwise similarity can then be utilised for data clustering by existing graph-based clustering algorithms such as spectral clustering @cite_60 . As the interaction between visual appearance and textual tags is modelled not in the raw feature space but on the similarity graphs, the information lost in graph construction cannot be recovered. Also, this model considers no inter-tag correlation.
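The fuse-then-cluster pipeline described above can be sketched as follows, using a plain weighted average for the similarity fusion and a basic normalised spectral bipartition. Both choices are deliberate simplifications for illustration, not the cited algorithms:

```python
import numpy as np

def fuse(similarities, weights=None):
    """Combine per-modality similarity matrices by a (weighted) average."""
    weights = weights or [1.0 / len(similarities)] * len(similarities)
    return sum(w * S for w, S in zip(weights, similarities))

def spectral_bipartition(S):
    """2-way normalised spectral cut: sign of the Fiedler vector."""
    d = S.sum(axis=1)
    L = np.diag(d) - S                        # unnormalised graph Laplacian
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = D_inv_sqrt @ L @ D_inv_sqrt       # symmetric normalised Laplacian
    _, vecs = np.linalg.eigh(L_sym)           # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)       # split on the second eigenvector

# Two modalities agreeing on a two-block structure over six samples
# (samples 0-2 vs samples 3-5).
block = np.ones((3, 3))
S_visual = np.block([[block, 0.05 * block], [0.05 * block, block]])
S_tags = np.block([[0.9 * block, 0.1 * block], [0.1 * block, 0.9 * block]])
labels = spectral_bipartition(fuse([S_visual, S_tags]))
```

Note that any information each modality loses in building its similarity graph is invisible to the clustering step, which is precisely the limitation the paragraph points out.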
{ "cite_N": [ "@cite_64", "@cite_60", "@cite_54", "@cite_42", "@cite_51" ], "mid": [ "", "2165874743", "2141939040", "2113005879", "1877469910" ], "abstract": [ "", "Despite many empirical successes of spectral clustering methods— algorithms that cluster points using eigenvectors of matrices derived from the data—there are several unresolved issues. First. there are a wide variety of algorithms that use the eigenvectors in slightly different ways. Second, many of these algorithms have no proof that they will actually compute a reasonable clustering. In this paper, we present a simple spectral clustering algorithm that can be implemented using a few lines of Matlab. Using tools from matrix perturbation theory, we analyze the algorithm, and give conditions under which it can be expected to do well. We also show surprisingly good experimental results on a number of challenging clustering problems.", "Combining multiple low-level visual features is a proven and effective strategy for a range of computer vision tasks. However, limited attention has been paid to combining such features with information from other modalities, such as audio and videotext, for large scale analysis of web videos. In our work, we rigorously analyze and combine a large set of low-level features that capture appearance, color, motion, audio and audio-visual co-occurrence patterns in videos. We also evaluate the utility of high-level (i.e., semantic) visual information obtained from detecting scene, object, and action concepts. Further, we exploit multimodal information by analyzing available spoken and videotext content using state-of-the-art automatic speech recognition (ASR) and videotext recognition systems. We combine these diverse features using a two-step strategy employing multiple kernel learning (MKL) and late score level fusion methods. 
Based on the TRECVID MED 2011 evaluations for detecting 10 events in a large benchmark set of ∼45000 videos, our system showed the best performance among the 19 international teams.", "Gathering accurate training data for recognizing a set of attributes or tags on images or videos is a challenge. Obtaining labels via manual effort or from weakly-supervised data typically results in noisy training labels. We develop the FlipSVM, a novel algorithm for handling these noisy, structured labels. The FlipSVM models label noise by \"flipping\" labels on training examples. We show empirically that the FlipSVM is effective on images-and-attributes and video tagging datasets.", "Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques." ] }
1705.10659
2950484205
Automatically discovering the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need for jointly learning the underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations, in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, finally providing favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
Alternatively, multi-view embedding methods are also able to jointly learn visual and text data by inferring a latent common subspace, such as multi-view metric learning, Restricted Boltzmann Machines and auto-encoders, visual-semantic embedding @cite_7 , and Canonical Correlation Analysis (CCA) and its variants @cite_4 @cite_0 @cite_32 @cite_69 @cite_61 . Inspired by the huge success of deep neural networks, a few recent works have attempted to combine deep feature learning and CCA to advance multi-view data modelling @cite_18 @cite_3 . However, these methods usually assume that a reasonably large number of tags is available; otherwise the learned subspace may suffer from sub-optimal cross-modal correlation, e.g. in the case of significantly sparse tags. In addition, whilst incomplete tags can be considered a special case of noisy labels, existing noise-tolerant methods @cite_70 @cite_73 @cite_62 are not directly applicable, because they usually handle classification problems where a separate training dataset is required for model building, which is not available in our context.
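As a reference point for the CCA family discussed above, linear CCA can be sketched via an SVD of the whitened cross-covariance. This is the standard textbook formulation; the small ridge term `eps` and the synthetic two-view data are illustrative:

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    """Linear CCA: returns projections for X and Y plus the correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])   # ridge for stability
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T  # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # Singular values of the whitened cross-covariance are the
    # canonical correlations.
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s

# Two synthetic "views" sharing one latent signal z: the first canonical
# correlation should be close to 1, the second small.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 1))])
A, B, corr = cca(X, Y)
```

When tags are sparse, the sample covariances `Cyy` and `Cxy` above become unreliable, which is one way to see why the learned subspace can exhibit sub-optimal cross-modal correlation.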
{ "cite_N": [ "@cite_61", "@cite_69", "@cite_18", "@cite_4", "@cite_62", "@cite_7", "@cite_70", "@cite_32", "@cite_3", "@cite_0", "@cite_73" ], "mid": [ "2071207147", "", "1523385540", "", "", "2123024445", "", "2070753207", "1883346539", "2029163572", "1866072925" ], "abstract": [ "This paper presents a general multi-view feature extraction approach that we call Generalized Multiview Analysis or GMA. GMA has all the desirable properties required for cross-view classification and retrieval: it is supervised, it allows generalization to unseen classes, it is multi-view and kernelizable, it affords an efficient eigenvalue based solution and is applicable to any domain. GMA exploits the fact that most popular supervised and unsupervised feature extraction techniques are the solution of a special form of a quadratic constrained quadratic program (QCQP), which can be solved efficiently as a generalized eigenvalue problem. GMA solves a joint, relaxed QCQP over different feature spaces to obtain a single (non)linear subspace. Intuitively, GMA is a supervised extension of Canonical Correlational Analysis (CCA), which is useful for cross-view classification and retrieval. The proposed approach is general and has the potential to replace CCA whenever classification or retrieval is the purpose and label information is available. We outperform previous approaches for textimage retrieval on Pascal and Wiki text-image data. We report state-of-the-art results for pose and lighting invariant face recognition on the MultiPIE face dataset, significantly outperforming other approaches.", "", "We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. 
It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.", "", "", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. 
Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.", "", "This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.", "We consider learning representations (features) in the setting in which we have access to multiple unlabeled views of the data for representation learning while only one view is available at test time. Previous work on this problem has proposed several techniques based on deep neural networks, typically involving either autoencoder-like networks with a reconstruction objective or paired feedforward networks with a correlation-based objective. 
We analyze several techniques based on prior work, as well as new variants, and compare them experimentally on visual, speech, and language domains. To our knowledge this is the first head-to-head comparison of a variety of such techniques on multiple tasks. We find an advantage for correlation-based representation learning, while the best results on most tasks are obtained with our new variant, deep canonically correlated autoencoders (DCCAE).", "We introduce an approach to image retrieval and auto-tagging that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results better preserve the aspects a human may find most worth mentioning. We evaluate our approach on three datasets using either keyword tags or natural language descriptions, and quantify results with both ground truth parameters as well as direct tests with human subjects. Our results show clear improvements over approaches that either rely on image features alone, or that use words and image features but ignore the implied importance cues. Overall, our work provides a novel way to incorporate high-level human perception of scenes into visual representations for enhanced image search.", "The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. 
In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark." ] }
1705.10659
2950484205
Discovering automatically the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling the machine intelligence for effectively processing the fast-growing amount of multi-media data. However, this is non-trivial due to the need for jointly learning underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlation between textual tags and visual features, and finally providing favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmarking video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
: Random forests have been shown to be effective for many computer vision tasks @cite_5 @cite_31 @cite_40 @cite_71 . Below we review the most closely related random forest variants. @cite_44 presented an Entangled Decision Forest for image segmentation that propagates knowledge across layers, e.g. dependencies between pixels and objects. Recently, @cite_68 proposed a multi-task forest for face analysis that learns different tasks at distinct layers according to the correlations between tasks (e.g. head pose, facial landmarks). All these models are supervised. In contrast, our forest model performs structurally-constrained learning, since we aim to discover semantic data structure using heterogeneous tags that are not target category labels but merely semantic constraints. Furthermore, our model is unique in its capability of handling missing data, which is not considered in @cite_68 @cite_44 . The Constrained Clustering Forest (CC-Forest) @cite_28 @cite_39 is the most closely related to our HML-RF model, in that it is also used for data structure analysis, e.g. measuring data affinity. The advantages of our model over CC-Forest are two-fold: (1) the capability to exploit the tag hierarchy's structural knowledge, and (2) superior effectiveness in tackling missing data, as shown in our experiments (Section ).
{ "cite_N": [ "@cite_28", "@cite_39", "@cite_44", "@cite_40", "@cite_71", "@cite_5", "@cite_31", "@cite_68" ], "mid": [ "", "2138081639", "1568207135", "2003293138", "", "2058224795", "2077989172", "" ], "abstract": [ "", "Many visual surveillance tasks, e.g. video summarisation, is conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. We believe that non-visual data sources such as weather reports and traffic sensory signals can be exploited to complement visual data for video content analysis and summarisation. In this paper, we present a novel unsupervised framework to learn jointly from both visual and independently-drawn non-visual data sources for discovering meaningful latent structure of surveillance video data. In particular, we investigate ways to cope with discrepant dimension and representation whilst associating these heterogeneous data sources, and derive effective mechanism to tolerate with missing and incomplete data from different sources. We show that the proposed multi-source learning framework not only achieves better video content clustering than state-of-the-art methods, but also is capable of accurately inferring missing non-visual semantics from previously-unseen videos. In addition, a comprehensive user study is conducted to validate the quality of video summarisation generated using the proposed multi-source model.", "This work addresses the challenging problem of simultaneously segmenting multiple anatomical structures in highly varied CT scans. We propose the entangled decision forest (EDF) as a new discriminative classifier which augments the state of the art decision forest, resulting in higher prediction accuracy and shortened decision time. Our main contribution is two-fold. 
First, we propose entangling the binary tests applied at each tree node in the forest, such that the test result can depend on the result of tests applied earlier in the same tree and at image points offset from the voxel to be classified. This is demonstrated to improve accuracy and capture long-range semantic context. Second, during training, we propose injecting randomness in a guided way, in which node feature types and parameters are randomly drawn from a learned (non-uniform) distribution. This further improves classification accuracy. We assess our probabilistic anatomy segmentation technique using a labeled database of CT image volumes of 250 different patients from various scan protocols and scanner vendors. In each volume, 12 anatomical structures have been manually segmented. The database comprises highly varied body shapes and sizes, a wide array of pathologies, scan resolutions, and diverse contrast agents. Quantitative comparisons with state of the art algorithms demonstrate both superior test accuracy and computational efficiency.", "While spectral clustering is usually an unsupervised operation, there are circumstances in which we have prior belief that pairs of samples should (or should not) be assigned with the same cluster. Constrained spectral clustering aims to exploit this prior belief as constraint (or weak supervision) to influence the cluster formation so as to obtain a structure more closely resembling human perception. Two important issues remain open: (1) how to propagate sparse constraints effectively, (2) how to handle ill-conditioned noisy constraints generated by imperfect oracles. In this paper we present a unified framework to address the above issues. Specifically, in contrast to existing constrained spectral clustering approaches that blindly rely on all features for constructing the spectral, our approach searches for neighbours driven by discriminative feature selection for more effective constraint diffusion. 
Crucially, we formulate a novel data-driven filtering approach to handle the noisy constraint problem, which has been unrealistically ignored in constrained spectral clustering literature.", "", "This review presents a unified, efficient model of random decision forests which can be applied to a number of machine learning, computer vision, and medical image analysis tasks. Our model extends existing forest-based techniques as it unifies classification, regression, density estimation, manifold learning, semi-supervised learning, and active learning under the same decision forest framework. This gives us the opportunity to write and optimize the core implementation only once, with application to many diverse tasks. The proposed model may be used both in a discriminative or generative way and may be applied to discrete or continuous, labeled or unlabeled data. The main contributions of this review are: (1) Proposing a unified, probabilistic and efficient model for a variety of learning tasks; (2) Demonstrating margin-maximizing properties of classification forests; (3) Discussing probabilistic regression forests in comparison with other nonlinear regression algorithms; (4) Introducing density forests for estimating probability density functions; (5) Proposing an efficient algorithm for sampling from a density forest; (6) Introducing manifold forests for nonlinear dimensionality reduction; (7) Proposing new algorithms for transductive learning and active learning. Finally, we discuss how alternatives such as random ferns and extremely randomized trees stem from our more general forest model. This document is directed at both students who wish to learn the basics of decision forests, as well as researchers interested in the new contributions. It presents both fundamental and novel concepts in a structured way, with many illustrative examples and real-world applications. 
Thorough comparisons with state-of-the-art algorithms such as support vector machines, boosting and Gaussian processes are presented and relative advantages and disadvantages discussed. The many synthetic examples and existing commercial applications demonstrate the validity of the proposed model and its flexibility.", "While clustering is usually an unsupervised operation, there are circumstances where we have access to prior belief that pairs of samples should (or should not) be assigned with the same cluster. Constrained clustering aims to exploit this prior belief as constraint (or weak supervision) to influence the cluster formation so as to obtain a data structure more closely resembling human perception. Two important issues remain open: 1) how to exploit sparse constraints effectively and 2) how to handle ill-conditioned noisy constraints generated by imperfect oracles. In this paper, we present a novel pairwise similarity measure framework to address the above issues. Specifically, in contrast to existing constrained clustering approaches that blindly rely on all features for constraint propagation, our approach searches for neighborhoods driven by discriminative feature selection for more effective constraint diffusion. Crucially, we formulate a novel approach to handling the noisy constraint problem, which has been unrealistically ignored in the constrained clustering literature. Extensive comparative results show that our method is superior to the state-of-the-art constrained clustering approaches and can generally benefit existing pairwise similarity-based data clustering algorithms, such as spectral clustering and affinity propagation.", "" ] }
1705.10609
2620432330
In this work we investigate opportunities offered by telematics and analytics to enable better informed, and more integrated, collaborative management decisions on construction sites. We focus on efficient refuelling of assets across construction sites. More specifically, we develop decision support models that, by leveraging data supplied by different assets, schedule refuelling operations by minimising the distance travelled by the bowser truck as well as fuel shortages. Motivated by a practical case study elicited in the context of a project we recently conducted at the C610 Systemwide Crossrail site, we introduce the Dynamic Bowser Routing Problem. In this problem the decision maker aims to dynamically refuel, by dispatching a bowser truck, a set of assets which consume fuel and whose location changes over time; the goal is to ensure that assets do not run out of fuel and that the bowser covers the minimum possible distance. We investigate deterministic and stochastic variants of this problem. To tackle deterministic variants, we introduce and contrast a bilinear programming model and a mixed-integer linear programming model. To tackle stochastic variants, by employing a new general purpose software library for stochastic modeling, we introduce a complete stochastic dynamic programming model as well as a novel heuristic we named "sample waning." We consider two stochastic variants of the problem in which asset locations or asset fuel consumptions are uncertain. We demonstrate the effectiveness of our approaches in the context of an extensive computational study designed around information and data collected at C610 Systemwide as well as supplied by our project partners.
The origins of the ``Truck Dispatching Problem,'' a generalization of the Traveling Salesman Problem, date back to the seminal work by @cite_0 . Since those early days, a sizeable literature has developed on the so-called Vehicle Routing Problem (VRP), whose aim is to dispatch a fleet of vehicles on a given network to serve a set of customers while meeting a number of constraints. A comprehensive discussion on the VRP can be found in .
{ "cite_N": [ "@cite_0" ], "mid": [ "2111563176" ], "abstract": [ "The paper is concerned with the optimum routing of a fleet of gasoline delivery trucks between a bulk terminal and a large number of service stations supplied by the terminal. The shortest routes between any two points in the system are given and a demand for one or several products is specified for a number of stations within the distribution system. It is desired to find a way to assign stations to trucks in such a manner that station demands are satisfied and total mileage covered by the fleet is a minimum A procedure based on a linear programming formulation is given for obtaining a near optimal solution. The calculations may be readily performed by hand or by an automatic digital computing machine. No practical applications of the method have been made as yet. A number of trial problems have been calculated, however." ] }
1705.10609
2620432330
In this work we investigate opportunities offered by telematics and analytics to enable better informed, and more integrated, collaborative management decisions on construction sites. We focus on efficient refuelling of assets across construction sites. More specifically, we develop decision support models that, by leveraging data supplied by different assets, schedule refuelling operations by minimising the distance travelled by the bowser truck as well as fuel shortages. Motivated by a practical case study elicited in the context of a project we recently conducted at the C610 Systemwide Crossrail site, we introduce the Dynamic Bowser Routing Problem. In this problem the decision maker aims to dynamically refuel, by dispatching a bowser truck, a set of assets which consume fuel and whose location changes over time; the goal is to ensure that assets do not run out of fuel and that the bowser covers the minimum possible distance. We investigate deterministic and stochastic variants of this problem. To tackle deterministic variants, we introduce and contrast a bilinear programming model and a mixed-integer linear programming model. To tackle stochastic variants, by employing a new general purpose software library for stochastic modeling, we introduce a complete stochastic dynamic programming model as well as a novel heuristic we named "sample waning." We consider two stochastic variants of the problem in which asset locations or asset fuel consumptions are uncertain. We demonstrate the effectiveness of our approaches in the context of an extensive computational study designed around information and data collected at C610 Systemwide as well as supplied by our project partners.
Equally fundamental in production economics is the literature on inventory control. Pioneering works in this area were carried out by @cite_6 , who introduced the concept of the ``economic order quantity,'' and @cite_1 , who discussed the first lot-sizing algorithm for a finite-horizon inventory system subject to dynamic demand. Lot-sizing models are surveyed in ; recent developments in stochastic lot sizing are surveyed in .
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2107857629", "2149835216" ], "abstract": [ "(This article originally appeared in Management Science, October 1958, Volume 5, Number 1, pp. 89-96, published by The Institute of Management Sciences.) A forward algorithm for a solution to the following dynamic version of the economic lot size model is given: allowing the possibility of demands for a single item, inventory holding charges, and setup costs to vary over N periods, we desire a minimum total cost inventory management scheme which satisfies known demand in every period. Disjoint planning horizons are shown to be possible which eliminate the necessity of having data for the full N periods.", "Reprinted from Factory, The Magazine of Management, Volume 10, Number 2, February 1913, pp. 135-136, 152. Interest on capital tied up in wages, material and overhead sets a maximum limit to the quantity of parts which can be profitably manufactured at one time; \"set-up\" costs on the job fix the minimum. Experience has shown one manager a way to determine the economical size of lots." ] }
1705.10609
2620432330
In this work we investigate opportunities offered by telematics and analytics to enable better informed, and more integrated, collaborative management decisions on construction sites. We focus on efficient refuelling of assets across construction sites. More specifically, we develop decision support models that, by leveraging data supplied by different assets, schedule refuelling operations by minimising the distance travelled by the bowser truck as well as fuel shortages. Motivated by a practical case study elicited in the context of a project we recently conducted at the C610 Systemwide Crossrail site, we introduce the Dynamic Bowser Routing Problem. In this problem the decision maker aims to dynamically refuel, by dispatching a bowser truck, a set of assets which consume fuel and whose location changes over time; the goal is to ensure that assets do not run out of fuel and that the bowser covers the minimum possible distance. We investigate deterministic and stochastic variants of this problem. To tackle deterministic variants, we introduce and contrast a bilinear programming model and a mixed-integer linear programming model. To tackle stochastic variants, by employing a new general purpose software library for stochastic modeling, we introduce a complete stochastic dynamic programming model as well as a novel heuristic we named "sample waning." We consider two stochastic variants of the problem in which asset locations or asset fuel consumptions are uncertain. We demonstrate the effectiveness of our approaches in the context of an extensive computational study designed around information and data collected at C610 Systemwide as well as supplied by our project partners.
The field of Inventory Routing (IR), whose origins can be traced back to the work of @cite_5 , encompasses problems that combine vehicle routing and inventory management decisions. In IR, optimisation is delegated to a central entity that jointly optimises all decisions. Recent surveys in the area include . The first exact approach to the Inventory Routing Problem (IRP) was proposed by @cite_3 , who considered the single-vehicle case. Computationally efficient approaches to this problem were discussed in . Approaches for the multi-vehicle case include . Research on stochastic IRPs, in which customer demand is modelled as a random variable, includes the seminal work by @cite_4 and a number of more recent contributions .
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_3" ], "mid": [ "2045719928", "2059210397", "2121266360" ], "abstract": [ "For Air Products and Chemicals, Inc., inventory management of industrial gases at customer locations is integrated with vehicle scheduling and dispatching. Their advanced decision support system includes on-line data entry functions, customer usage forecasting, a time distance network with a shortest path algorithm to compute intercustomer travel times and distances, a mathematical optimization module to produce daily delivery schedules, and an interactive schedule change interface. The optimization module uses a sophisticated Lagrangian relaxation algorithm to solve mixed integer programs with up to 800,000 variables and 200,000 constraints to near optimality. The system, first implemented in October, 1981, has been saving between 6 to 10 of operating costs.", "Abstract Inventory-routing problems (IRPs) arise in vendor-managed inventory systems. They require jointly solving a vehicle routing problem and an inventory management problem. Whereas the solutions they yield tend to benefit the vendor and customers, solving IRPs solely based on cost considerations may lead to inconveniences to both parties. These are related to the fleet size and vehicle load, to the frequency of the deliveries, and to the quantities delivered. In order to alleviate these problems, we introduce the concept of consistency in IRP solutions, thus increasing quality of service. We formulate the multi-vehicle IRP, with and without consistency requirements, as mixed integer linear programs, and we propose a matheuristic for their solution. This heuristic applies an adaptive large neighborhood search scheme in which some subproblems are solved exactly. The proposed algorithm generates solutions offering a good compromise between cost and quality. 
We analyze the effect of different inventory policies, routing decisions and delivery sizes.", "We consider an inventory routing problem in discrete time where a supplier has to serve a set of customers over a multiperiod horizon. A capacity constraint for the inventory is given for each customer, and the service cannot cause any stockout situation. Two different replenishment policies are considered: the order-up-to-level and the maximum-level policies. A single vehicle with a given capacity is available. The transportation cost is proportional to the distance traveled, whereas the inventory holding cost is proportional to the level of the inventory at the customers and at the supplier. The objective is the minimization of the sum of the inventory and transportation costs. We present a heuristic that combines a tabu search scheme with ad hoc designed mixed-integer programming models. The effectiveness of the heuristic is proved over a set of benchmark instances for which the optimal solution is known." ] }
1705.10609
2620432330
In this work we investigate opportunities offered by telematics and analytics to enable better informed, and more integrated, collaborative management decisions on construction sites. We focus on efficient refuelling of assets across construction sites. More specifically, we develop decision support models that, by leveraging data supplied by different assets, schedule refuelling operations by minimising the distance travelled by the bowser truck as well as fuel shortages. Motivated by a practical case study elicited in the context of a project we recently conducted at the C610 Systemwide Crossrail site, we introduce the Dynamic Bowser Routing Problem. In this problem the decision maker aims to dynamically refuel, by dispatching a bowser truck, a set of assets which consume fuel and whose location changes over time; the goal is to ensure that assets do not run out of fuel and that the bowser covers the minimum possible distance. We investigate deterministic and stochastic variants of this problem. To tackle deterministic variants, we introduce and contrast a bilinear programming model and a mixed-integer linear programming model. To tackle stochastic variants, by employing a new general purpose software library for stochastic modeling, we introduce a complete stochastic dynamic programming model as well as a novel heuristic we named "sample waning." We consider two stochastic variants of the problem in which asset locations or asset fuel consumptions are uncertain. We demonstrate the effectiveness of our approaches in the context of an extensive computational study designed around information and data collected at C610 Systemwide as well as supplied by our project partners.
Moving our attention to robotics, recent work investigated stochastic collection and replenishment of agents, motivated by use cases in mining and agricultural settings in which a replenishment agent transports a resource from a centralised replenishment point to agents using the resource in the field. They employ Gaussian approximations to quickly calculate the risk-weighted cost of a schedule; a branch and bound search then exploits these predictions to minimise the downtime of the agents. Previous works in this area mainly focused on scheduling the actions of the dedicated replenishment agent from a short-term and deterministic angle. Scenario 2 [see citeulike:14544091, p. 59] presents similarities to our setup; this emphasises the practical relevance of our study. However, our modeling and solution framework has the advantage of relying solely on MILP modeling rather than on ad-hoc algorithms to predict future asset resource levels. Moreover, the discussion in @cite_2 assumes that uncertain parameters and variables are normally distributed, while our approach, based on piecewise linearisation of loss functions, does not require this assumption and can accommodate any distribution.
{ "cite_N": [ "@cite_2" ], "mid": [ "1965784613" ], "abstract": [ "It would be useful if computers could learn from experience and thus automatically improve the efficiency of their own programs during execution. A simple but effective rote-learning facility can be provided within the framework of a suitable programming language." ] }
1705.10843
2618625858
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.
Previous work has relied on specific modifications of the objective function to reach the desired properties. For example, @cite_10 introduce penalties on unrealistic sequences, in the absence of which RL can easily get stuck around local maxima that can be very far from the global maximum reward. Related applications by @cite_15 and @cite_6 apply reinforcement learning to sequence generation in an NLP setting.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_6" ], "mid": [ "2176263492", "2953113026", "1522301498" ], "abstract": [ "Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.", "This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications; 1) generating novel musical melodies, and 2) computational molecular generation. 
For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm." ] }
1705.10843
2618625858
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.
In the field of music generation, @cite_22 built a SeqGAN model employing an efficient representation of multi-channel MIDI to generate polyphonic music. @cite_21 presented FusionGAN, a dual-learning GAN model that can fuse two data distributions. @cite_27 employed deep Q-learning with a cross-entropy reward to optimize the quality of melodies generated by an RNN.
{ "cite_N": [ "@cite_27", "@cite_21", "@cite_22" ], "mid": [ "", "2774755175", "2766651896" ], "abstract": [ "", "FusionGAN is a novel genre fusion framework for music generation that integrates the strengths of generative adversarial networks and dual learning. In particular, the proposed method offers a dual learning extension that can effectively integrate the styles of the given domains. To efficiently quantify the difference among diverse domains and avoid the vanishing gradient issue, FusionGAN provides a Wasserstein based metric to approximate the distance between the target domain and the existing domains. Adopting the Wasserstein distance, a new domain is created by combining the patterns of the existing domains using adversarial learning. Experimental results on public music datasets demonstrated that our approach could effectively merge two genres.", "We propose an application of SeqGAN, generative adversarial networks for discrete sequence generation, for creating polyphonic musical sequences. Instead of monophonic melody generation suggested in the original work, we present an efficient representation of polyphony MIDI file that captures chords and melodies with dynamic timings simultaneously. The network can create sequences that are musically coherent. We also report that careful tuning of reinforcement learning signals of the model is crucial for general application." ] }
1705.10882
2619088528
Deep learning algorithms for connectomics rely upon localized classification, rather than overall morphology. This leads to a high incidence of erroneously merged objects. Humans, by contrast, can easily detect such errors by acquiring intuition for the correct morphology of objects. Biological neurons have complicated and variable shapes, which are challenging to learn, and merge errors take a multitude of different forms. We present an algorithm, MergeNet, that shows 3D ConvNets can, in fact, detect merge errors from high-level neuronal morphology. MergeNet follows unsupervised training and operates across datasets. We demonstrate the performance of MergeNet both on a variety of connectomics data and on a dataset created from merged MNIST images.
There have been numerous recent advances in using neural networks to recognize general three-dimensional objects. Methods include taking 2D projections of the input @cite_4 , combined 2D-3D approaches @cite_8 @cite_7 , and purely 3D networks @cite_33 @cite_36 . Accelerated implementation techniques for 3D networks have been introduced by Budden et al. @cite_15 and by Zlateski, Lee, and Seung @cite_6 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_8", "@cite_36", "@cite_6", "@cite_15" ], "mid": [ "2952789225", "2211722331", "", "2963122731", "", "2591237478", "2952713184" ], "abstract": [ "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. 
In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "", "Segmentation of 3D images is a fundamental problem in biomedical image analysis. Deep learning (DL) approaches have achieved the state-of-the-art segmentation performance. To exploit the 3D contexts using neural networks, known DL segmentation methods, including 3D convolution, 2D convolution on the planes orthogonal to 2D slices, and LSTM in multiple directions, all suffer incompatibility with the highly anisotropic dimensions in common 3D biomedical images. In this paper, we propose a new DL framework for 3D image segmentation, based on a combination of a fully convolutional network (FCN) and a recurrent neural network (RNN), which are responsible for exploiting the intra-slice and inter-slice contexts, respectively. To our best knowledge, this is the first DL framework for 3D image segmentation that explicitly leverages 3D image anisotropism. Evaluating using a dataset from the ISBI Neuronal Structure Segmentation Challenge and in-house image stacks for 3D fungus segmentation, our approach achieves promising results, comparing to the known DL-based 3D segmentation approaches.", "", "Abstract Convolutional networks (ConvNets) have become a popular approach to computer vision. Here we consider the parallelization of ConvNet training, which is computationally costly. Our novel parallel algorithm is based on decomposition into a set of tasks, most of which are convolutions or FFTs. Theoretical analysis suggests that linear speedup with the number of processors is attainable. 
To attain such performance on real shared-memory machines, our algorithm computes convolutions converging on the same node of the network with temporal locality to reduce cache misses, and sums the convergent convolution outputs via an almost wait-free concurrent method to reduce time spent in critical sections. Benchmarking with multi-core CPUs shows speedup roughly equal to the number of physical cores. We also demonstrate 90x speedup on a many-core CPU (Xeon Phi Knights Corner). Our algorithm can be either faster or slower than certain GPU implementations depending on specifics of the network architecture, kernel sizes, and density and size of the output patch.", "Deep convolutional neural networks (ConvNets) of 3-dimensional kernels allow joint modeling of spatiotemporal features. These networks have improved performance of video and volumetric image analysis, but have been limited in size due to the low memory ceiling of GPU hardware. Existing CPU implementations overcome this constraint but are impractically slow. Here we extend and optimize the faster Winograd-class of convolutional algorithms to the @math -dimensional case and specifically for CPU hardware. First, we remove the need to manually hand-craft algorithms by exploiting the relaxed constraints and cheap sparse access of CPU memory. Second, we maximize CPU utilization and multicore scalability by transforming data matrices to be cache-aware, integer multiples of AVX vector widths. Treating 2-dimensional ConvNets as a special (and the least beneficial) case of our approach, we demonstrate a 5 to 25-fold improvement in throughput compared to previous state-of-the-art." ] }
1705.10882
2619088528
Deep learning algorithms for connectomics rely upon localized classification, rather than overall morphology. This leads to a high incidence of erroneously merged objects. Humans, by contrast, can easily detect such errors by acquiring intuition for the correct morphology of objects. Biological neurons have complicated and variable shapes, which are challenging to learn, and merge errors take a multitude of different forms. We present an algorithm, MergeNet, that shows 3D ConvNets can, in fact, detect merge errors from high-level neuronal morphology. MergeNet follows unsupervised training and operates across datasets. We demonstrate the performance of MergeNet both on a variety of connectomics data and on a dataset created from merged MNIST images.
Within the field of connectomics, Maitin-Shepard et al. @cite_12 describe CELIS, a neural network approach for optimizing local features of a segmented image. @cite_32 and @cite_38 present approaches for directly segmenting individual neurons from microscope images, without recourse to membrane prediction and agglomeration algorithms. Deep learning techniques have likewise been used to detect synapses between neurons @cite_10 @cite_11 and to localize voltage measurements in neural circuits @cite_9 . New forms of data are also being leveraged for connectomics @cite_0 @cite_29 , thanks to advances in biochemical engineering.
{ "cite_N": [ "@cite_38", "@cite_9", "@cite_29", "@cite_32", "@cite_0", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2560374178", "2469594461", "", "", "", "2258894610", "1721069559", "2592731197" ], "abstract": [ "The field of connectomics faces unprecedented \"big data\" challenges. To reconstruct neuronal connectivity, automated pixel-level segmentation is required for petabytes of streaming electron microscopy data. Existing algorithms provide relatively good accuracy but are unacceptably slow, and would require years to extract connectivity graphs from even a single cubic millimeter of neural tissue. Here we present a viable real-time solution, a multi-pass pipeline optimized for shared-memory multicore systems, capable of processing data at near the terabyte-per-hour pace of multi-beam electron microscopes. The pipeline makes an initial fast-pass over the data, and then makes a second slow-pass to iteratively correct errors in the output of the fast-pass. We demonstrate the accuracy of a sparse slow-pass reconstruction algorithm and suggest new methods for detecting morphological errors. Our fast-pass approach provided many algorithmic challenges, including the design and implementation of novel shallow convolutional neural nets and the parallelization of watershed and object-merging techniques. We use it to reconstruct, from image stack to skeletons, the full dataset of (463 GB capturing 120,000 cubic microns) in a matter of hours on a single multicore machine rather than the weeks it has taken in the past on much larger distributed systems.", "Calcium imaging is an important technique for monitoring the activity of thousands of neurons simultaneously. As calcium imaging datasets grow in size, automated detection of individual neurons is becoming important. Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. 
Accuracy is superior to the popular PCA ICA method based on precision and recall relative to ground truth annotation by a human expert. These results suggest that convolutional networks are an efficient and flexible tool for the analysis of large-scale calcium imaging data.", "", "", "", "An open challenge problem at the forefront of modern neuroscience is to obtain a comprehensive mapping of the neural pathways that underlie human brain function; an enhanced understanding of the wiring diagram of the brain promises to lead to new breakthroughs in diagnosing and treating neurological disorders. Inferring brain structure from image data, such as that obtained via electron microscopy (EM), entails solving the problem of identifying biological structures in large data volumes. Synapses, which are a key communication structure in the brain, are particularly difficult to detect due to their small size and limited contrast. Prior work in automated synapse detection has relied upon time-intensive biological preparations (post-staining, isotropic slice thicknesses) in order to simplify the problem. This paper presents VESICLE, the first known approach designed for mammalian synapse detection in anisotropic, non-post-stained data. Our methods explicitly leverage biological context, and the results exceed existing synapse detection methods in terms of accuracy and scalability. We provide two different approaches - one a deep learning classifier (VESICLE-CNN) and one a lightweight Random Forest approach (VESICLE-RF) to offer alternatives in the performance-scalability space. Addressing this synapse detection challenge enables the analysis of high-throughput imaging data soon expected to reach petabytes of data, and provide tools for more rapid estimation of brain-graphs. 
Finally, to facilitate community efforts, we developed tools for large-scale object detection, and demonstrated this framework to find @math 50,000 synapses in 60,000 @math (220 GB on disk) of electron microscopy data.", "We introduce a new machine learning approach for image segmentation that uses a neural network to model the conditional energy of a segmentation given an image. Our approach, combinatorial energy learning for image segmentation (CELIS) places a particular emphasis on modeling the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the energy function, and for local optimization of this energy in the space of supervoxel agglomerations. We extensively evaluate our method on a publicly available 3-D microscopy dataset with 25 billion voxels of ground truth data. On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20 as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.", "Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images. Recent studies have successfully demonstrated the use of convolutional neural networks (ConvNets) for segmenting cell membranes to individuate neurons. However, there has been comparatively little success in high-throughput identification of the intercellular synaptic connections required for deriving connectivity graphs. In this study, we take a compositional approach to segmenting synapses, modeling them explicitly as an intercellular cleft co-located with an asymmetric vesicle density along a cell membrane. 
Instead of requiring a deep network to learn all natural combinations of this compositionality, we train lighter networks to model the simpler marginal distributions of membranes, clefts and vesicles from just 100 electron microscopy samples. These feature maps are then combined with simple rules-based heuristics derived from prior biological knowledge. Our approach to synapse detection is both more accurate than previous state-of-the-art (7 higher recall and 5 higher F1-score) and yields a 20-fold speed-up compared to the previous fastest implementations. We demonstrate by reconstructing the first complete, directed connectome from the largest available anisotropic microscopy dataset (245 GB) of mouse somatosensory cortex (S1) in just 9.7 hours on a single shared-memory CPU system. We believe that this work marks an important step toward the goal of a microscope-pace streaming connectomics pipeline." ] }
1705.10882
2619088528
Deep learning algorithms for connectomics rely upon localized classification, rather than overall morphology. This leads to a high incidence of erroneously merged objects. Humans, by contrast, can easily detect such errors by acquiring intuition for the correct morphology of objects. Biological neurons have complicated and variable shapes, which are challenging to learn, and merge errors take a multitude of different forms. We present an algorithm, MergeNet, that shows 3D ConvNets can, in fact, detect merge errors from high-level neuronal morphology. MergeNet follows unsupervised training and operates across datasets. We demonstrate the performance of MergeNet both on a variety of connectomics data and on a dataset created from merged MNIST images.
Many authors cite the frequent problems posed by merge errors (see e.g. @cite_2 ); however, almost no approaches have been proposed for detecting them automatically. @cite_38 suggest a hard-coded heuristic to find "X-junctions", one variety of merge error, by analyzing graph-theoretical representations of neurons (see also @cite_23 ). Recent work including @cite_16 @cite_21 has considered the problem of deep learning on graphs, and Farhoodi, Ramkumar, and Kording @cite_28 use Generative Adversarial Networks (GANs) to generate neuron skeletons. However, such methods have not to date been brought to bear on connectomic reconstruction of neural circuits.
{ "cite_N": [ "@cite_38", "@cite_28", "@cite_21", "@cite_23", "@cite_2", "@cite_16" ], "mid": [ "2560374178", "2951065015", "", "1744057168", "", "2519887557" ], "abstract": [ "The field of connectomics faces unprecedented \"big data\" challenges. To reconstruct neuronal connectivity, automated pixel-level segmentation is required for petabytes of streaming electron microscopy data. Existing algorithms provide relatively good accuracy but are unacceptably slow, and would require years to extract connectivity graphs from even a single cubic millimeter of neural tissue. Here we present a viable real-time solution, a multi-pass pipeline optimized for shared-memory multicore systems, capable of processing data at near the terabyte-per-hour pace of multi-beam electron microscopes. The pipeline makes an initial fast-pass over the data, and then makes a second slow-pass to iteratively correct errors in the output of the fast-pass. We demonstrate the accuracy of a sparse slow-pass reconstruction algorithm and suggest new methods for detecting morphological errors. Our fast-pass approach provided many algorithmic challenges, including the design and implementation of novel shallow convolutional neural nets and the parallelization of watershed and object-merging techniques. We use it to reconstruct, from image stack to skeletons, the full dataset of (463 GB capturing 120,000 cubic microns) in a matter of hours on a single multicore machine rather than the weeks it has taken in the past on much larger distributed systems.", "Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. 
Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.", "", "Mapping the connectivity of neurons in the brain (i.e., connectomics) is a challenging problem due to both the number of connections in even the smallest organisms and the nanometer resolution required to resolve them. Because of this, previous connectomes contain only hundreds of neurons, such as in the C.elegans connectome. Recent technological advances will unlock the mysteries of increasingly large connectomes (or partial connectomes). However, the value of these maps is limited by our ability to reason with this data and understand any underlying motifs. 
To aid connectome analysis, we introduce algorithms to cluster similarly-shaped neurons, where 3D neuronal shapes are represented as skeletons. In particular, we propose a novel location-sensitive clustering algorithm. We show clustering results on neurons reconstructed from the Drosophila medulla that show high-accuracy.", "", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
1705.10479
2618170641
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at this http URL
The method presented here belongs to the field of multi-task inverse reinforcement learning. Examples from this field include @cite_35 and @cite_27 . In @cite_35 , the authors present a Bayesian approach to the problem, while the method in @cite_27 is based on an EM approach that clusters observed demonstrations. Both of these methods show promising results on relatively low-dimensional problems, whereas our approach scales well to higher-dimensional domains due to the representational power of neural networks.
{ "cite_N": [ "@cite_35", "@cite_27" ], "mid": [ "2117675763", "2181849516" ], "abstract": [ "We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonlinear function, while also determining the relevance of each feature to the expert's policy. Our probabilistic algorithm allows complex behaviors to be captured from suboptimal stochastic demonstrations, while automatically balancing the simplicity of the learned reward structure against its consistency with the observed actions.", "In this paper, we apply tools from inverse reinforcement learning (IRL) to the problem of learning from (unlabeled) demonstration trajectories of behavior generated by varying \"intentions\" or objectives. We derive an EM approach that clusters observed trajectories by inferring the objectives for each cluster using any of several possible IRL methods, and then uses the constructed clusters to quickly identify the intent of a trajectory. We show that a natural approach to IRL—a gradient ascent method that modifies reward parameters to maximize the likelihood of the observed trajectories—is successful at quickly identifying unknown reward functions. We demonstrate these ideas in the context of apprenticeship learning by acquiring the preferences of a human driver in a simple highway car simulator." ] }
1705.10479
2618170641
Imitation learning has traditionally been applied to learn a single task from demonstrations thereof. The requirement of structured and isolated demonstrations limits the scalability of imitation learning approaches as they are difficult to apply to real-world scenarios, where robots have to be able to execute a multitude of tasks. In this paper, we propose a multi-modal imitation learning framework that is able to segment and imitate skills from unlabelled and unstructured demonstrations by learning skill segmentation and imitation learning jointly. The extensive simulation results indicate that our method can efficiently separate the demonstrations into individual skills and learn to imitate them using a single multi-modal policy. The video of our experiments is available at this http URL
The works that are most closely related to this paper are @cite_20 , @cite_19 and @cite_10 . In @cite_19 , the authors show a method that is able to learn disentangled representations and apply it to the problem of image generation. In this work, we provide an alternative derivation of our method that extends their work and applies it to multi-modal policies. In @cite_20 , the authors present an imitation learning GAN approach that serves as a basis for the development of our method. We provide an extensive evaluation of the presented approach compared to the work in @cite_20 , which shows that our method, as opposed to @cite_20 , can handle unstructured demonstrations of different skills. A concurrent work @cite_10 introduces a method similar to ours and applies it to detecting driving styles from unlabelled human data.
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_20" ], "mid": [ "2951939904", "2971482891", "2434014514" ], "abstract": [ "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available this https URL", "", "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex behaviors in large, high-dimensional environments." ] }
1705.10432
2618198131
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
Advances in autonomous vehicles in recent years have revealed a portrait of a near future in which all vehicles will be driven by artificially intelligent agents. This emerging technology calls for redesigning the current transportation system, which was built for human drivers, into an intelligent transportation system. One of the interesting topics that arises in intelligent transportation systems is autonomous intersection management (AIM). The authors of @cite_7 proposed a multi-agent AIM system in which vehicles communicate with intersection management agents to reserve a dedicated spatio-temporal trajectory at the intersection.
{ "cite_N": [ "@cite_7" ], "mid": [ "2137514195" ], "abstract": [ "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers' actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology--traffic lights and stop signs. 
Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach." ] }
1705.10432
2618198131
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
In @cite_5 , the authors proposed a self-organizing control framework in which a cooperative multi-agent control scheme is employed in addition to each vehicle's autonomy. They also proposed a priority-level system that determines the right-of-way through intersections based on vehicle characteristics or intersection constraints.
{ "cite_N": [ "@cite_5" ], "mid": [ "2026876041" ], "abstract": [ "Development of in-vehicle computer and sensing technology, along with short-range vehicle-to-vehicle communication has provided technological potential for large-scale deployment of autonomous vehicles. The issue of intersection control for these future driverless vehicles is one of the emerging research issues. Contrary to some of the previous research approaches, this paper is proposing a paradigm shift based upon self-organizing and cooperative control framework. Distributed vehicle intelligence has been used to calculate each vehicle's approaching velocity. The control mechanism has been developed in an agent-based environment. Self-organizing agent's trajectory adjustment bases upon a proposed priority principle. Testing of the system has proved its safety, user comfort, and efficiency functional requirements. Several recommendations for further research are presented." ] }
1705.10432
2618198131
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
An approach has been presented in which Cooperative Adaptive Cruise Control (CACC) systems are leveraged to minimize delays and prevent collisions @cite_3 . In this approach, the intersection controller communicates with the vehicles to recommend the optimal speed profile based on each vehicle's characteristics, motion data, weather conditions and intersection properties. Additionally, an optimization problem is solved that minimizes the total difference between the actual and optimal arrival times at the intersection, subject to conflict-free temporal constraints.
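The scheduling objective described above can be illustrated with a toy greedy sketch; the `HEADWAY` constant, the `schedule` function, and the example arrival times are hypothetical, and the cited work solves a full optimization problem rather than this greedy approximation:

```python
# Toy version of conflict-free arrival scheduling: vehicles on conflicting
# paths must be separated by at least HEADWAY seconds at the intersection.
# Processing vehicles in order of optimal arrival and pushing each one just
# far enough to respect the separation keeps the total deviation small.
HEADWAY = 2.0  # hypothetical minimum separation between conflicting vehicles

def schedule(optimal_times):
    """Return assigned arrival times, one per vehicle, in input order."""
    order = sorted(range(len(optimal_times)), key=lambda i: optimal_times[i])
    assigned = [0.0] * len(optimal_times)
    last = float("-inf")
    for i in order:
        assigned[i] = max(optimal_times[i], last + HEADWAY)
        last = assigned[i]
    return assigned

times = schedule([0.0, 0.5, 5.0])
# The first vehicle keeps its optimum; the second is delayed to 2.0; the
# third is unaffected because it is already far enough behind.
assert times == [0.0, 2.0, 5.0]
total_deviation = sum(a - o for a, o in zip(times, [0.0, 0.5, 5.0]))
assert total_deviation == 1.5
```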
{ "cite_N": [ "@cite_3" ], "mid": [ "2118674895" ], "abstract": [ "Recently several artificial intelligence labs have suggested the use of fully equipped vehicles with the capability of sensing the surrounding environment to enhance roadway safety. As a result, it is anticipated in the future that many vehicles will be autonomous and thus there is a need to optimize the movement of these vehicles. This paper presents a new tool for optimizing the movements of autonomous vehicles through intersections: iCACC. The main concept of the proposed tool is to control vehicle trajectories using Cooperative Adaptive Cruise Control (CACC) systems to avoid collisions and minimize intersection delay. Simulations were executed to compare conventional signal control with iCACC considering two measures of effectiveness - delay and fuel consumption. Savings in delay and fuel consumption in the range of 91 and 82 percent relative to conventional signal control were demonstrated, respectively." ] }
1705.10432
2618198131
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
A decentralized optimal control formulation is proposed in @cite_1 in which the acceleration/deceleration of the vehicles is minimized subject to collision avoidance constraints.
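A minimal sketch of this energy-optimal idea, assuming a double-integrator vehicle model with fixed arrival time and position and free terminal speed: under these assumptions, Pontryagin's minimum principle gives the linear control law below. The function names and numbers are illustrative, not taken from the cited work.

```python
# For a double-integrator vehicle (position p, speed v, control u = accel.),
# minimising the integral of u(t)^2 with fixed arrival time T and arrival
# position L (terminal speed free) yields the closed-form control
#   u(t) = -c * (T - t),   c = 3 * (p0 + v0*T - L) / T**3.
def optimal_control(p0, v0, L, T):
    c = 3.0 * (p0 + v0 * T - L) / T**3
    return lambda t: -c * (T - t)

def simulate(p0, v0, u, T, steps=100000):
    """Integrate the double integrator forward under control u."""
    dt = T / steps
    p, v = p0, v0
    for k in range(steps):
        a = u(k * dt)
        p += v * dt + 0.5 * a * dt * dt
        v += a * dt
    return p, v

# A vehicle 120 m from the intersection at 10 m/s, assigned arrival time 8 s.
u = optimal_control(p0=0.0, v0=10.0, L=120.0, T=8.0)
p_final, _ = simulate(0.0, 10.0, u, 8.0)
assert abs(p_final - 120.0) < 0.1  # reaches the intersection on time
```

Assigning each conflicting vehicle a distinct arrival time T is what turns the collision avoidance constraint into a separation of these individually optimal trajectories.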
{ "cite_N": [ "@cite_1" ], "mid": [ "2255404673" ], "abstract": [ "In earlier work, we addressed the problem of coordinating online an increasing number of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area. The analytical solution, however, did not consider the state and control constraints. In this paper, we present the complete Hamiltonian analysis including state and control constraints. In addition, we present conditions that do not allow the rear-end collision avoidance constraint to become active at any time inside the control zone. The complete analytical solution, when it exists, allows the vehicles to cross the intersection without the use of traffic lights and under the hard constraint of collision avoidance. The effectiveness of the proposed solution is validated through simulation in a single intersection and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time." ] }
1705.10432
2618198131
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
In all the aforementioned works, the AIM problem is formulated for only one intersection, and no global minimum-travel-time objective is considered directly. The approach proposed in @cite_7 has been extended to multi-intersection settings via dynamic traffic assignment and dynamic lane reversal @cite_12 . Their problem formulation is based on intersection arbitration, which is well suited to main roads with a heavy traffic load.
{ "cite_N": [ "@cite_12", "@cite_7" ], "mid": [ "2144229263", "2137514195" ], "abstract": [ "Advances in autonomous vehicles and intelligent transportation systems indicate a rapidly approaching future in which intelligent vehicles will automatically handle the process of driving. However, increasing the efficiency of today's transportation infrastructure will require intelligent traffic control mechanisms that work hand in hand with intelligent vehicles. To this end, Dresner and Stone proposed a new intersection control mechanism called Autonomous Intersection Management (AIM) and showed in simulation that by studying the problem from a multiagent perspective, intersection control can be made more efficient than existing control mechanisms such as traffic signals and stop signs. We extend their study beyond the case of an individual intersection and examine the unique implications and abilities afforded by using AIM-based agents to control a network of interconnected intersections. We examine different navigation policies by which autonomous vehicles can dynamically alter their planned paths, observe an instance of Braess' paradox, and explore the new possibility of dynamically reversing the flow of traffic along lanes in response to minute-by-minute traffic conditions. Studying this multiagent system in simulation, we quantify the substantial improvements in efficiency imparted by these agent-based traffic control methods.", "Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. 
Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible. Current methods of vehicle coordination, which are all designed to work with human drivers, will be outdated. The bottleneck for roadway efficiency will no longer be the drivers, but rather the mechanism by which those drivers' actions are coordinated. While open-road driving is a well-studied and more-or-less-solved problem, urban traffic scenarios, especially intersections, are much more challenging. We believe current methods for controlling traffic, specifically at intersections, will not be able to take advantage of the increased sensitivity and precision of autonomous vehicles as compared to human drivers. In this article, we suggest an alternative mechanism for coordinating the movement of autonomous vehicles through intersections. Drivers and intersections in this mechanism are treated as autonomous agents in a multiagent system. In this multiagent system, intersections use a new reservation-based approach built around a detailed communication protocol, which we also present. We demonstrate in simulation that our new mechanism has the potential to significantly outperform current intersection control technology--traffic lights and stop signs. Because our mechanism can emulate a traffic light or stop sign, it subsumes the most popular current methods of intersection control. This article also presents two extensions to the mechanism. The first extension allows the system to control human-driven vehicles in addition to autonomous vehicles. The second gives priority to emergency vehicles without significant cost to civilian vehicles. The mechanism, including both extensions, is implemented and tested in simulation, and we present experimental results that strongly attest to the efficacy of this approach." ] }
1705.10447
2616960207
Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensemble and feature engineering must be carried out to bridge the two domains. Such procedures are either time consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of Region Proposal Network (RPN)'s top layer feature can be utilized for robust visual tracking. We showed that such property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensemble and any extra treatment on feature maps, our proposed method achieved state-of-the-art results on several large scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
Modeling object appearance is the key to many tracking models. To achieve this, generative methods were proposed to maintain the target appearance model @cite_26 @cite_15 @cite_19 . Discriminative methods, on the other hand, view the problem differently and aim to build classifiers that distinguish targets from background @cite_29 @cite_22 @cite_10 . Our proposed method can be categorized as a discriminative method. Previous discriminative methods often used hand-crafted features, such as templates, histogram features, and Haar-like features, to build classifiers. Such features are not robust enough to handle large appearance variations, such as occlusion and illumination change, or video quality degradation, such as focal blur and low resolution.
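Haar-like features of the kind mentioned above are cheap to evaluate via an integral image; the following pure-Python sketch shows the idea (the function names and the choice of a two-rectangle feature are illustrative, not tied to any particular tracker):

```python
# Minimal sketch of a Haar-like feature evaluated with an integral image,
# the kind of hand-crafted feature early discriminative trackers fed to
# their classifiers.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature (a vertical edge detector)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# A 4x4 patch whose left half is bright and right half is dark produces a
# strongly positive response: 8 * 9 - 8 * 1 = 64.
img = [[9, 9, 1, 1]] * 4
ii = integral_image(img)
assert haar_two_rect(ii, 0, 0, 4, 4) == 64
```

Each feature costs a constant number of lookups regardless of rectangle size, which is why large pools of such features were practical for online classifiers.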
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_29", "@cite_19", "@cite_15", "@cite_10" ], "mid": [ "2060814785", "2098941887", "2109579504", "2115609520", "2162383208", "" ], "abstract": [ "Recently sparse representation has been applied to visual tracker by modeling the target appearance using a sparse approximation over a template set, which leads to the so-called L1 trackers as it needs to solve an l 1 norm related minimization problem for many times. While these L1 trackers showed impressive tracking accuracies, they are very computationally demanding and the speed bottleneck is the solver to l 1 norm minimizations. This paper aims at developing an L1 tracker that not only runs in real time but also enjoys better robustness than other L1 trackers. In our proposed L1 tracker, a new l 1 norm related minimization model is proposed to improve the tracking accuracy by adding an l 1 norm regularization on the coefficients associated with the trivial templates. Moreover, based on the accelerated proximal gradient approach, a very fast numerical solver is developed to solve the resulting l 1 norm related minimization problem with guaranteed quadratic convergence. The great running time efficiency and tracking accuracy of the proposed tracker is validated with a comprehensive evaluation involving eight challenging sequences and five alternative state-of-the-art trackers.", "Adaptive tracking-by-detection methods are widely used in computer vision for tracking arbitrary objects. Current approaches treat the tracking problem as a classification task and use online learning techniques to update the object model. However, for these updates to happen one needs to convert the estimated object position into a set of labelled training examples, and it is not clear how best to perform this intermediate step. Furthermore, the objective for the classifier (label prediction) is not explicitly coupled to the objective for the tracker (accurate estimation of object position). 
In this paper, we present a framework for adaptive visual object tracking based on structured output prediction. By explicitly allowing the output space to express the needs of the tracker, we are able to avoid the need for an intermediate classification step. Our method uses a kernelized structured output support vector machine (SVM), which is learned online to provide adaptive tracking. To allow for real-time application, we introduce a budgeting mechanism which prevents the unbounded growth in the number of support vectors which would otherwise occur during tracking. Experimentally, we show that our algorithm is able to outperform state-of-the-art trackers on various benchmark videos. Additionally, we show that we can easily incorporate additional features and kernels into our framework, which results in increased performance.", "In this paper, we address the problem of tracking an object in a video given its location in the first frame and no other information. Recently, a class of tracking techniques called “tracking by detection” has been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrade the classifier and can cause drift. In this paper, we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems and can therefore lead to a more robust tracker with fewer parameter tweaks. We propose a novel online MIL algorithm for object tracking that achieves superior results with real-time performance. 
We present thorough experimental results (both qualitative and quantitative) on a number of challenging video clips.", "Visual features are commonly modeled with probability density functions in computer vision problems, but current methods such as a mixture of Gaussians and kernel density estimation suffer from either the lack of flexibility by fixing or limiting the number of Gaussian components in the mixture or large memory requirement by maintaining a nonparametric representation of the density. These problems are aggravated in real-time computer vision applications since density functions are required to be updated as new data becomes available. We present a novel kernel density approximation technique based on the mean-shift mode finding algorithm and describe an efficient method to sequentially propagate the density modes over time. Although the proposed density representation is memory efficient, which is typical for mixture densities, it inherits the flexibility of nonparametric methods by allowing the number of components to be variable. The accuracy and compactness of the sequential kernel density approximation technique is illustrated by both simulations and experiments. Sequential kernel density approximation is applied to online target appearance modeling for visual tracking, and its performance is demonstrated on a variety of videos.", "Sparse representation has been applied to visual tracking by finding the best candidate with minimal reconstruction error using target templates. However most sparse representation based trackers only consider the holistic representation and do not make full use of the sparse coefficients to discriminate between the target and the background, and hence may fail with more possibility when there is similar object or occlusion in the scene. In this paper we develop a simple yet robust tracking method based on the structural local sparse appearance model. 
This representation exploits both partial information and spatial information of the target based on a novel alignment-pooling method. The similarity obtained by pooling across the local patches helps not only locate the target more accurately but also handle occlusion. In addition, we employ a template update strategy which combines incremental subspace learning and sparse representation. This strategy adapts the template to the appearance change of the target with less possibility of drifting and reduces the influence of the occluded target template as well. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.", "" ] }
1705.10447
2616960207
Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensemble and feature engineering must be carried out to bridge the two domains. Such procedures are either time consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of Region Proposal Network (RPN)'s top layer feature can be utilized for robust visual tracking. We showed that such property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensemble and any extra treatment on feature maps, our proposed method achieved state-of-the-art results on several large scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
However, image classification and tracking are two tasks that differ drastically in their objectives. Because the feature extractors in CNNs are shaped by the objective function's error derivatives, CNNs trained for image classification are not guaranteed to provide features relevant to tracking. Therefore, additional procedures must be carried out to bridge this gap. In @cite_1 and @cite_14 , the authors used an ensemble of CNN models to make the resulting model generalize well. Despite their effectiveness, ensemble methods are usually computationally heavy, because multiple CNNs need to be updated during online tracking. Another useful strategy is cross-layer feature selection and aggregation @cite_16 . The issue with this method, however, is that it is very hard to hand-pick the group of lower-level feature maps that consistently provides relevant features across scenes and domains.
{ "cite_N": [ "@cite_14", "@cite_1", "@cite_16" ], "mid": [ "2069332137", "2513005088", "" ], "abstract": [ "Defining hand-crafted feature representations needs expert knowledge, requires timeconsuming manual adjustments, and besides, it is arguably one of the limiting factors of object tracking. In this paper, we propose a novel solution to automatically relearn the most useful feature representations during the tracking process in order to accurately adapt appearance changes, pose and scale variations while preventing from drift and tracking failures. We employ a candidate pool of multiple Convolutional Neural Networks (CNNs) as a data-driven model of different instances of the target object. Individually, each CNN maintains a specific set of kernels that favourably discriminate object patches from their surrounding background using all available low-level cues. These kernels are updated in an online manner at each frame after being trained with just one instance at the initialization of the corresponding CNN. Given a frame, the most promising CNNs in the pool are selected to evaluate the hypothesises for the target object. The hypothesis with the highest score is assigned as the current detection window and the selected models are retrained using a warm-start back-propagation which optimizes a structural loss function. In addition to the model-free tracker, we introduce a class-specific version of the proposed method that is tailored for tracking of a particular object class such as human faces. Our experiments on a large selection of videos from the recent benchmarks demonstrate that our method outperforms the existing state-of-the-art algorithms and rarely loses the track of the target object.", "We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. 
The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques in challenging datasets such as online tracking benchmark and visual object tracking challenge.", "" ] }
1705.10195
2619212641
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant @math , detecting @math -paths and trees on @math nodes can be done in constantly many rounds, and @math -cycles in @math rounds. -- On @math -degenerate graphs, cliques and @math -cycles can be enumerated in @math rounds, and @math -cycles in @math rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for @math -degenerate graphs can be improved to optimal complexity @math and @math , respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
For deterministic subgraph detection in the model, the only prior works we are aware of are the @math -round algorithm for @math -cycle detection by @cite_24 , and the independent discovery of the constant-round path and tree detection algorithms @cite_23 @cite_18 . In the model, deterministic subgraph detection algorithms were given by @cite_5 and Censor- @cite_6 ; the latter is particularly noteworthy from our perspective, as it used techniques from centralised fixed-parameter algorithmics to obtain fast cycle detection algorithms for cycles of arbitrary length.
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_24", "@cite_23", "@cite_5" ], "mid": [ "", "2950813619", "2107805020", "2626796510", "2949944845" ], "abstract": [ "", "In this work, we use algebraic methods for studying distance computation and subgraph detection tasks in the congested clique model. Specifically, we adapt parallel matrix multiplication implementations to the congested clique, obtaining an @math round matrix multiplication algorithm, where @math is the exponent of matrix multiplication. In conjunction with known techniques from centralised algorithmics, this gives significant improvements over previous best upper bounds in the congested clique model. The highlight results include: -- triangle and 4-cycle counting in @math rounds, improving upon the @math triangle detection algorithm of [DISC 2012], -- a @math -approximation of all-pairs shortest paths in @math rounds, improving upon the @math -round @math -approximation algorithm of Nanongkai [STOC 2014], and -- computing the girth in @math rounds, which is the first non-trivial solution in this model. In addition, we present a novel constant-round combinatorial algorithm for detecting 4-cycles.", "We study the computation power of the congested clique, a model of distributed computation where n players communicate with each other over a complete network in order to compute some function of their inputs. The number of bits that can be sent on any edge in a round is bounded by a parameter b We consider two versions of the model: in the first, the players communicate by unicast, allowing them to send a different message on each of their links in one round; in the second, the players communicate by broadcast, sending one message to all their neighbors. It is known that the unicast version of the model is quite powerful; to date, no lower bounds for this model are known. 
In this paper we provide a partial explanation by showing that the unicast congested clique can simulate powerful classes of bounded-depth circuits, implying that even slightly super-constant lower bounds for the congested clique would give new lower bounds in circuit complexity. Moreover, under a widely-believed conjecture on matrix multiplication, the triangle detection problem, studied in [8], can be solved in O(ne) time for any e > 0. The broadcast version of the congested clique is the well-known multi-party shared-blackboard model of communication complexity (with number-in-hand input). This version is more amenable to lower bounds, and in this paper we show that the subgraph detection problem studied in [8] requires polynomially many rounds for several classes of subgraphs. We also give upper bounds for the subgraph detection problem, and relate the hardness of triangle detection in the broadcast congested clique to the communication complexity of set disjointness in the 3-party number-on-forehead model.", "In the standard CONGEST model for distributed network computing, it is known that \"global\" tasks such as minimum spanning tree, diameter, and all-pairs shortest paths, consume large bandwidth, for their running-time is @math rounds in @math -node networks with constant diameter. Surprisingly, \"local\" tasks such as detecting the presence of a 4-cycle as a subgraph also requires @math rounds, even using randomized algorithms, and the best known upper bound for detecting the presence of a 3-cycle is @math rounds. The objective of this paper is to better understand the landscape of such subgraph detection tasks. We show that, in contrast to , which are hard to detect in the CONGEST model, there exists a deterministic algorithm for detecting the presence of a subgraph isomorphic to @math running in a number of rounds, for every tree @math . 
Our algorithm provides a distributed implementation of a combinatorial technique due to Erd o for sparsening the set of partial solutions kept by the nodes at each round. Our result has important consequences to , i.e., to randomized algorithms whose aim is to distinguish between graphs satisfying a property, and graphs far from satisfying that property. In particular, we get that, for every graph pattern @math composed of an edge and a tree connected in an arbitrary manner, there exists a distributed testing algorithm for @math -freeness, performing in a constant number of rounds. Although the class of graph patterns @math formed by a tree and an edge connected arbitrarily may look artificial, all previous results of the literature concerning testing @math -freeness for classical patterns such as cycles and cliques can be viewed as direct consequences of our result, while our algorithm enables testing more complex patterns.", "Let G = (V,E) be an n-vertex graph and M_d a d-vertex graph, for some constant d. Is M_d a subgraph of G? We consider this problem in a model where all n processes are connected to all other processes, and each message contains up to O(log n) bits. A simple deterministic algorithm that requires O(n^((d-2) d) log n) communication rounds is presented. For the special case that M_d is a triangle, we present a probabilistic algorithm that requires an expected O(ceil(n^(1 3) (t^(2 3) + 1))) rounds of communication, where t is the number of triangles in the graph, and O(min n^(1 3) log^(2 3) n (t^(2 3) + 1), n^(1 3) ) with high probability. We also present deterministic algorithms specially suited for sparse graphs. In any graph of maximum degree Delta, we can test for arbitrary subgraphs of diameter D in O(ceil(Delta^(D+1) n)) rounds. For triangles, we devise an algorithm featuring a round complexity of O(A^2 n + log_(2+n A^2) n), where A denotes the arboricity of G." ] }
1705.10195
2619212641
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant @math , detecting @math -paths and trees on @math nodes can be done in constantly many rounds, and @math -cycles in @math rounds. -- On @math -degenerate graphs, cliques and @math -cycles can be enumerated in @math rounds, and @math -cycles in @math rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for @math -degenerate graphs can be improved to optimal complexity @math and @math , respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
As noted before, @cite_24 have studied lower bounds for cycle detection in the model. Specifically, their lower bound for @math -cycle detection is @math rounds, where @math is the Turán number for cycles, that is, the maximum number of edges in a @math -cycle-free graph with @math nodes. This lower bound is @math for odd cycles, and @math for @math -cycles. However, for longer even cycles, this can give at most @math due to known bounds for Turán numbers, and matching bounds on Turán numbers are only known for @math and @math ; see e.g. Pikhurko @cite_8 and references therein.
{ "cite_N": [ "@cite_24", "@cite_8" ], "mid": [ "2107805020", "2019898392" ], "abstract": [ "We study the computation power of the congested clique, a model of distributed computation where n players communicate with each other over a complete network in order to compute some function of their inputs. The number of bits that can be sent on any edge in a round is bounded by a parameter b. We consider two versions of the model: in the first, the players communicate by unicast, allowing them to send a different message on each of their links in one round; in the second, the players communicate by broadcast, sending one message to all their neighbors. It is known that the unicast version of the model is quite powerful; to date, no lower bounds for this model are known. In this paper we provide a partial explanation by showing that the unicast congested clique can simulate powerful classes of bounded-depth circuits, implying that even slightly super-constant lower bounds for the congested clique would give new lower bounds in circuit complexity. Moreover, under a widely-believed conjecture on matrix multiplication, the triangle detection problem, studied in [8], can be solved in O(n^ε) time for any ε > 0. The broadcast version of the congested clique is the well-known multi-party shared-blackboard model of communication complexity (with number-in-hand input). This version is more amenable to lower bounds, and in this paper we show that the subgraph detection problem studied in [8] requires polynomially many rounds for several classes of subgraphs. We also give upper bounds for the subgraph detection problem, and relate the hardness of triangle detection in the broadcast congested clique to the communication complexity of set disjointness in the 3-party number-on-forehead model.", "The Turán function ex(n, F) is the maximum number of edges in an F-free graph on n vertices. 
The question of estimating this function for F = C_2k, the cycle of length 2k, is one of the central open questions in this area that goes back to the 1930s. We prove that ex(n, C_2k) ≤ (k − 1) n^(1+1/k) + 16(k − 1)n, improving the previously best known general upper bound of Verstraëte [Combin. Probab. Computing 9 (2000), 369–373] by a factor 8 + o(1) when n ≫ k." ] }
1705.10195
2619212641
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant @math , detecting @math -paths and trees on @math nodes can be done in constantly many rounds, and @math -cycles in @math rounds. -- On @math -degenerate graphs, cliques and @math -cycles can be enumerated in @math rounds, and @math -cycles in @math rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for @math -degenerate graphs can be improved to optimal complexity @math and @math , respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
To our knowledge, no other subgraph detection lower bounds are known in the model. In particular, proving lower bounds for triangle detection seems to be a particularly difficult challenge. However, for the broadcast congested clique model, @cite_24 give an @math round lower bound, which also applies to the broadcast model. Moreover, for triangle enumeration, lower bounds are known, also in the stronger congested clique model @cite_12 @cite_25 .
{ "cite_N": [ "@cite_24", "@cite_25", "@cite_12" ], "mid": [ "2107805020", "2281539709", "" ], "abstract": [ "We study the computation power of the congested clique, a model of distributed computation where n players communicate with each other over a complete network in order to compute some function of their inputs. The number of bits that can be sent on any edge in a round is bounded by a parameter b. We consider two versions of the model: in the first, the players communicate by unicast, allowing them to send a different message on each of their links in one round; in the second, the players communicate by broadcast, sending one message to all their neighbors. It is known that the unicast version of the model is quite powerful; to date, no lower bounds for this model are known. In this paper we provide a partial explanation by showing that the unicast congested clique can simulate powerful classes of bounded-depth circuits, implying that even slightly super-constant lower bounds for the congested clique would give new lower bounds in circuit complexity. Moreover, under a widely-believed conjecture on matrix multiplication, the triangle detection problem, studied in [8], can be solved in O(n^ε) time for any ε > 0. The broadcast version of the congested clique is the well-known multi-party shared-blackboard model of communication complexity (with number-in-hand input). This version is more amenable to lower bounds, and in this paper we show that the subgraph detection problem studied in [8] requires polynomially many rounds for several classes of subgraphs. 
We also give upper bounds for the subgraph detection problem, and relate the hardness of triangle detection in the broadcast congested clique to the communication complexity of set disjointness in the 3-party number-on-forehead model.", "Motivated by the need to understand the algorithmic foundations of distributed large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where @math machines jointly perform computations on graphs with @math nodes (typically, @math ). The input graph is assumed to be initially randomly partitioned among the @math machines. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. We present (almost) tight bounds for the round complexity of two fundamental graph problems, namely PageRank computation and triangle enumeration. Our tight lower bounds, a main contribution of the paper, are established through an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines for correctly solving a problem. Our approach is generic and might be useful in showing lower bounds in the context of similar problems and similar models. We show a lower bound of @math rounds for computing the PageRank. (Notation @math hides a @math factor.) We also present a simple distributed algorithm that computes the PageRank of all the nodes of a graph in @math rounds (notation @math hides a @math factor and an additive @math term). For triangle enumeration, we show a lower bound of @math rounds, where @math is the number of edges of the graph. Our result implies a lower bound of @math for the congested clique, which is tight up to logarithmic factors. We also present a distributed algorithm that enumerates all the triangles of a graph in @math rounds.", "" ] }
1705.10195
2619212641
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant @math , detecting @math -paths and trees on @math nodes can be done in constantly many rounds, and @math -cycles in @math rounds. -- On @math -degenerate graphs, cliques and @math -cycles can be enumerated in @math rounds, and @math -cycles in @math rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for @math -degenerate graphs can be improved to optimal complexity @math and @math , respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
Property testing of @math -freeness in the model is another question that has received a lot of attention lately. In this setting, an algorithm has to correctly decide with probability at least @math if the input graph is (a) @math -free---that is, does not contain a subgraph isomorphic to @math ---or (b) @math -away from being @math -free; in the intermediate case, the algorithm can perform arbitrarily. See e.g. Censor-Hillel et al. @cite_0 for complete definitions.
{ "cite_N": [ "@cite_0" ], "mid": [ "2952011438" ], "abstract": [ "We initiate a thorough study of distributed property testing -- producing algorithms for the approximation problems of property testing in the CONGEST model. In particular, for the so-called testing model we emulate sequential tests for nearly all graph properties having @math -sided tests, while in the and models we obtain faster tests for triangle-freeness and bipartiteness respectively. In most cases, aided by parallelism, the distributed algorithms have a much shorter running time as compared to their counterparts from the sequential querying model of traditional property testing. The simplest property testing algorithms allow a relatively smooth transitioning to the distributed model. For the more complex tasks we develop new machinery that is of independent interest. This includes a method for distributed maintenance of multiple random walks." ] }
1705.10195
2619212641
We present simple deterministic algorithms for subgraph finding and enumeration in the broadcast CONGEST model of distributed computation: -- For any constant @math , detecting @math -paths and trees on @math nodes can be done in constantly many rounds, and @math -cycles in @math rounds. -- On @math -degenerate graphs, cliques and @math -cycles can be enumerated in @math rounds, and @math -cycles in @math rounds. In many cases, these bounds are tight up to logarithmic factors. Moreover, we show that the algorithms for @math -degenerate graphs can be improved to optimal complexity @math and @math , respectively, in the supported CONGEST model, which can be seen as an intermediate model between CONGEST and the congested clique.
Property testing algorithms for triangle-freeness were given by Censor-Hillel et al. @cite_0 ; for @math -freeness for graphs on @math nodes by @cite_23 and @cite_9 ; and for @math -freeness for most graphs on @math nodes by @cite_11 . Moreover, property testing algorithms for tree and cycle freeness---using techniques of fixed-parameter algorithmics flavour---have been discovered recently @cite_9 @cite_11 @cite_23 @cite_18 .
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_0", "@cite_23", "@cite_11" ], "mid": [ "", "2615821159", "2952011438", "2626796510", "2612518566" ], "abstract": [ "", "In this paper we present distributed testing algorithms of graph properties in the CONGEST-model [Censor-Hillel et al. 2016]. We present one-sided error testing algorithms in the general graph model. We first describe a general procedure for converting @math -testers with a number of rounds @math , where @math denotes the diameter of the graph, to @math rounds, where @math is the number of processors of the network. We then apply this procedure to obtain an optimal tester, in terms of @math , for testing bipartiteness, whose round complexity is @math , which improves over the @math -round algorithm by Censor-Hillel et al. (DISC 2016). Moreover, for cycle-freeness, we obtain a corrector of the graph that locally corrects the graph so that the corrected graph is acyclic. Note that, unlike a tester, a corrector needs to mend the graph in many places in the case that the graph is far from having the property. In the second part of the paper we design algorithms for testing whether the network is @math -free for any connected @math of size up to four with round complexity of @math . This improves over the @math -round algorithms for testing triangle freeness by Censor-Hillel et al. (DISC 2016) and for testing excluded graphs of size @math by (DISC 2016). In the last part we generalize the global tester by Iwama and Yoshida (ITCS 2014) of testing @math -path freeness to testing the exclusion of any tree of order @math . We then show how to simulate this algorithm in the CONGEST-model in @math rounds.", "We initiate a thorough study of distributed property testing -- producing algorithms for the approximation problems of property testing in the CONGEST model. 
In particular, for the so-called testing model we emulate sequential tests for nearly all graph properties having @math -sided tests, while in the and models we obtain faster tests for triangle-freeness and bipartiteness respectively. In most cases, aided by parallelism, the distributed algorithms have a much shorter running time as compared to their counterparts from the sequential querying model of traditional property testing. The simplest property testing algorithms allow a relatively smooth transitioning to the distributed model. For the more complex tasks we develop new machinery that is of independent interest. This includes a method for distributed maintenance of multiple random walks.", "In the standard CONGEST model for distributed network computing, it is known that \"global\" tasks such as minimum spanning tree, diameter, and all-pairs shortest paths, consume large bandwidth, for their running-time is @math rounds in @math -node networks with constant diameter. Surprisingly, \"local\" tasks such as detecting the presence of a 4-cycle as a subgraph also requires @math rounds, even using randomized algorithms, and the best known upper bound for detecting the presence of a 3-cycle is @math rounds. The objective of this paper is to better understand the landscape of such subgraph detection tasks. We show that, in contrast to cycles, which are hard to detect in the CONGEST model, there exists a deterministic algorithm for detecting the presence of a subgraph isomorphic to @math running in a constant number of rounds, for every tree @math . Our algorithm provides a distributed implementation of a combinatorial technique due to Erdős for sparsening the set of partial solutions kept by the nodes at each round. Our result has important consequences to distributed property testing, i.e., to randomized algorithms whose aim is to distinguish between graphs satisfying a property, and graphs far from satisfying that property. 
In particular, we get that, for every graph pattern @math composed of an edge and a tree connected in an arbitrary manner, there exists a distributed testing algorithm for @math -freeness, performing in a constant number of rounds. Although the class of graph patterns @math formed by a tree and an edge connected arbitrarily may look artificial, all previous results of the literature concerning testing @math -freeness for classical patterns such as cycles and cliques can be viewed as direct consequences of our result, while our algorithm enables testing more complex patterns.", "In the subgraph-freeness problem, we are given a constant-size graph @math , and wish to determine whether the network contains @math as a subgraph or not. The relaxation of the problem only requires us to distinguish graphs that are @math -free from graphs that are @math -far from @math -free, meaning an @math -fraction of their edges must be removed to obtain an @math -free graph. Recently, Censor-Hillel et al. and Fraigniaud et al. showed that in the property-testing regime it is possible to test @math -freeness for any graph @math of size 4 in constant time, @math rounds, regardless of the network size. However, Fraigniaud et al. also showed that their techniques for graphs @math of size 4 cannot test @math -cycle-freeness in constant time. In this paper we revisit the subgraph-freeness problem and show that @math -cycle-freeness, and indeed @math -freeness for many other graphs @math comprising more than 4 vertices, can be tested in constant time. We show that @math -freeness can be tested in @math rounds for any cycle @math , improving on the running time of @math of the previous algorithms for triangle-freeness and @math -freeness. In the special case of triangles, we show that triangle-freeness can be solved in @math rounds independently of @math , when @math is not too small with respect to the number of nodes and edges. 
We also show that @math -freeness for any constant-size tree @math can be tested in @math rounds, even without the property-testing relaxation. Building on these results, we define a general class of graphs for which we can test subgraph-freeness in @math rounds. This class includes all graphs over 5 vertices except the 5-clique, @math . For cliques @math over @math nodes, we show that @math -freeness can be tested in @math rounds, where @math is the number of edges." ] }
1705.10194
2963428422
We propose a novel adaptive approximation approach for test-time resource-constrained prediction. Given an input instance at test-time, a gating function identifies a prediction model for the input among a collection of models. Our objective is to minimize overall average cost without sacrificing accuracy. We learn gating and prediction models on fully labeled training data by means of a bottom-up strategy. Our novel bottom-up method first trains a high-accuracy complex model. Then a low-complexity gating and prediction model are subsequently learned to adaptively approximate the high-accuracy model in regions where low-cost models are capable of making highly accurate predictions. We pose an empirical loss minimization problem with cost constraints to jointly train gating and prediction models. On a number of benchmark datasets our method outperforms state-of-the-art achieving higher accuracy for the same cost.
Our composite system is also related to HME @cite_5 , which learns the composite system based on maximum-likelihood estimation of models. A major difference is that HME does not address budget constraints. A fundamental aspect of budget constraints is the resulting asymmetry, whereby we start with an HPC model and sequentially approximate it with LPCs. This asymmetry leads us to propose a bottom-up strategy where the high-accuracy predictor can be separately estimated and is critical to posing a direct empirical loss minimization problem.
{ "cite_N": [ "@cite_5" ], "mid": [ "2025653905" ], "abstract": [ "We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain." ] }
1705.10417
2618869636
Machine learning and pattern recognition techniques have been successfully applied to algorithmic problems in free groups. In this paper, we seek to extend these techniques to finitely presented non-free groups, with a particular emphasis on polycyclic and metabelian groups that are of interest to non-commutative cryptography. As a prototypical example, we utilize supervised learning methods to construct classifiers that can solve the conjugacy decision problem, i.e., determine whether or not a pair of elements from a specified group are conjugate. The accuracies of classifiers created using decision trees, random forests, and N-tuple neural network models are evaluated for several non-free groups. The very high accuracy of these classifiers suggests an underlying mathematical relationship with respect to conjugacy in the tested groups.
In @cite_15 , the authors posited that pattern recognition techniques are an appropriate methodology for solving problems in combinatorial group theory. To demonstrate, they constructed a machine learning system for discovering effective heuristics for the Whitehead automorphism problem, a search problem in free groups that uses the successive application of the namesake automorphisms to reduce a word to its minimal length. As mentioned in @cite_15 , every machine learning system must contend with the following tasks: data generation, feature extraction, model selection, and evaluation. Once the system is constructed, analysis of the system's performance can yield insights into the nature of the problem at hand, and potentially be used to improve upon it. In the following sections we will delve into each of these aforementioned tasks, showing in the process how these techniques can be extended from free groups to finitely presented groups, and ultimately be adapted to solving the conjugacy decision problem. The primary difference in the construction of machine learning systems for free and non-free groups is in feature extraction, which is the focus of the next section.
{ "cite_N": [ "@cite_15" ], "mid": [ "1583192151" ], "abstract": [ "We review some basic methodologies from pattern recognition that can be applied to helping solve combinatorial problems in free group theory. We illustrate how this works with recognizing Whitehead minimal words in free groups of rank 2. The methodologies reviewed include how to form feature vectors, principal components, distance classifiers, linear classifiers, regression classifiers, Fisher linear discriminant, support vector machines, quantizing, classification trees, and clustering techniques." ] }
1705.10422
2619543829
Sensor fusion is indispensable to improve accuracy and robustness in an autonomous navigation setting. However, in the space of end-to-end sensorimotor control, this multimodal outlook has received limited attention. In this work, we propose a novel stochastic regularization technique, called Sensor Dropout, to robustify multimodal sensor policy learning outcomes. We also introduce an auxiliary loss on policy network along with the standard DRL loss that help reduce the action variations of the multimodal sensor policy. Through empirical testing we demonstrate that our proposed policy can 1) operate with minimal performance drop in noisy environments, 2) remain functional even in the face of a sensor subset failure. Finally, through the visualization of gradients, we show that the learned policies are conditioned on the same latent input distribution despite having multiple sensory observations spaces - a hallmark of true sensor-fusion. This efficacy of a multimodal policy is shown through simulations on TORCS, a popular open-source racing car game. A demo video can be seen here: this https URL
Multisensory deep learning, popularly called multimodal deep learning, is an active area of research in other domains like audiovisual systems @cite_28 , text, speech and language models @cite_19 , etc. However, multimodal learning is conspicuous by its absence in the modern end-to-end autonomous navigation literature. Another challenge in multimodal learning is the specific case of over-fitting where, instead of learning the underlying latent target state representation using multiple diverse observations, the model instead learns a complex representation in the original space itself, defeating the purpose of using multi-sensor observations and making the process computationally burdensome. An illustrative example for this case is a car that navigates when all sensors remain functional but fails to navigate at all if even one sensor fails or is partially corrupted. This kind of behavior is detrimental and suitable regularization measures should be set up during training to avoid it.
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "2184188583", "154472438" ], "abstract": [ "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. 
The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time." ] }
1705.10422
2619543829
Sensor fusion is indispensable to improve accuracy and robustness in an autonomous navigation setting. However, in the space of end-to-end sensorimotor control, this multimodal outlook has received limited attention. In this work, we propose a novel stochastic regularization technique, called Sensor Dropout, to robustify multimodal sensor policy learning outcomes. We also introduce an auxiliary loss on policy network along with the standard DRL loss that help reduce the action variations of the multimodal sensor policy. Through empirical testing we demonstrate that our proposed policy can 1) operate with minimal performance drop in noisy environments, 2) remain functional even in the face of a sensor subset failure. Finally, through the visualization of gradients, we show that the learned policies are conditioned on the same latent input distribution despite having multiple sensory observations spaces - a hallmark of true sensor-fusion. This efficacy of a multimodal policy is shown through simulations on TORCS, a popular open-source racing car game. A demo video can be seen here: this https URL
Stochastic regularization is an active area of research in deep learning made popular by the success of Dropout @cite_25 . Following this landmark paper, numerous extensions were proposed to further generalize this idea ( @cite_1 @cite_7 @cite_33 @cite_15 ). In a similar vein, an interesting technique has been proposed for specialized regularization in the multimodal setting, namely ModDrop @cite_32 . ModDrop, however, requires pretraining with individual sensor inputs using separate loss functions. The method is originally designed for multimodal deep learning on a dataset. We argue that for DRL, where the training dataset is generated during run-time, pretraining for each sensor policy may end up optimizing on input distributions. In comparison, Sensor Dropout (SD) is designed to be applicable to the DRL setting. With SD, a network can be directly constructed in an end-to-end fashion and the sensor fusion layer can be added just like Dropout. The training time is much shorter and scales better with increasing number of sensors.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_1", "@cite_32", "@cite_15", "@cite_25" ], "mid": [ "2409027918", "4919037", "2218408410", "2963192057", "215155355", "2095705004" ], "abstract": [ "We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.", "We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.", "Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. 
Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.", "We present a method for gesture detection and localisation based on multi-scale and multi-modal deep learning. Each visual modality captures spatial information at a particular spatial scale (such as motion of the upper body or a hand), and the whole system operates at three temporal scales. Key to our technique is a training strategy which exploits: i) careful initialization of individual modalities; and ii) gradual fusion involving random dropping of separate channels (dubbed ModDrop ) for learning cross-modality correlations while preserving uniqueness of each modality-specific representation. We present experiments on the ChaLearn 2014 Looking at People Challenge gesture recognition track, in which we placed first out of 17 teams. Fusing multiple modalities at several spatial and temporal scales leads to a significant increase in recognition rates, allowing the model to compensate for errors of the individual classifiers as well as noise in the separate channels. 
Furthermore, the proposed ModDrop training technique ensures robustness of the classifier to missing signals in one or several channels to produce meaningful predictions from any number of available modalities. In addition, we demonstrate the applicability of the proposed fusion scheme to modalities of arbitrary nature by experiments on the same dataset augmented with audio.", "We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to sub-sampling a neural network by dropping units. Training with DropOut, a randomly selected subset of activations are dropped; when training with DropConnect we drop randomly selected subsets of weights. With DropAll we can perform both methods. We show the validity of our proposal by improving the classification error of networks trained with DropOut and DropConnect, on a common image classification dataset. To improve the classification, we also used a new method for combining networks, which was proposed in [3].", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. 
We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets." ] }
1705.10209
2951619806
We show that a recently proposed neural dependency parser can be improved by joint training on multiple languages from the same family. The parser is implemented as a deep neural network whose only input is orthographic representations of words. In order to successfully parse, the network has to discover how linguistically relevant concepts can be inferred from word spellings. We analyze the representations of characters and words that are learned by the network to establish which properties of languages were accounted for. In particular we show that the parser has approximately learned to associate Latin characters with their Cyrillic counterparts and that it can group Polish and Russian words that have a similar grammatical function. Finally, we evaluate the parser on selected languages from the Universal Dependencies dataset and show that it is competitive with other recently proposed state-of-the-art methods, while having a simple structure.
Historically, the learning algorithms were relatively simple ones, e.g. transition-based parsers used linear SVMs @cite_32 @cite_4 . Recently, those simple learning models were successfully replaced by deep neural networks @cite_0 @cite_7 @cite_34 @cite_21 . This trend coincides with successes of those models on other NLP tasks, such as language modeling @cite_13 @cite_1 and translation @cite_22 @cite_33 @cite_2 .
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_7", "@cite_33", "@cite_21", "@cite_1", "@cite_32", "@cite_0", "@cite_2", "@cite_34", "@cite_13" ], "mid": [ "2105847779", "2133564696", "2250861254", "2949888546", "2311132329", "2259472270", "2951559648", "2128791906", "2525778437", "2949952998", "179875071" ], "abstract": [ "Parsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammars. Nevertheless, it has been shown that such algorithms, combined with treebank-induced classifiers, can be used to build highly accurate disambiguating parsers, in particular for dependency-based syntactic representations. In this article, we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing, formalized as transition systems. We then describe and analyze two families of such algorithms: stack-based and list-based algorithms. In the former family, which is restricted to projective dependency structures, we describe an arc-eager and an arc-standard variant; in the latter family, we present a projective and a non-projective variant. For each of the four algorithms, we give proofs of correctness and complexity. In addition, we perform an experimental evaluation of all algorithms in combination with SVM classifiers for predicting the next parsing action, using data from thirteen languages. We show that all four algorithms give competitive accuracy, although the non-projective list-based algorithm generally outperforms the projective algorithms for languages with a non-negligible proportion of non-projective constructions. However, the projective algorithms often produce comparable results when combined with the technique known as pseudo-projective parsing. 
The linear time complexity of the stack-based algorithms gives them an advantage with respect to efficiency both in learning and in parsing, but the projective list-based algorithm turns out to be equally efficient in practice. Moreover, when the projective algorithms are used to implement pseudo-projective parsing, they sometimes become less efficient in parsing (but not in learning) than the non-projective list-based algorithm. Although most of the algorithms have been partially described in the literature before, this is the first comprehensive analysis and evaluation of the algorithms within a unified framework.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Almost all current dependency parsers classify based on millions of sparse indicator features. 
Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.", "In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary sizes, and complex, long term structure of language. We perform an exhaustive study on techniques such as character Convolutional Neural Networks or Long-Short Term Memory, on the One Billion Word Benchmark. Our best single model significantly improves state-of-the-art perplexity from 51.3 down to 30.0 (whilst reducing the number of parameters by a factor of 20), while an ensemble of models sets a new record by improving perplexity from 41.0 down to 23.7. We also release these models for the NLP and ML community to study and improve upon.", "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). 
On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "We propose a generative dependency parsing model which uses binary latent variables to induce conditioning features. To define this model we use a recently proposed class of Bayesian Networks for structured prediction, Incremental Sigmoid Belief Networks. We demonstrate that the proposed model achieves state-of-the-art results on three different languages. We also demonstrate that the features induced by the ISBN's latent variables are crucial to this success, and show that the proposed model is particularly good on long dependencies.", "Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. 
To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system.", "We propose a technique for learning representations of parser states in transition-based dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks---the stack LSTM. Like the conventional stack data structures used in transition-based parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. 
Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance.", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition" ] }
1705.10209
2951619806
We show that a recently proposed neural dependency parser can be improved by joint training on multiple languages from the same family. The parser is implemented as a deep neural network whose only input is orthographic representations of words. In order to successfully parse, the network has to discover how linguistically relevant concepts can be inferred from word spellings. We analyze the representations of characters and words that are learned by the network to establish which properties of languages were accounted for. In particular we show that the parser has approximately learned to associate Latin characters with their Cyrillic counterparts and that it can group Polish and Russian words that have a similar grammatical function. Finally, we evaluate the parser on selected languages from the Universal Dependencies dataset and show that it is competitive with other recently proposed state-of-the-art methods, while having a simple structure.
Neural networks have enough capacity to directly solve the parsing task. For example, a constituency parser can be implemented using a sequence-to-sequence network originally developed for translation @cite_30 . Similarly, a graph-based dependency parser can be implemented by solving two supervised tasks: head selection and dependency labeling. Both are easily solved using neural networks @cite_3 @cite_19 @cite_5 @cite_28 . Moreover, neural networks can extract meaningful features from the data, which may augment or replace manually designed ones, as is the case with word embeddings @cite_15 or features derived from the spelling of words @cite_12 @cite_29 @cite_28 .
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_29", "@cite_3", "@cite_19", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "1869752048", "", "2951336364", "2950886545", "2413965833", "", "2950133940", "" ], "abstract": [ "Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation.", "", "We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "We present a simple and effective scheme for dependency parsing which is based on bidirectional-LSTMs (BiLSTMs). 
Each sentence token is associated with a BiLSTM vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few BiLSTM vectors. The BiLSTM is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. We demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. The resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on English and Chinese.", "Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model which we call (as shorthand for De pendency N eural Se lection) produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.", "", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. 
An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "" ] }
1705.10202
2617135656
Methods from the area of Process Mining traditionally focus on extracting insight in business processes from event logs. In this paper we explore the potential of Process Mining to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions when applied on data from smart home environments. Events in smart home environments are recorded at the level of sensor triggers, which is too low to mine habit-related behavioral patterns. Process discovery algorithms then produce overgeneralizing process models that allow for too much behavior and that are difficult to interpret for human experts. We show that abstracting the events to a higher-level interpretation can enable discovery of more precise and more comprehensible models. We present a framework to automatically abstract sensor-level events to their interpretation at the human activity level. Our framework is based on the XES IEEE standard for event logs. We use supervised learning techniques to train it on training data for which both the sensor and human activity events are known. We demonstrate our abstraction framework on three real-life smart home event logs and show that the process models that can be discovered after abstraction improve on precision as well as on F-score.
Event abstraction based on supervised learning is an unexplored problem in process mining. Most related work for abstracting from sensor-level to human activity level events can be found in the field of activity recognition, which focuses on the task of detecting different types of human activity from either passive sensors @cite_4 @cite_24 , wearable sensors @cite_28 @cite_5 , or cameras @cite_8 .
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_28", "@cite_24", "@cite_5" ], "mid": [ "1969307352", "2106996050", "2105046342", "1497385253", "2017634428" ], "abstract": [ "A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%.", "Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. Moreover, we discuss limitations of the state of the art and outline promising directions of research.", "In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. 
Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.", "In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25% to 89% depending on the evaluation criteria used.", "Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. 
These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10-second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily/weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise." ] }
1705.10202
2617135656
Methods from the area of Process Mining traditionally focus on extracting insight in business processes from event logs. In this paper we explore the potential of Process Mining to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions when applied on data from smart home environments. Events in smart home environments are recorded at the level of sensor triggers, which is too low to mine habit-related behavioral patterns. Process discovery algorithms then produce overgeneralizing process models that allow for too much behavior and that are difficult to interpret for human experts. We show that abstracting the events to a higher-level interpretation can enable discovery of more precise and more comprehensible models. We present a framework to automatically abstract sensor-level events to their interpretation at the human activity level. Our framework is based on the XES IEEE standard for event logs. We use supervised learning techniques to train it on training data for which both the sensor and human activity events are known. We demonstrate our abstraction framework on three real-life smart home event logs and show that the process models that can be discovered after abstraction improve on precision as well as on F-score.
Activity recognition methods generally operate on discrete time windows over the time series of continuous-valued sensor values and aim to map each time window onto the correct type of human activity. Activity recognition methods can be classified into probabilistic approaches @cite_4 @cite_24 @cite_28 @cite_5 and ontological reasoning approaches @cite_23 @cite_21 . The advantage of probabilistic approaches over ontological reasoning approaches is their ability to handle noisy, uncertain and incomplete sensor data @cite_23 .
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_21", "@cite_24", "@cite_23", "@cite_5" ], "mid": [ "1969307352", "2105046342", "2000773008", "1497385253", "2074783809", "2017634428" ], "abstract": [ "A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. We achieve a timeslice accuracy of 95.6% and a class accuracy of 79.4%.", "In this work, algorithms are developed and evaluated to detect physical activities from data acquired using five small biaxial accelerometers worn simultaneously on different parts of the body. Acceleration data was collected from 20 subjects without researcher supervision or observation. Subjects were asked to perform a sequence of everyday tasks but not told specifically where or how to do them. Mean, energy, frequency-domain entropy, and correlation of acceleration data was calculated and several classifiers using these features were tested. Decision tree classifiers showed the best performance recognizing everyday activities with an overall accuracy rate of 84%. The results show that although some activities are recognized well with subject-independent training data, others appear to require subject-specific training data. The results suggest that multiple accelerometers aid in recognition because conjunctions in acceleration feature values can effectively discriminate many activities. With just two biaxial accelerometers - thigh and wrist - the recognition performance dropped only slightly. 
This is the first work to investigate performance of recognition algorithms with multiple, wire-free accelerometers on 20 activities using datasets annotated by the subjects themselves.", "In recent years, there has been a growing interest in the adoption of ontologies and ontological reasoning to automatically recognize complex context data such as human activities. In particular, the Web Ontology Language (OWL) emerged as the language of choice, being a standard for the Semantic Web, and supported by a number of tools for knowledge engineering and reasoning. However, the limitations of OWL 1 in terms of expressiveness have been recognized in various fields, and important research efforts have been made to extend the language while preserving decidability of its OWL 1 DL fragment. The result of such work is OWL 2. In this paper we investigate the use of OWL 2 for modeling complex activities and reasoning with them. We show that the new language constructors of OWL 2 overcome the main limitations of OWL 1 for the representation of activities; OWL 2 axioms can be used to represent certain rules and rule-based reasoning previously demanded to hybrid approaches, with the advantage of having a unique semantics, avoiding potential inconsistencies. Then, we propose a system architecture showing the integration of a novel OWL 2 activity ontology and reasoning modules with distributed modules for sensor data aggregation and reasoning. The feasibility of our solution is shown by an extensive experimental evaluation with simulations of different intelligent environments.", "In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. 
Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25 to 89 depending on the evaluation criteria used.", "Purpose – This paper aims to serve two main purposes. In the first instance it aims to it provide an overview addressing the state‐of‐the‐art in the area of activity recognition, in particular, in the area of object‐based activity recognition. This will provide the necessary material to inform relevant research communities of the latest developments in this area in addition to providing a reference for researchers and system developers who ware working towards the design and development of activity‐based context aware applications. In the second instance this paper introduces a novel approach to activity recognition based on the use of ontological modeling, representation and reasoning, aiming to consolidate and improve existing approaches in terms of scalability, applicability and easy‐of‐use.Design methodology approach – The paper initially reviews the existing approaches and algorithms, which have been used for activity recognition in a number of related areas. From each of these, their strengths and w...", "Mobile devices are becoming increasingly sophisticated and the latest generation of smart cell phones now incorporates many diverse and powerful sensors. These sensors include GPS sensors, vision sensors (i.e., cameras), audio sensors (i.e., microphones), light sensors, temperature sensors, direction sensors (i.e., magnetic compasses), and acceleration sensors (i.e., accelerometers). The availability of these sensors in mass-marketed communication devices creates exciting new opportunities for data mining and data mining applications. 
In this paper we describe and evaluate a system that uses phone-based accelerometers to perform activity recognition, a task which involves identifying the physical activity a user is performing. To implement our system we collected labeled accelerometer data from twenty-nine users as they performed daily activities such as walking, jogging, climbing stairs, sitting, and standing, and then aggregated this time series data into examples that summarize the user activity over 10- second intervals. We then used the resulting training data to induce a predictive model for activity recognition. This work is significant because the activity recognition model permits us to gain useful knowledge about the habits of millions of users passively---just by having them carry cell phones in their pockets. Our work has a wide range of applications, including automatic customization of the mobile device's behavior based upon a user's activity (e.g., sending calls directly to voicemail if a user is jogging) and generating a daily weekly activity profile to determine if a user (perhaps an obese child) is performing a healthy amount of exercise." ] }
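A minimal sketch of the probabilistic, window-based recognition style discussed in the record above (in the spirit of the naive Bayes classifier of @cite_24): each time window is a vector of binary sensor features, and the classifier picks the activity with the highest posterior. The sensor features, activity names, and toy training set are illustrative assumptions, not data from any cited paper.

```python
import math
from collections import defaultdict

def train_nb(windows, labels, n_features, alpha=1.0):
    """Estimate P(activity) and P(sensor_i = 1 | activity) with Laplace smoothing."""
    counts = defaultdict(int)
    feature_counts = defaultdict(lambda: [0.0] * n_features)
    for x, y in zip(windows, labels):
        counts[y] += 1
        for i, v in enumerate(x):
            feature_counts[y][i] += v
    priors = {y: c / len(labels) for y, c in counts.items()}
    likelihoods = {
        y: [(feature_counts[y][i] + alpha) / (counts[y] + 2 * alpha)
            for i in range(n_features)]
        for y in counts
    }
    return priors, likelihoods

def predict_nb(x, priors, likelihoods):
    """Return the activity with the highest log-posterior for window x."""
    best, best_score = None, float("-inf")
    for y, prior in priors.items():
        score = math.log(prior)
        for i, v in enumerate(x):
            p = likelihoods[y][i]
            score += math.log(p if v else 1.0 - p)
        if score > best_score:
            best, best_score = y, score
    return best
```

Training on a handful of labeled windows (e.g. stove and tap sensors firing during "cooking", a bed sensor during "sleeping") suffices for this toy model to relabel unseen windows.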
1705.10202
2617135656
Methods from the area of Process Mining traditionally focus on extracting insight in business processes from event logs. In this paper we explore the potential of Process Mining to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions when applied on data from smart home environments. Events in smart home environments are recorded at the level of sensor triggers, which is too low to mine habit-related behavioral patterns. Process discovery algorithms produce then overgeneralizing process models that allow for too much behavior and that are difficult to interpret for human experts. We show that abstracting the events to a higher-level interpretation can enable discovery of more precise and more comprehensible models. We present a framework to automatically abstract sensor-level events to their interpretation at the human activity level. Our framework is based on the XES IEEE standard for event logs. We use supervised learning techniques to train it on training data for which both the sensor and human activity events are known. We demonstrate our abstraction framework on three real-life smart home event logs and show that the process models that can be discovered after abstraction improve on precision as well as on F-score.
Tapia @cite_24 was the first to explore supervised learning for inference of human activities from passive sensors, using a naive Bayes classifier. Many more recent activity recognition approaches use probabilistic graphical models @cite_4 @cite_30 : Van Kasteren et al. @cite_4 explored Conditional Random Fields @cite_29 and Hidden Markov Models @cite_34 , and Van Kasteren and Kröse @cite_30 applied Bayesian Networks @cite_17 to the activity recognition task. @cite_26 found Hidden Markov Models to be unable to capture long-range or transitive dependencies between observations, which results in difficulties recognizing multiple interacting activities (concurrent or interwoven). Conditional Random Fields do not suffer from these limitations.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_29", "@cite_24", "@cite_34", "@cite_17" ], "mid": [ "2097403258", "2103388129", "1969307352", "2147880316", "1497385253", "2105594594", "1817561967" ], "abstract": [ "The growing population of elders in our society calls for a new approach in caregiving. By inferring what activities elderly are performing in their houses it is possible to determine their physical and cognitive capabilities. In this paper we describe probabilistic models for performing activity recognition from sensor patterns. We introduce a new observation model which takes the history of sensor readings into account. Results show that the new observation model improves accuracy, but a description using less parameters is likely to give even better results.", "In principle, activity recognition can be exploited to great societ al benefits, especially in real-life, human centric applications such as elder care and healthcare. This article focused on recognizing simple human activities. Recognizing complex activities remains a challenging and active area of research and the nature of human activities poses different challenges. Human activity understanding encompasses activity recognition and activity pattern discovery. The first focuses on accurate detection of human activities based on a predefined activity model. An activity pattern discovery researcher builds a pervasive system first and then analyzes the sensor data to discover activity patterns.", "A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its annotation is described and made available to the community. Through a number of experiments we show how the hidden Markov model and conditional random fields perform in recognizing activities. 
We achieve a timeslice accuracy of 95.6 and a class accuracy of 79.4 .", "We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "In this work, a system for recognizing activities in the home setting using a set of small and simple state-change sensors is introduced. The sensors are designed to be “tape on and forget” devices that can be quickly and ubiquitously installed in home environments. The proposed sensing system presents an alternative to sensors that are sometimes perceived as invasive, such as cameras and microphones. Unlike prior work, the system has been deployed in multiple residential environments with non-researcher occupants. Preliminary results on a small dataset show that it is possible to recognize activities of interest to medical professionals such as toileting, bathing, and grooming with detection accuracies ranging from 25 to 89 depending on the evaluation criteria used.", "The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. 
One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.", "Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection." ] }
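A compact sketch of the HMM time-slice filtering that the record above contrasts with CRFs: the forward algorithm maintains, per time step, a posterior over hidden activities given the sensor observations so far. The states, observations, and all probabilities are illustrative assumptions.

```python
def hmm_filter(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm: return, per time step, P(state | observations so far)."""
    posteriors = []
    # Initial belief from the prior and the first observation.
    belief = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for t, o in enumerate(obs):
        if t > 0:
            # Predict through the transition model, then weight by the emission.
            belief = {
                s: emit_p[s][o] * sum(belief[r] * trans_p[r][s] for r in states)
                for s in states
            }
        z = sum(belief.values())  # normalize to a proper distribution
        belief = {s: p / z for s, p in belief.items()}
        posteriors.append(dict(belief))
    return posteriors
```

With self-transition probabilities near 1 the filter smooths over isolated noisy sensor readings, which is exactly the robustness to noisy data that probabilistic approaches are credited with above.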
1705.10202
2617135656
Methods from the area of Process Mining traditionally focus on extracting insight in business processes from event logs. In this paper we explore the potential of Process Mining to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions when applied on data from smart home environments. Events in smart home environments are recorded at the level of sensor triggers, which is too low to mine habit-related behavioral patterns. Process discovery algorithms produce then overgeneralizing process models that allow for too much behavior and that are difficult to interpret for human experts. We show that abstracting the events to a higher-level interpretation can enable discovery of more precise and more comprehensible models. We present a framework to automatically abstract sensor-level events to their interpretation at the human activity level. Our framework is based on the XES IEEE standard for event logs. We use supervised learning techniques to train it on training data for which both the sensor and human activity events are known. We demonstrate our abstraction framework on three real-life smart home event logs and show that the process models that can be discovered after abstraction improve on precision as well as on F-score.
Other related work can be found in the area of process mining, where several techniques address the challenge of abstracting low-level (e.g. sensor-level) events to high-level (e.g. human activity level) events ( @cite_12 @cite_31 @cite_10 ). Most existing event abstraction methods rely on clustering, where each cluster of low-level events is interpreted as one single high-level event. However, using unsupervised learning introduces two new problems. First, it is unclear how to label high-level events that are obtained by clustering low-level events. Current techniques require the user or process analyst to provide the high-level event labels themselves based on domain knowledge. Second, unsupervised learning gives no guidance with respect to the desired level of abstraction. Many existing event abstraction methods contain one or more parameters to control the size of the clusters, and finding the level of abstraction that yields meaningful results is often a matter of trial and error.
{ "cite_N": [ "@cite_31", "@cite_10", "@cite_12" ], "mid": [ "1573114896", "1536291427", "1824264389" ], "abstract": [ "Process Mining is a technology for extracting non-trivial and useful information from execution logs. For example, there are many process mining techniques to automatically discover a process model describing the causal dependencies between activities . Unfortunately, the quality of a discovered process model strongly depends on the quality and suitability of the input data. For example, the logs of many real-life systems do not refer to the activities an analyst would have in mind, but are on a much more detailed level of abstraction. Trace segmentation attempts to group low-level events into clusters, which represent the execution of a higher-level activity in the (available or imagined) process meta-model. As a result, the simplified log can be used to discover better process models. This paper presents a new activity mining approach based on global trace segmentation. We also present an implementation of the approach, and we validate it using a real-life event log from ASML’s test process.", "The goal of performance analysis of business processes is to gain insights into operational processes, for the purpose of optimizing them. To intuitively show which parts of the process might be improved, performance analysis results can be projected onto process models. This way, bottlenecks can quickly be identified and resolved.", "Process mining refers to the extraction of process models from event logs. Real-life processes tend to be less structured and more flexible. Traditional process mining algorithms have problems dealing with such unstructured processes and generate spaghetti-like process models that are hard to comprehend. One reason for such a result can be attributed to constructing process models from raw traces without due pre-processing. 
In an event log, there can be instances where the system is subjected to similar execution patterns behavior. Discovery of common patterns of invocation of activities in traces (beyond the immediate succession relation) can help in improving the discovery of process models and can assist in defining the conceptual relationship between the tasks activities. In this paper, we characterize and explore the manifestation of commonly used process model constructs in the event log and adopt pattern definitions that capture these manifestations, and propose a means to form abstractions over these patterns. We also propose an iterative method of transformation of traces which can be applied as a pre-processing step for most of today's process mining techniques . The proposed approaches are shown to identify promising patterns and conceptually-valid abstractions on a real-life log. The patterns discussed in this paper have multiple applications such as trace clustering, fault diagnosis anomaly detection besides being an enabler for hierarchical process discovery." ] }
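A toy illustration of the unsupervised abstraction style criticized in the record above: low-level events are grouped into one high-level "episode" whenever they are close in time. The gap threshold is exactly the kind of abstraction-level parameter that must be tuned by trial and error, and the resulting clusters carry no activity label. Event names and timestamps are illustrative assumptions.

```python
def cluster_events(events, max_gap):
    """Group (timestamp, sensor) pairs, sorted by timestamp, into episodes.

    A new high-level episode starts whenever the time since the previous
    event exceeds max_gap; otherwise the event joins the current episode.
    """
    clusters = []
    for ts, sensor in events:
        if clusters and ts - clusters[-1][-1][0] <= max_gap:
            clusters[-1].append((ts, sensor))
        else:
            clusters.append([(ts, sensor)])  # start a new high-level event
    return clusters
```

Shrinking `max_gap` fragments the log into more, smaller episodes, which demonstrates why the "right" level of abstraction is not determined by the data alone.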
1705.10202
2617135656
Methods from the area of Process Mining traditionally focus on extracting insight in business processes from event logs. In this paper we explore the potential of Process Mining to provide valuable insights in (un)healthy habits and to contribute to ambient assisted living solutions when applied on data from smart home environments. Events in smart home environments are recorded at the level of sensor triggers, which is too low to mine habit-related behavioral patterns. Process discovery algorithms produce then overgeneralizing process models that allow for too much behavior and that are difficult to interpret for human experts. We show that abstracting the events to a higher-level interpretation can enable discovery of more precise and more comprehensible models. We present a framework to automatically abstract sensor-level events to their interpretation at the human activity level. Our framework is based on the XES IEEE standard for event logs. We use supervised learning techniques to train it on training data for which both the sensor and human activity events are known. We demonstrate our abstraction framework on three real-life smart home event logs and show that the process models that can be discovered after abstraction improve on precision as well as on F-score.
One abstraction technique from the process mining field that does not rely on unsupervised learning is proposed by @cite_14 . This approach relies on domain knowledge to abstract low-level events into high-level events, requiring the user to specify a low-level process model for each high-level activity. However, in the context of human behavior it is unreasonable to expect the user to provide the process model in sensor terms for each human activity from domain knowledge.
{ "cite_N": [ "@cite_14" ], "mid": [ "2507315788" ], "abstract": [ "Process mining techniques analyze processes based on event data. A crucial assumption for process analysis is that events correspond to occurrences of meaningful activities. Often, low-level events recorded by information systems do not directly correspond to these. Abstraction methods, which provide a mapping from the recorded events to activities recognizable by process workers, are needed. Existing supervised abstraction methods require a full model of the entire process as input and cannot handle noise. This paper proposes a supervised abstraction method based on behavioral activity patterns that capture domain knowledge on the relation between activities and events. Through an alignment between the activity patterns and the low-level event logs an abstracted event log is obtained. Events in the abstracted event log correspond to instantiations of recognizable activities. The method is evaluated with domain experts of a Norwegian hospital using an event log from their digital whiteboard system. The evaluation shows that state-of-the art process mining methods provide valuable insights on the usage of the system when using the abstracted event log, but fail when using the original lower level event log." ] }
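The supervised alternative can be sketched with a deliberately simple stand-in: from training data where sensor events and activity labels are aligned, learn which high-level activity most often accompanies each low-level sensor event, then relabel an unseen sensor stream. This is only a majority-vote caricature of the supervised abstraction framework discussed above (which uses proper sequence models); sensor and activity names are illustrative assumptions.

```python
from collections import Counter, defaultdict

def learn_abstraction(sensor_events, activity_labels):
    """Map each sensor type to its most frequent co-occurring activity label."""
    votes = defaultdict(Counter)
    for sensor, activity in zip(sensor_events, activity_labels):
        votes[sensor][activity] += 1
    return {s: c.most_common(1)[0][0] for s, c in votes.items()}

def abstract_log(sensor_events, mapping, unknown="unknown"):
    """Relabel a low-level event log at the human-activity level."""
    return [mapping.get(s, unknown) for s in sensor_events]
```

Unlike the clustering approaches, this produces labeled high-level events directly, at the cost of requiring annotated training data.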
1705.09786
2620481664
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
TensorFlow Fold (TFF) @cite_29 is a recent extension of TensorFlow that attempts to increase batching for dynamic TF networks and is an interesting alternative to our asynchronous execution. TFF unrolls and merges together (by depth) the computation graphs of several instances, resulting in a batch-like execution. TFF's effectiveness greatly depends on the model -- for example, it would not batch well for random permutations of a sequence of operations, whereas our IR would very succinctly express and achieve pipeline parallelism through our control-flow IR nodes.
{ "cite_N": [ "@cite_29" ], "mid": [ "2586850891" ], "abstract": [ "Neural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between different input graphs of dissimilar shape, but also between different nodes within a single input graph. The technique allows us to create static graphs, using popular libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library of compositional blocks that simplifies the creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature." ] }
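The depth-wise merging attributed to TensorFlow Fold above can be illustrated with a toy scheduler: nodes from several differently-shaped expression trees are grouped by depth, so that each level could in principle execute as one batched operation. The nested-tuple tree encoding is an illustrative assumption, not TFF's actual representation.

```python
from collections import defaultdict

def batch_by_depth(trees):
    """trees: nested tuples (op, child, child, ...) with plain values as leaves.

    Returns {depth: [ops scheduled at that depth]} across all input trees,
    i.e. the per-level batches a dynamic-batching scheduler would form.
    """
    levels = defaultdict(list)

    def visit(node, depth):
        if isinstance(node, tuple):
            op, *children = node
            levels[depth].append(op)
            for c in children:
                visit(c, depth + 1)

    for t in trees:
        visit(t, 0)
    return dict(levels)
```

When many instances share operations at the same depth, each level becomes a large batch; when instances apply operations in arbitrary orders (the random-permutation case mentioned above), levels stay small and batching yields little benefit.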
1705.09786
2620481664
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
Asynchronous data-parallel training @cite_21 @cite_5 @cite_4 is another popular approach to scaling out optimization by removing synchronization; it is orthogonal to and combinable with model-parallel training. For example, convolutional layers are more amenable to data-parallel training than fully connected layers, because their weights are smaller than their activations. Moreover, when control flow differs per data instance, data parallelism is one way to get an effective minibatch size @math , which may improve convergence by reducing variance. The impact of staleness on convergence @cite_21 and on optimization dynamics @cite_13 has been studied for data parallelism. It would be interesting to extend those results to our setting.
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_21", "@cite_4" ], "mid": [ "2410569288", "2168231600", "2951781666", "1442374986" ], "abstract": [ "Asynchronous methods are widely used in deep learning, but have limited theoretical justification when applied to non-convex problems. We show that running stochastic gradient descent (SGD) in an asynchronous manner can be viewed as adding a momentum-like term to the SGD iteration. Our result does not assume convexity of the objective function, so it is applicable to deep learning systems. We observe that a standard queuing model of asynchrony results in a form of momentum that is commonly used by deep learning practitioners. This forges a link between queuing theory and asynchrony in deep learning systems, which could be useful for systems builders. For convolutional neural networks, we experimentally validate that the degree of asynchrony directly correlates with the momentum, confirming our main result. An important implication is that tuning the momentum parameter is important when considering different levels of asynchrony. We assert that properly tuned momentum reduces the number of steps required for convergence. Finally, our theory suggests new ways of counteracting the adverse effects of asynchrony: a simple mechanism like using negative algorithmic momentum can improve performance under high asynchrony. Since asynchronous methods have better hardware efficiency, this result may shed light on when asynchronous execution is more efficient for deep learning systems.", "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. 
Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.", "Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! 
outperforms alternative schemes that use locking by an order of magnitude.", "Large deep neural network models have recently demonstrated state-of-the-art accuracy on hard visual recognition tasks. Unfortunately such models are extremely time consuming to train and require large amount of compute cycles. We describe the design and implementation of a distributed system called Adam comprised of commodity server machines to train such models that exhibits world-class performance, scaling and task accuracy on visual recognition tasks. Adam achieves high efficiency and scalability through whole system co-design that optimizes and balances workload computation and communication. We exploit asynchrony throughout the system to improve performance and show that it additionally improves the accuracy of trained models. Adam is significantly more efficient and scalable than was previously thought possible and used 30x fewer machines to train a large 2 billion connection model to 2x higher accuracy in comparable time on the ImageNet 22,000 category image classification task than the system that previously held the record for this benchmark. We also show that task accuracy improves with larger models. Our results provide compelling evidence that a distributed systems-driven approach to deep learning using current training algorithms is worth pursuing." ] }
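The staleness effect discussed above can be sketched with a serial simulation of asynchronous SGD: each update is computed from a parameter value several steps old, mimicking delayed workers. With a modest delay and a small enough step size the iteration still converges, consistent with the cited analyses; the quadratic objective and all hyperparameters are illustrative assumptions.

```python
def stale_sgd(grad, w0, lr=0.05, staleness=3, steps=200):
    """Gradient descent where step t uses the gradient at the parameters
    from `staleness` steps earlier, simulating asynchronous workers."""
    history = [w0]
    for t in range(steps):
        stale_w = history[max(0, len(history) - 1 - staleness)]
        history.append(history[-1] - lr * grad(stale_w))
    return history[-1]
```

On the quadratic f(w) = (w - 3)^2 the delayed iteration still reaches the minimizer; raising `staleness` or `lr` too far would instead produce the oscillation that the momentum-like analysis of asynchrony predicts.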
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
Perfectly detecting all the people or tracking every person in a video can solve the counting problem, and detection-based @cite_29 and tracking-based @cite_30 @cite_2 counting methods have been proposed. However, their performance is often limited by low resolution and severe occlusion. In contrast, regression-based counting methods directly map from the image features to the number of people, without explicit object detection. Such methods normally give better performance for crowded scenes by bypassing the hard detection problem. Among regression-based methods, initial works regress from global image features to the whole-image or input-patch count @cite_23 @cite_33 @cite_42 @cite_36 @cite_20 , discarding all the location information, or map local features to crowd blob counts based on segmentation results @cite_17 @cite_37 @cite_18 . These methods ignore the distribution of the objects within the region, and hence cannot be used for object localization. Extracting suitable features is a crucial part of regression-based methods: @cite_17 uses features related to segmentation, internal edges, and texture; @cite_20 uses multiple sources (confidence of head detector, SIFT, frequency-domain analysis) because the performance of any single source is not robust enough, especially when the dataset contains perspective distortion and severe occlusion.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_18", "@cite_33", "@cite_36", "@cite_29", "@cite_42", "@cite_23", "@cite_2", "@cite_20", "@cite_17" ], "mid": [ "2096229530", "2155916750", "2088929512", "2150123015", "", "", "2075875861", "2121864252", "2161841955", "", "2123175289" ], "abstract": [ "While crowds of various subjects may offer applicationspecific cues to detect individuals, we demonstrate that for the general case, motion itself contains more information than previously exploited. This paper describes an unsupervised data driven Bayesian clustering algorithm which has detection of individual entities as its primary goal. We track simple image features and probabilistically group them into clusters representing independently moving entities. The numbers of clusters and the grouping of constituent features are determined without supervised learning or any subject-specific model. The new approach is instead, that space-time proximity and trajectory coherence through image space are used as the only probabilistic criteria for clustering. An important contribution of this work is how these criteria are used to perform a one-shot data association without iterating through combinatorial hypotheses of cluster assignments. Our proposed general detection algorithm can be augmented with subject-specific filtering, but is shown to already be effective at detecting individual entities in crowds of people, insects, and animals. This paper and the associated video examine the implementation and experiments of our motion clustering framework.", "In public venues, crowd size is a key indicator of crowd safety and stability. Crowding levels can be detected using holistic image features, however this requires a large amount of training data to capture the wide variations in crowd distribution. If a crowd counting algorithm is to be deployed across a large number of cameras, such a large and burdensome training requirement is far from ideal. 
In this paper we propose an approach that uses local features to count the number of people in each foreground blob segment, so that the total crowd estimate is the sum of the group sizes. This results in an approach that is scalable to crowd volumes not seen in the training data, and can be trained on a very small data set. As a local approach is used, the proposed algorithm can easily be used to estimate crowd density throughout different regions of the scene and be used in a multi-camera environment. A unique localised approach to ground truth annotation, which reduces the required training data, is also presented, as a localised approach to crowd counting has different training requirements to a holistic one. Testing on a large pedestrian database compares the proposed technique to existing holistic techniques and demonstrates improved accuracy, and superior performance when test conditions are unseen in the training set, or a minimal training set is used.", "An approach to the problem of estimating the size of inhomogeneous crowds, which are composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking is proposed. Instead, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic-texture motion model. A set of holistic low-level features is extracted from each segmented region, and a function that maps features into estimates of the number of people per segment is learned with Bayesian regression. Two Bayesian regression models are examined. The first is a combination of Gaussian process regression with a compound kernel, which accounts for both the global and local trends of the count mapping but is limited by the real-valued outputs that do not match the discrete counts. We address this limitation with a second model, which is based on a Bayesian treatment of Poisson regression that introduces a prior distribution on the linear weights of the model. 
Since exact inference is analytically intractable, a closed-form approximation is derived that is computationally efficient and kernelizable, enabling the representation of nonlinear functions. An approximate marginal likelihood is also derived for kernel hyperparameter learning. The two regression-based crowd counting methods are evaluated on a large pedestrian data set, containing very distinct camera views, pedestrian traffic, and outliers, such as bikes or skateboarders. Experimental results show that regression-based counts are accurate regardless of the crowd size, outperforming the count estimates produced by state-of-the-art pedestrian detectors. Results on 2 h of video demonstrate the efficiency and robustness of the regression-based crowd size estimation over long periods of time.", "A neural-based crowd estimation system for surveillance in complex scenes at underground station platform is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Those feature indexes are modeled by a neural network to estimate the crowd density. The learning phase is based on our proposed hybrid of the least-squares and global search algorithms which are capable of providing the global search characteristic and fast convergence speed. Promising experimental results are obtained in terms of accuracy and real-time response capability to alert operators automatically.", "", "", "A number of computer vision problems such as human age estimation, crowd density estimation and body/face pose (view angle) estimation can be formulated as a regression problem by learning a mapping function between a high dimensional vector-formed feature input and a scalar-valued output. Such a learning problem is made difficult due to sparse and imbalanced training data and large feature variations caused by both uncertain viewing conditions and intrinsic ambiguities between observable visual features and the scalar values to be estimated. 
Encouraged by the recent success in using attributes for solving classification problems with sparse training data, this paper introduces a novel cumulative attribute concept for learning a regression model when only sparse and imbalanced data are available. More precisely, low-level visual features extracted from sparse and imbalanced image samples are mapped onto a cumulative attribute space where each dimension has clearly defined semantic interpretation (a label) that captures how the scalar output value (e.g. age, people count) changes continuously and cumulatively. Extensive experiments show that our cumulative attribute framework gains notable advantage on accuracy for both age estimation and crowd counting when compared against conventional regression models, especially when the labelled training data is sparse with imbalanced sampling.", "This paper describes a viewpoint invariant learning-based method for counting people in crowds from a single camera. Our method takes into account feature normalization to deal with perspective projection and different camera orientation. The training features include edge orientation and blob size histograms resulted from edge detection and background subtraction. A density map that measures the relative size of individuals and a global scale measuring camera orientation are estimated and used for feature normalization. The relationship between the feature histograms and the number of pedestrians in the crowds is learned from labeled training data. Experimental results from different sites with different camera orientation demonstrate the performance and the potential of our method", "In its full generality, motion analysis of crowded objects necessitates recognition and segmentation of each moving entity. The difficulty of these tasks increases considerably with occlusions and therefore with crowding. 
When the objects are constrained to be of the same kind, however, partitioning of densely crowded semi-rigid objects can be accomplished by means of clustering tracked feature points. We base our approach on a highly parallelized version of the KLT tracker in order to process the video into a set of feature trajectories. While such a set of trajectories provides a substrate for motion analysis, their unequal lengths and fragmented nature present difficulties for subsequent processing. To address this, we propose a simple means of spatially and temporally conditioning the trajectories. Given this representation, we integrate it with a learned object descriptor to achieve a segmentation of the constituent motions. We present experimental results for the problem of estimating the number of moving objects in a dense crowd as a function of time.", "", "We present a privacy-preserving system for estimating the size of inhomogeneous crowds, composed of pedestrians that travel in different directions, without using explicit object segmentation or tracking. First, the crowd is segmented into components of homogeneous motion, using the mixture of dynamic textures motion model. Second, a set of simple holistic features is extracted from each segmented region, and the correspondence between features and the number of people per segment is learned with Gaussian process regression. We validate both the crowd segmentation algorithm, and the crowd counting system, on a large pedestrian dataset (2000 frames of video, containing 49,885 total pedestrian instances). Finally, we present results of the system running on a full hour of video." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs), the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution/pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
The concept of an object density map , where the integral (sum) over any subregion equals the number of objects in that region, was first proposed in @cite_25 . The density values are estimated from low-level features, thus sharing the advantages of general regression-based methods, while also maintaining location information. @cite_25 uses a linear model to predict a pixel's density value from extracted features, and proposed the Maximum Excess over Sub Array (MESA) distance, which is the maximum counting error over all rectangular subregions, as a learning objective function. @cite_12 uses ridge regression (RR), instead of the computationally costly MESA, in their interactive counting system. Both @cite_25 and @cite_12 use random forest to extract features from several modalities, including the raw image, the difference image (with its previous frame), and the background-subtracted image.
{ "cite_N": [ "@cite_25", "@cite_12" ], "mid": [ "2145983039", "1003853626" ], "abstract": [ "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "Our objective is to count (and localize) object instances in an image interactively. We target the regime where individual object detectors do not work reliably due to crowding, or overlap, or size of the instances, and take the approach of estimating an object density." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs), the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution/pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
@cite_14 @cite_9 use regression random forests, grown using the Frobenius norm as the criterion for finding the best node splits. @cite_14 uses a number of standard filter bank responses (Laplacian of Gaussian, Gaussian gradient magnitude, and eigenvalues of the structure tensor at different scales), along with the raw image, as input to the regression random forest. In addition to the filter channels used in @cite_14 , @cite_9 also uses the background subtraction result and the temporal derivative as input to the regression random forest. Unlike @cite_14 , which directly regresses the density patch, @cite_9 regresses a vector label pointing at the locations of the objects within a certain radius of the patch center, which saves memory. The density patch is finally generated from the predicted vector labels.
{ "cite_N": [ "@cite_9", "@cite_14" ], "mid": [ "2207893099", "1542079534" ], "abstract": [ "This paper presents a patch-based approach for crowd density estimation in public scenes. We formulate the problem of estimating density in a structured learning framework applied to random decision forests. Our approach learns the mapping between patch features and relative locations of all objects inside each patch, which contribute to generate the patch density map through Gaussian kernel density estimation. We build the forest in a coarse-to-fine manner with two split node layers, and further propose a crowdedness prior and an effective forest reduction method to improve the estimation accuracy and speed. Moreover, we introduce a semi-automatic training method to learn the estimator for a specific scene. We achieved state-of-the-art results on the public Mall dataset and UCSD dataset, and also proposed two potential applications in traffic counts and scene understanding with promising results.", "Following [Lempitsky and Zisserman, 2010], we seek to count objects by integrating over an object density map that is predicted from an input image. In contrast to that work, we propose to estimate the object density map by averaging over structured, namely patch-wise, predictions. Using an ensemble of randomized regression trees that use dense features as input, we obtain results that are of similar quality, at a fraction of the training time, and with low implementation effort. An open source implementation will be provided in the framework of http://ilastik.org." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs), the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution/pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
Among deep learning methods, CNN-patch @cite_11 , Hydra CNN @cite_39 and CNN-boost @cite_34 use the input patch to predict a patch of density values, which is reshaped from the output of the fully connected layer. In contrast to @cite_11 , Hydra CNN extracts patches of different sizes and scales them to a fixed size before feeding them to each head of the Hydra CNN, while CNN-boost uses a second and/or a third CNN to predict the residual error of earlier predictions. @cite_44 uses a classical FCNN to predict a density map for cell counting. In contrast, MCNN @cite_0 is an FCNN with multiple feature-extraction columns, corresponding to different feature scales, whose output feature maps are concatenated together before feeding into later convolution layers. In contrast to Hydra CNN @cite_39 , the three columns in MCNN @cite_0 use the same input patch but have different receptive field settings to encourage different columns to better capture objects of different sizes.
{ "cite_N": [ "@cite_39", "@cite_44", "@cite_0", "@cite_34", "@cite_11" ], "mid": [ "2519281173", "2896018297", "2463631526", "2520826941", "1910776219" ], "abstract": [ "In this paper we address the problem of counting object instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.", "This paper concerns automated cell counting in microscopy images. The approach we take is to adapt Convolutional Neural Networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation based methods do not work well due to cell clumping or overlap. 
We make the following contributions: (i) we develop and compare architectures for two Fully Convolutional Regression Networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency at training time by training end-to-end on image patches; and (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on real microscopy images without fine-tuning, and that the performance can be further improved by fine-tuning on the real images. We set a new state-of-the-art performance for cell counting on the standard synthetic image benchmarks and, as a side benefit, show the potential of the FCRNs for providing cell detections for overlapping cells.", "This paper aims to develop a method that can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since existing crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. 
In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.", "In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements to the state of the art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean Absolute error was reduced by 20 to 35 . At the same time, the training time of each CNN has been reduced by 50 .", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs), the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution/pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
Density map estimation methods differ in their choice of training loss function and form of prediction, which results in different characteristics of the predicted density maps. Both the loss and the prediction can be either pixel-wise or region-wise. Table summarizes the differences, and a visual comparison can be seen in fig:dmap-comp . For the loss function, @cite_25 uses a region-based loss consisting of the Maximum Excess over Sub Arrays (MESA) distance, which is specifically designed for the counting problem and aims to keep good counting performance over all sub-regions. However, per-pixel reconstruction of the density map is not the first priority of MESA, so @cite_25 does not maintain well the monotonicity around the peaks and the spatial continuity of the predicted density maps (see fig:dmap-comp ), although these are partially preserved because neighboring pixels are assigned to the same feature codeword and thus the same density value. Most other methods @cite_12 @cite_14 @cite_11 @cite_39 @cite_34 @cite_44 @cite_0 use a pixel-wise loss function, e.g., the squared error between the predicted density value and the ground truth. While a per-pixel loss does not optimize the counting error, it typically yields good estimators of density maps for counting.
{ "cite_N": [ "@cite_14", "@cite_39", "@cite_44", "@cite_0", "@cite_34", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "1542079534", "2519281173", "2896018297", "2463631526", "2520826941", "2145983039", "1003853626", "1910776219" ], "abstract": [ "Following [Lempitsky and Zisserman, 2010], we seek to count objects by integrating over an object density map that is predicted from an input image. In contrast to that work, we propose to estimate the object density map by averaging over structured, namely patch-wise, predictions. Using an ensemble of randomized regression trees that use dense features as input, we obtain results that are of similar quality, at a fraction of the training time, and with low implementation effort. An open source implementation will be provided in the framework of http://ilastik.org.", "In this paper we address the problem of counting object instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.", "This paper concerns automated cell counting in microscopy images. 
The approach we take is to adapt Convolutional Neural Networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation based methods do not work well due to cell clumping or overlap. We make the following contributions: (i) we develop and compare architectures for two Fully Convolutional Regression Networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency at training time by training end-to-end on image patches; and (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on real microscopy images without fine-tuning, and that the performance can be further improved by fine-tuning on the real images. We set a new state-of-the-art performance for cell counting on the standard synthetic image benchmarks and, as a side benefit, show the potential of the FCRNs for providing cell detections for overlapping cells.", "This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. 
Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.", "In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements to the state of the art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean Absolute error was reduced by 20 to 35 . At the same time, the training time of each CNN has been reduced by 50 .", "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. 
Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "Our objective is to count (and localize) object instances in an image interactively. We target the regime where individual object detectors do not work reliably due to crowding, or overlap, or size of the instances, and take the approach of estimating an object density.", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. 
Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
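The contrast between the region-based MESA distance and a pixel-wise squared-error loss can be made concrete with a small numerical sketch. The snippet below is a toy illustration (the maps, function names, and brute-force MESA search are our own assumptions, not code from the cited works): a prediction that preserves the global count still incurs a large MESA distance, because some sub-window has a big count mismatch, while the pixel-wise loss penalizes any spatial misplacement.

```python
import numpy as np

# Toy comparison of a pixel-wise squared-error loss, the global counting
# error, and a brute-force MESA (Maximum Excess over Sub Arrays) distance.
# All names and maps here are illustrative, not cited code.

def pixelwise_loss(pred, gt):
    """Mean squared error between predicted and ground-truth density maps."""
    return float(np.mean((pred - gt) ** 2))

def count_error(pred, gt):
    """Absolute global counting error: difference of the map integrals."""
    return float(abs(pred.sum() - gt.sum()))

def mesa_distance(pred, gt):
    """Brute-force MESA: max absolute count difference over all axis-aligned
    rectangular sub-windows (feasible only for tiny maps)."""
    diff = pred - gt
    H, W = diff.shape
    s = np.zeros((H + 1, W + 1))           # 2D prefix sums
    s[1:, 1:] = np.cumsum(np.cumsum(diff, axis=0), axis=1)
    best = 0.0
    for r0 in range(H):
        for r1 in range(r0 + 1, H + 1):
            for c0 in range(W):
                for c1 in range(c0 + 1, W + 1):
                    rect = s[r1, c1] - s[r0, c1] - s[r1, c0] + s[r0, c0]
                    best = max(best, abs(rect))
    return best

gt = np.zeros((8, 8))                      # two objects, total count 2
gt[2, 2] = 1.0
gt[5, 6] = 1.0
pred = np.zeros((8, 8))                    # all mass in the wrong corner
pred[0, 0] = 2.0

print(count_error(pred, gt))               # 0.0 -- the global count is perfect
print(mesa_distance(pred, gt))             # 2.0 -- some sub-window is off by 2
print(pixelwise_loss(pred, gt))            # > 0 -- misplacement is penalized
```

This is the sense in which MESA keeps counts accurate over all sub-regions without enforcing per-pixel fidelity, while the pixel-wise loss does the opposite.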
For density map prediction, traditional methods in @cite_25 @cite_12 choose pixel-wise density prediction so as to obtain a full-resolution density map; the density map of the whole image is obtained by running the predictor over a sliding window in the image. In contrast, most deep learning-based methods choose patch-wise or image-wise prediction to speed up inference @cite_11 @cite_44 @cite_39 @cite_0 @cite_34 . Image-wise predictions using FCNNs @cite_7 @cite_44 @cite_0 are especially fast since they reuse computations across overlapping receptive fields.
{ "cite_N": [ "@cite_7", "@cite_39", "@cite_44", "@cite_0", "@cite_34", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "1903029394", "2519281173", "2896018297", "2463631526", "2520826941", "2145983039", "1003853626", "1910776219" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "In this paper we address the problem of counting objects instances in images. Our models are able to precisely estimate the number of vehicles in a traffic congestion, or to count the humans in a very crowded scene. Our first contribution is the proposal of a novel convolutional neural network solution, named Counting CNN (CCNN). Essentially, the CCNN is formulated as a regression model where the network learns how to map the appearance of the image patches to their corresponding object density maps. 
Our second contribution consists in a scale-aware counting model, the Hydra CNN, able to estimate object densities in different very crowded scenarios where no geometric information of the scene can be provided. Hydra CNN learns a multiscale non-linear regression model which uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. We report an extensive experimental evaluation, using up to three different object counting benchmarks, where we show how our solutions achieve a state-of-the-art performance.", "This paper concerns automated cell counting in microscopy images. The approach we take is to adapt Convolutional Neural Networks (CNNs) to regress a cell spatial density map across the image. This is applicable to situations where traditional single-cell segmentation based methods do not work well due to cell clumping or overlap. We make the following contributions: (i) we develop and compare architectures for two Fully Convolutional Regression Networks (FCRNs) for this task; (ii) since the networks are fully convolutional, they can predict a density map for an input image of arbitrary size, and we exploit this to improve efficiency at training time by training end-to-end on image patches; and (iii) we show that FCRNs trained entirely on synthetic data are able to give excellent predictions on real microscopy images without fine-tuning, and that the performance can be further improved by fine-tuning on the real images. We set a new state-of-the-art performance for cell counting on the standard synthetic image benchmarks and, as a side benefit, show the potential of the FCRNs for providing cell detections for overlapping cells.", "This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. 
To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.", "In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements to the state of the art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean Absolute error was reduced by 20 to 35 . 
At the same time, the training time of each CNN has been reduced by 50 .", "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.", "Our objective is to count (and localize) object instances in an image interactively. We target the regime where individual object detectors do not work reliably due to crowding, or overlap, or size of the instances, and take the approach of estimating an object density.", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. 
To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
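Why image-wise FCNN prediction handles arbitrary input sizes and avoids redundant computation can be sketched with a minimal numpy pipeline. The snippet below is an untrained, illustrative stand-in for a real FCNN (the placeholder kernel and layer choices are our own assumptions): one convolution followed by pooling applies the same weights to any input size and produces a correspondingly sized, reduced-resolution density map.

```python
import numpy as np

# Minimal "fully convolutional" sketch: a 3x3 valid convolution followed by
# 2x2 average pooling. Illustrative only -- the kernel is a placeholder,
# not a trained density regressor.

def conv3x3(img, k):
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def avgpool2(x):
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

k = np.full((3, 3), 1.0 / 9)      # placeholder "learned" weights
for size in (16, 32):             # same weights, arbitrary input sizes
    dmap = avgpool2(conv3x3(np.ones((size, size)), k))
    print(size, dmap.shape)       # 16 -> (7, 7), 32 -> (15, 15)
```

Because every operation is convolutional, features computed for one output location are shared with its neighbors, which is what makes image-wise prediction faster than an equivalent per-pixel sliding-window regressor.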
For patch-wise predictions, as in @cite_11 , patches of density maps are predicted for overlapping image patches. The density map of the whole image is obtained by placing the density patches at their corresponding image positions, and then averaging the pixel density values across overlapping patches. The averaging process largely overcomes the double-counting problem of using overlapping patches. However, due to the lack of context around the borders of an image patch, the density predictions near the corners of neighboring patches are not always as reliable as those of the central patch. The overlapping prediction and averaging operation tempers these artifacts, but also results in density maps that are overly smooth (e.g., see Fig. e).
{ "cite_N": [ "@cite_11" ], "mid": [ "1910776219" ], "abstract": [ "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
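The patch-wise stitching described above can be sketched as follows (a minimal sketch with a dummy constant predictor standing in for the trained CNN; all names are hypothetical): overlapping density patches are accumulated into a sum map and divided by a per-pixel coverage count, which is exactly the averaging that both removes double counting and over-smooths the result.

```python
import numpy as np

def predict_patch(img_patch):
    """Dummy stand-in for a trained patch-wise density regressor."""
    return np.full(img_patch.shape, 0.5)

def stitch_density(img, patch=8, stride=4):
    H, W = img.shape
    acc = np.zeros((H, W))   # summed overlapping predictions
    cnt = np.zeros((H, W))   # how many patches covered each pixel
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            acc[y:y + patch, x:x + patch] += predict_patch(img[y:y + patch, x:x + patch])
            cnt[y:y + patch, x:x + patch] += 1
    cnt[cnt == 0] = 1        # guard for pixels no patch covered
    return acc / cnt         # per-pixel average removes double counting

dmap = stitch_density(np.zeros((16, 16)))
print(dmap.shape)            # (16, 16) -- a full-image density map
```

With the constant predictor the average is trivially 0.5 everywhere; with a real regressor, pixels near patch borders are averaged with less-reliable corner predictions from neighboring patches, producing the over-smoothing noted above.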
Current CNNs using image- or patch-wise prediction normally produce only reduced-resolution density maps, either because of the convolution/pooling stride operations in FCNN-based methods, or to avoid very wide fully-connected layers in patch-wise methods. Accurate counting does not necessarily require original-resolution density maps, and using reduced-resolution maps in @cite_11 @cite_0 makes the predictions faster while still achieving good counting performance. On the other hand, accurate detection requires original-resolution maps -- upsampling the reduced-resolution maps, in conjunction with averaging overlapping patches, sometimes results in an overly spread-out density map that cannot localize individual objects well. Considering these factors, our study will also consider full-resolution density maps produced with CNNs, in order to obtain a complete comparison on counting and localization tasks.
{ "cite_N": [ "@cite_0", "@cite_11" ], "mid": [ "2463631526", "1910776219" ], "abstract": [ "This paper aims to develop a method than can accurately estimate the crowd count from an individual image with arbitrary crowd density and arbitrary perspective. To this end, we have proposed a simple but effective Multi-column Convolutional Neural Network (MCNN) architecture to map the image to its crowd density map. The proposed MCNN allows the input image to be of arbitrary size or resolution. By utilizing filters with receptive fields of different sizes, the features learned by each column CNN are adaptive to variations in people head size due to perspective effect or image resolution. Furthermore, the true density map is computed accurately based on geometry-adaptive kernels which do not need knowing the perspective map of the input image. Since exiting crowd counting datasets do not adequately cover all the challenging situations considered in our work, we have collected and labelled a large new dataset that includes 1198 images with about 330,000 heads annotated. On this challenging new dataset, as well as all existing datasets, we conduct extensive experiments to verify the effectiveness of the proposed model and method. In particular, with the proposed simple MCNN model, our method outperforms all existing methods. In addition, experiments show that our model, once trained on one dataset, can be readily transferred to a new dataset.", "Cross-scene crowd counting is a challenging task where no laborious data annotation is required for counting people in new target surveillance crowd scenes unseen in the training set. The performance of most existing crowd counting methods drops significantly when they are applied to an unseen scene. To address this problem, we propose a deep convolutional neural network (CNN) for crowd counting, and it is trained alternatively with two related learning objectives, crowd density and crowd count. 
This proposed switchable learning approach is able to obtain better local optimum for both objectives. To handle an unseen target crowd scene, we present a data-driven method to finetune the trained CNN model for the target scene. A new dataset including 108 crowd scenes with nearly 200,000 head annotations is introduced to better evaluate the accuracy of cross-scene crowd counting methods. Extensive experiments on the proposed and another two existing datasets demonstrate the effectiveness and reliability of our approach." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
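One subtlety when going from a reduced-resolution map back to the original resolution is preserving the integral, i.e., the count. The sketch below uses nearest-neighbor upsampling for simplicity (a real pipeline might use bilinear interpolation; the function name is our own) to show the required renormalization: each low-resolution pixel is replicated s x s times, so the raw upsampled map must be divided by s^2.

```python
import numpy as np

def upsample_preserve_count(dmap_low, s):
    """Nearest-neighbor upsample by factor s, renormalized so that the
    integral of the density map (the object count) is unchanged."""
    up = np.kron(dmap_low, np.ones((s, s)))   # replicate each pixel s x s
    return up / (s * s)

low = np.zeros((4, 4))
low[1, 2] = 3.0                               # count = 3 at low resolution
high = upsample_preserve_count(low, 4)
print(high.shape, high.sum())                 # (16, 16) 3.0
```

The count survives the resolution change, but the mass of each low-resolution pixel is spread over an s x s block, which is exactly the spreading-out effect that hurts localization.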
Besides counting, @cite_19 @cite_1 have also explored using density maps for detection and tracking in crowded scenes. @cite_19 performs detection on density maps by first obtaining local counts from sliding windows over the density map from @cite_25 , and then using integer programming to recover the locations of individual objects. In contrast to @cite_19 , our predicted density maps have clearer peaks, allowing for simpler detection methods, such as weighted GMM clustering.
{ "cite_N": [ "@cite_19", "@cite_1", "@cite_25" ], "mid": [ "1908321067", "2147221461", "2145983039" ], "abstract": [ "We propose a novel object detection framework for partially-occluded small instances, such as pedestrians in low resolution surveillance video, cells under a microscope, flocks of small animals (e.g. birds, fishes), or even tiny insects like honeybees and flies. These scenarios are very challenging for traditional detectors, which are typically trained on individual instances. In our approach, we first estimate the object density map of the input image, and then divide it into local regions. For each region, a sliding window (ROI) is passed over the density map to calculate the instance count within each ROI. 2D integer programming is used to recover the locations of object instances from the set of ROI counts, and the global count estimate of the density map is used as a constraint to regularize the detection performance. Finally, the bounding box for each instance is estimated using the local density map. Compared with current small-instance detection methods, our proposed approach achieves state-of-the-art performance on several challenging datasets including fluorescence microscopy cell images, UCSD pedestrians, small animals and insects.", "We address the problem of person detection and tracking in crowded video scenes. While the detection of individual objects has been improved significantly over the recent years, crowd scenes remain particularly challenging for the detection and tracking tasks due to heavy occlusions, high person densities and significant variation in people's appearance. To address these challenges, we propose to leverage information on the global structure of the scene and to resolve all detections jointly. 
In particular, we explore constraints imposed by the crowd density and formulate person detection as the optimization of a joint energy function combining crowd density estimation and the localization of individual people. We demonstrate how the optimization of such an energy function significantly improves person detection and tracking in crowds. We validate our approach on a challenging video dataset of crowded scenes.", "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data." ] }
1705.10118
2617256461
For crowded scenes, the accuracy of object-based computer vision methods declines when the images are low-resolution and objects have severe occlusions. Taking counting methods for example, almost all the recent state-of-the-art counting methods bypass explicit detection and adopt regression-based methods to directly count the objects of interest. Among regression-based methods, density map estimation, where the number of objects inside a subregion is the integral of the density map over that subregion, is especially promising because it preserves spatial information, which makes it useful for both counting and localization (detection and tracking). With the power of deep convolutional neural networks (CNNs) the counting performance has improved steadily. The goal of this paper is to evaluate density maps generated by density estimation methods on a variety of crowd analysis tasks, including counting, detection, and tracking. Most existing CNN methods produce density maps with resolution that is smaller than the original images, due to the downsample strides in the convolution pooling operations. To produce an original-resolution density map, we also evaluate a classical CNN that uses a sliding window regressor to predict the density for every pixel in the image. We also consider a fully convolutional (FCNN) adaptation, with skip connections from lower convolutional layers to compensate for loss in spatial information during upsampling. In our experiments, we found that the lower-resolution density maps sometimes have better counting performance. In contrast, the original-resolution density maps improved localization tasks, such as detection and tracking, compared to bilinear upsampling the lower-resolution density maps. Finally, we also propose several metrics for measuring the quality of a density map, and relate them to experiment results on counting and localization.
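Detection on a density map with clear peaks can indeed be done with simple weighted clustering. The sketch below uses weighted k-means as a simplified stand-in for the weighted GMM clustering mentioned in the text (a GMM would additionally model each cluster's spread): the rounded integral of the map fixes the number of clusters, and pixel density values act as sample weights. The toy map and all names are illustrative assumptions.

```python
import numpy as np

def detect_from_density(dmap, iters=20):
    """Recover object locations from a density map via weighted k-means.
    The estimated count (rounded integral) fixes the number of clusters."""
    k = int(round(dmap.sum()))
    ys, xs = np.nonzero(dmap > 1e-6)
    pts = np.stack([ys, xs], axis=1).astype(float)
    w = dmap[ys, xs]
    # Deterministic farthest-point init: the peak pixel first, then the
    # support pixel farthest from all chosen centers.
    centers = [pts[np.argmax(w)]]
    for _ in range(1, k):
        d = ((pts[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(pts[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):  # weighted k-means refinement
        lab = ((pts[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            m = lab == j
            if m.any():
                centers[j] = np.average(pts[m], axis=0, weights=w[m])
    return centers

# Two compact blobs, each of mass 1 (count = 2) -> two detections near
# the blob centers (4, 4) and (15, 13).
dmap = np.zeros((20, 20))
dmap[3:6, 3:6] = 1.0 / 9
dmap[14:17, 12:15] = 1.0 / 9
centers = detect_from_density(dmap)
print(np.sort(centers[:, 0]))   # rows of the detections: near 4 and 15
```

The approach depends on the peaks being compact: a spread-out density map would blur the blobs together and make the cluster assignment ambiguous.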
@cite_1 uses density maps in a regularization term to improve standard detection and tracking. In particular, a term is added to their objective function that encourages the density map generated from the detected locations to be similar to the predicted density map, so as to reduce the number of false positives and increase the recall. The density maps estimated in @cite_1 are predicted from the detector score map, rather than from image features, resulting in spread-out density maps. In contrast to @cite_1 , we show that, when the density maps are compact and focused around the people, a simple fusion strategy can be used to combine the density map and the response map of a visual tracker (e.g., a kernelized correlation filter).
{ "cite_N": [ "@cite_1" ], "mid": [ "2147221461" ], "abstract": [ "We address the problem of person detection and tracking in crowded video scenes. While the detection of individual objects has been improved significantly over the recent years, crowd scenes remain particularly challenging for the detection and tracking tasks due to heavy occlusions, high person densities and significant variation in people's appearance. To address these challenges, we propose to leverage information on the global structure of the scene and to resolve all detections jointly. In particular, we explore constraints imposed by the crowd density and formulate person detection as the optimization of a joint energy function combining crowd density estimation and the localization of individual people. We demonstrate how the optimization of such an energy function significantly improves person detection and tracking in crowds. We validate our approach on a challenging video dataset of crowded scenes." ] }
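The fusion strategy mentioned in the related work above is not spelled out here; a minimal sketch of one simple elementwise possibility, assuming the density map and the tracker response map share the same spatial resolution (the function and array names are illustrative, not from the paper):

```python
import numpy as np

def fuse_response_with_density(response, density, eps=1e-8):
    """Weight a tracker's response map by a crowd density map.

    Both maps are assumed to share the same spatial resolution; the
    density map is normalized to [0, 1] before the elementwise product,
    so responses in regions of near-zero density are suppressed.
    """
    d = (density - density.min()) / (density.max() - density.min() + eps)
    return response * d

# Toy example: the tracker fires at (0, 0) and (2, 2), but the density
# map only supports (2, 2); fusion suppresses the false response.
response = np.zeros((5, 5))
response[0, 0] = 0.9   # false positive (no crowd density there)
response[2, 2] = 0.8   # true target
density = np.zeros((5, 5))
density[2, 2] = 1.0
fused = fuse_response_with_density(response, density)
peak = np.unravel_index(np.argmax(fused), fused.shape)
```

With a compact density map, the fused peak lands on the supported location even though the raw tracker response was stronger elsewhere.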
1705.09966
2949785681
We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate impressive results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces impressive and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.
Recent state-of-the-art image generation techniques have leveraged deep convolutional neural networks (CNNs). For example, in single-image superresolution (SISR), a deep recursive CNN was proposed in @cite_14 . Learning the upscaling filters has further improved accuracy and speed @cite_12 @cite_10 @cite_16 . A deep CNN approach that first upscales the input with bicubic interpolation was proposed in @cite_6 . The ESPCN @cite_10 instead performs SR with an efficient sub-pixel convolution layer that learns the upscaling filters in place of a handcrafted one. However, many existing CNN-based networks still generate blurry images. The SRGAN @cite_22 replaces the MSE loss, which cannot preserve texture details, with the Euclidean distance between feature maps extracted from the VGGNet, and has improved the perceptual quality of generated SR images. Its deep residual network (ResNet) generator produces good results for upscaling factors up to @math . In @cite_19 , both a perceptual feature loss and a pixel loss are used in training SISR.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_6", "@cite_19", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "", "2523714292", "54257720", "2950689937", "2505593925", "2476548250", "2950016100" ], "abstract": [ "", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. 
The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. 
We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.", "One impressive advantage of convolutional neural networks (CNNs) is their ability to automatically learn feature representation from raw pixels, eliminating the need for hand-designed procedures. However, recent methods for single image super-resolution (SR) fail to maintain this advantage. They utilize CNNs in two decoupled steps, i.e., first upsampling the low resolution (LR) image to the high resolution (HR) size with hand-designed techniques (e.g., bicubic interpolation), and then applying CNNs on the upsampled LR image to reconstruct HR results. In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN. As opposed to existing approaches, the proposed method conducts upsampling in the latent feature space with filters that are optimized for the task of image SR. In addition, the HR reconstruction is performed in a multi-scale manner to simultaneously incorporate both short- and long-range contextual information, ensuring more accurate restoration of HR images. To facilitate network training, a new training approach is designed, which jointly trains the proposed deep network with a relatively shallow network, leading to faster convergence and more superior performance. The proposed method is extensively evaluated on widely adopted data sets and improves the performance of state-of-the-art methods with a considerable margin. 
Moreover, in-depth ablation studies are conducted to verify the contribution of different network designs to image SR, providing additional insights for future research.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) has demonstrated superior performance to the previous hand-crafted models either in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). 
In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shape CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects. First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors." ] }
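The sub-pixel (upscaling) layer discussed in the related work above is, at its core, a pure rearrangement: the final convolution emits r^2 channels per output channel in low-resolution space, which are then interleaved into high-resolution space. A minimal NumPy sketch of this ESPCN-style rearrangement ("pixel shuffle"; names and shapes are illustrative):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r).

    This is the sub-pixel rearrangement used by ESPCN-style upscaling
    layers: each group of r^2 low-resolution channels is interleaved
    into an r x r block of high-resolution pixels.
    """
    cr2, h, w = x.shape
    c = cr2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into the r x r offsets
    x = x.transpose(0, 3, 1, 4, 2)    # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# 4 channels of a 2x2 map, upscale factor 2 -> 1 channel, 4x4 output.
lr = np.arange(16, dtype=float).reshape(4, 2, 2)
hr = pixel_shuffle(lr, 2)
```

Channel k of the low-resolution input lands at sub-pixel offset (k // r, k % r) of every high-resolution block, so all learning happens in LR space and the upscaling itself costs only a reshape.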
1705.09966
2949785681
We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate impressive results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces impressive and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.
Existing GANs @cite_7 @cite_18 @cite_0 have generated state-of-the-art results for automatic image generation. The key to their success lies in the adversarial loss, which forces the generated images to be indistinguishable from real images. This is achieved by two competing neural networks, the generator and the discriminator. In particular, the DCGAN @cite_21 incorporates deep convolutional neural networks into GANs, and has generated some of the most impressive realistic images to date. GANs are, however, notoriously difficult to train: they are formulated as a "minimax game" between two networks. In practice, it is hard to keep the generator and discriminator in balance, and the optimization can oscillate between solutions, which may easily cause the generator to collapse. Among different techniques, the conditional GAN @cite_20 addresses this problem by conditioning both networks on additional input; enforcing forward-backward (cycle) consistency has also emerged as one of the most effective ways to train GANs.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_21", "@cite_0", "@cite_20" ], "mid": [ "", "2099471712", "2173520492", "2964024144", "2552465644" ], "abstract": [ "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. 
Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. 
We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ] }
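The minimax game described in the related work above can be made concrete with the standard GAN losses. The numeric sketch below (illustrative, not any cited paper's implementation) evaluates them at the theoretical equilibrium D(.) = 0.5, where the discriminator loss equals 2 log 2:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -log D(x) - log(1 - D(G(z)))."""
    return -np.log(d_real) - np.log(1.0 - d_fake)

def g_loss(d_fake):
    """Non-saturating generator loss: -log D(G(z))."""
    return -np.log(d_fake)

# At the theoretical equilibrium the discriminator outputs 0.5 for both
# real and fake samples, giving a discriminator loss of 2*log(2).
eq = d_loss(0.5, 0.5)

# Away from equilibrium, a confident discriminator (d_fake small) makes
# the generator loss large, which drives the generator's updates.
g_far = g_loss(0.05)
g_near = g_loss(0.45)
```

In practice each network is updated in alternation against the other's current state, and it is exactly this coupled optimization that can oscillate or collapse without the stabilizing techniques mentioned above.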
1705.09993
2616926666
Experimenting with a new dataset of 1.6M user comments from a Greek news portal and existing datasets of English Wikipedia comments, we show that an RNN outperforms the previous state of the art in moderation. A deep, classification-specific attention mechanism improves further the overall performance of the RNN. We also compare against a CNN and a word-list baseline, considering both fully automatic and semi-automatic moderation.
In further work, the authors aimed to identify high-quality threads. Their best method converts each comment to a comment embedding using @cite_33 . An ensemble of Conditional Random Fields (CRFs) @cite_30 assigns labels (from their annotation scheme, e.g., for sentiment, off-topic) to the comments of each thread, viewing each thread as a sequence of embeddings. The decisions of the CRFs are then used to convert each thread to a feature vector (total count and mean marginal probability of each label in the thread), which is passed on to a classifier. Further improvements were observed when additional features were added, counts and @math -grams being the most important ones. They also experimented with a model similar to that of earlier work, which, however, was not a top performer, presumably because of the small size of the training set (2.1K threads).
{ "cite_N": [ "@cite_30", "@cite_33" ], "mid": [ "2147880316", "2949547296" ], "abstract": [ "We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks." ] }
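The thread-to-feature-vector step described above (per-label count plus mean marginal probability) can be sketched as follows. The marginals would come from the CRF ensemble; the array shapes and names here are assumptions for illustration only:

```python
import numpy as np

def thread_features(marginals, labels):
    """Convert per-comment label marginals into one thread-level vector.

    marginals: (n_comments, n_labels) array of marginal probabilities
    from the CRF; a comment's predicted label is the argmax of its row.
    The feature vector concatenates, per label, the count of comments
    predicted with that label and the label's mean marginal probability
    over the whole thread.
    """
    preds = marginals.argmax(axis=1)
    counts = np.array([(preds == k).sum() for k in range(len(labels))], float)
    means = marginals.mean(axis=0)
    return np.concatenate([counts, means])

# Toy thread of three comments over two labels.
m = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.6, 0.4]])
feats = thread_features(m, ["on-topic", "off-topic"])
```

The resulting fixed-length vector is what the downstream thread-level classifier consumes, regardless of how many comments the thread contains.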
1705.09993
2616926666
Experimenting with a new dataset of 1.6M user comments from a Greek news portal and existing datasets of English Wikipedia comments, we show that an RNN outperforms the previous state of the art in moderation. A deep, classification-specific attention mechanism improves further the overall performance of the RNN. We also compare against a CNN and a word-list baseline, considering both fully automatic and semi-automatic moderation.
Earlier work used posts from chat rooms and discussion fora ( @math 15K posts in total) to train a classifier to detect online harassment, relying on content, sentiment, and context features (e.g., similarity to other posts in a thread). Sentiment features have been used by several methods, but sentiment analysis @cite_2 @cite_4 is typically not directly concerned with abusive content. Our methods might also benefit from considering threads, rather than individual comments. Others point out that, unlike other abusive content, spam in comments or discussion fora @cite_9 @cite_21 is off-topic and serves a commercial purpose. Spam is unlikely in Wikipedia discussions and has so far been extremely rare in Gazzetta comments.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_4", "@cite_2" ], "mid": [ "2401383085", "", "2397482367", "2097726431" ], "abstract": [ "We present an approach for detecting link spam common in blog comments by comparing the language models used in the blog post, the comment, and pages linked by the comments. In contrast to other link spam filtering approaches, our method requires no training, no hard-coded rule sets, and no knowledge of complete-web connectivity. Preliminary experiments with identification of typical blog spam show promising results.", "", "1. Introduction 2. The problem of sentiment analysis 3. Document sentiment classification 4. Sentence subjectivity and sentiment classification 5. Aspect sentiment classification 6. Aspect and entity extraction 7. Sentiment lexicon generation 8. Analysis of comparative opinions 9. Opinion summarization and search 10. Analysis of debates and comments 11. Mining intentions 12. Detecting fake or deceptive opinions 13. Quality of reviews.", "An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. 
Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided." ] }
1705.09792
2618946976
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
Using complex parameters has numerous advantages from computational, biological, and signal-processing perspectives. From a computational point of view, @cite_0 has shown that Holographic Reduced Representations (HRRs), which use complex numbers, are numerically efficient and stable in the context of information retrieval from an associative memory. HRRs insert key-value pairs into the associative memory by addition into a single memory trace. Although not typically viewed as such, residual networks and Highway Networks have an architecture similar to associative memories: each ResNet residual path computes a residual that is then inserted, by summing, into the "memory" provided by the identity connection. Given residual networks' resounding success on several benchmarks and their functional similarity to associative memories, it seems interesting to marry the two. This motivates us to incorporate complex weights and activations in residual networks. Together, they offer a mechanism by which useful information may be retrieved, processed, and inserted in each residual block.
{ "cite_N": [ "@cite_0" ], "mid": [ "2950414499" ], "abstract": [ "We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters. The system has an associative memory based on complex-valued vectors and is closely related to Holographic Reduced Representations and Long Short-Term Memory networks. Holographic Reduced Representations have limited capacity: as they store more information, each retrieval becomes noisier due to interference. Our system in contrast creates redundant copies of stored information, which enables retrieval with reduced noise. Experiments demonstrate faster learning on multiple memorization tasks." ] }
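The associative-memory mechanism discussed above can be sketched with the classical real-valued HRR operations: binding by circular convolution, insertion by addition into a single trace, and noisy retrieval by circular correlation. This is a sketch under standard HRR assumptions (random vectors with component variance 1/n), not the cited paper's exact formulation:

```python
import numpy as np

def bind(key, value):
    """Circular convolution: associate a key with a value."""
    return np.real(np.fft.ifft(np.fft.fft(key) * np.fft.fft(value)))

def unbind(trace, key):
    """Circular correlation: retrieve the value bound to a key."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(key)) * np.fft.fft(trace)))

rng = np.random.default_rng(0)
n = 1024
k1, v1 = rng.normal(0.0, 1.0 / np.sqrt(n), (2, n))
k2, v2 = rng.normal(0.0, 1.0 / np.sqrt(n), (2, n))

# The memory trace is simply the sum of the bound pairs; retrieval of v1
# is noisy because the other stored pair contributes interference.
trace = bind(k1, v1) + bind(k2, v2)
recovered = unbind(trace, k1)
sim = recovered @ v1 / (np.linalg.norm(recovered) * np.linalg.norm(v1))
```

In practice the noisy retrieval is compared against a clean-up memory of candidate vectors; here the cosine similarity with the stored value stays well above chance (roughly 1/sqrt(n)) despite the interference, which is the capacity limitation the cited work addresses with redundant copies.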
1705.09792
2618946976
At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
Using complex weights in neural networks also has biological motivation. @cite_14 have proposed a biologically plausible deep network that constructs richer and more versatile representations using complex-valued neuronal units. The complex-valued formulation expresses a neuron's output in terms of its firing rate and the relative timing of its activity: the amplitude of the complex neuron represents the former and its phase the latter. Input neurons that have similar phases are called synchronous, as they add constructively, whereas asynchronous neurons add destructively and thus interfere with each other. This is related to the gating mechanism used in both deep feed-forward and recurrent neural networks, since gating learns to synchronize the inputs that the network propagates at a given layer or time step. In the context of deep gating-based networks, synchronization means the propagation of inputs whose controlling gates, usually the activations of a sigmoid function, simultaneously hold high values. This ability to take phase information into account might explain the effectiveness of incorporating complex-valued representations in recurrent neural networks.
{ "cite_N": [ "@cite_14" ], "mid": [ "1526708997" ], "abstract": [ "Deep learning has recently led to great successes in tasks such as image recognition (e.g , 2012). However, deep networks are still outmatched by the power and versatility of the brain, perhaps in part due to the richer neuronal computations available to cortical circuits. The challenge is to identify which neuronal mechanisms are relevant, and to find suitable abstractions to model them. Here, we show how aspects of spike timing, long hypothesized to play a crucial role in cortical information processing, could be incorporated into deep networks to build richer, versatile representations. We introduce a neural network formulation based on complex-valued neuronal units that is not only biologically meaningful but also amenable to a variety of deep learning frameworks. Here, units are attributed both a firing rate and a phase, the latter indicating properties of spike timing. We show how this formulation qualitatively captures several aspects thought to be related to neuronal synchrony, including gating of information processing and dynamic binding of distributed object representations. Focusing on the latter, we demonstrate the potential of the approach in several simple experiments. Thus, neuronal synchrony could be a flexible mechanism that fulfills multiple functional roles in deep networks." ] }
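The constructive/destructive addition described above is simply complex summation. A minimal sketch (function and variable names are illustrative) in which the amplitude of each input encodes its firing rate and the phase its relative timing:

```python
import numpy as np

def combine(phases, rates=None):
    """Sum complex-valued inputs z = rate * exp(i*phase).

    Returns the amplitude (net firing rate) and phase (net timing) of
    the summed input to a downstream complex-valued unit.
    """
    phases = np.asarray(phases, dtype=float)
    rates = np.ones(len(phases)) if rates is None else np.asarray(rates, float)
    z = np.sum(rates * np.exp(1j * phases))
    return np.abs(z), np.angle(z)

# Synchronous inputs (equal phase) add constructively: amplitude 2...
amp_sync, _ = combine([0.3, 0.3])
# ...while desynchronized inputs (opposite phase) cancel out.
amp_desync, _ = combine([0.0, np.pi])
```

Two unit-rate inputs thus deliver anywhere between zero and twice the drive of a single input depending only on their relative timing, which is the gating-like behavior the paragraph above relates to sigmoid gates.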