| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1710.08528 | 2766324716 | The emergence of social media as news sources has led to the rise of clickbait posts attempting to attract users to click on article links without informing them on the actual article content. This paper presents our efforts to create a clickbait detector inspired by fake news detection algorithms, and our submission to the Clickbait Challenge 2017. The detector is based almost exclusively on text-based features taken from previous work on clickbait detection, our own work on fake post detection, and features we designed specifically for the challenge. We use a two-level classification approach, combining the outputs of 65 first-level classifiers in a second-level feature vector. We present our exploratory results with individual features and their combinations, taken from the post text and the target article title, as well as feature selection. While our own blind tests with the dataset led to an F-score of 0.63, our final evaluation in the Challenge only achieved an F-score of 0.43. We explore the possible causes of this, and lay out potential future steps to achieve more successful results. | On the other hand, a different approach to clickbait detection would be to train a deep learning classifier. Two recent approaches have been proposed @cite_3 @cite_9 . While both begin with an embedding layer, as is common in neural networks for language processing, the former uses Recurrent Neural Network (RNN) layers, while the latter is based on convolutional layers. | {
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"2952683812",
"2560440203"
],
"abstract": [
"The use of alluring headlines (clickbait) to tempt the readers has become a growing practice nowadays. For the sake of existence in the highly competitive media industry, most of the on-line media including the mainstream ones, have started following this practice. Although the wide-spread practice of clickbait makes the reader's reliability on media vulnerable, a large scale analysis to reveal this fact is still absent. In this paper, we analyze 1.67 million Facebook posts created by 153 media organizations to understand the extent of clickbait practice, its impact and user engagement by using our own developed clickbait detection model. The model uses distributed sub-word embeddings learned from a large corpus. The accuracy of the model is 98.3 . Powered with this model, we further study the distribution of topics in clickbait and non-clickbait contents.",
"Online content publishers often use catchy headlines for their articles in order to attract users to their websites. These headlines, popularly known as clickbaits, exploit a user’s curiosity gap and lure them to click on links that often disappoint them. Existing methods for automatically detecting clickbaits rely on heavy feature engineering and domain knowledge. Here, we introduce a neural network architecture based on Recurrent Neural Networks for detecting clickbaits. Our model relies on distributed word representations learned from a large unannotated corpora, and character embeddings learned via Convolutional Neural Networks. Experimental results on a dataset of news headlines show that our model outperforms existing techniques for clickbait detection with an accuracy of 0.98 with F1-score of 0.98 and ROC-AUC of 0.99."
]
} |
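The two-level classification scheme described in the row above (first-level classifier outputs collected into a second-level feature vector) can be sketched as follows. This is a toy illustration, not the paper's actual system: the three rule-based scorers and the fixed weights are hypothetical stand-ins for the 65 trained first-level classifiers and the trained second-level model.

```python
# Toy sketch of a two-level (stacked) clickbait classifier.
# First level: simple text scorers (illustrative, not from the paper).
# Second level: their outputs form a feature vector, combined by a
# hand-picked weighted vote instead of a trained classifier.

def exclamation_score(text):
    """Fraction of characters that are '!'."""
    return text.count("!") / max(len(text), 1)

def question_score(text):
    """Fraction of characters that are '?'."""
    return text.count("?") / max(len(text), 1)

def forward_reference_score(text):
    """Crude check for 'curiosity gap' phrasing typical of clickbait."""
    cues = ("you won't believe", "this is why", "what happened next")
    return 1.0 if any(c in text.lower() for c in cues) else 0.0

FIRST_LEVEL = [exclamation_score, question_score, forward_reference_score]

def second_level_features(text):
    """First-level outputs become the second-level feature vector."""
    return [clf(text) for clf in FIRST_LEVEL]

def is_clickbait(text, weights=(2.0, 1.0, 3.0), threshold=0.5):
    feats = second_level_features(text)
    score = sum(w * f for w, f in zip(weights, feats))
    return score >= threshold

print(is_clickbait("You won't believe what happened next!"))  # True
print(is_clickbait("Parliament passes the annual budget."))   # False
```

In a real stacking setup the second level would be trained on held-out first-level predictions rather than fixed weights, to avoid leaking training labels into the meta-features.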
1710.08528 | 2766324716 | The emergence of social media as news sources has led to the rise of clickbait posts attempting to attract users to click on article links without informing them on the actual article content. This paper presents our efforts to create a clickbait detector inspired by fake news detection algorithms, and our submission to the Clickbait Challenge 2017. The detector is based almost exclusively on text-based features taken from previous work on clickbait detection, our own work on fake post detection, and features we designed specifically for the challenge. We use a two-level classification approach, combining the outputs of 65 first-level classifiers in a second-level feature vector. We present our exploratory results with individual features and their combinations, taken from the post text and the target article title, as well as feature selection. While our own blind tests with the dataset led to an F-score of 0.63, our final evaluation in the Challenge only achieved an F-score of 0.43. We explore the possible causes of this, and lay out potential future steps to achieve more successful results. | Although we had no prior experience with clickbait detection, we noted a striking similarity of the task to that of fake post detection. While the task of misleading (fake) post detection @cite_2 , i.e. evaluating whether a post contains true information or not, is not the same as clickbait detection, the approach of extracting text-based features from a post and training a classifier on them is very similar, as are the expectations concerning the distinguishing features for clickbait posts and fake posts. For example, low readability or the increased presence of punctuation are associated with both fake and clickbait posts. Given our previous experience with fake post detection @cite_1 , we decided to follow a similar approach for our submission to the clickbait detection challenge. | {
"cite_N": [
"@cite_1",
"@cite_2"
],
"mid": [
"2758048150",
"2084591134"
],
"abstract": [
"An increasing amount of posts on social media are used for disseminating news information and are accompanied by multimedia content. Such content may often be misleading or be digitally manipulated. More often than not, such pieces of content reach the front pages of major news outlets, having a detrimental effect on their credibility. To avoid such effects, there is profound need for automated methods that can help debunk and verify online content in very short time. To this end, we present a comparative study of three such methods that are catered for Twitter, a major social media platform used for news sharing. Those include: a) a method that uses textual patterns to extract claims about whether a tweet is fake or real and attribution statements about the source of the content; b) a method that exploits the information that same-topic tweets should be also similar in terms of credibility; and c) a method that uses a semi-supervised learning scheme that leverages the decisions of two independent credibility classifiers. We perform a comprehensive comparative evaluation of these approaches on datasets released by the Verifying Multimedia Use (VMU) task organized in the context of the 2015 and 2016 MediaEval benchmark. In addition to comparatively evaluating the three presented methods, we devise and evaluate a combined method based on their outputs, which outperforms all three of them. We discuss these findings and provide insights to guide future generations of verification tools for media professionals.",
"We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. On this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results shows that there are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70 to 80 ."
]
} |
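The row above notes that features such as low readability and increased punctuation are shared between fake-post and clickbait detection. A minimal sketch of that kind of text-based feature extraction, using illustrative features rather than the paper's actual feature set:

```python
# Hedged sketch of shared fake-post / clickbait text features:
# punctuation density, a crude readability proxy (average word length),
# and simple stylistic counters. Feature names are illustrative.
import re
import string

def punctuation_density(text):
    punct = sum(ch in string.punctuation for ch in text)
    return punct / max(len(text), 1)

def avg_word_length(text):
    words = re.findall(r"[A-Za-z']+", text)
    return sum(map(len, words)) / max(len(words), 1)

def feature_vector(text):
    return {
        "punct_density": punctuation_density(text),
        "avg_word_len": avg_word_length(text),       # rough readability proxy
        "n_exclaim": text.count("!"),
        "all_caps_words": sum(w.isupper() and len(w) > 1
                              for w in text.split()),
    }

fv = feature_vector("SHOCKING!!! You will NEVER guess this...")
print(fv["n_exclaim"], fv["all_caps_words"])  # 3 2
```

Such per-post feature dictionaries would then be vectorized and fed to an ordinary supervised classifier.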
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | Bisimulations for weighted automata are related to our approach, because, as argued in subsection , Lie-derivation can be naturally represented by such an automaton. Algorithms for computing largest bisimulations on weighted automata have been studied by @cite_34 @cite_26 . A crucial ingredient in these algorithms is the representation of bisimulations as finite-dimensional vector spaces. Approximate versions of this technique have also been recently considered in relation to Markov chains @cite_10 . As discussed in Remark , in the case of linear systems, the algorithm in the present paper reduces to that of @cite_34 @cite_26 . Algebraically, moving from linear to polynomial systems corresponds to moving from vector spaces to ideals, hence from linear bases to Gröbner bases. 
From the point of view of automata, this step leads to considering infinite weighted automata. In this respect, the present work may also be related to the automata-theoretic treatment of linear ode's by Fliess and Reutenauer @cite_12 . | {
"cite_N": [
"@cite_34",
"@cite_10",
"@cite_12",
"@cite_26"
],
"mid": [
"1555534061",
"1190238013",
"",
"2178359813"
],
"abstract": [
"We study bisimulation and minimization for weighted automata, relying on a geometrical representation of the model, linear weighted automata ( lwa ). In a lwa , the state-space of the automaton is represented by a vector space, and the transitions and weighting maps by linear morphisms over this vector space. Weighted bisimulations are represented by sub-spaces that are invariant under the transition morphisms. We show that the largest bisimulation coincides with weighted language equivalence, can be computed by a geometrical version of partition refinement and that the corresponding quotient gives rise to the minimal weighted-language equivalence automaton. Relations to Larsen and Skou's probabilistic bisimulation and to classical results in Automata Theory are also discussed.",
"We investigate the use of generating functions in the analysis of discrete Markov chains. Generating functions are introduced as power series whose coefficients are certain hitting probabilities. Being able to compute such functions implies that the calculation of a number of quantities of interest, including absorption probabilities, expected hitting time and number of visits, and variances thereof, becomes straightforward. We show that it is often possible to recover this information, either exactly or within excellent approximation, via the construction of Pade approximations of the involved generating function. The presented algorithms are based on projective methods from linear algebra, which can be made to work with limited computational resources. In particular, only a black-box, on-the-fly access to the transition function is presupposed, and the necessity of storing the whole model is eliminated. A few numerical experiments conducted with this technique give encouraging results.",
"",
"Weighted automata are a generalisation of non-deterministic automata where each transition, in addition to an input letter, has also a quantity expressing the weight (e.g. cost or probability) of its execution. As for non-deterministic automata, their behaviours can be expressed in terms of either (weighted) bisimilarity or (weighted) language equivalence. Coalgebras provide a categorical framework for the uniform study of state-based systems and their behaviours. In this work, we show that coalgebras can suitably model weighted automata in two different ways: coalgebras on Set (the category of sets and functions) characterise weighted bisimilarity, while coalgebras on Vect (the category of vector spaces and linear maps) characterise weighted language equivalence. Relying on the second characterisation, we show three different procedures for computing weighted language equivalence. The first one consists in a generalisation of the usual partition refinement algorithm for ordinary automata. The second one is the backward version of the first one. The third procedure relies on a syntactic representation of rational weighted languages."
]
} |
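The Lie-derivative transition structure underlying L-bisimulation in the row above can be illustrated concretely. The sketch below assumes a simple encoding of polynomials over variables x0..x{n-1} as dicts mapping exponent tuples to coefficients (an assumption of this example, not the paper's representation); for the linear system x' = y, y' = -x, the Lie derivative of x^2 + y^2 is 0, so that polynomial is an invariant of the flow.

```python
# Sketch: Lie derivative L_f(p) = sum_i (dp/dx_i) * f_i for a polynomial
# vector field f. Polynomials are dicts {exponent_tuple: coefficient}.

def partial(p, i):
    """Partial derivative of polynomial p w.r.t. variable i."""
    out = {}
    for exps, c in p.items():
        if exps[i] > 0:
            e = list(exps)
            coef = c * e[i]
            e[i] -= 1
            e = tuple(e)
            out[e] = out.get(e, 0) + coef
    return {e: c for e, c in out.items() if c != 0}

def mul(p, q):
    """Product of two polynomials (exponentwise addition of monomials)."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def lie_derivative(p, drifts):
    """Lie derivative of p along the vector field given by drifts."""
    out = {}
    for i, f in enumerate(drifts):
        for e, c in mul(partial(p, i), f).items():
            out[e] = out.get(e, 0) + c
    return {e: c for e, c in out.items() if c != 0}

# System x' = y, y' = -x  (index 0 is x, index 1 is y).
drifts = [{(0, 1): 1}, {(1, 0): -1}]
energy = {(2, 0): 1, (0, 2): 1}          # x^2 + y^2
print(lie_derivative(energy, drifts))    # {}  (zero: an invariant)
```

Iterating `lie_derivative` is exactly the transition-system step of the paper; the full algorithm additionally needs ideal membership tests via Gröbner bases, which this sketch omits.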
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | Although there exists a rich literature dealing with linear aggregation of systems of ode's (e.g. @cite_8 @cite_35 @cite_30 @cite_13 ), we are not aware of fully automated approaches to minimization (Theorem ), with the notable exception of a series of recent works by Cardelli and collaborators @cite_32 @cite_18 @cite_1 . Most closely related to ours is @cite_32 . There, for an extension of the polynomial format called IDOL , the authors introduce two flavours of differential equivalence, called forward differential equivalence (FDE) and backward differential equivalence (BDE). They provide symbolic, SMT-based partition-refinement algorithms to compute the largest equivalence of each type. While FDE is unrelated to our equivalence, BDE can be compared directly to our L-bisimulation. 
FDE groups variables in such a way that the corresponding quotient system recovers the sums of the original solutions in each class, whatever the initial condition. However, precise information on the individual original solutions cannot in general be recovered from the reduced system. In BDE, variables grouped together are guaranteed to have the same solution. The quotient system therefore makes it possible in this case to fully recover the original solutions. As such, BDE can be compared directly to our L-bisimulation. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_18",
"@cite_8",
"@cite_1",
"@cite_32",
"@cite_13"
],
"mid": [
"2032806006",
"1980200398",
"2409226593",
"1551480811",
"2409237074",
"2293292615",
"2059145118"
],
"abstract": [
"Let us consider the differential equation @math with an @math from @math to @math and suppose that there exists a transformation @math from @math to @math ( @math ) such that @math obeys a differential equation @math with some function @math ; then the first equation is said to be lumpable to the second by @math . Here mainly the case is investigated when the original differential equation has been induced by a complex chemical reaction. We provided a series of necessary and sufficient conditions for the existence of such functions @math and @math ; some of them are formulated in terms of @math and @math only. Beyond these conditions our main concern here is how lumping changes those properties of the solutions which are either interesting from the point of view of the qualitative theory of differential equations or from the point ...",
"Abstract A general analysis of exact nonlinear lumping is presented. This analysis can be applied to the kinetics of any reaction system with n species described by a set of first-order ordinary differential equations d y d t = f ( y ), where y is an n -dimensional vector and f ( y ) is an arbitrary n -dimensional function vector. We consider lumping by means of n ( n ⩽ n )-dimensional arbitrary transformation ŷ = h ( y ). The lumped differential equation system is d y D t = y ( h (ŷ))f( h (ŷ)) , where h y (y) is teh Jacobian matrix of h(y) , h is a generalized inverse transformation of h satisfying the relation h( h ) = I n . Three necessary and sufficient conditions of the existence of exact nonlinear lumping schemes have been determined. The geometric and algebraic interpretations of these conditions are discussed. It is found that a system is exactly lumpable by h only if h(y) = 0 is its invariant manifold. A linear partial differential operator A = Σ n i =1 f i ( y )ϑ ϑ y i corresponding to d y d t = f(y ) is defined. Using the eigenfunctions and the generalized eigenfunctions of A , the operator can be transformed to Jordan or diagonal canonical forms which give the lumped differential equation systems without determination of h . These approaches are illustrated by a simple example. The results of this analysis serve as a theoretical basis for the development of approaches for approximate nonlinear lumping.",
"We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations ODEs. Our approach consists in an extension of Paige and Tarjan's seminal solution to the coarsest refinement problem by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because i it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and ii we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in @math time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.",
"Preface Part I. Introduction: 1. Introduction 2. Motivating examples Part II. Preliminaries: 3. Tools from matrix theory 4. Linear dynamical systems, Part 1 5. Linear dynamical systems, Part 2 6. Sylvester and Lyapunov equations Part III. SVD-based Approximation Methods: 7. Balancing and balanced approximations 8. Hankel-norm approximation 9. Special topics in SVD-based approximation methods Part IV. Krylov-based Approximation Methods: 10. Eigenvalue computations 11. Model reduction using Krylov methods Part V. SVD-Krylov Methods and Case Studies: 12. SVD-Krylov methods 13. Case studies 14. Epilogue 15. Problems Bibliography Index.",
"We study chemical reaction networks (CRNs) as a kernel language for concurrency models with semantics based on ordinary differential equations. We investigate the problem of comparing two CRNs, i.e., to decide whether the trajectories of a source CRN can be matched by a target CRN under an appropriate choice of initial conditions. Using a categorical framework, we extend and relate model-comparison approaches based on structural (syntactic) and on dynamical (semantic) properties of a CRN, proving their equivalence. Then, we provide an algorithm to compare CRNs, running linearly in time with respect to the cardinality of all possible comparisons. Finally, we apply our results to biological models from the literature.",
"Ordinary differential equations (ODEs) are widespread in many natural sciences including chemistry, ecology, and systems biology, and in disciplines such as control theory and electrical engineering. Building on the celebrated molecules-as-processes paradigm, they have become increasingly popular in computer science, with high-level languages and formal methods such as Petri nets, process algebra, and rule-based systems that are interpreted as ODEs. We consider the problem of comparing and minimizing ODEs automatically. Influenced by traditional approaches in the theory of programming, we propose differential equivalence relations. We study them for a basic intermediate language, for which we have decidability results, that can be targeted by a class of high-level specifications. An ODE implicitly represents an uncountable state space, hence reasoning techniques cannot be borrowed from established domains such as probabilistic programs with finite-state Markov chain semantics. We provide novel symbolic procedures to check an equivalence and compute the largest one via partition refinement algorithms that use satisfiability modulo theories. We illustrate the generality of our framework by showing that differential equivalences include (i) well-known notions for the minimization of continuous-time Markov chains (lumpability), (ii) bisimulations for chemical reaction networks recently proposed by , and (iii) behavioral relations for process algebra with ODE semantics. With a prototype implementation we are able to detect equivalences in biochemical models from the literature that cannot be reduced using competing automatic techniques.",
"Detailed modeling of complex reaction systems is becoming increasingly important in the development, analysis, design, and control of chemical reaction processes. For industrial processes, complete incorporation of the chemistry into process models facilitates the minimization of byproduct and pollutant formation, increased efficiency, and improved product quality. Processes that involve complex reaction networks include a variety of noncatalytic and homogeneous or heterogeneous catalytic processes (such as fluid catalytic cracking, combustion, chemical vapor deposition, and alkylation). For some systems, large sets of relevant reactions have been identified for use in simulations.1-3 For others, the availability of advanced computing environments has enabled the automated generation of reaction networks and their models, based on computational descriptions of the reaction types occurring in the system.4-6 The use of such complex models is hindered by two obstacles. First, because of their sheer size and the presence of multiple time scales, these models are difficult to solve. Second, the models contain large numbers of uncertain (and sometimes unknown) kinetic parameters; regression to determine the parameters of complex nonlinear models is both difficult and unreliable, and the sensitivity of simulations to parameter uncertainties cannot be easily ascertained. Furthermore, for the purpose of gaining insights into the reaction system’s behavior, it is usually preferable to obtain simpler models that bring out the key features and components of the system. For these reasons, model simplification and order reduction are becoming central problems in the study of complex reaction systems. The simulation, monitoring, and control of a complex chemical process benefit from the derivation of accurate and reliable reduced models tailored to particular process modeling tasks. 
Model simplification is directly linked to identification of key reactions and sets of species that give valuable insights into the behavior of the network and how it may be influenced. Advanced control schemes such as model predictive control7 or multiple model adaptive control8 must be based on selecting appropriate reduced models and tracking key sets of species. Ideally, a model order reduction algorithm should have broad applicability, enable analysis at several levels of detail, and provide an assessment of the modeling error."
]
} |
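The exact linear lumping discussed in the rows above (reducing x' = Ax by linear aggregation) admits a small concrete check. The matrices below are illustrative, not taken from the cited works: a lumping matrix M yields an exact reduced system y' = Ahat y, where y = Mx, precisely when M A = Ahat M.

```python
# Hedged sketch of exact linear lumping for x' = A x.
# Exact-lumpability condition: M A == Ahat M, so that y = M x
# evolves autonomously as y' = Ahat y.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Symmetric exchange between two species: x1' = -x1 + x2, x2' = x1 - x2.
A = [[-1.0, 1.0],
     [1.0, -1.0]]

# Lump the total y = x1 + x2.
M = [[1.0, 1.0]]

# The total is conserved, so the reduced dynamics are y' = 0.
Ahat = [[0.0]]

print(matmul(M, A), matmul(Ahat, M))  # [[0.0, 0.0]] [[0.0, 0.0]]
```

Both products agree, so the lumping is exact: the one-variable reduced system reproduces the evolution of the aggregate y exactly, which is the flavour of reduction the largest-equivalence algorithms compute automatically.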
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | Linear aggregation and lumping of (polynomial) systems of ode's are well known in the literature, see e.g. @cite_8 @cite_13 @cite_35 @cite_30 and references therein. However, as pointed out by @cite_32 , no general algorithm for computing the largest equivalence, hence the minimal reduction (in the sense of our Theorem ), was known. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_8",
"@cite_32",
"@cite_13"
],
"mid": [
"2032806006",
"1980200398",
"1551480811",
"2293292615",
"2059145118"
],
"abstract": [
"Let us consider the differential equation @math with an @math from @math to @math and suppose that there exists a transformation @math from @math to @math ( @math ) such that @math obeys a differential equation @math with some function @math ; then the first equation is said to be lumpable to the second by @math . Here mainly the case is investigated when the original differential equation has been induced by a complex chemical reaction. We provided a series of necessary and sufficient conditions for the existence of such functions @math and @math ; some of them are formulated in terms of @math and @math only. Beyond these conditions our main concern here is how lumping changes those properties of the solutions which are either interesting from the point of view of the qualitative theory of differential equations or from the point ...",
"Abstract A general analysis of exact nonlinear lumping is presented. This analysis can be applied to the kinetics of any reaction system with n species described by a set of first-order ordinary differential equations d y d t = f ( y ), where y is an n -dimensional vector and f ( y ) is an arbitrary n -dimensional function vector. We consider lumping by means of n ( n ⩽ n )-dimensional arbitrary transformation ŷ = h ( y ). The lumped differential equation system is d y D t = y ( h (ŷ))f( h (ŷ)) , where h y (y) is teh Jacobian matrix of h(y) , h is a generalized inverse transformation of h satisfying the relation h( h ) = I n . Three necessary and sufficient conditions of the existence of exact nonlinear lumping schemes have been determined. The geometric and algebraic interpretations of these conditions are discussed. It is found that a system is exactly lumpable by h only if h(y) = 0 is its invariant manifold. A linear partial differential operator A = Σ n i =1 f i ( y )ϑ ϑ y i corresponding to d y d t = f(y ) is defined. Using the eigenfunctions and the generalized eigenfunctions of A , the operator can be transformed to Jordan or diagonal canonical forms which give the lumped differential equation systems without determination of h . These approaches are illustrated by a simple example. The results of this analysis serve as a theoretical basis for the development of approaches for approximate nonlinear lumping.",
"Preface Part I. Introduction: 1. Introduction 2. Motivating examples Part II. Preliminaries: 3. Tools from matrix theory 4. Linear dynamical systems, Part 1 5. Linear dynamical systems, Part 2 6. Sylvester and Lyapunov equations Part III. SVD-based Approximation Methods: 7. Balancing and balanced approximations 8. Hankel-norm approximation 9. Special topics in SVD-based approximation methods Part IV. Krylov-based Approximation Methods: 10. Eigenvalue computations 11. Model reduction using Krylov methods Part V. SVD-Krylov Methods and Case Studies: 12. SVD-Krylov methods 13. Case studies 14. Epilogue 15. Problems Bibliography Index.",
"Ordinary differential equations (ODEs) are widespread in many natural sciences including chemistry, ecology, and systems biology, and in disciplines such as control theory and electrical engineering. Building on the celebrated molecules-as-processes paradigm, they have become increasingly popular in computer science, with high-level languages and formal methods such as Petri nets, process algebra, and rule-based systems that are interpreted as ODEs. We consider the problem of comparing and minimizing ODEs automatically. Influenced by traditional approaches in the theory of programming, we propose differential equivalence relations. We study them for a basic intermediate language, for which we have decidability results, that can be targeted by a class of high-level specifications. An ODE implicitly represents an uncountable state space, hence reasoning techniques cannot be borrowed from established domains such as probabilistic programs with finite-state Markov chain semantics. We provide novel symbolic procedures to check an equivalence and compute the largest one via partition refinement algorithms that use satisfiability modulo theories. We illustrate the generality of our framework by showing that differential equivalences include (i) well-known notions for the minimization of continuous-time Markov chains (lumpability), (ii) bisimulations for chemical reaction networks recently proposed by , and (iii) behavioral relations for process algebra with ODE semantics. With a prototype implementation we are able to detect equivalences in biochemical models from the literature that cannot be reduced using competing automatic techniques.",
"Detailed modeling of complex reaction systems is becoming increasingly important in the development, analysis, design, and control of chemical reaction processes. For industrial processes, complete incorporation of the chemistry into process models facilitates the minimization of byproduct and pollutant formation, increased efficiency, and improved product quality. Processes that involve complex reaction networks include a variety of noncatalytic and homogeneous or heterogeneous catalytic processes (such as fluid catalytic cracking, combustion, chemical vapor deposition, and alkylation). For some systems, large sets of relevant reactions have been identified for use in simulations.1-3 For others, the availability of advanced computing environments has enabled the automated generation of reaction networks and their models, based on computational descriptions of the reaction types occurring in the system.4-6 The use of such complex models is hindered by two obstacles. First, because of their sheer size and the presence of multiple time scales, these models are difficult to solve. Second, the models contain large numbers of uncertain (and sometimes unknown) kinetic parameters; regression to determine the parameters of complex nonlinear models is both difficult and unreliable, and the sensitivity of simulations to parameter uncertainties cannot be easily ascertained. Furthermore, for the purpose of gaining insights into the reaction system’s behavior, it is usually preferable to obtain simpler models that bring out the key features and components of the system. For these reasons, model simplification and order reduction are becoming central problems in the study of complex reaction systems. The simulation, monitoring, and control of a complex chemical process benefit from the derivation of accurate and reliable reduced models tailored to particular process modeling tasks. 
Model simplification is directly linked to identification of key reactions and sets of species that give valuable insights into the behavior of the network and how it may be influenced. Advanced control schemes such as model predictive control7 or multiple model adaptive control8 must be based on selecting appropriate reduced models and tracking key sets of species. Ideally, a model order reduction algorithm should have broad applicability, enable analysis at several levels of detail, and provide an assessment of the modeling error."
]
} |
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | The seminal paper of Sankaranarayanan, Sipma and Manna @cite_19 introduced polynomial ideals to find invariants of hybrid systems. Indeed, the study of the safety of hybrid systems can be shown to reduce constructively to the problem of generating invariants for their differential equations @cite_36 . The results in @cite_19 have been subsequently refined and simplified by Sankaranarayanan using pseudo ideals @cite_2 , which enable the discovery of polynomial invariants of a special form. Other authors have adapted this approach to the case of imperative programs, see e.g. @cite_9 @cite_22 @cite_5 and references therein. Reduction and minimization do not seem to be a concern in this field. | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_9",
"@cite_19",
"@cite_2",
"@cite_5"
],
"mid": [
"2003141394",
"2080884201",
"2109179224",
"2098045685",
"2137258051",
"1971043610"
],
"abstract": [
"We present two automatic program analyses. The first analysis checks if a given polynomial relation holds among the program variables whenever control reaches a given program point. It fully interprets assignment statements with polynomial expressions on the right-hand side and polynomial disequality guards. Other assignments are treated as non-deterministically assigning any value and guards that are not polynomial disequalities are ignored. The second analysis extends this checking procedure. It computes the set of all polynomial relations of an arbitrary given form that are valid at a given target program point. It is also complete up to the abstraction described above.",
"We study the logic of dynamical systems, that is, logics and proof principles for properties of dynamical systems. Dynamical systems are mathematical models describing how the state of a system evolves over time. They are important in modeling and understanding many applications, including embedded systems and cyber-physical systems. In discrete dynamical systems, the state evolves in discrete steps, one step at a time, as described by a difference equation or discrete state transition relation. In continuous dynamical systems, the state evolves continuously along a function, typically described by a differential equation. Hybrid dynamical systems or hybrid systems combine both discrete and continuous dynamics. This is a brief survey of differential dynamic logic for specifying and verifying properties of hybrid systems. We explain hybrid system models, differential dynamic logic, its semantics, and its axiomatization for proving logical formulas about hybrid systems. We study differential invariants, i.e., induction principles for differential equations. We briefly survey theoretical results, including soundness and completeness and deductive power. Differential dynamic logic has been implemented in automatic and interactive theorem provers and has been used successfully to verify safety-critical applications in automotive, aviation, railway, robotics, and analogue electrical circuits.",
"We propose a static analysis for computing polynomial invariants for imperative programs. The analysis is derived from an abstract interpretation of a backwards semantics, and computes pre-conditions for equalities like g=0 to hold at the end of execution. A distinguishing feature of the technique is that it computes polynomial loop invariants without resorting to Grobner base computations. The analysis uses remainder computations over parameterized polynomials in order to handle conditionals and loops efficiently. The algorithm can analyse and find a large majority of loop invariants reported previously in the literature, and executes significantly faster than implementations using Grobner bases.",
"We present a new technique for the generation of non-linear (algebraic) invariants of a program. Our technique uses the theory of ideals over polynomial rings to reduce the non-linear invariant generation problem to a numerical constraint solving problem. So far, the literature on invariant generation has been focussed on the construction of linear invariants for linear programs. Consequently, there has been little progress toward non-linear invariant generation. In this paper, we demonstrate a technique that encodes the conditions for a given template assertion being an invariant into a set of constraints, such that all the solutions to these constraints correspond to non-linear (algebraic) loop invariants of the program. We discuss some trade-offs between the completeness of the technique and the tractability of the constraint-solving problem generated. The application of the technique is demonstrated on a few examples.",
"We present computational techniques for automatically generating algebraic (polynomial equality) invariants for algebraic hybrid systems. Such systems involve ordinary differential equations with multivariate polynomial right-hand sides. Our approach casts the problem of generating invariants for differential equations as the greatest fixed point of a monotone operator over the lattice of ideals in a polynomial ring. We provide an algorithm to compute this monotone operator using basic ideas from commutative algebraic geometry. However, the resulting iteration sequence does not always converge to a fixed point, since the lattice of ideals over a polynomial ring does not satisfy the descending chain condition. We then present a bounded-degree relaxation based on the concept of \"pseudo ideals\", due to Colon, that restricts ideal membership using multipliers with bounded degrees. We show that the monotone operator on bounded degree pseudo ideals is convergent and generates fixed points that can be used to generate useful algebraic invariants for non-linear systems. The technique for continuous systems is then extended to consider hybrid systems with multiple modes and discrete transitions between modes. We have implemented the exact, non-convergent iteration over ideals in combination with the bounded degree iteration over pseudo ideals to guarantee convergence. This has been applied to automatically infer useful and interesting polynomial invariants for some benchmark non-linear systems.",
"This paper presents a method for automatically generating all polynomial invariants in simple loops. It is first shown that the set of polynomials serving as loop invariants has the algebraic structure of an ideal. Based on this connection, a fixpoint procedure using operations on ideals and Grobner basis constructions is proposed for finding all polynomial invariants. Most importantly, it is proved that the procedure terminates in at most m+1 iterations, where m is the number of program variables. The proof relies on showing that the irreducible components of the varieties associated with the ideals generated by the procedure either remain the same or increase their dimension at every iteration of the fixpoint procedure. This yields a correct and complete algorithm for inferring conjunctions of polynomial equalities as invariants. The method has been implemented in Maple using the Groebner package. The implementation has been used to automatically discover non-trivial invariants for several examples to illustrate the power of the technique."
]
} |
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | Still in the field of formal verification of hybrid systems, Platzer has introduced differential dynamic logic to reason on hybrid systems @cite_39 . The rules of this logic implement a fundamentally inductive, rather than coinductive, proof method. Mostly related to ours is Ghorbal and Platzer's recent work on polynomial invariants @cite_0 . On the one hand, they characterize algebraically invariant regions of vector fields -- as opposed to initial value problems, as we do. On the other hand, they offer sufficient conditions under which the trajectories induced by specific initial values satisfy all instances of a polynomial template (cf. Prop. 3 of Pla14 ). 
The latter result compares with ours, but the resulting method appears to be not (relatively) complete in the sense of our double chain algorithm. Moreover, the computational prerequisites of @cite_0 (symbolic linear programming, exponential size matrices, symbolic root extraction) are very different from ours, and much more demanding. Again, minimization is not addressed. | {
"cite_N": [
"@cite_0",
"@cite_39"
],
"mid": [
"2182451272",
"1977444293"
],
"abstract": [
"We prove that any invariant algebraic set of a given polynomial vector field can be algebraically represented by one polynomial and a finite set of its successive Lie derivatives. This so-called differential radical characterization relies on a sound abstraction of the reachable set of solutions by the smallest variety that contains it. The characterization leads to a differential radical invariant proof rule that is sound and complete, which implies that invariance of algebraic equations over real-closed fields is decidable. Furthermore, the problem of generating invariant varieties is shown to be as hard as minimizing the rank of a symbolic matrix, and is therefore NP-hard. We investigate symbolic linear algebra tools based on Gaussian elimination to efficiently automate the generation. The approach can, e.g., generate nontrivial algebraic invariant equations capturing the airplane behavior during take-off or landing in longitudinal motion.",
"Hybrid systems are models for complex physical systems and are defined as dynamical systems with interacting discrete transitions and continuous evolutions along differential equations. With the goal of developing a theoretical and practical foundation for deductive verification of hybrid systems, we introduce a dynamic logic for hybrid programs, which is a program notation for hybrid systems. As a verification technique that is suitable for automation, we introduce a free variable proof calculus with a novel combination of real-valued free variables and Skolemisation for lifting quantifier elimination for real arithmetic to dynamic logic. The calculus is compositional, i.e., it reduces properties of hybrid programs to properties of their parts. Our main result proves that this calculus axiomatises the transition behaviour of hybrid systems completely relative to differential equations. In a case study with cooperating traffic agents of the European Train Control System, we further show that our calculus is well-suited for verifying realistic hybrid systems with parametric system dynamics."
]
} |
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | Ideas from Algebraic Geometry have been fruitfully applied also in Program Analysis. Relevant to our work is Müller-Olm and Seidl's work @cite_22 , where an algorithm to compute all polynomial invariants up to a given degree of an imperative program is provided. Similarly to what we do, they reduce the core problem to a linear-algebraic one. However, since the setting in @cite_22 is discrete rather than continuous, the techniques employed there are otherwise quite different, mainly because: (a) the construction of the ideal chain is driven by the program's operational semantics, rather than by Lie derivatives; (b) the found polynomial invariants must be valid under all initial program states, not just under the user-specified one. 
If transferred to a continuous setting, condition (b) would lead in most cases to trivial invariants. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2003141394"
],
"abstract": [
"We present two automatic program analyses. The first analysis checks if a given polynomial relation holds among the program variables whenever control reaches a given program point. It fully interprets assignment statements with polynomial expressions on the right-hand side and polynomial disequality guards. Other assignments are treated as non-deterministically assigning any value and guards that are not polynomial disequalities are ignored. The second analysis extends this checking procedure. It computes the set of all polynomial relations of an arbitrary given form that are valid at a given target program point. It is also complete up to the abstraction described above."
]
} |
1710.08350 | 2952382238 | We consider reasoning and minimization in systems of polynomial ordinary differential equations (ode's). The ring of multivariate polynomials is employed as a syntax for denoting system behaviours. We endow this set with a transition system structure based on the concept of Lie-derivative, thus inducing a notion of L-bisimulation. We prove that two states (variables) are L-bisimilar if and only if they correspond to the same solution in the ode's system. We then characterize L-bisimilarity algebraically, in terms of certain ideals in the polynomial ring that are invariant under Lie-derivation. This characterization allows us to develop a complete algorithm, based on building an ascending chain of ideals, for computing the largest L-bisimulation containing all valid identities that are instances of a user-specified template. A specific largest L-bisimulation can be used to build a reduced system of ode's, equivalent to the original one, but minimal among all those obtainable by linear aggregation of the original equations. A computationally less demanding approximate reduction and linearization technique is also proposed. | In nonlinear Control Theory, there is a huge amount of literature on model order reduction ( mor ), which aims at reducing the size of a given system while preserving some properties of interest, such as stability and passivity. A well-established approach relies on building truncated Taylor expansions of the given systems @cite_38 @cite_17 , repeated at various points along a trajectory of interest, to keep the approximation error globally small: a technique known as trajectory piecewise linear approximation ( tpwl ), see e.g. @cite_14 . One wonders whether our approximate linearization technique of Section might conveniently serve as a building block of this strategy. | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_17"
],
"mid": [
"2119695238",
"2133642820",
"2123947147"
],
"abstract": [
"A compact nonlinear model order-reduction method (NORM) is presented that is applicable for time-invariant and periodically time-varying weakly nonlinear systems. NORM is suitable for model order reduction of a class of weakly nonlinear systems that can be well characterized by low-order Volterra functional series. The automatically extracted macromodels capture not only the first-order (linear) system properties, but also the important second-order effects of interest that cannot be neglected for a broad range of applications. Unlike the existing projection-based reduction methods for weakly nonlinear systems, NORM begins with the general matrix-form Volterra nonlinear transfer functions to derive a set of minimum Krylov subspaces for order reduction. Moment matching of the nonlinear transfer functions by projection of the original system onto this set of minimum Krylov subspaces leads to a significant reduction of model size. As we will demonstrate as part of comparison with existing methods, the efficacy of model reduction for weakly nonlinear systems is determined by the achievable model compactness. Our results further indicate that a multipoint version of NORM can substantially improve the model compactness for nonlinear system reduction. Furthermore, we show that the structure of the nonlinear system can be exploited to simplify the reduced model in practice, which is particularly effective for circuits with sharp frequency selectivity. We demonstrate the practical utility of NORM and its extension for macromodeling weakly nonlinear RF communication circuits with periodically time-varying behavior.",
"In this paper, we present an approach to nonlinear model reduction based on representing a nonlinear system with a piecewise-linear system and then reducing each of the pieces with a Krylov projection. However, rather than approximating the individual components as piecewise linear and then composing hundreds of components to make a system with exponentially many different linear regions, we instead generate a small set of linearizations about the state trajectory which is the response to a \"training input.\" Computational results and performance data are presented for an example of a micromachined switch and selected nonlinear circuits. These examples demonstrate that the macromodels obtained with the proposed reduction algorithm are significantly more accurate than models obtained with linear or recently developed quadratic reduction techniques. Also, we propose a procedure for a posteriori estimation of the simulation error, which may be used to determine the accuracy of the extracted trajectory piecewise-linear reduced-order models. Finally, it is shown that the proposed model order reduction technique is computationally inexpensive, and that the models can be constructed \"on the fly,\" to accelerate simulation of the system response.",
"The problem of automated macromodel generation is interesting from the viewpoint of system-level design because if small, accurate reduced-order models of system component blocks can be extracted, then much larger portions of a design, or more complicated systems, can be simulated or verified than if the analysis were to have to proceed at a detailed level. The prospect of generating the reduced model from a detailed analysis of component blocks is attractive because then the influence of second-order device effects or parasitic components on the overall system performance can be assessed. In this way overly conservative design specifications can be avoided. This paper reports on experiences with extending model reduction techniques to nonlinear systems of differential-algebraic equations, specifically, systems representative of RF circuit components. The discussion proceeds from linear time-varying, to weakly nonlinear, to nonlinear time-varying analysis, relying generally on perturbational techniques to handle deviations from the linear time-invariant case. The main intent is to explore which perturbational techniques work, which do not, and outline some problems that remain to be solved in developing robust, general nonlinear reduction methods."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | There is a rich body of literature on generating a graph sample that reveals graph properties of the larger "true" graph. We do not attempt to fully survey this literature, and only refer to results directly related to our work. The works of Leskovec & Faloutsos @cite_5 , Maiya & Berger-Wolf @cite_35 , and Ahmed, Neville, & Kompella @cite_39 @cite_40 provide excellent surveys of multiple sampling methods. | {
"cite_N": [
"@cite_5",
"@cite_40",
"@cite_35",
"@cite_39"
],
"mid": [
"2146008005",
"2963316155",
"2033995706",
"180417844"
],
"abstract": [
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph.",
"Network sampling is integral to the analysis of social, information, and biological networks. Since many real-world networks are massive in size, continuously evolving, and or distributed in nature, the network structure is often sampled in order to facilitate study. For these reasons, a more thorough and complete understanding of network sampling is critical to support the field of network science. In this paper, we outline a framework for the general problem of network sampling by highlighting the different objectives, population and units of interest, and classes of network sampling methods. In addition, we propose a spectrum of computational models for network sampling methods, ranging from the traditionally studied model based on the assumption of a static domain to a more challenging model that is appropriate for streaming domains. We design a family of sampling methods based on the concept of graph induction that generalize across the full spectrum of computational models (from static to streaming) while efficiently preserving many of the topological properties of the input graphs. Furthermore, we demonstrate how traditional static sampling algorithms can be modified for graph streams for each of the three main classes of sampling methods: node, edge, and topology-based sampling. Experimental results indicate that our proposed family of sampling methods more accurately preserve the underlying properties of the graph in both static and streaming domains. Finally, we study the impact of network sampling algorithms on the parameter estimation and performance evaluation of relational classification algorithms.",
"From social networks to P2P systems, network sampling arises in many settings. We present a detailed study on the nature of biases in network sampling strategies to shed light on how best to sample from networks. We investigate connections between specific biases and various measures of structural representativeness. We show that certain biases are, in fact, beneficial for many applications, as they \"push\" the sampling process towards inclusion of desired properties. Finally, we describe how these sampling biases can be exploited in several, real-world applications including disease outbreak detection and market research.",
"Recently, there has been a great deal of research focusing on the development of sampling algorithms for networks with small-world and or power-law structure. The peerto-peer research community (e.g., [7]) have used sampling to quickly explore and obtain a good representative sample of the network topology, as these networks are hard to explore completely and have significant amounts of churn in their topology. For collecting data from social networks, researchers often use snowball sampling (e.g., [2]) due to the lack of access to the complete graph. have developed Forest Fire Sampling, which uses a hybrid combination of snowball sampling and random-walk sampling to produce samples that match the temporal evolution of the underlying social network [5]. have developed a Metropolis algorithm which samples in a manner designed to match desired properties in the original network [3]. Although there has been a great deal of research focusing on the the development of sampling algorithms, much of this work is based on empirical study and evaluation (i.e., measuring the similarity between sampled and original network properties). There has been some work (e.g., [4, 8, 6]) that has studied the statistical properties of samples of complex networks produced by traditional sampling algorithms such as node sampling, edge sampling and random walks. However, there has been relatively little attention paid to the development of a theoretical foundation for sampling from networks—including a formal framework for sampling, an understanding of various network characteristics and their dependencies, and an analysis of their impact on the accuracy of sampling algorithms. In this paper, we reconsider the foundations of network sampling and attempt to formalize the goals, and process of, sampling, in order to frame future development and analysis of sampling algorithms."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | There are a number of sampling methods based on random crawls: forest-fire @cite_5 , snowball sampling @cite_35 , and expansion sampling @cite_5 . As has been detailed in previous work, these methods tend to bias certain parts of the network, which can be exploited for more accurate estimates of various properties @cite_5 @cite_0 @cite_44 . 
A series of papers by Ahmed, Neville, and Kompella @cite_39 @cite_36 @cite_40 @cite_7 has proposed alternate sampling methods that combine random vertices and edges to get better representative samples. | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_36",
"@cite_39",
"@cite_0",
"@cite_44",
"@cite_40",
"@cite_5"
],
"mid": [
"2033995706",
"2124450885",
"2074932875",
"180417844",
"",
"",
"2963316155",
"2146008005"
],
"abstract": [
"From social networks to P2P systems, network sampling arises in many settings. We present a detailed study on the nature of biases in network sampling strategies to shed light on how best to sample from networks. We investigate connections between specific biases and various measures of structural representativeness. We show that certain biases are, in fact, beneficial for many applications, as they \"push\" the sampling process towards inclusion of desired properties. Finally, we describe how these sampling biases can be exploited in several, real-world applications including disease outbreak detection and market research.",
"Sampling is a standard approach in big-graph analytics; the goal is to efficiently estimate the graph properties by consulting a sample of the whole population. A perfect sample is assumed to mirror every property of the whole population. Unfortunately, such a perfect sample is hard to collect in complex populations such as graphs (e.g. web graphs, social networks), where an underlying network connects the units of the population. Therefore, a good sample will be representative in the sense that graph properties of interest can be estimated with a known degree of accuracy. While previous work focused particularly on sampling schemes to estimate certain graph properties (e.g. triangle count), much less is known for the case when we need to estimate various graph properties with the same sampling scheme. In this paper, we pro- pose a generic stream sampling framework for big-graph analytics, called Graph Sample and Hold (gSH), which samples from massive graphs sequentially in a single pass, one edge at a time, while maintaining a small state in memory. We use a Horvitz-Thompson construction in conjunction with a scheme that samples arriving edges without adjacencies to previously sampled edges with probability p and holds edges with adjacencies with probability q. Our sample and hold framework facilitates the accurate estimation of subgraph patterns by enabling the dependence of the sampling process to vary based on previous history. Within our framework, we show how to produce statistically unbiased estimators for various graph properties from the sample. Given that the graph analytics will run on a sample instead of the whole population, the runtime complexity is kept under control. Moreover, given that the estimators are unbiased, the approximation error is also kept under control. Finally, we test the performance of the proposed framework (gSH) on various types of graphs, showing that from a sample with -- 40K edges, it produces estimates with relative errors",
"In order to efficiently study the characteristics of network domains and support development of network systems (e.g. algorithms, protocols that operate on networks), it is often necessary to sample a representative subgraph from a large complex network. Although recent subgraph sampling methods have been shown to work well, they focus on sampling from memory-resident graphs and assume that the sampling algorithm can access the entire graph in order to decide which nodes/edges to select. Many large-scale network datasets, however, are too large and/or dynamic to be processed using main memory (e.g., email, tweets, wall posts). In this work, we formulate the problem of sampling from large graph streams. We propose a streaming graph sampling algorithm that dynamically maintains a representative sample in a reservoir based setting. We evaluate the efficacy of our proposed methods empirically using several real-world data sets. Across all datasets, we found that our method produces samples that better preserve the original graph distributions.",
"Recently, there has been a great deal of research focusing on the development of sampling algorithms for networks with small-world and/or power-law structure. The peer-to-peer research community (e.g., [7]) have used sampling to quickly explore and obtain a good representative sample of the network topology, as these networks are hard to explore completely and have significant amounts of churn in their topology. For collecting data from social networks, researchers often use snowball sampling (e.g., [2]) due to the lack of access to the complete graph. Leskovec et al. have developed Forest Fire Sampling, which uses a hybrid combination of snowball sampling and random-walk sampling to produce samples that match the temporal evolution of the underlying social network [5]. Others have developed a Metropolis algorithm which samples in a manner designed to match desired properties in the original network [3]. Although there has been a great deal of research focusing on the development of sampling algorithms, much of this work is based on empirical study and evaluation (i.e., measuring the similarity between sampled and original network properties). There has been some work (e.g., [4, 8, 6]) that has studied the statistical properties of samples of complex networks produced by traditional sampling algorithms such as node sampling, edge sampling and random walks. However, there has been relatively little attention paid to the development of a theoretical foundation for sampling from networks—including a formal framework for sampling, an understanding of various network characteristics and their dependencies, and an analysis of their impact on the accuracy of sampling algorithms. In this paper, we reconsider the foundations of network sampling and attempt to formalize the goals, and process of, sampling, in order to frame future development and analysis of sampling algorithms.",
"",
"",
"Network sampling is integral to the analysis of social, information, and biological networks. Since many real-world networks are massive in size, continuously evolving, and or distributed in nature, the network structure is often sampled in order to facilitate study. For these reasons, a more thorough and complete understanding of network sampling is critical to support the field of network science. In this paper, we outline a framework for the general problem of network sampling by highlighting the different objectives, population and units of interest, and classes of network sampling methods. In addition, we propose a spectrum of computational models for network sampling methods, ranging from the traditionally studied model based on the assumption of a static domain to a more challenging model that is appropriate for streaming domains. We design a family of sampling methods based on the concept of graph induction that generalize across the full spectrum of computational models (from static to streaming) while efficiently preserving many of the topological properties of the input graphs. Furthermore, we demonstrate how traditional static sampling algorithms can be modified for graph streams for each of the three main classes of sampling methods: node, edge, and topology-based sampling. Experimental results indicate that our proposed family of sampling methods more accurately preserve the underlying properties of the graph in both static and streaming domains. Finally, we study the impact of network sampling algorithms on the parameter estimation and performance evaluation of relational classification algorithms.",
"Given a huge real graph, how can we derive a representative sample? There are many known algorithms to compute interesting measures (shortest paths, centrality, betweenness, etc.), but several of them become impractical for large graphs. Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small can the sample size be, and (c) how to scale up the measurements of the sample (e.g., the diameter), to get estimates for the large graph. The deeper, underlying question is subtle: how do we measure success? We answer the above questions, and test our answers by thorough experiments on several, diverse datasets, spanning thousands of nodes and edges. We consider several sampling methods, propose novel methods to check the goodness of sampling, and develop a set of scaling laws that describe relations between the properties of the original and the sample. In addition to the theoretical contributions, the practical conclusions from our work are: Sampling strategies based on edge selection do not perform well; simple uniform random node selection performs surprisingly well. Overall, best performing methods are the ones based on random-walks and \"forest fire\"; they match very accurately both static as well as evolutionary graph patterns, with sample sizes down to about 15% of the original graph."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | Some methods try to match the shape family of the distribution, rather than estimate it as a whole @cite_22 . Thus, statistical methods can be used to estimate parameters of the distribution. But it is reasonably well-established that real-world degree distributions are rarely pure power laws @cite_54 . Indeed, fitting a power law is rather challenging and naive regression fits on log-log plots are erroneous, as the results of Clauset-Shalizi-Newman showed @cite_54 . | {
"cite_N": [
"@cite_54",
"@cite_22"
],
"mid": [
"2000042664",
"1976575590"
],
"abstract": [
"Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution—the part of the distribution representing large but rare events—and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov (KS) statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data, while in others the power law is ruled out.",
"We discuss two sampling schemes for selecting random subnets from a network, random sampling and connectivity dependent sampling, and investigate how the degree distribution of a node in the network is affected by the two types of sampling. Here we derive a necessary and sufficient condition that guarantees that the degree distributions of the subnet and the true network belong to the same family of probability distributions. For completely random sampling of nodes we find that this condition is satisfied by classical random graphs; for the vast majority of networks this condition will, however, not be met. We furthermore discuss the case where the probability of sampling a node depends on the degree of a node and we find that even classical random graphs are no longer closed under this sampling regime. We conclude by relating the results to real Escherichia coli protein interaction network data."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | The subfield of property testing within theoretical computer science can be thought of as a formalization of graph sampling to estimate properties. Indeed, our description of the main problem follows this language. There is a very rich body of mathematical work in this area (refer to Ron's survey @cite_9 ). Practical applications of graph property testing are quite rare, and we are only aware of one previous work on applications for finding dense cores in router networks @cite_3 .
The specific problem of estimating the average degree (or the total number of edges) was studied by Feige @cite_21 and Goldreich-Ron @cite_28 . Subsequent works focus on the problem of estimating higher moments of the degree distribution @cite_1 @cite_2 . One of the main techniques we use, the simulation of edge queries, was developed in the sublinear algorithms results of @cite_17 @cite_15 in the context of triangle counting and degree moment estimation. We stress that all these results are purely theoretical, and their practicality is by no means obvious. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_2",
"@cite_15",
"@cite_17"
],
"mid": [
"2012438382",
"2012797687",
"2121707217",
"1994987445",
"2152070152",
"2590395400",
"",
"2761580201"
],
"abstract": [
"Inspired by Feige (36th STOC, 2004), we initiate a study of sublinear randomized algorithms for approximating average parameters of a graph. Specifically, we consider the average degree of a graph and the average distance between pairs of vertices in a graph. Since our focus is on sublinear algorithms, these algorithms access the input graph via queries to an adequate oracle. We consider two types of queries. The first type is standard neighborhood queries (i.e., what is the ith neighbor of vertex v?), whereas the second type are queries regarding the quantities that we need to find the average of (i.e., what is the degree of vertex v? and what is the distance between u and v?, respectively). Loosely speaking, our results indicate a difference between the two problems: For approximating the average degree, the standard neighbor queries suffice and in fact are preferable to degree queries. In contrast, for approximating average distances, the standard neighbor queries are of little help whereas distance queries are crucial. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 Supported by Israel Internet Association (ISOC-IL). This article is dedicated in memory of Shimon Even (1935–2004).",
"Property testing algorithms are \"ultra\"-efficient algorithms that decide whether a given object (e.g., a graph) has a certain property (e.g., bipartiteness), or is significantly different from any object that has the property. To this end property testing algorithms are given the ability to perform (local) queries to the input, though the decision they need to make usually concerns properties with a global nature. In the last two decades, property testing algorithms have been designed for many types of objects and properties, amongst them, graph properties, algebraic properties, geometric properties, and more. In this monograph we survey results in property testing, where our emphasis is on common analysis and algorithmic techniques. Among the techniques surveyed are the following: The self-correcting approach, which was mainly applied in the study of property testing of algebraic properties; The enforce-and-test approach, which was applied quite extensively in the analysis of algorithms for testing graph properties (in the dense-graphs model), as well as in other contexts; Szemeredi's Regularity Lemma, which plays a very important role in the analysis of algorithms for testing graph properties (in the dense-graphs model); The approach of Testing by implicit learning, which implies efficient testability of membership in many functions classes; and Algorithmic techniques for testing properties of sparse graphs, which include local search and random walks.",
"We prove the following inequality: for every positive integer n and every collection X_1, ..., X_n of nonnegative independent random variables that each has expectation 1, the probability that their sum remains below n+1 is at least α > 0. Our proof produces a value of α = 1/13 ≅ 0.077, but we conjecture that the inequality also holds with α = 1/e ≅ 0.368. As an example for the use of the new inequality, we consider the problem of estimating the average degree of a graph by querying the degrees of some of its vertices. We show the following threshold behavior: approximation factors above 2 require far less queries than approximation factors below 2. The new inequality is used in order to get tight (up to multiplicative constant factors) relations between the number of queries and the quality of the approximation. We show how the degree approximation algorithm can be used in order to quickly find those edges in a network that belong to many shortest paths.",
"Detecting and counting the number of copies of certain subgraphs (also known as network motifs or graphlets) is motivated by applications in a variety of areas ranging from biology to the study of the World Wide Web. Several polynomial-time algorithms have been suggested for counting or detecting the number of occurrences of certain network motifs. However, a need for more efficient algorithms arises when the input graph is very large, as is indeed the case in many applications of motif counting. In this paper we design sublinear-time algorithms for approximating the number of copies of certain constant-size subgraphs in a graph G. That is, our algorithms do not read the whole graph, but rather query parts of the graph. Specifically, we consider algorithms that may query the degree of any vertex of their choice and may ask for any neighbor of any vertex of their choice. The main focus of this work is on the basic problem of counting the number of length-2 paths and more generally on counting the number of...",
"The connectivity of the Internet crucially depends on the relationships between thousands of Autonomous Systems (ASes) that exchange routing information using the Border Gateway Protocol (BGP). These relationships can be modeled as a graph, called the AS-graph, in which the vertices model the ASes, and the edges model the peering arrangements between the ASes. Based on topological studies, it is widely believed that the Internet graph contains a central dense-core: Informally, this is a small set of high-degree, tightly interconnected ASes that participate in a large fraction of end-to-end routes. Finding this dense-core is a very important practical task when analyzing the Internet's topology. In this work we introduce a randomized sublinear algorithm that finds a dense-core of the AS-graph. We mathematically prove the correctness of our algorithm, bound the density of the core it returns, and analyze its running time. We also implemented our algorithm and tested it on real AS-graph data and on real undirected version of WWW network data. Our results show that the core discovered by our algorithm is nearly identical to the cores found by existing algorithms - at a fraction of the running time.",
"We revisit the classic problem of estimating the degree distribution moments of an undirected graph. Consider an undirected graph @math with @math vertices, and define (for @math ) @math . Our aim is to estimate @math within a multiplicative error of @math (for a given approximation parameter @math ) in sublinear time. We consider the sparse graph model that allows access to: uniform random vertices, queries for the degree of any vertex, and queries for a neighbor of any vertex. For the case of @math (the average degree), @math queries suffice for any constant @math (Feige, SICOMP 06 and Goldreich-Ron, RSA 08). Gonen-Ron-Shavitt (SIDMA 11) extended this result to all integral @math , by designing an algorithms that performs @math queries. We design a new, significantly simpler algorithm for this problem. In the worst-case, it exactly matches the bounds of Gonen-Ron-Shavitt, and has a much simpler proof. More importantly, the running time of this algorithm is connected to the degeneracy of @math . This is (essentially) the maximum density of an induced subgraph. For the family of graphs with degeneracy at most @math , it has a query complexity of @math . Thus, for the class of bounded degeneracy graphs (which includes all minor closed families and preferential attachment graphs), we can estimate the average degree in @math queries, and can estimate the variance of the degree distribution in @math queries. This is a major improvement over the previous worst-case bounds. Our key insight is in designing an estimator for @math that has low variance when @math does not have large dense subgraphs.",
"",
"We consider the problem of estimating the number of triangles in a graph. This problem has been extensively studied in both theory and practice, but all existing algorithms read the entire graph. In this work we design a sublinear-time algorithm for approximating the number of triangles in a graph, where the algorithm is given query access to the graph. The allowed queries are degree queries, vertex-pair queries, and neighbor queries. We show that for any given approximation parameter @math , the algorithm provides an estimate @math such that, with high constant probability, @math , where @math is the number of triangles in the graph @math . The expected query complexity of the algorithm is @math , where @math is the number of vertices in the graph and @math is the number of edges. The expected running time of the algorithm is $O(n/t^{1/3} + m^{3/2}/t)$..."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | On the practical side, Dasgupta, Kumar, and Sarlos study average degree estimation in real graphs, and develop alternate algorithms @cite_13 . They require the graph to have low mixing time and demonstrate that the algorithm has excellent behavior in practice (compared to implementations of Feige's and the Goldreich-Ron algorithm @cite_21 @cite_28 ). They note that sampling uniform random vertices is not possible in many settings, and thus they consider a significantly weaker setting than SM or HDM.
Other work focuses on sampling uniform random vertices, using only a small set of seed vertices and neighbor queries @cite_37 . | {
"cite_N": [
"@cite_28",
"@cite_37",
"@cite_21",
"@cite_13"
],
"mid": [
"2012438382",
"2336754337",
"2121707217",
"2075709248"
],
"abstract": [
"Inspired by Feige (36th STOC, 2004), we initiate a study of sublinear randomized algorithms for approximating average parameters of a graph. Specifically, we consider the average degree of a graph and the average distance between pairs of vertices in a graph. Since our focus is on sublinear algorithms, these algorithms access the input graph via queries to an adequate oracle. We consider two types of queries. The first type is standard neighborhood queries (i.e., what is the ith neighbor of vertex v?), whereas the second type are queries regarding the quantities that we need to find the average of (i.e., what is the degree of vertex v? and what is the distance between u and v?, respectively). Loosely speaking, our results indicate a difference between the two problems: For approximating the average degree, the standard neighbor queries suffice and in fact are preferable to degree queries. In contrast, for approximating average distances, the standard neighbor queries are of little help whereas distance queries are crucial. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 Supported by Israel Internet Association (ISOC-IL). This article is dedicated in memory of Shimon Even (1935–2004).",
"Random walk is an important tool in many graph mining applications including estimating graph parameters, sampling portions of the graph, and extracting dense communities. In this paper we consider the problem of sampling nodes from a large graph according to a prescribed distribution by using random walk as the basic primitive. Our goal is to obtain algorithms that make a small number of queries to the graph but output a node that is sampled according to the prescribed distribution. Focusing on the uniform distribution case, we study the query complexity of three algorithms and show a near-tight bound expressed in terms of the parameters of the graph such as average degree and the mixing time. Both theoretically and empirically, we show that some algorithms are preferable in practice than the others. We also extend our study to the problem of sampling nodes according to some polynomial function of their degrees; this has implications for designing efficient algorithms for applications such as triangle counting.",
"We prove the following inequality: for every positive integer n and every collection X_1, ..., X_n of nonnegative independent random variables that each has expectation 1, the probability that their sum remains below n+1 is at least α > 0. Our proof produces a value of α = 1/13 ≅ 0.077, but we conjecture that the inequality also holds with α = 1/e ≅ 0.368. As an example for the use of the new inequality, we consider the problem of estimating the average degree of a graph by querying the degrees of some of its vertices. We show the following threshold behavior: approximation factors above 2 require far less queries than approximation factors below 2. The new inequality is used in order to get tight (up to multiplicative constant factors) relations between the number of queries and the quality of the approximation. We show how the degree approximation algorithm can be used in order to quickly find those edges in a network that belong to many shortest paths.",
"Networks are characterized by nodes and edges. While there has been a spate of recent work on estimating the number of nodes in a network, the edge-estimation question appears to be largely unaddressed. In this work we consider the problem of estimating the average degree of a large network using efficient random sampling, where the number of nodes is not known to the algorithm. We propose a new estimator for this problem that relies on access to node samples under a prescribed distribution. Next, we show how to efficiently realize this ideal estimator in a random walk setting. Our estimator has a natural and simple implementation using random walks; we bound its performance in terms of the mixing time of the underlying graph. We then show that our estimators are both provably and practically better than many natural estimators for the problem. Our work contrasts with existing theoretical work on estimating average degree, which assume that a uniform random sample of nodes is available and the number of nodes is known."
]
} |
1710.08607 | 2964173611 | The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. The degree distribution estimation poses a significant challenge, due to its heavy-tailed nature and the large variance in degrees. We design a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law. We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis. | We note that there is a large body of work on sampling graphs from a stream @cite_43 . This is quite different from our setting, since a streaming algorithm observes every edge at least once. The specific problem of estimating the degree distribution at all scales was considered by @cite_55 . They observe many of the challenges we mentioned earlier: the difficulty of estimating the tail accurately, finding vertices at all degree scales, and combining estimates from the head and the tail. | {
"cite_N": [
"@cite_43",
"@cite_55"
],
"mid": [
"2016289973",
"1653839352"
],
"abstract": [
"Over the last decade, there has been considerable interest in designing algorithms for processing massive graphs in the data stream model. The original motivation was two-fold: a) in many applications, the dynamic graphs that arise are too large to be stored in the main memory of a single machine and b) considering graph problems yields new insights into the complexity of stream computation. However, the techniques developed in this area are now finding applications in other areas including data structures for dynamic graphs, approximation algorithms, and distributed and parallel computation. We survey the state-of-the-art results; identify general techniques; and highlight some simple algorithms that illustrate basic ideas.",
"The degree distribution is one of the most fundamental graph properties of interest for real-world graphs. It has been widely observed in numerous domains that graphs typically have a tailed or scale-free degree distribution. While the average degree is usually quite small, the variance is quite high and there are vertices with degrees at all scales. We focus on the problem of approximating the degree distribution of a large streaming graph, with small storage. We design an algorithm headtail, whose main novelty is a new estimator of infrequent degrees using truncated geometric random variables. We give a mathematical analysis of headtail and show that it has excellent behavior in practice. We can process streams with millions of edges with storage less than 1% and get extremely accurate approximations for all scales in the degree distribution. We also introduce a new notion of Relative Hausdorff distance between tailed histograms. Existing notions of distances between distributions are not suitable, since they ignore infrequent degrees in the tail. The Relative Hausdorff distance measures deviations at all scales, and is a more suitable distance for comparing degree distributions. By tracking this new measure, we are able to give strong empirical evidence of the convergence of headtail."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | Capturing and representing spatio-temporal structure in data is important for many recognition and classification tasks. Research on capturing such patterns can be categorized into feature-based and model-based methods. The most widely used spatio-temporal features include spatio-temporal interest point (STIP) based features @cite_10 @cite_29 and optical flow based features @cite_14 . These features capture local appearance or motion patterns near the interest points or optical flows. Although successfully applied to many applications, these features mainly capture local patterns. | {
"cite_N": [
"@cite_29",
"@cite_14",
"@cite_10"
],
"mid": [
"",
"1961645301",
"2020163092"
],
"abstract": [
"",
"The authors have developed a real-time, view-based gesture recognition system. Optical flow is estimated and segmented into motion blobs. Gestures are recognized using a rule-based technique based on characteristics of the motion blobs such as relative motion and size. Parameters of the gesture (e.g., frequency) are then estimated using context specific techniques. The system has been applied to create an interactive environment for children.",
"Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | Model-based methods include probabilistic graphical models such as Hidden Markov Models @cite_17 , Dynamic Bayesian Networks @cite_28 , Conditional Random Fields @cite_23 , and their variants. While capable of simultaneously capturing both spatial and temporal interactions, these models capture only local spatial and temporal interactions due to the underlying Markov assumption. | {
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_17"
],
"mid": [
"2029772767",
"1992681465",
"1953802779"
],
"abstract": [
"We propose a driver fatigue recognition model based on the dynamic Bayesian network, information fusion and multiple contextual and physiological features. We include features such as the contact physiological features (e.g., ECG and EEG), and apply the first-order Hidden Markov Model to compute the dynamics of the Bayesian network at different time slices. The experimental validation shows the effectiveness of the proposed system; also it indicates that the contact physiological features (especially ECG and EEG) are significant factors for inferring the fatigue state of a driver.",
"Understanding natural human activity involves not only identifying the action being performed, but also locating the semantic elements of the scene and describing the person's interaction with them. We present a system that is able to recognize complex, fine-grained human actions involving the manipulation of objects in realistic action sequences. Our method takes advantage of recent advances in sensors and pose trackers in learning an action model that draws on successful discriminative techniques while explicitly modeling both pose trajectories and object manipulations. By combining these elements in a single model, we are able to simultaneously recognize actions and track the location and manipulation of objects. To showcase this ability, we introduce a novel Cooking Action Dataset that contains video, depth readings, and pose tracks from a Kinect sensor. We show that our model outperforms existing state of the art techniques on this dataset as well as the VISINT dataset with only video sequences.",
"Our goal is to automatically segment and recognize basic human actions, such as stand, walk and wave hands, from a sequence of joint positions or pose angles. Such recognition is difficult due to high dimensionality of the data and large spatial and temporal variations in the same action. We decompose the high dimensional 3-D joint space into a set of feature spaces where each feature corresponds to the motion of a single joint or combination of related multiple joints. For each feature, the dynamics of each action class is learned with one HMM. Given a sequence, the observation probability is computed in each HMM and a weak classifier for that feature is formed based on those probabilities. The weak classifiers with strong discriminative power are then combined by the Multi-Class AdaBoost (AdaBoost.M2) algorithm. A dynamic programming algorithm is applied to segment and recognize actions simultaneously. Results of recognizing 22 actions on a large number of motion capture sequences as well as several annotated and automatically tracked sequences show the effectiveness of the proposed algorithms."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | Restricted Boltzmann machines (RBMs) have been used separately to model spatial or temporal correlations in data over the last decade. The RBM was first introduced to learn deep features from handwriting for digit recognition @cite_20 . In @cite_33 , a Deep Belief Network is proposed to model the shapes of horses and motorbikes; samples from the model look realistic and generalize well. A more complicated model, proposed by Nair and Hinton @cite_16 , considers the spatial correlations among visible units using a factored 3-way RBM, in which triple potentials model the correlations among pixels in natural images. The intuition is that in natural images, the intensity of each pixel is approximately the average of its neighbors. The work in @cite_31 applies the 3-way RBM to facial landmark tracking, modeling the relationship between posed faces and frontal faces under varying facial expressions. | {
"cite_N": [
"@cite_31",
"@cite_16",
"@cite_33",
"@cite_20"
],
"mid": [
"2131461458",
"2161893161",
"2105180511",
"2100495367"
],
"abstract": [
"Facial feature tracking is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since faces may have varying facial expressions, poses or occlusions. In this paper, we address this problem by proposing a face shape prior model that is constructed based on the Restricted Boltzmann Machines (RBM) and their variants. Specifically, we first construct a model based on Deep Belief Networks to capture the face shape variations due to varying facial expressions for near-frontal view. To handle pose variations, the frontal face shape prior model is incorporated into a 3-way RBM model that could capture the relationship between frontal face shapes and non-frontal face shapes. Finally, we introduce methods to systematically combine the face shape prior models with image measurements of facial feature points. Experiments on benchmark databases show that with the proposed method, facial feature points can be tracked robustly and accurately even if faces have significant facial expressions and poses.",
"We introduce a new type of top-level model for Deep Belief Nets and evaluate it on a 3D object recognition task. The top-level model is a third-order Boltzmann machine, trained using a hybrid algorithm that combines both generative and discriminative gradients. Performance is evaluated on the NORB database (normalized-uniform version), which contains stereo-pair images of objects under different lighting conditions and viewpoints. Our model achieves 6.5 error on the test set, which is close to the best published result for NORB (5.9 ) using a convolutional neural net that has built-in knowledge of translation invariance. It substantially outperforms shallow models such as SVMs (11.6 ). DBNs are especially suited for semi-supervised learning, and to demonstrate this we consider a modified version of the NORB recognition task in which additional unlabeled images are created by applying small translations to the images in the database. With the extra unlabeled data (and the same amount of labeled data as before), our model achieves 5.2 error.",
"A good model of object shape is essential in applications such as segmentation, object detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shape can help where the object boundary is noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to part of the object. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of Deep Boltzmann Machine [22] that we call a Shape Boltzmann Machine (ShapeBM) for the task of modeling binary shape images. We show that the ShapeBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the ShapeBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | For dynamic data modeling, a Conditional RBM (CRBM) is used in @cite_35 to model the temporal transitions in human body movements and to reconstruct them. Nevertheless, like an HMM, the CRBM still models only local dynamics by assuming an @math 'th order Markov property. | {
"cite_N": [
"@cite_35"
],
"mid": [
"2158164339"
],
"abstract": [
"We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued \"visible\" variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | The RBM and its variants have also been used to model motion data. For example, Sutskever and Hinton @cite_25 introduce a temporal RBM to model high-dimensional sequences, and the work in @cite_22 uses an RBM to capture the global dynamics of finger traces. However, the latter is limited to modeling the global dynamics of 1-D data only. | {
"cite_N": [
"@cite_22",
"@cite_25"
],
"mid": [
"2110590434",
"2147010501"
],
"abstract": [
"Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these demonstrations usually employed classical classification regression algorithms that derive a linear mapping between brain signals and outputs. However, they typically only incorporate little prior information about the target kinematic parameter. In this paper, we show that different types of anatomical constraints that govern finger flexion can be exploited in this context. Specifically, we incorporate these constraints in the construction, structure, and the probabilistic functions of a switched non-parametric dynamic system (SNDS) model. We then apply the resulting SNDS decoder to infer the flexion of individual fingers from the same ECoG dataset used in a recent study. Our results show that the application of the proposed model, which incorporates anatomical constraints, improves decoding performance compared to the results in the previous work. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.",
"We describe a new family of non-linear sequence models that are substantially more powerful than hidden Markov models or linear dynamical systems. Our models have simple approximate inference and learning procedures that work well in practice. Multilevel representations of sequential data can be learned one hidden layer at a time, and adding extra hidden layers improves the resulting generative models. The models can be trained with very high-dimensional, very non-linear data such as raw pixel sequences. Their performance is demonstrated using synthetic video sequences of two balls bouncing in a box."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | To improve the representational power of the RBM, the semi-restricted Boltzmann machine (SRBM) @cite_15 was introduced to model lateral interactions between visible variables. The main property of the SRBM is that, given the hidden variables, the visible layer forms a Markov random field. However, for high-dimensional motion data, there would be too many parameters if every pair of visible units had an interaction. In this work, we model the dynamic nature of the data with fewer parameters than an SRBM, which makes the learning process more efficient. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2134653808"
],
"abstract": [
"We describe an efficient learning procedure for multilayer generative models that combine the best aspects of Markov random fields and deep, directed belief nets. The generative models can be learned one layer at a time and when learning is complete they have a very fast inference procedure for computing a good approximation to the posterior distribution in all of the hidden layers. Each hidden layer has its own MRF whose energy function is modulated by the top-down directed connections from the layer above. To generate from the model, each layer in turn must settle to equilibrium given its top-down input. We show that this type of model is good at capturing the statistics of patches of natural images."
]
} |
1710.07831 | 2009701012 | Extended RBM to model spatio-temporal patterns among high-dimensional motion data. Generative approach to perform classification using RBM, for both binary and multi-class classification. High classification accuracy in two computer vision applications: facial expression recognition and human action recognition. Many computer vision applications involve modeling complex spatio-temporal patterns in high-dimensional motion data. Recently, restricted Boltzmann machines (RBMs) have been widely used to capture and represent spatial patterns in a single image or temporal patterns in several time slices. To model global dynamics and local spatial interactions, we propose to theoretically extend the conventional RBMs by introducing another term in the energy function to explicitly model the local spatial interactions in the input data. A learning method is then proposed to perform efficient learning for the proposed model. We further introduce a new method for multi-class classification that can effectively estimate the infeasible partition functions of different RBMs such that RBM is treated as a generative model for classification purposes. The improved RBM model is evaluated on two computer vision applications: facial expression recognition and human action recognition. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm. | Besides feature extraction and shape modeling, RBMs have also been used for classification. Larochelle and Bengio @cite_0 introduce a discriminative RBM as a classifier by including the labels in the visible layer and making predictions by comparing the likelihoods of the label vectors. In @cite_27 , a discriminative RBM is introduced to model sets of vector inputs by duplicating the discriminative RBM and adding constraints on the hidden layer. In all the RBM-related models discussed above, one RBM is trained for all classes. In this work, in contrast, we build one RBM for each class and perform multi-class classification. | {
"cite_N": [
"@cite_0",
"@cite_27"
],
"mid": [
"1964155876",
"1639006268"
],
"abstract": [
"Recently, many applications for Restricted Boltzmann Machines (RBMs) have been developed for a large variety of learning problems. However, RBMs are usually used as feature extractors for another learning algorithm or to provide a good initialization for deep feed-forward neural network classifiers, and are not considered as a standalone solution to classification problems. In this paper, we argue that RBMs provide a self-contained framework for deriving competitive non-linear classifiers. We present an evaluation of different learning algorithms for RBMs which aim at introducing a discriminative component to RBM training and improve their performance as classifiers. This approach is simple in that RBMs are used directly to build a classifier, rather than as a stepping stone. Finally, we demonstrate how discriminative RBMs can also be successfully employed in a semi-supervised setting.",
"We consider the problem of classification when inputs correspond to sets of vectors. This setting occurs in many problems such as the classification of pieces of mail containing several pages, of web sites with several sections or of images that have been pre-segmented into smaller regions. We propose generalizations of the restricted Boltzmann machine (RBM) that are appropriate in this context and explore how to incorporate different assumptions about the relationship between the input sets and the target class within the RBM. In experiments on standard multiple-instance learning datasets, we demonstrate the competitiveness of approaches based on RBMs and apply the proposed variants to the problem of incoming mail classification."
]
} |
1710.08016 | 2963938983 | Both experimental and computational biology are becoming increasingly automated. Laboratory experiments are now performed automatically on high-throughput machinery, while computational models are synthesized or inferred automatically from data. However, integration between automated tasks in the process of biological discovery is still lacking, largely due to incompatible or missing formal representations. While theories are expressed formally as computational models, existing languages for encoding and automating experimental protocols often lack formal semantics. This makes it challenging to extract novel understanding by identifying when theory and experimental evidence disagree due to errors in the models or the protocols used to validate them. To address this, we formalize the syntax of a core protocol language, which provides a unified description for the models of biochemical systems being experimented on, together with the discrete events representing the liquid-handling steps of biological protocols. We present both a deterministic and a stochastic semantics to this language, both defined in terms of hybrid processes. In particular, the stochastic semantics captures uncertainties in equipment tolerances, making it a suitable tool for both experimental and computational biologists. We illustrate how the proposed protocol language can be used for automated verification and synthesis of laboratory experiments on case studies from the fields of chemistry and molecular programming. | Several factors contribute to the growing need for a formalization of experimental protocols in biology. First, better record-keeping of experimental operations is recognized as a step towards tackling the ‘reproducibility crisis’ in biology @cite_14 . Second, the emergence of ‘cloud labs’ @cite_22 creates a need for precise, machine-readable descriptions of the experimental steps to be executed. 
To address these needs, frameworks allowing protocols to be recorded, shared, and reproduced locally or in a remote lab have been proposed. These frameworks introduce different programming languages for experimental protocols including BioCoder @cite_16 , Autoprotocol, and Antha @cite_15 . These languages provide expressive, high-level protocol descriptions but consider each experimental sample as a labelled ‘black-box’. This makes it challenging to study a protocol together with the biochemical systems it manipulates in a common framework. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_22",
"@cite_16"
],
"mid": [
"2219029648",
"585341442",
"",
"2099467659"
],
"abstract": [
"Building robust manufacturing processes from biological components is a task that is highly complex and requires sophisticated tools to describe processes, inputs, and measurements and administrate management of knowledge, data, and materials. We argue that for bioengineering to fully access biological potential, it will require application of statistically designed experiments to derive detailed empirical models of underlying systems. This requires execution of large-scale structured experimentation for which laboratory automation is necessary. This requires development of expressive, high-level languages that allow reusability of protocols, characterization of their reliability, and a change in focus from implementation details to functional properties. We review recent developments in these areas and identify what we believe is an exciting trend that promises to revolutionize biotechnology.",
"Low reproducibility rates within life science research undermine cumulative knowledge production and contribute to both delays and costs of therapeutic drug development. An analysis of past studies indicates that the cumulative (total) prevalence of irreproducible preclinical research exceeds 50%, resulting in approximately US$28,000,000,000 (US$28B)/year spent on preclinical research that is not reproducible—in the United States alone. We outline a framework for solutions and a plan for long-term improvements in reproducibility rates that will help to accelerate the discovery of life-saving therapies and cures.",
"",
"Background Published descriptions of biology protocols are often ambiguous and incomplete, making them difficult to replicate in other laboratories. However, there is increasing benefit to formalizing the descriptions of protocols, as laboratory automation systems (such as microfluidic chips) are becoming increasingly capable of executing them. Our goal in this paper is to improve both the reproducibility and automation of biology experiments by using a programming language to express the precise series of steps taken."
]
} |
1710.07960 | 2765646393 | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | The problem of WSD has received a lot of attention since the beginning of natural language processing research. WSD is typically expected to improve the results of real-world applications: originally machine translation and recently information retrieval and extraction, especially question answering @cite_15 . Like many other areas, WSD has greatly benefited from publicly available test sets and competitions. Two notable corpora are: 1) @cite_12 , built by labelling a subset of Brown corpus with synsets and 2) the public evaluations of workshops @cite_2 @cite_17 . | {
"cite_N": [
"@cite_15",
"@cite_17",
"@cite_12",
"@cite_2"
],
"mid": [
"2179678905",
"1525367170",
"2065157922",
"135437175"
],
"abstract": [
"This paper presents an entity recognition (ER) module for a question answering system for Polish called RAFAEL. Two techniques of ER are compared: traditional, based on named entity categories (e.g. person), and novel Deep Entity Recognition, using WordNet synsets (e.g. impressionist). The latter is possible thanks to a previously assembled entity library, gathered by analysing encyclopaedia definitions. Evaluation based on over 500 questions answered on the grounds of Wikipedia suggests that the strength of DeepER approach lies in its ability to tackle questions that demand answers beyond the categories of named entities.",
"This paper presents the task definition, resources, participating systems, and comparative results for the English lexical sample task, which was organized as part of the SENSEVAL-3 evaluation exercise. The task drew the participation of 27 teams from around the world, with a total of 47 systems.",
"A semantic concordance is a textual corpus and a lexicon so combined that every substantive word in the text is linked to its appropriate sense in the lexicon. Thus it can be viewed either as a corpus in which words have been tagged syntactically and semantically, or as a lexicon in which example sentences can be found for many definitions. A semantic concordance is being constructed to use in studies of sense resolution in context (semantic disambiguation). The Brown Corpus is the text and WordNet is the lexicon. Semantic tags (pointers to WordNet synsets) are inserted in the text manually using an interface, ConText, that was designed to facilitate the task. Another interface supports searches of the tagged text. Some practical uses for semantic concordances are proposed.",
""
]
} |
1710.07960 | 2765646393 | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | There are a variety of approaches to solve the WSD problem, which can be grouped based upon how they use their data -- see reviews @cite_18 @cite_8 . In supervised solutions a large sense-tagged corpus is available for training. This approach has been applied to the test set used in the current study, resulting in an accuracy value of 91.5%. In the minimally supervised approach @cite_4 , a small set of initial training examples, obtained by a heuristic or hand-tagging, is used to label new occurrences. They in turn serve as a training set for the next iteration, and so on. This bootstrapping procedure requires very little manual tagging but needs to be carefully implemented to avoid losing accuracy in further steps. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8"
],
"mid": [
"2436001372",
"2101210369",
"1971571811"
],
"abstract": [
"Word sense disambiguation (WSD) is the ability to identify the meaning of words in context in a computational manner. WSD is considered an AI-complete problem, that is, a task whose solution is at least as hard as the most difficult problems in artificial intelligence. We introduce the reader to the motivations for solving the ambiguity of words and provide a description of the task. We overview supervised, unsupervised, and knowledge-based approaches. The assessment of WSD systems is discussed in the context of the Senseval/Semeval campaigns, aiming at the objective evaluation of systems participating in several different disambiguation tasks. Finally, applications, open problems, and future directions are discussed.",
"This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96%.",
"The problem and process of identifying the meaning of a word as per its usage context is called word sense disambiguation (WSD). Although research in this field has been ongoing for the past forty years, a distinct change of techniques adopted can be observed over time. Two important parameters govern the direction in which WSD research progresses during any period. These are the underlying requirement of the kind of sense disambiguation, or the domain, and the robustness of available knowledge in the form of corpora or dictionaries. This paper surveys the progress of WSD over time and the important linguistic achievements that enabled this progress."
]
} |
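The minimally supervised bootstrapping procedure described in the related work above (a small seed set labels new occurrences, which in turn become training data for the next iteration) can be illustrated with a minimal sketch. All contexts, seed collocations, and sense labels below are invented toy data, not material from the paper:

```python
# Sketch of Yarowsky-style bootstrapping for WSD on toy data.
# Each context is a bag of words around one occurrence of an ambiguous word.
from collections import Counter

contexts = [
    {"guitar", "play", "amp"},
    {"fish", "river", "catch"},
    {"play", "band", "guitar"},
    {"catch", "lake", "fish"},
    {"band", "amp", "stage"},
    {"lake", "boat", "catch"},
]

# Tiny hand-tagged seed set: collocation word -> sense label.
collocations = {"guitar": "music", "fish": "animal"}
labels = [None] * len(contexts)

for _ in range(5):
    changed = False
    # Label every still-unlabeled context whose words match known collocations.
    for i, ctx in enumerate(contexts):
        votes = Counter(collocations[w] for w in ctx if w in collocations)
        if votes and labels[i] is None:
            labels[i] = votes.most_common(1)[0][0]
            changed = True
    # Learn new collocations from the freshly labeled contexts.
    for i, ctx in enumerate(contexts):
        if labels[i] is not None:
            for w in ctx:
                collocations.setdefault(w, labels[i])
    if not changed:  # fixed point reached
        break

print(labels)
```

Each pass labels a few more contexts and promotes their words to collocations, so the seed knowledge spreads until every occurrence is tagged; careless implementations of this growth step are exactly where the accuracy loss mentioned above can creep in.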
1710.07960 | 2765646393 | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | If a lack of definitions makes the Lesk algorithm infeasible, we can exploit relations between words. This study focuses on monosemous relatives, i.e. words or collocations, selected using wordnet, being related to a disambiguation target, but free of ambiguity. One can easily find occurrences of such relatives in an unannotated text and treat them as training examples for the target ambiguous word. The method has been successfully applied in an English WSD task @cite_9 , but still many problems remain. One of them is the choice of relatives -- in fact, even synonyms differ in meaning and usage contexts; and they are not available for many words. That is why also hypernyms and hyponyms, especially multi-word expressions containing the target word, are taken into account. Some researchers also include siblings (i.e. words with a common hypernym with the target) and antonyms, but their influence is not always beneficial @cite_20 . Other interesting sources of monosemous relatives are parts of definition @cite_13 , named entities @cite_21 , indirect hyponyms and hypernyms, and finally meronyms and holonyms @cite_20 . | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"1566423636",
"1851555520",
"2112655892"
],
"abstract": [
"",
"In this paper, we present an iterative algorithm for Word Sense Disambiguation. It combines two sources of information: WordNet and a semantic tagged corpus, for the purpose of identifying the correct sense of the words in a given text. It differs from other standard approaches in that the disambiguation process is performed in an iterative manner: starting from free text, a set of disambiguated words is built, using various methods; new words are sense tagged based on their relation to the already disambiguated words, and then added to the set. This iterative process allows us to identify, in the original text, a set of words which can be disambiguated with high precision; 55% of the verbs and nouns are disambiguated with an accuracy of 92%.",
"The unavailability of very large corpora with semantically disambiguated words is a major limitation in text processing research. For example, statistical methods for word sense disambiguation of free text are known to achieve high accuracy results when large corpora are available to develop context rules, to train and test them. This paper presents a novel approach to automatically generate arbitrarily large corpora for word senses. The method is based on (1) the information provided in WordNet, used to formulate queries consisting of synonyms or definitions of word senses, and (2) the information gathered from Internet using existing search engines. The method was tested on 120 word senses and a precision of 91% was observed.",
"This paper describes a sense disambiguation method for a polysemous target noun using the context words surrounding the target noun and its WordNet relatives, such as synonyms, hypernyms and hyponyms. The result of sense disambiguation is a relative that can substitute for that target noun in a context. The selection is made based on co-occurrence frequency between candidate relatives and each word in the context. Since the co-occurrence frequency is obtainable from a raw corpus, the method is considered to be an unsupervised learning algorithm and therefore does not require a sense-tagged corpus. In a series of experiments using SemCor and the corpus of SENSEVAL-2 lexical sample task, all in English, and using some Korean data, the proposed method was shown to be very promising. In particular, its performance was superior to that of the other approaches evaluated on the same test corpora."
]
} |
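The selection of monosemous relatives via wordnet relations described above can be sketched on a hand-built toy lexicon. All word forms, sense ids, and relations here are hypothetical illustrations (not the paper's data or the real plWordNet):

```python
# Sketch: pick monosemous relatives for each sense of an ambiguous target.
# A word is monosemous iff it has exactly one sense in the toy lexicon.

# word -> list of sense ids (hypothetical entries; "zamek" is the ambiguous target).
senses = {
    "zamek": ["castle.n.01", "lock.n.01"],
    "twierdza": ["castle.n.01"],                   # monosemous synonym of one sense
    "budowla": ["building.n.01", "structure.n.01"],  # ambiguous, so excluded
    "zamek_blyskawiczny": ["zipper.n.01"],
    "klodka": ["lock.n.01"],                       # monosemous relative of the other sense
}

# sense id -> related sense ids (synonymy/hypernymy/hyponymy collapsed together).
related = {
    "castle.n.01": ["castle.n.01", "building.n.01"],
    "lock.n.01": ["lock.n.01"],
}

def monosemous_relatives(target):
    """For each sense of the target, return words usable as training proxies."""
    out = {}
    for sense in senses[target]:
        rel_senses = set(related.get(sense, [sense]))
        out[sense] = sorted(
            w for w, ss in senses.items()
            if w != target and len(ss) == 1 and ss[0] in rel_senses
        )
    return out

print(monosemous_relatives("zamek"))
```

Occurrences of the returned relatives can then be harvested from unannotated text as training examples for the corresponding sense; the heuristics mentioned above (siblings, indirect hypernyms, meronyms) would simply widen the `related` map.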
1710.07960 | 2765646393 | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | Preparing a corpus for finding relatives poses a challenge as well. It should contain a lot of text, as many monosemous words are scarce. Some researchers use snippets retrieved from search engines, e.g. AltaVista @cite_13 or Google @cite_22 . One can also extend a search query by including the context of the disambiguated word @cite_10 , but it requires using as many queries as test cases. | {
"cite_N": [
"@cite_10",
"@cite_13",
"@cite_22"
],
"mid": [
"1509248455",
"1851555520",
"66953820"
],
"abstract": [
"The current situation for Word Sense Disambiguation (WSD) is somewhat stuck due to lack of training data. We present in this paper a novel disambiguation algorithm that improves previous systems based on acquisition of examples by incorporating local context information. With a basic configuration, our method is able to obtain state-of-the-art performance. We complemented this work by evaluating other well-known methods in the same dataset, and analysing the comparative results per word. We observed that each algorithm performed better for different types of words, and each of them failed for some particular words. We proposed then a simple unsupervised voting scheme that improved significantly over single systems, achieving the best unsupervised performance on both the Senseval 2 and Senseval 3 lexical sample datasets.",
"The unavailability of very large corpora with semantically disambiguated words is a major limitation in text processing research. For example, statistical methods for word sense disambiguation of free text are known to achieve high accuracy results when large corpora are available to develop context rules, to train and test them. This paper presents a novel approach to automatically generate arbitrarily large corpora for word senses. The method is based on (1) the information provided in WordNet, used to formulate queries consisting of synonyms or definitions of word senses, and (2) the information gathered from Internet using existing search engines. The method was tested on 120 word senses and a precision of 91% was observed.",
"This paper explores the large-scale acquisition of sense-tagged examples for Word Sense Disambiguation (WSD). We have applied the “WordNet monosemous relatives” method to construct automatically a web corpus that we have used to train disambiguation systems. The corpus-building process has highlighted important factors, such as the distribution of senses (bias). The corpus has been used to train WSD algorithms that include supervised methods (combining automatic and manuallytagged examples), minimally supervised (requiring sense bias information from hand-tagged corpora), and fully unsupervised. These methods were tested on the Senseval-2 lexical sample test set, and compared successfully to other systems with minimum or no supervision."
]
} |
1710.07960 | 2765646393 | In this paper, the problem of disambiguating a target word for Polish is approached by searching for related words with known meaning. These relatives are used to build a training corpus from unannotated text. This technique is improved by proposing new rich sources of replacements that substitute the traditional requirement of monosemy with heuristics based on wordnet relations. The naïve Bayesian classifier has been modified to account for an unknown distribution of senses. A corpus of 600 million web documents (594 billion tokens), gathered by the NEKST search engine allows us to assess the relationship between training set size and disambiguation accuracy. The classifier is evaluated using both a wordnet baseline and a corpus with 17,314 manually annotated occurrences of 54 ambiguous words. | Finally, the usage of monosemous relatives has more applications than classical WSD. One can use them to generate topical signatures for concepts @cite_6 , automatically build large sense-tagged corpora @cite_3 and evaluate the quality of wordnet-related semantic resources @cite_19 . | {
"cite_N": [
"@cite_19",
"@cite_3",
"@cite_6"
],
"mid": [
"1977944732",
"1487777768",
"1486093399"
],
"abstract": [
"This paper presents an empirical evaluation of the quality of publicly available large-scale knowledge resources. The study includes a wide range of manually and automatically derived large-scale knowledge resources. In order to establish a fair and neutral comparison, the quality of each knowledge resource is indirectly evaluated using the same method on a Word Sense Disambiguation task. The evaluation framework selected has been the Senseval-3 English Lexical Sample Task. The study empirically demonstrates that automatically acquired knowledge resources surpass both in terms of precision and recall the knowledge resources derived manually, and that the combination of the knowledge contained in these resources is very close to the most frequent sense classifier. As far as we know, this is the first time that such a quality assessment has been performed showing a clear picture of the current state-of-the-art of publicly available wide coverage semantic resources.",
"A Chrysanthemum plant named Yellow Sheena, particularly characterized by its straight quill shaped bright yellow ray florets, strong stems, dark green leaves; diameter across the face of the capitulum 84-91 mm when fully opened, when grown as a single stem cut mum; flowering response under normal temperatures of 56-60 days after start of short days; plant height of 80-83 cm when grown with 14 long days prior to start of short days; peduncle length of the first lateral at flowering of 5-14 cm and at the fourth lateral of 10-15 cm, and its terminal spray formation.",
"This paper explores the possibility of enriching the content of existing ontologies. The overall goal is to overcome the lack of topical links among concepts in WordNet. Each concept is to be associated to a topic signature, i.e., a set of related words with associated weights. The signatures can be automatically constructed from the WWW or from sense-tagged corpora. Both approaches are compared and evaluated on a word sense disambiguation task. The results show that it is possible to construct clean signatures from the WWW using some filtering techniques."
]
} |
1710.08306 | 2755778050 | Mobile phones provide an excellent opportunity for building context-aware applications. In particular, location-based services are important context-aware services that are more and more used for enforcing security policies, for supporting indoor room navigation, and for providing personalized assistance. However, a major problem still remains unaddressed--the lack of solutions that work across buildings while not using additional infrastructure and also accounting for privacy and reliability needs. In this paper, a privacy-preserving, multi-modal, cross-building, collaborative localization platform is proposed based on Wi-Fi RSSI (existing infrastructure), Cellular RSSI, sound and light levels, that enables room-level localization as main application (though sub room level granularity is possible). The privacy is inherently built into the solution based on onion routing, and perturbation randomization techniques, and exploits the idea of weighted collaboration to increase the reliability as well as to limit the effect of noisy devices (due to sensor noise privacy). The proposed solution has been analyzed in terms of privacy, accuracy, optimum parameters, and other overheads on location data collected at multiple indoor and outdoor locations. | There are some privacy-preserving localization solutions in the literature. For example, Gedik and Liu @cite_16 present a location-privacy method that makes use of general k-anonymity model. Here, a person's location is indistinguishable from that of @math anonymous people around him her. However, in our collaborative setting this would result in a large communication overhead as the locations of @math people have to be known. Kassem and Kang @cite_0 provide techniques to address location tracking, profiling, and identification threats on Android OS. 
Conversely, since in our scenario more than one device is involved, we propose techniques to preserve privacy during collaboration between location providers and the requester. There is some existing work on local collaboration such as @cite_17 , @cite_11 to increase accuracy. These solutions neither preserve privacy nor consider the effect of noisy devices in the collaboration process. Also, since collaboration is limited to only local devices, these approaches lack the advantage of our approach where any device can become eligible for collaboration, provided it has some information about that place, or nearby places. | {
"cite_N": [
"@cite_0",
"@cite_16",
"@cite_11",
"@cite_17"
],
"mid": [
"2151254335",
"2096899416",
"2075840228",
"2128920581"
],
"abstract": [
"As smartphones are increasingly used to run apps that provide users with location-based services, the users' location privacy has become a major concern. Existing solutions to this concern are deficient in terms of practicality, efficiency, and effectiveness. To address this problem, we design, implement, and evaluate LP-Guardian, a novel and comprehensive framework for location privacy protection for Android smartphone users. LP-Guardian overcomes the shortcomings of existing approaches by addressing the tracking, profiling, and identification threats while maintaining app functionality. We have implemented and evaluated LP-Guardian on Android 4.3.1. Our evaluation results show that LP-Guardian effectively thwarts the privacy threats, without deteriorating the user's experience (less than 10% overhead in delay and energy). Also, LP-Guardian's privacy protection is shown to be achieved at a tolerable loss in app functionality.",
"Continued advances in mobile networks and positioning technologies have created a strong market push for location-based applications. Examples include location-aware emergency response, location-based advertisement, and location-based entertainment. An important challenge in the wide deployment of location-based services (LBSs) is the privacy-aware management of location information, providing safeguards for location privacy of mobile clients against vulnerabilities for abuse. This paper describes a scalable architecture for protecting the location privacy from various privacy threats resulting from uncontrolled usage of LBSs. This architecture includes the development of a personalized location anonymization model and a suite of location perturbation algorithms. A unique characteristic of our location privacy architecture is the use of a flexible privacy personalization framework to support location k-anonymity for a wide range of mobile clients with context-sensitive privacy requirements. This framework enables each mobile client to specify the minimum level of anonymity that it desires and the maximum temporal and spatial tolerances that it is willing to accept when requesting k-anonymity-preserving LBSs. We devise an efficient message perturbation engine to implement the proposed location privacy framework. The prototype that we develop is designed to be run by the anonymity server on a trusted platform and performs location anonymization on LBS request messages of mobile clients such as identity removal and spatio-temporal cloaking of the location information. We study the effectiveness of our location cloaking algorithms under various conditions by using realistic location data that is synthetically generated from real road maps and traffic volume data. 
Our experiments show that the personalized location k-anonymity model, together with our location perturbation engine, can achieve high resilience to location privacy threats without introducing any significant performance penalty.",
"We present Darwin, an enabling technology for mobile phone sensing that combines collaborative sensing and classification techniques to reason about human behavior and context on mobile phones. Darwin advances mobile phone sensing through the deployment of efficient but sophisticated machine learning techniques specifically designed to run directly on sensor-enabled mobile phones (i.e., smartphones). Darwin tackles three key sensing and inference challenges that are barriers to mass-scale adoption of mobile phone sensing applications: (i) the human-burden of training classifiers, (ii) the ability to perform reliably in different environments (e.g., indoor, outdoor) and (iii) the ability to scale to a large number of phones without jeopardizing the \"phone experience\" (e.g., usability and battery lifetime). Darwin is a collaborative reasoning framework built on three concepts: classifier model evolution, model pooling, and collaborative inference. To the best of our knowledge Darwin is the first system that applies distributed machine learning techniques and collaborative inference concepts to mobile phones. We implement the Darwin system on the Nokia N97 and Apple iPhone. While Darwin represents a general framework applicable to a wide variety of emerging mobile sensing applications, we implement a speaker recognition application and an augmented reality application to evaluate the benefits of Darwin. We show experimental results from eight individuals carrying Nokia N97s and demonstrate that Darwin improves the reliability and scalability of the proof-of-concept speaker recognition application without additional burden to users.",
"Handheld communication devices equipped with sensing capabilities can recognize some aspects of their context to enable novel applications. We seek to improve the reliability of context recognition through an analogy to human behavior. Where multiple devices are around, they can jointly negotiate on a suitable context and behave accordingly. We have developed a method for this collaborative context recognition for handheld devices. The method determines the need to request and collaboratively recognize the current context of a group of handheld devices. It uses both local context time history information and spatial context information of handheld devices within a certain area. The method exploits dynamic weight parameters that describe content and reliability of context information. The performance of the method is analyzed using artificial and real context data. The results suggest that the method is capable of improving the reliability."
]
} |
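The perturbation-randomization idea behind the proposed platform, combined with fingerprint-based room-level classification, can be sketched as follows. The room names, RSSI fingerprints, and noise scale are invented for illustration and do not come from the paper:

```python
# Sketch: add noise to an RSSI reading before sharing it (privacy step),
# then classify the perturbed reading against per-room mean fingerprints.
import random

random.seed(0)  # deterministic noise for the example

# Room -> mean Wi-Fi RSSI vector (dBm) over three access points (toy data).
fingerprints = {
    "room_a": [-40.0, -70.0, -80.0],
    "room_b": [-75.0, -45.0, -70.0],
    "room_c": [-80.0, -72.0, -42.0],
}

def perturb(rssi, scale=2.0):
    """Zero-mean Gaussian perturbation applied before a device shares a reading."""
    return [v + random.gauss(0.0, scale) for v in rssi]

def classify(rssi):
    """Nearest-fingerprint room classification (squared Euclidean distance)."""
    return min(fingerprints,
               key=lambda r: sum((a - b) ** 2
                                 for a, b in zip(fingerprints[r], rssi)))

reading = [-42.0, -68.0, -79.0]   # true reading taken inside room_a
shared = perturb(reading)         # what the device actually reports
print(classify(shared))
```

Because the per-room fingerprints are far apart relative to the noise scale, room-level accuracy survives the perturbation; in the collaborative setting each provider's vote would additionally be weighted to damp down noisy devices.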
1710.08306 | 2755778050 | Mobile phones provide an excellent opportunity for building context-aware applications. In particular, location-based services are important context-aware services that are more and more used for enforcing security policies, for supporting indoor room navigation, and for providing personalized assistance. However, a major problem still remains unaddressed--the lack of solutions that work across buildings while not using additional infrastructure and also accounting for privacy and reliability needs. In this paper, a privacy-preserving, multi-modal, cross-building, collaborative localization platform is proposed based on Wi-Fi RSSI (existing infrastructure), Cellular RSSI, sound and light levels, that enables room-level localization as main application (though sub room level granularity is possible). The privacy is inherently built into the solution based on onion routing, and perturbation randomization techniques, and exploits the idea of weighted collaboration to increase the reliability as well as to limit the effect of noisy devices (due to sensor noise privacy). The proposed solution has been analyzed in terms of privacy, accuracy, optimum parameters, and other overheads on location data collected at multiple indoor and outdoor locations. | There is also existing literature on indoor localization and floor-map reconstruction with minimal infrastructure and human intervention. For example, @cite_8 propose a solution using the combination of dead-reckoning, user-activity recognition from mobile sensors, and WiFi-based partitioning of an area. @cite_2 make use of crowdsourcing to gather Wi-Fi signatures and determine the signature location using sensor activity recognition and a map of the floor plan. These techniques, unlike ours, do not consider privacy, and are mostly limited to a single building (albeit achieving high granularity) and cannot scale to work across buildings. There are also certain room-level localization solutions. 
For example, @cite_4 propose a technique that uses a combination of RSSI measurements and room specific user activity and dwell times. @cite_1 make use of RSSI readings from BLE beacons fixed in the rooms along with the geometry of the room. All of these solutions are designed to work only in a single or at most a few buildings and incur considerable overhead to pervasively work across buildings. Moreover they do not preserve user privacy. | {
"cite_N": [
"@cite_1",
"@cite_2",
"@cite_4",
"@cite_8"
],
"mid": [
"2508195202",
"2166315077",
"2476406844",
"2054602086"
],
"abstract": [
"During the last decades, location based services have become very popular and the developed indoor positioning systems have achieved an impressive accuracy. The problem though is that even if the only requirement is room-level localization, those systems are most of the times not cost-efficient and not easy to set-up, since they often require time-consuming calibration procedures. This paper presents a low-cost, threshold-based approach and introduces an algorithm that takes into account both the Received Signal Strength Indication (RSSI) of the Bluetooth Low Energy (BLE) beacons and the geometry of the rooms the beacons are placed in. Performance evaluation was done via measurements in an office environment composed of three rooms and in a house environment composed of six rooms. The experimental results show an improved accuracy in room detection when using the proposed algorithm, compared to when only considering the RSSI readings. This method was developed to provide context awareness to the international research project named SmartHeat. The projects aims to provide a system that efficiently heats a house, room by room, based on the habitants' habits and preferences.",
"Radio Frequency (RF) fingerprinting, based on WiFi or cellular signals, has been a popular approach to indoor localization. However, its adoption in the real world has been stymied by the need for site-specific calibration, i.e., the creation of a training data set comprising WiFi measurements at known locations in the space of interest. While efforts have been made to reduce this calibration effort using modeling, the need for measurements from known locations still remains a bottleneck. In this paper, we present Zee -- a system that makes the calibration zero-effort, by enabling training data to be crowdsourced without any explicit effort on the part of users. Zee leverages the inertial sensors (e.g., accelerometer, compass, gyroscope) present in the mobile devices such as smartphones carried by users, to track them as they traverse an indoor environment, while simultaneously performing WiFi scans. Zee is designed to run in the background on a device without requiring any explicit user participation. The only site-specific input that Zee depends on is a map showing the pathways (e.g., hallways) and barriers (e.g., walls). A significant challenge that Zee surmounts is to track users without any a priori, user-specific knowledge such as the user's initial location, stride-length, or phone placement. Zee employs a suite of novel techniques to infer location over time: (a) placement-independent step counting and orientation estimation, (b) augmented particle filtering to simultaneously estimate location and user-specific walk characteristics such as the stride length, (c) back propagation to go back and improve the accuracy of localization in the past, and (d) WiFi-based particle initialization to enable faster convergence. We present an evaluation of Zee in a large office building.",
"Locating smartphone users will enable numerous potential applications such as monitoring customers in shopping malls. However, conventional received signal strength (RSS)-based room-level localization methods are not likely to distinguish neighboring zones accurately due to similar RSS fingerprints. We solve this problem by proposing a system called feature-based room-level localization (FRL). FRL is based on an observation that different rooms vary in internal structures and human activities which can be reflected by RSS fluctuation ranges and user dwell time respectively. These two features combining with RSS can be exploited to improve the localization accuracy. To enable localization of unmodified smartphones, FRL utilizes probe requests, which are periodically broadcast by smartphones to discover nearby access points (APs). Experiments indicate that FRL can reliably locate users in neighboring zones and achieve a 10% accuracy gain, compared with conventional methods like the histogram method.",
"We propose UnLoc, an unsupervised indoor localization scheme that bypasses the need for war-driving. Our key observation is that certain locations in an indoor environment present identifiable signatures on one or more sensing dimensions. An elevator, for instance, imposes a distinct pattern on a smartphone's accelerometer; a corridor-corner may overhear a unique set of WiFi access points; a specific spot may experience an unusual magnetic fluctuation. We hypothesize that these kinds of signatures naturally exist in the environment, and can be envisioned as internal landmarks of a building. Mobile devices that \"sense\" these landmarks can recalibrate their locations, while dead-reckoning schemes can track them between landmarks. Results from 3 different indoor settings, including a shopping mall, demonstrate median location errors of 1.69 m. War-driving is not necessary, neither are floorplans; the system simultaneously computes the locations of users and landmarks, in a manner that they converge reasonably quickly. We believe this is an unconventional approach to indoor localization, holding promise for real-world deployment."
]
} |
1710.07845 | 2042695538 | The Paxos algorithm requires a single correct coordinator process to operate. After a failure, the replacement of the coordinator may lead to a temporary unavailability of the application implemented atop Paxos. So far, this unavailability has been addressed by reducing the coordinator replacement rate through the use of stable coordinator selection algorithms. We have observed that the cost of recovery of the newly elected coordinator's state is at the core of this unavailability problem. In this paper we present a new technique to manage coordinator replacement that allows the recovery to occur concurrently with new consensus rounds. Experimental results show that our seamless approach effectively solves the temporary unavailability problem, its adoption entails uninterrupted execution of the application. Our solution removes the restriction that the occurrence of coordinator replacements is something to be avoided, allowing the decoupling of the application execution from the accuracy of the mechanism used to choose a coordinator. This result increases the performance of the application even in the presence of failures, it is of special importance to the autonomous operation of replicated applications that have to adapt to varying network conditions and partial failures. | The importance of the coordinator replacement procedure was observed by during the design and operation of the Chubby distributed lock system @cite_15 @cite_0 . In this system the current coordinator has an explicit lease to operate for a predetermined period of time. This coordinator is called a master and it uses its lease to ensure its stability and concentrate client requests. The designers of Chubby decided to make it harder for a replica to lose its master status to simplify the design and increase its reliability. However, this approach has the cost of slower detection of process failures.
For instance, a typical master change takes around 14 seconds @cite_15 . | {
"cite_N": [
"@cite_0",
"@cite_15"
],
"mid": [
"2143149536",
"1992479210"
],
"abstract": [
"We describe our experience in building a fault-tolerant data-base using the Paxos consensus algorithm. Despite the existing literature in the field, building such a database proved to be non-trivial. We describe selected algorithmic and engineering problems encountered, and the solutions we found for them. Our measurements indicate that we have built a competitive system.",
"We describe our experiences with the Chubby lock service, which is intended to provide coarse-grained locking as well as reliable (though low-volume) storage for a loosely-coupled distributed system. Chubby provides an interface much like a distributed file system with advisory locks, but the design emphasis is on availability and reliability, as opposed to high performance. Many instances of the service have been used for over a year, with several of them each handling a few tens of thousands of clients concurrently. The paper describes the initial design and expected use, compares it with actual use, and explains how the design had to be modified to accommodate the differences."
]
} |
1710.07845 | 2042695538 | The Paxos algorithm requires a single correct coordinator process to operate. After a failure, the replacement of the coordinator may lead to a temporary unavailability of the application implemented atop Paxos. So far, this unavailability has been addressed by reducing the coordinator replacement rate through the use of stable coordinator selection algorithms. We have observed that the cost of recovery of the newly elected coordinator's state is at the core of this unavailability problem. In this paper we present a new technique to manage coordinator replacement that allows the recovery to occur concurrently with new consensus rounds. Experimental results show that our seamless approach effectively solves the temporary unavailability problem, its adoption entails uninterrupted execution of the application. Our solution removes the restriction that the occurrence of coordinator replacements is something to be avoided, allowing the decoupling of the application execution from the accuracy of the mechanism used to choose a coordinator. This result increases the performance of the application even in the presence of failures, it is of special importance to the autonomous operation of replicated applications that have to adapt to varying network conditions and partial failures. | In general, a way to mitigate this problem is to devise a mechanism that makes it harder to replace the coordinator, namely a leader stabilization mechanism. @cite_16 have proposed a failure detector based on an election procedure with built-in leader stability; the coordinator is only replaced if it isn't able to effectively perform its actions. However, approaches like this do not directly address the problem of coordinator replacements caused by message loss or variable communication delay. 
Minimizing these errors entails improving the overall quality of service of the failure detector @cite_7 , which often requires tuning the detector's parameters to the characteristics of the local networking environment. In the absence of a self-adjusting mechanism and in rapidly changing network conditions, the system has to bear the full cost of coordinator replacement more often than necessary. | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2102226439",
"2124617909"
],
"abstract": [
"This paper provides a realization of distributed leader election without having any eventual timely links. Progress is guaranteed in the following weak setting: Eventually one process can send messages such that every message obtains f timely responses, where f is a resilience bound. A crucial facet of this property is that the f responders need not be fixed, and may change from one message to another. In particular, this means that no specific link needs to remain timely. In the (common) case where f=1, this implies that the FLP impossibility result on consensus is circumvented if one process can at any time communicate in a timely manner with one other process in the system. The protocol also bears significant practical importance to well-known coordination schemes such as Paxos, because our setting more precisely captures the conditions on the elected leader for reaching timely consensus. Additionally, an extension of our protocol provides leader stability, which guarantees against arbitrary demotion of a qualified leader and avoids performance penalties associated with leader changes in schemes such as Paxos.",
"We study the quality of service (QoS) of failure detectors. By QoS, we mean a specification that quantifies: (1) how fast the failure detector detects actual failures and (2) how well it avoids false detections. We first propose a set of QoS metrics to specify failure detectors for systems with probabilistic behaviors, i.e., for systems where message delays and message losses follow some probability distributions. We then give a new failure detector algorithm and analyze its QoS in terms of the proposed metrics. We show that, among a large class of failure detectors, the new algorithm is optimal with respect to some of these QoS metrics. Given a set of failure detector QoS requirements, we show how to compute the parameters of our algorithm so that it satisfies these requirements and we show how this can be done even if the probabilistic behavior of the system is not known. We then present some simulation results that show that the new failure detector algorithm provides a better QoS than an algorithm that is commonly used in practice. Finally, we suggest some ways to make our failure detector adaptive to changes in the probabilistic behavior of the network."
]
} |
1710.07845 | 2042695538 | The Paxos algorithm requires a single correct coordinator process to operate. After a failure, the replacement of the coordinator may lead to a temporary unavailability of the application implemented atop Paxos. So far, this unavailability has been addressed by reducing the coordinator replacement rate through the use of stable coordinator selection algorithms. We have observed that the cost of recovery of the newly elected coordinator's state is at the core of this unavailability problem. In this paper we present a new technique to manage coordinator replacement that allows the recovery to occur concurrently with new consensus rounds. Experimental results show that our seamless approach effectively solves the temporary unavailability problem, its adoption entails uninterrupted execution of the application. Our solution removes the restriction that the occurrence of coordinator replacements is something to be avoided, allowing the decoupling of the application execution from the accuracy of the mechanism used to choose a coordinator. This result increases the performance of the application even in the presence of failures, it is of special importance to the autonomous operation of replicated applications that have to adapt to varying network conditions and partial failures. | Ultimately, the fact that Paxos requires a single coordinator is at the root of the unavailability problem. This single process will eventually fail, or be mistakenly taken for failed, requiring a new coordinator to take its place. Another approach was taken by and consists in relying not on a single coordinator but on a group of coordinators @cite_1 . Their justification is that multiple coordinators make the algorithm more resilient to coordinator failures without requiring the use of Fast Paxos and its larger quorums. The resulting algorithm is considerably complex and increases the number of messages exchanged between the acceptors and the group of coordinators.
Our seamless coordinator validation procedure is simpler and offers similar coordinator resilience, if we view the whole set of replicas that can act as coordinator as a group in which only one master is active at any time and master changes are very cheap. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2130387304"
],
"abstract": [
"Adaptability and graceful degradation are important features in distributed systems. Yet, consensus and other agreement protocols, basic building blocks of reliable distributed systems, lack these features and must perform expensive reconfiguration even in face of single failures. In this paper we describe multicoordinated mode of execution for agreement protocols that has improved availability and tolerates failures in a graceful manner. We exemplify our approach by presenting a generic broadcast algorithm. Our protocol can adapt to environment changes by switching to different execution modes. Finally, we show how our algorithm can solve the generalized consensus and its many instances (e.g., consensus and atomic broadcast)."
]
} |
1710.07845 | 2042695538 | The Paxos algorithm requires a single correct coordinator process to operate. After a failure, the replacement of the coordinator may lead to a temporary unavailability of the application implemented atop Paxos. So far, this unavailability has been addressed by reducing the coordinator replacement rate through the use of stable coordinator selection algorithms. We have observed that the cost of recovery of the newly elected coordinator's state is at the core of this unavailability problem. In this paper we present a new technique to manage coordinator replacement that allows the recovery to occur concurrently with new consensus rounds. Experimental results show that our seamless approach effectively solves the temporary unavailability problem, its adoption entails uninterrupted execution of the application. Our solution removes the restriction that the occurrence of coordinator replacements is something to be avoided, allowing the decoupling of the application execution from the accuracy of the mechanism used to choose a coordinator. This result increases the performance of the application even in the presence of failures, it is of special importance to the autonomous operation of replicated applications that have to adapt to varying network conditions and partial failures. | A similar strategy of splitting the coordinator role among many processes was taken in Mencius @cite_3 , to minimize the number of exchanged messages in a wide-area network. In Mencius, processes take turns acting as coordinator and proposers only exchange messages with the closest coordinator. The handover of coordinator responsibilities to another process is a built-in feature of the Mencius protocol. Every instance has a predefined coordinator that either proposes and decides a value in that instance or decides a special no-op value, indicating that it yields its turn. In Mencius, coordinator replacement occurs on a per-instance basis.
This effectively solves the temporary unavailability problem, as the state to be transferred is reduced to one instance. However, in the case of a permanent failure of a process, an unbounded number of these simple coordinator replacements will happen continuously for as long as the failed process remains down. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1814543774"
],
"abstract": [
"We present a protocol for general state machine replication - a method that provides strong consistency - that has high performance in a wide-area network. In particular, our protocol Mencius has high throughput under high client load and low latency under low client load even under changing wide-area network environment and client load. We develop our protocol as a derivation from the well-known protocol Paxos. Such a development can be changed or further refined to take advantage of specific network or application requirements."
]
} |
1710.07845 | 2042695538 | The Paxos algorithm requires a single correct coordinator process to operate. After a failure, the replacement of the coordinator may lead to a temporary unavailability of the application implemented atop Paxos. So far, this unavailability has been addressed by reducing the coordinator replacement rate through the use of stable coordinator selection algorithms. We have observed that the cost of recovery of the newly elected coordinator's state is at the core of this unavailability problem. In this paper we present a new technique to manage coordinator replacement that allows the recovery to occur concurrently with new consensus rounds. Experimental results show that our seamless approach effectively solves the temporary unavailability problem, its adoption entails uninterrupted execution of the application. Our solution removes the restriction that the occurrence of coordinator replacements is something to be avoided, allowing the decoupling of the application execution from the accuracy of the mechanism used to choose a coordinator. This result increases the performance of the application even in the presence of failures, it is of special importance to the autonomous operation of replicated applications that have to adapt to varying network conditions and partial failures. | One of the causes of communication instabilities that induce coordinator replacements in the absence of process failures is message loss due to buffer overflows. The designers of Ring Paxos @cite_2 have observed that many concurrent senders of multicast messages can increase considerably the rate of message loss. Ring Paxos attacks the problem caused by these message losses from a throughput perspective, by organizing acceptors in a ring. This minimizes concurrent senders, decreases message loss and increases the utilization of the links. 
However, in Ring Paxos coordinator replacement is still an expensive operation that can be triggered by workload peaks; it includes reforming the ring topology and broadcasting it to all active agents. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2167100431"
],
"abstract": [
"Atomic broadcast is an important communication primitive often used to implement state-machine replication. Despite the large number of atomic broadcast algorithms proposed in the literature, few papers have discussed how to turn these algorithms into efficient executable protocols. Our main contribution, Ring Paxos, is a protocol derived from Paxos. Ring Paxos inherits the reliability of Paxos and can be implemented very efficiently. We report a detailed performance analysis of Ring Paxos and compare it to other atomic broadcast protocols."
]
} |
1710.07394 | 2766982230 | In the wake of a polarizing election, social media is laden with hateful content. To address various limitations of supervised hate speech classification methods including corpus bias and huge cost of annotation, we propose a weakly supervised two-path bootstrapping approach for an online hate speech detection model leveraging large-scale unlabeled data. This system significantly outperforms hate speech detection systems that are trained in a supervised manner using manually annotated data. Applying this model on a large quantity of tweets collected before, after, and on election day reveals motivations and patterns of inflammatory language. | The commonly used classification methods in previous studies are logistic regression and Naive Bayes classifiers. and applied neural network models for training word embeddings, which were further used as features in a logistic regression model for classification. We will instead train a neural net classifier @cite_2 @cite_0 @cite_5 in a weakly supervised manner in order to capture implicit and compositional hate speech expressions. | {
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_2"
],
"mid": [
"",
"2284289336",
"2949541494"
],
"abstract": [
"",
"Neural network models have been demonstrated to be capable of achieving remarkable performance in sentence and document modeling. Convolutional neural network (CNN) and recurrent neural network (RNN) are two mainstream architectures for such modeling tasks, which adopt totally different ways of understanding natural languages. In this work, we combine the strengths of both architectures and propose a novel and unified model called C-LSTM for sentence representation and text classification. C-LSTM utilizes CNN to extract a sequence of higher-level phrase representations, and are fed into a long short-term memory recurrent neural network (LSTM) to obtain the sentence representation. C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics. We evaluate the proposed architecture on sentiment classification and question classification tasks. The experimental results show that the C-LSTM outperforms both CNN and LSTM and can achieve excellent performance on these tasks.",
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification."
]
} |
1710.07558 | 2766114019 | Convolutional neural networks rely on image texture and structure to serve as discriminative features to classify the image content. Image enhancement techniques can be used as preprocessing steps to help improve the overall image quality and in turn improve the overall effectiveness of a CNN. Existing image enhancement methods, however, are designed to improve the perceptual quality of an image for a human observer. In this paper, we are interested in learning CNNs that can emulate image enhancement and restoration, but with the overall goal to improve image classification and not necessarily human perception. To this end, we present a unified CNN architecture that uses a range of enhancement filters that can enhance image-specific details via end-to-end dynamic filter learning. We demonstrate the effectiveness of this strategy on four challenging benchmark datasets for fine-grained, object, scene, and texture classification: CUB-200-2011, PASCAL-VOC2007, MIT-Indoor, and DTD. Experiments using our proposed enhancement show promising results on all the datasets. In addition, our approach is capable of improving the performance of all generic CNN architectures. | Considerable progress has been made in removing the effects of blur @cite_23 , noise @cite_33 , and compression artifacts @cite_11 using CNN architectures. Reversing the effect of these degradations in order to obtain sharp images is currently an active area of research @cite_23 @cite_4 @cite_37 . The investigated CNN frameworks @cite_23 @cite_6 @cite_31 @cite_4 @cite_38 @cite_33 @cite_37 @cite_22 are typically built on simple strategies to train the networks by minimizing a global objective function using input-output image pairs. These frameworks encourage the output to have a structure similar to that of the target image. After training the CNN, a similar approach to transfer details to new images has been proposed @cite_37 .
These frameworks act as filters specialized for a specific enhancement method. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_6",
"@cite_23",
"@cite_31",
"@cite_11"
],
"mid": [
"",
"",
"",
"2572831907",
"1920280450",
"2751689814",
"2300657047",
"2509784253",
"2124964692"
],
"abstract": [
"",
"",
"",
"The increasing demand for high image quality in mobile devices brings forth the need for better computational enhancement techniques, and image denoising in particular. At the same time, the images captured by these devices can be categorized into a small set of semantic classes. However simple, this observation has not been exploited in image denoising until now. In this paper, we demonstrate how the reconstruction quality improves when a denoiser is aware of the type of content in the image. To this end, we first propose a new fully convolutional deep neural network architecture which is simple yet powerful as it achieves state-of-the-art performance even without being class-aware. We further show that a significant boost in performance of up to @math dB PSNR can be achieved by making our network class-aware, namely, by fine-tuning it for images belonging to a specific semantic class. Relying on the hugely successful existing image classifiers, this research advocates for using a class-aware approach in all image enhancement tasks.",
"Photo retouching enables photographers to invoke dramatic visual impressions by artistically enhancing their photos through stylistic color and tone adjustments. However, it is also a time-consuming and challenging task that requires advanced skills beyond the abilities of casual photographers. Using an automated algorithm is an appealing alternative to manual work, but such an algorithm faces many hurdles. Many photographic styles rely on subtle adjustments that depend on the image content and even its semantics. Further, these adjustments are often spatially varying. Existing automatic algorithms are still limited and cover only a subset of these challenges. Recently, deep learning has shown unique abilities to address hard problems. This motivated us to explore the use of deep neural networks (DNNs) in the context of photo editing. In this article, we formulate automatic photo adjustment in a manner suitable for this approach. We also introduce an image descriptor accounting for the local semantics of an image. Our experiments demonstrate that training DNNs using these descriptors successfully capture sophisticated photographic styles. In particular and unlike previous techniques, it can model local adjustments that depend on image semantics. We show that this yields results that are qualitatively and quantitatively better than previous work.",
"We present an approach to accelerating a wide variety of image processing operators. Our approach uses a fully-convolutional network that is trained on input-output pairs that demonstrate the operator's action. After training, the original operator need not be run at all. The trained network operates at full resolution and runs in constant time. We investigate the effect of network architecture on approximation accuracy, runtime, and memory footprint, and identify a specific architecture that balances these considerations. We evaluate the presented approach on ten advanced image processing operators, including multiple variational models, multiscale tone and detail manipulation, photographic style transfer, nonlocal dehazing, and nonphotorealistic stylization. All operators are approximated by the same model. Experiments demonstrate that the presented approach is significantly more accurate than prior approximation schemes. It increases approximation accuracy as measured by PSNR across the evaluated operators by 8.5 dB on the MIT-Adobe dataset (from 27.5 to 36 dB) and reduces DSSIM by a multiplicative factor of 3 compared to the most accurate prior approximation scheme, while being the fastest. We show that our models generalize across datasets and across resolutions, and investigate a number of extensions of the presented approach. The results are shown in the supplementary video at this https URL",
"We present a new method for blind motion deblurring that uses a neural network trained to compute estimates of sharp image patches from observations that are blurred by an unknown motion kernel. Instead of regressing directly to patch intensities, this network learns to predict the complex Fourier coefficients of a deconvolution filter to be applied to the input patch for restoration. For inference, we apply the network independently to all overlapping patches in the observed image, and average its outputs to form an initial estimate of the sharp image. We then explicitly estimate a single global blur kernel by relating this estimate to the observed image, and finally perform non-blind deconvolution with this kernel. Our method exhibits accuracy and robustness close to state-of-the-art iterative methods, while being much faster when parallelized on GPU hardware.",
"We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.",
"Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods."
]
} |
1710.07558 | 2766114019 | Convolutional neural networks rely on image texture and structure to serve as discriminative features to classify the image content. Image enhancement techniques can be used as preprocessing steps to help improve the overall image quality and in turn improve the overall effectiveness of a CNN. Existing image enhancement methods, however, are designed to improve the perceptual quality of an image for a human observer. In this paper, we are interested in learning CNNs that can emulate image enhancement and restoration, but with the overall goal to improve image classification and not necessarily human perception. To this end, we present a unified CNN architecture that uses a range of enhancement filters that can enhance image-specific details via end-to-end dynamic filter learning. We demonstrate the effectiveness of this strategy on four challenging benchmark datasets for fine-grained, object, scene, and texture classification: CUB-200-2011, PASCAL-VOC2007, MIT-Indoor, and DTD. Experiments using our proposed enhancement show promising results on all the datasets. In addition, our approach is capable of improving the performance of all generic CNN architectures. | Similar to our goal are the works @cite_3 @cite_35 @cite_10 @cite_13 @cite_16 @cite_39 , where the authors also seek to ameliorate degradation effects for accurate classification. Dodge and Karam @cite_35 analyzed how blur, noise, contrast, and compression hamper the performance of ConvNet architectures for image classification. Their findings showed that: (1) ConvNets are very sensitive to blur because blur removes textures in the images; (2) noise affects performance negatively, though the performance of deeper architectures falls off more slowly; and (3) deep networks are resilient to compression distortions and contrast changes. A study by @cite_10 reports similar results for a face-recognition task.
@cite_16 showed that minor changes to an image, which are barely perceptible to humans, can have drastic effects on computational recognition accuracy. @cite_29 showed that applying an imperceptible non-random perturbation can cause ConvNets to produce erroneous predictions. | {
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_29",
"@cite_3",
"@cite_39",
"@cite_16",
"@cite_13"
],
"mid": [
"2337024056",
"2511484725",
"1673923490",
"2519898457",
"2556882396",
"2280426979",
""
],
"abstract": [
"Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.",
"Face recognition approaches that are based on deep convolutional neural networks (CNN) have been dominating the field. The performance improvements they have provided in the so called in-the-wild datasets are significant, however, their performance under image quality degradations have not been assessed, yet. This is particularly important, since in real-world face recognition applications, images may contain various kinds of degradations due to motion blur, noise, compression artifacts, color distortions, and occlusion. In this work, we have addressed this problem and analyzed the influence of these image degradations on the performance of deep CNN-based face recognition approaches using the standard LFW closed-set identification protocol. We have evaluated three popular deep CNN models, namely, the AlexNet, VGG-Face, and GoogLeNet. Results have indicated that blur, noise, and occlusion cause a significant decrease in performance, while deep CNN models are found to be robust to distortions, such as color distortions and change in color balance.",
"Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Image classification is one of the main research problems in computer vision and machine learning. Since in most real-world image classification applications there is no control over how the images are captured, it is necessary to consider the possibility that these images might be affected by noise (e.g. sensor noise in a low-quality surveillance camera). In this paper we analyse the impact of three different types of noise on descriptors extracted by two widely used feature extraction methods (LBP and HOG) and how denoising the images can help to mitigate this problem. We carry out experiments on two different datasets and consider several types of noise, noise levels, and denoising methods. Our results show that noise can hinder classification performance considerably and make classes harder to separate. Although denoising methods were not able to reach the same performance of the noise-free scenario, they improved classification results for noisy data.",
"State-of-the-art algorithms for many semantic visual tasks are based on the use of convolutional neural networks. These networks are commonly trained, and evaluated, on large annotated datasets of artifact-free high-quality images. In this paper, we investigate the effect of one such artifact that is quite common in natural capture settings: optical blur. We show that standard network models, trained only on high-quality images, suffer a significant degradation in performance when applied to those degraded by blur due to defocus, or subject or camera motion. We investigate the extent to which this degradation is due to the mismatch between training and input image statistics. Specifically, we find that fine-tuning a pre-trained model with blurred images added to the training set allows it to regain much of the lost accuracy. We also show that there is a fair amount of generalization between different degrees and types of blur, which implies that a single network model can be used robustly for recognition when the nature of the blur in the input is unknown. We find that this robustness arises as a result of these models learning to generate blur invariant representations in their hidden layers. Our findings provide useful insights towards developing vision systems that can perform reliably on real world images affected by blur.",
"Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.",
""
]
} |
1710.07558 | 2766114019 | Convolutional neural networks rely on image texture and structure to serve as discriminative features to classify the image content. Image enhancement techniques can be used as preprocessing steps to help improve the overall image quality and in turn improve the overall effectiveness of a CNN. Existing image enhancement methods, however, are designed to improve the perceptual quality of an image for a human observer. In this paper, we are interested in learning CNNs that can emulate image enhancement and restoration, but with the overall goal to improve image classification and not necessarily human perception. To this end, we present a unified CNN architecture that uses a range of enhancement filters that can enhance image-specific details via end-to-end dynamic filter learning. We demonstrate the effectiveness of this strategy on four challenging benchmark datasets for fine-grained, object, scene, and texture classification: CUB-200-2011, PASCAL-VOC2007, MIT-Indoor, and DTD. Experiments using our proposed enhancement show promising results on all the datasets. In addition, our approach is capable of improving the performance of all generic CNN architectures. | To help mitigate these problems, @cite_3 designed separate models specialized for each noisy version of an augmented training set. This improved the classification results for noisy data to some extent. @cite_13 explored the potential of jointly training on low-resolution and high-resolution images in order to boost performance on low-resolution inputs. Similar to @cite_13 is the work of @cite_39 , where the authors augment the training set with degradations and fine-tune the network with a diverse mix of different types of degraded and high-quality images to regain much of the lost accuracy on degraded images. In fact, with this approach the authors were able to learn a degradation-invariant (particularly blur-invariant) representation in their hidden layers.
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_39"
],
"mid": [
"",
"2519898457",
"2556882396"
],
"abstract": [
"",
"Image classification is one of the main research problems in computer vision and machine learning. Since in most real-world image classification applications there is no control over how the images are captured, it is necessary to consider the possibility that these images might be affected by noise (e.g. sensor noise in a low-quality surveillance camera). In this paper we analyse the impact of three different types of noise on descriptors extracted by two widely used feature extraction methods (LBP and HOG) and how denoising the images can help to mitigate this problem. We carry out experiments on two different datasets and consider several types of noise, noise levels, and denoising methods. Our results show that noise can hinder classification performance considerably and make classes harder to separate. Although denoising methods were not able to reach the same performance of the noise-free scenario, they improved classification results for noisy data.",
"State-of-the-art algorithms for many semantic visual tasks are based on the use of convolutional neural networks. These networks are commonly trained, and evaluated, on large annotated datasets of artifact-free high-quality images. In this paper, we investigate the effect of one such artifact that is quite common in natural capture settings: optical blur. We show that standard network models, trained only on high-quality images, suffer a significant degradation in performance when applied to those degraded by blur due to defocus, or subject or camera motion. We investigate the extent to which this degradation is due to the mismatch between training and input image statistics. Specifically, we find that fine-tuning a pre-trained model with blurred images added to the training set allows it to regain much of the lost accuracy. We also show that there is a fair amount of generalization between different degrees and types of blur, which implies that a single network model can be used robustly for recognition when the nature of the blur in the input is unknown. We find that this robustness arises as a result of these models learning to generate blur invariant representations in their hidden layers. Our findings provide useful insights towards developing vision systems that can perform reliably on real world images affected by blur."
]
} |
1710.07455 | 2765468404 | Action recognition in surveillance video makes our lives safer by detecting criminal events or predicting violent emergencies. However, efficient action recognition is not free of difficulty. First, there are so many action classes in daily life that we cannot pre-define all possible action classes beforehand. Moreover, it is very hard to collect real-world videos for certain particular actions such as stealing and street fighting due to legal restrictions and privacy protection. These challenges make existing data-driven recognition methods insufficient to attain the desired performance. Zero-shot learning has the potential to solve these issues since it can perform classification without positive examples. Nevertheless, current zero-shot learning algorithms have been studied under the unreasonable setting where seen classes are absent during the testing phase. Motivated by this, we study the task of action recognition in surveillance video under a more realistic setting, where testing data contains both seen and unseen classes. To the best of our knowledge, this is the first work to study video action recognition under the generalized zero-shot setting. We first perform extensive empirical studies on several existing zero-shot learning approaches under this new setting on web-scale video data. Our experimental results demonstrate that, under the generalized setting, typical zero-shot learning methods are no longer effective for the dataset we used. Then, we propose a method for action recognition by deploying generalized zero-shot learning, which transfers the knowledge of web video to detect anomalous actions in surveillance videos. To verify the effectiveness of our proposed method, we further construct a new surveillance video dataset consisting of nine action classes related to public safety situations. | Video-based action recognition has been widely explored in the past few years.
Previous works closely related to ours fall into two types: (1) action recognition with hand-crafted features; (2) CNN-based action recognition. In the early stage, some action recognition techniques focused on designing powerful and effective video representations using local spatio-temporal features, such as Motion Boundary Histograms (MBH) @cite_19 , 3D Scale-Invariant Feature Transform (SIFT-3D) @cite_4 and Histogram of Optical Flow (HOF) @cite_32 . For example, @cite_37 recently proposed a state-of-the-art hand-crafted feature named improved Dense Trajectories (iDT), which extracts several descriptors (HOG, HOF and MBH) along the trajectory. The feature distributions are then encoded by robust residual encoders such as the Vector of Locally Aggregated Descriptors (VLAD) and its probabilistic counterpart, the Fisher Vector @cite_28 . In the final step, a classifier such as a Support Vector Machine (SVM) is learned for classification. In spite of superior performance on various datasets, these methods not only lack discriminative capacity and scalability, but are also computationally intensive, making them difficult to apply to large-scale datasets. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_28",
"@cite_32",
"@cite_19"
],
"mid": [
"2105101328",
"2108333036",
"2103924867",
"2142194269",
""
],
"abstract": [
"Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.",
"In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.",
"The objective of this paper is large scale object instance retrieval, given a query image. A starting point of such systems is feature detection and description, for example using SIFT. The focus of this paper, however, is towards very large scale retrieval where, due to storage requirements, very compact image descriptors are required and no information about the original SIFT descriptors can be accessed directly at run time. We start from VLAD, the state-of-the art compact descriptor introduced by for this purpose, and make three novel contributions: first, we show that a simple change to the normalization method significantly improves retrieval performance, second, we show that vocabulary adaptation can substantially alleviate problems caused when images are added to the dataset after initial vocabulary learning. These two methods set a new state-of-the-art over all benchmarks investigated here for both mid-dimensional (20k-D to 30k-D) and small (128-D) descriptors. Our third contribution is a multiple spatial VLAD representation, MultiVLAD, that allows the retrieval and localization of objects that only extend over a small part of an image (again without requiring use of the original image SIFT descriptors).",
"The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.",
""
]
} |
1710.07455 | 2765468404 | Action recognition in surveillance video makes our lives safer by detecting criminal events or predicting violent emergencies. However, efficient action recognition is not free of difficulty. First, there are so many action classes in daily life that we cannot pre-define all possible action classes beforehand. Moreover, it is very hard to collect real-world videos for certain particular actions such as stealing and street fighting due to legal restrictions and privacy protection. These challenges make existing data-driven recognition methods insufficient to attain the desired performance. Zero-shot learning has the potential to solve these issues since it can perform classification without positive examples. Nevertheless, current zero-shot learning algorithms have been studied under the unreasonable setting where seen classes are absent during the testing phase. Motivated by this, we study the task of action recognition in surveillance video under a more realistic setting, where testing data contains both seen and unseen classes. To the best of our knowledge, this is the first work to study video action recognition under the generalized zero-shot setting. We first perform extensive empirical studies on several existing zero-shot learning approaches under this new setting on web-scale video data. Our experimental results demonstrate that, under the generalized setting, typical zero-shot learning methods are no longer effective for the dataset we used. Then, we propose a method for action recognition by deploying generalized zero-shot learning, which transfers the knowledge of web video to detect anomalous actions in surveillance videos. To verify the effectiveness of our proposed method, we further construct a new surveillance video dataset consisting of nine action classes related to public safety situations.
| It's worth noting that most of the above methods achieve promising performance on large web-scale video datasets @cite_13 @cite_38 , which include several hundred or thousand training examples per action class. The lack of training data in surveillance video leads directly to a sharp degradation in recognition performance. Since zero-shot learning is capable of recognizing action categories without ever having seen them before, we deploy zero-shot learning algorithms to recognize events occurring in surveillance video even when no instances are available to train classifiers. | {
"cite_N": [
"@cite_38",
"@cite_13"
],
"mid": [
"24089286",
"2126579184"
],
"abstract": [
"We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.",
"With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to-date with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion."
]
} |
1710.07455 | 2765468404 | Action recognition in surveillance video makes our lives safer by detecting criminal events or predicting violent emergencies. However, efficient action recognition is not free of difficulty. First, there are so many action classes in daily life that we cannot pre-define all possible action classes beforehand. Moreover, it is very hard to collect real-world videos for certain particular actions such as stealing and street fighting due to legal restrictions and privacy protection. These challenges make existing data-driven recognition methods insufficient to attain the desired performance. Zero-shot learning has the potential to solve these issues since it can perform classification without positive examples. Nevertheless, current zero-shot learning algorithms have been studied under the unreasonable setting where seen classes are absent during the testing phase. Motivated by this, we study the task of action recognition in surveillance video under a more realistic setting, where testing data contains both seen and unseen classes. To the best of our knowledge, this is the first work to study video action recognition under the generalized zero-shot setting. We first perform extensive empirical studies on several existing zero-shot learning approaches under this new setting on web-scale video data. Our experimental results demonstrate that, under the generalized setting, typical zero-shot learning methods are no longer effective for the dataset we used. Then, we propose a method for action recognition by deploying generalized zero-shot learning, which transfers the knowledge of web video to detect anomalous actions in surveillance videos. To verify the effectiveness of our proposed method, we further construct a new surveillance video dataset consisting of nine action classes related to public safety situations. | However, zero-shot learning is analyzed under the hypothesis that instances of seen classes are absent from the testing set.
In real applications, this setting can be impractical, since it is common to encounter instances of both seen and unseen classes during the testing phase. To better investigate this problem, @cite_33 advocate a generalized zero-shot learning setting, where models trained on samples of seen classes are required to predict testing data from both seen and unseen classes. @cite_43 first reconstruct some popular benchmarks for fair comparison and then analyze a significant number of state-of-the-art zero-shot learning methods for image recognition. Two novel evaluation protocols for generalized zero-shot learning are proposed in @cite_33 and @cite_43 . However, the above generalized zero-shot learning methods are designed mainly for images. Our work is evaluated under both settings with multiple evaluation metrics for generalized video zero-shot learning problems. | {
"cite_N": [
"@cite_43",
"@cite_33"
],
"mid": [
"2949503252",
"2949823873"
],
"abstract": [
"Due to the importance of zero-shot learning, the number of proposed approaches has increased steadily recently. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given the fact that there is no agreed upon zero-shot learning benchmark, we first define a new benchmark by unifying both the evaluation protocols and data splits. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss limitations of the current status of the area which can be taken as a basis for advancing it.",
"We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional ZSL that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes and the labeling space is the union of both types of classes. We show empirically that a straightforward application of the classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and those from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve to characterize this tradeoff. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet Full 2011 with 21,000 unseen categories. We complement our comparative studies in learning methods by further establishing an upper-bound on the performance limit of GZSL. There, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving zero-shot learning."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Parliament @cite_13 is a spatial RDF engine built on top of Jena @cite_4 . Parliament supports GeoSPARQL features and uses R-tree as the spatial index. 
Additional techniques to enhance spatial queries, as adopted by Geo-store @cite_44 and by the RDF engine proposed by @cite_17, involve encoding spatial data with Hilbert space-filling curves to preserve spatial locality. | {
"cite_N": [
"@cite_44",
"@cite_13",
"@cite_4",
"@cite_17"
],
"mid": [
"2095542621",
"1715730942",
"",
"1770795006"
],
"abstract": [
"The techniques of utilizing spatial data on the Semantic Web have attracted more and more interest from researchers due to the rapidly increasing applications based on geographic information. However, there are currently limited solutions providing efficient spatial query evaluation based on Semantic Web data. In this demonstration, we present Geo-Store, a novel spatially-augmented SPARQL evaluation system. By extending the standard SPARQL query language with spatial query filters, Geo-Store is able to process complex spatial queries with common spatial constraints. These spatial filters are designed based on our Spatially Aware Mapping (SAM) scheme. With SAM, spatial data are pre-processed and encoded with their Hilbert values by employing the Hilbert curve, resulting in more efficient spatial query processing than the existing approaches. The Geo-Store demonstration includes both a server and a web browser-based client.",
"As the amount of Linked Open Data on the web increases, so does the amount of data with an inherent spatial context. Without spatial reasoning, however, the value of this spatial context is limited. Over the past decade there have been several vocabularies and query languages that attempt to exploit this knowledge and enable spatial reasoning. These attempts provide varying levels of support for fundamental geospatial concepts. GeoSPARQL, a forthcoming OGC standard, attempts to unify data access for the geospatial Semantic Web. As authors of the Parliament triple store and contributors to the GeoSPARQL specification, we are particularly interested in the issues of geospatial data access and indexing. In this paper, we look at the overall state of geospatial data in the Semantic Web, with a focus on GeoSPARQL. We first describe the motivation for GeoSPARQL, then the current state of the art in industry and research, followed by an example use case, and finally our implementation of GeoSPARQL in the Parliament triple store.",
"",
"The RDF data model has recently been extended to support representation and querying of spatial information (i.e., locations and geometries), which is associated with RDF entities. Still, there are limited efforts towards extending RDF stores to efficiently support spatial queries, such as range selections (e.g., find entities within a given range) and spatial joins (e.g., find pairs of entities whose locations are close to each other). In this paper, we propose an extension for RDF stores that supports efficient spatial data management. Our contributions include an effective encoding scheme for entities having spatial locations, the introduction of on-the-fly spatial filters and spatial join algorithms, and several optimizations that minimize the overhead of geometry and dictionary accesses. We implemented the proposed techniques as an extension to the opensource RDF-3X engine and we experimentally evaluated them using real RDF knowledge bases. The results show that our system offers robust performance for spatial queries, while introducing little overhead to the original query engine."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Another framework, S-Store @cite_36 , extends state-of-the-art RDF store gStore @cite_33 , by indexing data based on their structure. Subsequently, queries are accelerated by pruning based on both spatial and semantic constraints in the query. Yet another spatiotemporal storage framework is g @math -Store @cite_2 , which is an extension of gStore @cite_33 . 
It processes spatial queries using its tree-style index ST-tree, with a top-down search algorithm. | {
"cite_N": [
"@cite_36",
"@cite_33",
"@cite_2"
],
"mid": [
"202256239",
"1982177147",
"2400703691"
],
"abstract": [
"The semantic web data and the SPARQL query language allow users to write precise queries. However, the lack of spatial information limits the use of the semantic web data on position-oriented query. In this paper, we introduce spatial SPARQL, a variant of SPARQL language, for querying spatial information integrated RDF data. Besides, we design a novel index SS-tree for evaluating the spatial queries. Based on the index, we propose a search algorithm. The experimental results show the effectiveness and the efficiency of our approach.",
"Due to the increasing use of RDF data, efficient processing of SPARQL queries over RDF datasets has become an important issue. However, existing solutions suffer from two limitations: 1) they cannot answer SPARQL queries with wildcards in a scalable manner; and 2) they cannot handle frequent updates in RDF repositories efficiently. Thus, most of them have to reprocess the dataset from scratch. In this paper, we propose a graph-based approach to store and query RDF data. Rather than mapping RDF triples into a relational database as most existing methods do, we store RDF data as a large graph. A SPARQL query is then converted into a corresponding subgraph matching query. In order to speed up query processing, we develop a novel index, together with some effective pruning rules and efficient search algorithms. Our method can answer exact SPARQL queries and queries with wildcards in a uniform manner. We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solution.",
"In this paper, we present a spatiotemporal information integrated RDF data management system, called g st -Store. In g st -Store, some entities have spatiotemporal features, and some statements have valid time intervals and occurring locations. We introduce some spatiotemporal assertions into the SPARQL query language to answer the spatiotemporal range queries and join queries. Some examples are listed to demonstrate our demo."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Relational approaches such as Synchronous R-tree traversal @cite_15 and TOUCH @cite_20 apply spatial-join algorithm on hierarchical tree-like data structure. TOUCH is an in-memory based technique, which uses R-tree traversal for evaluating spatial joins. 
 differs from TOUCH in using identifier encoding and the semantic information embedded within the datasets for early pruning, and in utilizing statistics stored within the spatial index to choose the query plan at runtime, both of which improve performance. Sync. R-tree traversal builds R-tree indexes over the datasets participating in the spatial join. Starting from the roots of the trees, the two datasets are then synchronously traversed to check for intersections, with the join happening at the leaf nodes. Inner-node overlap and dead space are two shortcomings of this technique, which are addressed with 's that uses space-oriented partitioning. We compare with sync. R-tree traversal in . | {
"cite_N": [
"@cite_15",
"@cite_20"
],
"mid": [
"2058903936",
"2150610799"
],
"abstract": [
"Spatial joins are one of the most important operations for combining spatial objects of several relations. The efficient processing of a spatial join is extremely important since its execution time is superlinear in the number of spatial objects of the participating relations, and this number of objects may be very high. In this paper, we present a first detailed study of spatial join processing using R-trees, particularly R*-trees. R-trees are very suitable for supporting spatial queries and the R*-tree is one of the most efficient members of the R-tree family. Starting from a straightforward approach, we present several techniques for improving its execution time with respect to both, CPU- and I O-time. Eventually, we end up with an algorithm whose total execution time is improved over the first approach by an order of magnitude. Using a buffer of reasonable size, I O-time is almost optimal, i.e. it almost corresponds to the time for reading each required page of the relations exactly once. The performance of the various approaches is investigated in an experimental performance comparison where several large data sets from real applications are used.",
"Efficient spatial joins are pivotal for many applications and particularly important for geographical information systems or for the simulation sciences where scientists work with spatial models. Past research has primarily focused on disk-based spatial joins; efficient in-memory approaches, however, are important for two reasons: a) main memory has grown so large that many datasets fit in it and b) the in-memory join is a very time-consuming part of all disk-based spatial joins. In this paper we develop TOUCH, a novel in-memory spatial join algorithm that uses hierarchical data-oriented space partitioning, thereby keeping both its memory footprint and the number of comparisons low. Our results show that TOUCH outperforms known in-memory spatial-join algorithms as well as in-memory implementations of disk-based join approaches. In particular, it has a one order of magnitude advantage over the memory-demanding state of the art in terms of number of comparisons (i.e., pairwise object comparisons), as well as execution time, while it is two orders of magnitude faster when compared to approaches with a similar memory footprint. Furthermore, TOUCH is more scalable than competing approaches as data density grows."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Hash-based Rank Join (HRJN) @cite_24 and Nested Loop Rank Join (NRJN) @cite_28 are two widely used top- @math join algorithms in the relational world. HRJN accesses objects from left (or right) side of join and joins them with tuples seen so-far from right (or left) side of join. 
All join results are fed to a priority queue, which outputs a result to the user once it rises above the threshold, thus producing results incrementally. The Nested Loop Rank Join (NRJN) algorithm is similar to HRJN, except that it follows a nested-loop join strategy instead of buffering the join inputs. We do not compare with the general top- @math join algorithms HRJN and NRJN, since such top- @math operators have been shown to perform poorly on spatial workloads when compared to a block-based algorithm @cite_37 ; instead, we compare with the state-of-the-art spatial top- @math algorithm. | {
"cite_N": [
"@cite_24",
"@cite_37",
"@cite_28"
],
"mid": [
"",
"172828832",
"2143682328"
],
"abstract": [
"",
"Consider two sets of spatial objects R and S, where each object is assigned a score (e.g., ranking). Given a spatial distance threshold e and an integer k, the top-k spatial distance join (k- SDJ) returns the k pairs of objects, which have the highest combined score (based on an aggregate function γ) among all object pairs in R×S which have spatial distance at most e. Despite the practical application value of this query, it has not received adequate attention in the past. In this paper, we fill this gap by proposing methods that utilize both location and score information from the objects, enabling top-k join computation by accessing a limited number of objects. Extensive experiments demonstrate that a technique which accesses blocks of data from R and S ordered by the object scores and then joins them using an aR-tree based module performs best in practice and outperforms alternative solutions by a wide margin.",
"Ranking queries, also known as top-k queries, produce results that are ordered on some computed score. Typically, these queries involve joins, where users are usually interested only in the top-k join results. Top-k queries are dominant in many emerging applications, e.g., multimedia retrieval by content, Web databases, data mining, middlewares, and most information retrieval applications. Current relational query processors do not handle ranking queries efficiently, especially when joins are involved. In this paper, we address supporting top-k join queries in relational query processors. We introduce a new rank-join algorithm that makes use of the individual orders of its inputs to produce join results ordered on a user-specified scoring function. The idea is to rank the join results progressively during the join operation. We introduce two physical query operators based on variants of ripple join that implement the rank-join algorithm. The operators are nonblocking and can be integrated into pipelined execution plans. We also propose an efficient heuristic designed to optimize a top-k join query by choosing the best join order. We address several practical issues and optimization heuristics to integrate the new join operators in practical query processors. We implement the new operators inside a prototype database engine based on PREDATOR. The experimental evaluation of our approach compares recent algorithms for joining ranked inputs and shows superior performance."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Returning the top- @math results on spatial queries, where the final results are ranked using a distance metric, has already been studied by Ljosa et al. @cite_8 . However, such ranking mechanisms often have to be restricted to specific, pre-defined aggregation functions. 
In comparison, attempts to solve a more challenging problem with the spatial constraint specified in the join predicate. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2116440837"
],
"abstract": [
"Probabilistic data have recently become popular in applications such as scientific and geospatial databases. For images and other spatial datasets, probabilistic values can capture the uncertainty in extent and class of the objects in the images. Relating one such dataset to another by spatial joins is an important operation for data management systems. We consider probabilistic spatial join (PSJ) queries, which rank the results according to a score that incorporates both the uncertainties associated with the objects and the distances between them. We present algorithms for two kinds of PSJ queries: Threshold PSJ queries, which return all pairs that score above a given threshold, and top-k PSJ queries, which return the k top-scoring pairs. For threshold PSJ queries, we propose a plane sweep algorithm that, because it exploits the special structure of the problem, runs in 0(n (log n + k)) time, where n is the number of points and k is the number of results. We extend the algorithms to 2-D data and to top-k PSJ queries. To further speed up top-k PSJ queries, we develop a scheduling technique that estimates the scores at the level of blocks, then hands the blocks to the plane sweep algorithm. By finding high-scoring pairs early, the scheduling allows a large portion of the datasets to be pruned. Experiments demonstrate speed-ups of two orders of magnitude."
]
} |
1710.07411 | 2767150096 | The importance of geo-spatial data in critical applications such as emergency response, transportation, agriculture etc., has prompted the adoption of recent GeoSPARQL standard in many RDF processing engines. In addition to large repositories of geo-spatial data -- e.g., LinkedGeoData, OpenStreetMap, etc. -- spatial data is also routinely found in automatically constructed knowledgebases such as Yago and WikiData. While there have been research efforts for efficient processing of spatial data in RDF SPARQL, very little effort has gone into building end-to-end systems that can holistically handle complex SPARQL queries along with spatial filters. In this paper, we present Streak, a RDF data management system that is designed to support a wide-range of queries with spatial filters including complex joins, top-k, higher-order relationships over spatially enriched databases. Streak introduces various novel features such as a careful identifier encoding strategy for spatial and non-spatial entities, the use of a semantics-aware Quad-tree index that allows for early-termination and a clever use of adaptive query processing with zero plan-switch cost. We show that Streak can scale to some of the largest publicly available semantic data resources such as Yago3 and LinkedGeoData which contain spatial entities and quantifiable predicates useful for result ranking. For experimental evaluations, we focus on top-k distance join queries and demonstrate that Streak outperforms popular spatial join algorithms as well as state of the art end-to-end systems like Virtuoso and PostgreSQL. | Eddies @cite_5 and Content Based Retrieval (CBR) @cite_18 are two state-of-the-art approaches which explored AQP for switching plans during query execution. Both these approaches use random tuples for profiling, relying on a machine-learning based solution to model routing predictions. 
Our work directly contrasts with these ML-based approaches by using statistics for each block of spatial data, which help in determining the selectivity of the spatial join operator. By using fine-grained block-level statistics, we mitigate the errors that are typically associated with a model-based approach, especially when many joins are involved. Additionally, unlike CBR, which has high routing and learning overhead, our spatial AQP algorithm incurs a very small routing overhead, owing to cost estimate calculations for only the customized plans. We validated this argument and found the overhead of AQP to be very small --- just 5-10% | {
"cite_N": [
"@cite_5",
"@cite_18"
],
"mid": [
"2203361072",
"1563265404"
],
"abstract": [
"In large federated and shared-nothing databases, resources can exhibit widely fluctuating characteristics. Assumptions made at the time a query is submitted will rarely hold throughout the duration of query processing. As a result, traditional static query optimization and execution techniques are ineffective in these environments. In this paper we introduce a query processing mechanism called an eddy, which continuously reorders operators in a query plan as it runs. We characterize the moments of symmetry during which pipelined joins can be easily reordered, and the synchronization barriers that require inputs from different sources to be coordinated. By combining eddies with appropriate join algorithms, we merge the optimization and execution phases of query processing, allowing each tuple to have a flexible ordering of the query operators. This flexibility is controlled by a combination of fluid dynamics and a simple learning algorithm. Our initial implementation demonstrates promising results, with eddies performing nearly as well as a static optimizer executor in static scenarios, and providing dramatic improvements in dynamic execution environments.",
"Query optimizers in current database systems are designed to pick a single efficient plan for a given query based on current statistical properties of the data. However, different subsets of the data can sometimes have very different statistical properties. In such scenarios it can be more efficient to process different subsets of the data for a query using different plans. We propose a new query processing technique called content-based routing (CBR) that eliminates the single-plan restriction in current systems. We present low-overhead adaptive algorithms that partition input data based on statistical properties relevant to query execution strategies, and efficiently route individual tuples through customized plans based on their partition. We have implemented CBR as an extension to the Eddies query processor in the TelegraphCQ system, and we present an extensive experimental evaluation showing the significant performance benefits of CBR."
]
} |
1710.07400 | 2766468155 | Docking is an important tool in computational drug discovery that aims to predict the binding pose of a ligand to a target protein through a combination of pose scoring and optimization. A scoring function that is differentiable with respect to atom positions can be used for both scoring and gradient-based optimization of poses for docking. Using a differentiable grid-based atomic representation as input, we demonstrate that a scoring function learned by training a convolutional neural network (CNN) to identify binding poses can also be applied to pose optimization. We also show that an iteratively-trained CNN that includes poses optimized by the first CNN in its training set performs even better at optimizing randomly initialized poses than either the first CNN scoring function or AutoDock Vina. | A currently unexplored application of deep learning to drug discovery is gradient-based optimization of chemical structures. Activation maximization performed on a neural network scoring function is analogous to local pose optimization, which is fundamental to molecular docking. A convolutional neural network with a differentiable input representation can therefore be used for both the scoring and optimization components of docking @cite_21 . Poses generated in this manner are susceptible to the same pitfalls of activation maximization in the image domain, so constraints are needed to ensure that the optimized poses are physically realistic. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1993285168"
],
"abstract": [
"NNScore is a neural-network-based scoring function designed to aid the computational identification of small-molecule ligands. While the test cases included in the original NNScore article demonstrated the utility of the program, the application examples were limited. The purpose of the current work is to further confirm that neural-network scoring functions are effective, even when compared to the scoring functions of state-of-the-art docking programs, such as AutoDock, the most commonly cited program, and AutoDock Vina, thought to be two orders of magnitude faster. Aside from providing additional validation of the original NNScore function, we here present a second neural-network scoring function, NNScore 2.0. NNScore 2.0 considers many more binding characteristics when predicting affinity than does the original NNScore. The network output of NNScore 2.0 also differs from that of NNScore 1.0; rather than a binary classification of ligand potency, NNScore 2.0 provides a single estimate of the pKd. To fac..."
]
} |
1710.07557 | 2767103364 | In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure setup we proceed to evaluate on standard benchmark sets. We report accuracies of 96% in the IMDB gender dataset and 66% in the FER-2013 emotion dataset. Along with this we also introduced the very recent real-time enabled guided back-propagation visualization technique. Guided back-propagation uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of the current regularization methods and the visualization of previously hidden features are necessary in order to reduce the gap between slow performances and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos and pre-trained architectures have been released under an open-source license in our public repository. | Commonly used CNNs for feature extraction include a set of fully connected layers at the end. Fully connected layers tend to contain most of the parameters in a CNN. Specifically, VGG16 @cite_6 contains approximately 90% of all its parameters in its fully connected layers. Recent architectures such as Inception V3 @cite_1 reduced the amount of parameters in their last layers by including a Global Average Pooling operation. Global Average Pooling reduces each feature map into a scalar value by taking the average over all elements in the feature map. The average operation forces the network to extract global features from the input image. 
Modern CNN architectures such as Xception @cite_9 leverage the combination of two of the most successful experimental assumptions in CNNs: the use of residual modules @cite_5 and depth-wise separable convolutions @cite_0 . Depth-wise separable convolutions further reduce the number of parameters by separating the processes of feature extraction and combination within a convolutional layer. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5"
],
"mid": [
"2951583185",
"",
"1686810756",
"2612445135",
"2949650786"
],
"abstract": [
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."
]
} |
1710.07557 | 2767103364 | In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure setup we proceed to evaluate on standard benchmark sets. We report accuracies of 96% in the IMDB gender dataset and 66% in the FER-2013 emotion dataset. Along with this we also introduced the very recent real-time enabled guided back-propagation visualization technique. Guided back-propagation uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of the current regularization methods and the visualization of previously hidden features are necessary in order to reduce the gap between slow performances and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos and pre-trained architectures have been released under an open-source license in our public repository. | Furthermore, the state-of-the-art model for the FER-2013 dataset is based on a CNN trained with squared hinge loss @cite_7 . This model achieved an accuracy of 71%. In this architecture, 98% of all parameters are located in the last fully connected layers. The second-best methods presented in @cite_3 achieved an accuracy of 66%. | {
"cite_N": [
"@cite_3",
"@cite_7"
],
"mid": [
"2041616772",
"1546411676"
],
"abstract": [
"The ICML 2013 Workshop on Challenges in Representation Learning focused on three challenges: the black box learning challenge, the facial expression recognition challenge, and the multimodal learning challenge. We describe the datasets created for these challenges and summarize the results of the competitions. We provide suggestions for organizers of future challenges and some comments on what kind of knowledge can be gained from machine learning competitions.",
"Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. For classification tasks, most of these \"deep learning\" models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross-entropy loss. While there have been various combinations of neural nets and SVMs in prior art, our results using L2-SVMs show that by simply replacing softmax with linear SVMs gives significant gains on popular deep learning datasets MNIST, CIFAR-10, and the ICML 2013 Representation Learning Workshop's face expression recognition challenge."
]
} |
1710.07543 | 2545145400 | Due to the emerging adoption of distributed systems when building applications, demand for reliability and availability has increased. These properties can be achieved through replication techniques using middleware algorithms that must be capable of tolerating faults. Certain faults such as arbitrary faults, however, may be more difficult to tolerate, resulting in more complex and resource-intensive algorithms that end up being not so practical to use. We propose and experiment with the use of consistency validation techniques to harden a benign fault-tolerant Paxos, thus being able to detect and tolerate non-malicious arbitrary faults. | In @cite_4 , an in-depth study of non-malicious arbitrary faults is presented. Many of the techniques in this paper are inspired by this work. The approach taken was to develop a library that hardens processes built on top of it. All of the process's messages, event handlers and variables, if implemented according to the library, are managed by it as part of its state. It intercepts all messages and event handlers to perform integrity checks on them, and aborts whenever a fault is detected. This library is not a middleware, but it can be used to harden existing benign fault-tolerant middlewares if implemented on top of the library. Our approach takes an existing middleware and explores the challenges of hardening the middleware itself. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2167880834"
],
"abstract": [
"Recent failures of production systems have highlighted the importance of tolerating faults beyond crashes. The industry has so far addressed this problem by hardening crash-tolerant systems with ad hoc error detection checks, potentially overlooking critical fault scenarios. We propose a generic and principled hardening technique for Arbitrary State Corruption (ASC) faults, which specifically model the effects of realistic data corruptions on distributed processes. Hardening does not require the use of trusted components or the replication of the process over multiple physical servers. We implemented a wrapper library to transparently harden distributed processes. To exercise our library and evaluate our technique, we obtained ASC-tolerant versions of Paxos, of a subset of the ZooKeeper API, and of an eventually consistent storage by implementing crash-tolerant protocols and automatically hardening them using our library. Our evaluation shows that the throughput of our ASC-hardened state machine replication outperforms its Byzantine-tolerant counterpart by up to 70%."
]
} |
1710.07543 | 2545145400 | Due to the emerging adoption of distributed systems when building applications, demand for reliability and availability has increased. These properties can be achieved through replication techniques using middleware algorithms that must be capable of tolerating faults. Certain faults such as arbitrary faults, however, may be more difficult to tolerate, resulting in more complex and resource-intensive algorithms that end up being not so practical to use. We propose and experiment with the use of consistency validation techniques to harden a benign fault-tolerant Paxos, thus being able to detect and tolerate non-malicious arbitrary faults. | In @cite_10 , several concepts for detecting arbitrary faults are discussed, but only semantic checks are implemented. This is similar to part of our approach, with low coverage because the checks are implemented only at the application layer. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2065471065"
],
"abstract": [
"Neither of the two broad classes of fault models considered by traditional fault tolerance techniques --- crash and Byzantine faults --- suit the environment of systems that run in today's data centers. On the one hand, assuming Byzantine faults is considered overkill due to the assumption of a worst-case adversarial behavior, and the use of other techniques to guard against malicious attacks. On the other hand, the crash fault model is insufficient since it does not capture non-crash faults that may result from a variety of unexpected conditions that are commonplace in this setting. In this paper, we present the case for a more practical approach at handling non-crash (but non-adversarial) faults in data-center scale computations. In this context, we discuss how such problem can be tackled for an important class of data-center scale systems: systems for large-scale processing of data, with a particular focus on the Pig programming framework. Such an approach not only covers a significant fraction of the processing jobs that run in today's data centers, but is potentially applicable to a broader class of applications."
]
} |
1710.07543 | 2545145400 | Due to the emerging adoption of distributed systems when building applications, demand for reliability and availability has increased. These properties can be achieved through replication techniques using middleware algorithms that must be capable of tolerating faults. Certain faults such as arbitrary faults, however, may be more difficult to tolerate, resulting in more complex and resource-intensive algorithms that end up being not so practical to use. We propose and experiment with the use of consistency validation techniques to harden a benign fault-tolerant Paxos, thus being able to detect and tolerate non-malicious arbitrary faults. | The approach presented in @cite_5 involves the use of a low-level encoding compiler so processes read, write and perform all operations with encoded arithmetic values. Whenever a value is changed due to corruption, the arithmetic decoding operation fails and the process detects it. Arbitrary-fault handling is mapped to benign faults, so processes either crash or have their messages discarded. This approach also sacrifices error coverage for better performance due to the use of arithmetic codes. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1969670854"
],
"abstract": [
"Arbitrary faults such as bit flips have been often observed in commodity-hardware data centers and have disrupted large services. Benign faults, such as crashes and message omissions, are nevertheless the standard assumption in practical fault-tolerant distributed systems. Algorithms tolerant to arbitrary faults are harder to understand and more expensive to deploy (requiring more machines). In this work, we introduce a non-malicious arbitrary fault model including transient and permanent arbitrary faults, such as bit flips and hardware-design errors, but no malicious faults, typically caused by security breaches. We then present a compiler-based framework that allows benign fault-tolerant algorithms to automatically tolerate arbitrary faults in non-malicious settings. Finally, we experimentally evaluate two fundamental algorithms: Paxos and leader election. At expense of CPU cycles, transformed algorithms use the same number of processes as their benign fault-tolerant counterparts, and have virtually no network overhead, while reducing the probability of failing arbitrarily by two orders of magnitude."
]
} |
1710.07735 | 2772934882 | The use of random perturbations of ground truth data, such as random translation or scaling of bounding boxes, is a common heuristic used for data augmentation that has been shown to prevent overfitting and improve generalization. Since the design of data augmentation is largely guided by reported best practices, it is difficult to understand if those design choices are optimal. To provide a more principled perspective, we develop a game-theoretic interpretation of data augmentation in the context of object detection. We aim to find optimal adversarial perturbations of the ground truth data (i.e., the worst-case perturbations) that force the object bounding box predictor to learn from the hardest distribution of perturbed examples for better test-time performance. We establish that the game-theoretic solution, the Nash equilibrium, provides both an optimal predictor and an optimal data augmentation distribution. We show that our adversarial method of training a predictor can significantly improve test-time performance for the task of object detection. On the ImageNet object detection task, our adversarial approach improves performance by over 16% compared to the best performing data augmentation method. | It is common to assume that the ground truth is singular and error-free. However, disagreement between annotators is a widely-known problem for many computer vision tasks @cite_30 , as well as a major concern @cite_13 when constructing an annotated computer vision corpus. In large part, the difficulty arises because the set of possible "ground truth" annotations is typically extremely large for vision tasks. It is a powerset of possible descriptions (e.g., words, noun phrases) in annotation tasks, multi-partitions of the pixels (exponential in the number of pixels) in segmentation tasks, and the possible bounding boxes (quadratic in the number of pixels) for localization tasks. | {
"cite_N": [
"@cite_30",
"@cite_13"
],
"mid": [
"2149273804",
"2003497265"
],
"abstract": [
"Distributing labeling tasks among hundreds or thousands of annotators is an increasingly important method for annotating large datasets. We present a method for estimating the underlying value (e.g. the class) of each image from (noisy) annotations provided by multiple annotators. Our method is based on a model of the image formation and annotation process. Each image has different characteristics that are represented in an abstract Euclidean space. Each annotator is modeled as a multidimensional entity with variables representing competence, expertise and bias. This allows the model to discover and represent groups of annotators that have different sets of skills and knowledge, as well as groups of images that differ qualitatively. We find that our model predicts ground truth labels on both synthetic and real data more accurately than state of the art methods. Experiments also show that our model, starting from a set of binary labels, may discover rich information, such as different \"schools of thought\" amongst the annotators, and can group together images belonging to separate categories.",
"The creation of golden standard datasets is a costly business. Optimally more than one judgment per document is obtained to ensure a high quality on annotations. In this context, we explore how much annotations from experts differ from each other, how different sets of annotations influence the ranking of systems and if these annotations can be obtained with a crowdsourcing approach. This study is applied to annotations of images with multiple concepts. A subset of the images employed in the latest ImageCLEF Photo Annotation competition was manually annotated by expert annotators and non-experts with Mechanical Turk. The inter-annotator agreement is computed at an image-based and concept-based level using majority vote, accuracy and kappa statistics. Further, the Kendall τ and Kolmogorov-Smirnov correlation test is used to compare the ranking of systems regarding different ground-truths and different evaluation measures in a benchmark scenario. Results show that while the agreement between experts and non-experts varies depending on the measure used, its influence on the ranked lists of the systems is rather small. To sum up, the majority vote applied to generate one annotation set out of several opinions, is able to filter noisy judgments of non-experts to some extent. The resulting annotation set is of comparable quality to the annotations of experts."
]
} |
1710.07673 | 2952606169 | We characterize (up to endpoints) the @math -tuples @math for which certain @math -linear generalized Radon transforms map @math boundedly into @math . This generalizes a result of Tao and Wright. | There is an extensive bibliography in @cite_8 to which we direct the interested reader. We will focus here on some more recent results. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1640964249"
],
"abstract": [
"We establish local (L^p, L^q) mapping properties for averages on curves. The exponents are sharp except for endpoints. Let n ≥ 2, and let M_1 and M_2 be two smooth (n−1)-dimensional manifolds, each containing a preferred origin 0_{M_1} and 0_{M_2}. We shall abuse notation and write 0 for both 0_{M_1} and 0_{M_2}. For the purposes of integration we shall place a smooth Riemannian metric on M_1 and M_2, although the exact choice of this metric will not be relevant. All our considerations shall be local to the origin 0. We are interested in the local L^p improving properties of averaging operators on curves. Before we give the rigorous description of these operators, let us first give an informal discussion. Informally, we assume that we have a smooth assignment x_2 ↦ γ_{x_2} taking points in M_2 to curves in M_1, with a corresponding dual assignment x_1 ↦ γ*_{x_1} taking points in M_1 to curves in M_2, such that x_1 ∈ γ_{x_2} ⟺ x_2 ∈ γ*_{x_1}. We then form the operator R taking functions on M_1 to functions on M_2, defined"
]
} |
1710.07110 | 2766168507 | Deep learning typically requires training a very capable architecture using large datasets. However, many important learning problems demand an ability to draw valid inferences from small datasets, and such problems pose a particular challenge for deep learning. In this regard, various studies on "meta-learning" are being actively conducted. Recent work has suggested a Memory Augmented Neural Network (MANN) for meta-learning. MANN is an implementation of a Neural Turing Machine (NTM) with the ability to rapidly assimilate new data in its memory, and use this data to make accurate predictions. In models such as MANN, the input data samples and their appropriate labels from the previous step are bound together in the same memory locations. This often leads to memory interference when performing a task, as these models have to retrieve a feature of an input from a certain memory location and read only the label information bound to that location. In this paper, we tried to address this issue by presenting a more robust MANN. We revisited the idea of meta-learning and proposed a new memory augmented neural network by explicitly splitting the external memory into feature and label memories. The feature memory is used to store the features of input data samples and the label memory stores their labels. Hence, when predicting the label of a given input, our model uses its feature memory unit as a reference to extract the stored feature of the input, and based on that feature, it retrieves the label information of the input from the label memory unit. In order for the network to function in this framework, a new memory-writing module is designed to encode label information into the label memory in accordance with the meta-learning task structure. Here, we demonstrate that our model outperforms MANN by a large margin in supervised one-shot classification tasks using Omniglot and MNIST datasets.
| In previous implementations of NTM, memory was addressed both by content and by location. However, in their work, they presented a new memory access module, called Least Recently Used Access (LRUA) @cite_8 . It is a pure content-based memory writer that writes memories either to the least recently used location or to the most recently used location of the memory. According to this module, new information is written into rarely used locations (preserving recently encoded information) or it is written to the last used location (to update the memory with newer, and possibly relevant, information). | {
"cite_N": [
"@cite_8"
],
"mid": [
"2472819217"
],
"abstract": [
"Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of \"one-shot learning.\" Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state-of-the-art on a challenging real-world dataset. | Background subtraction is a field within computer vision concerned with detecting moving objects in a video sequence from a static RGB camera. A classical application for background subtraction algorithms is video surveillance. In @cite_15 , a comprehensive review and benchmarking study of the field of background subtraction is presented. Generally, background subtraction assumes a static camera and builds models of the background for the pixels in the image plane over many images, which are then used to statistically determine if a pixel in a new image contains a moving object or not. In @cite_33 , it was found that incorporating depth information into background subtraction significantly increases its performance in otherwise challenging scenarios. Our scenario differs from background subtraction in that our robot is not static and captures data at unknown intervals. Furthermore, background subtraction is usually performed by comparing the current frame to data aggregated over many previous observations of the same view, whereas our system only requires one previous observation of the same scene. | {
"cite_N": [
"@cite_15",
"@cite_33"
],
"mid": [
"2071860582",
"2154079153"
],
"abstract": [
"Abstract Background subtraction (BS) is a crucial step in many computer vision systems, as it is first applied to detect moving objects within a video stream. Many algorithms have been designed to segment the foreground objects from the background of a sequence. In this article, we propose to use the BMC (Background Models Challenge) dataset, and to compare the 29 methods implemented in the BGSLibrary. From this large set of various BG methods, we have conducted a relevant experimental analysis to evaluate both their robustness and their practical performance in terms of processor memory requirements.",
"Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state-of-the-art on a challenging real-world dataset. | The Simultaneous Localization And Mapping (SLAM) problem has long been central to the performance of mobile robots. Localization using range sensors such as lidars, sonars, radars or structured light sensors is based on the ability to compare current measurements to the predicted state of the environment given previous measurements. Detecting and modelling the static, dynamic and changing areas of an environment is therefore advantageous, as shown in @cite_31 @cite_5 @cite_28 . While this work is not focused on the modelling of dynamics in an environment for SLAM, we believe that an accurate detector of static and dynamic areas could prove useful to improve the quality of SLAM solutions. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_31"
],
"mid": [
"2009655761",
"2135261438",
"1963849668"
],
"abstract": [
"This paper presents a new approach for topological localisation of service robots in dynamic indoor environments. In contrast to typical localisation approaches that rely mainly on static parts of the environment, our approach makes explicit use of information about changes by learning and modelling the spatio-temporal dynamics of the environment where the robot is acting. The proposed spatio-temporal world model is able to predict environmental changes in time, allowing the robot to improve its localisation capabilities during long-term operations in populated environments. To investigate the proposed approach, we have enabled a mobile robot to autonomously patrol a populated environment over a period of one week while building the proposed model representation. We demonstrate that the experience learned during one week is applicable for topological localization even after a hiatus of three months by showing that the localization error rate is significantly lower compared to static environment representations.",
"In this paper we introduce a method for learning motion patterns in dynamic environments. Representations of dynamic environments have recently received an increasing amount of attention in the research community. Understanding dynamic environments is seen as one of the key challenges in order to enable autonomous navigation in real-world scenarios. However, representing the temporal dimension is a challenge yet to be solved. In this paper we introduce a spatial representation, which encapsulates the statistical dynamic behavior observed in the environment. The proposed Conditional Transition Map (CTMap) is a grid-based representation that associates a probability distribution for an object exiting the cell, given its entry direction. The transition parameters are learned from a temporal signal of occupancy on cells by using a local-neighborhood cross-correlation method. In this paper, we introduce the CTMap, the learning approach and present a proof-of-concept method for estimating future paths of dynamic objects, called Conditional Probability Propagation Tree (CPPTree). The evaluation is done using a real-world dataset collected at a busy roundabout.",
"In this paper we propose a new grid based approach to model a dynamic environment. Each grid cell is assumed to be an independent Markov chain (iMac) with two states. The state transition parameters are learned online and modeled as two Poisson processes. As a result, our representation not only encodes the expected occupancy of the cell, but also models the expected dynamics within the cell. The paper also presents a strategy based on recency weighting to learn the model parameters from observations that is able to deal with non-stationary cell dynamics. Moreover, an interpretation of the model parameters with discussion about the convergence rates of the cells is presented. The proposed model is experimentally validated using offline data recorded with a Laser Guided Vehicle (LGV) system running in production use."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state-of-the-art on a challenging real-world dataset. | Semantic segmentation is the task of computing meaningful labels for all pixels in a set of images. Typical labels include wall, floor, furniture, people or object. Our system can therefore be considered to perform semantic segmentation with the classes , and . A popular paradigm for semantic segmentation is to compute pixel priors for each class and pairwise potentials for pixels which are likely to belong to the same class. Statistical inference is then used to infer a maximum likelihood labeling over the pixels. Pixel priors and pairwise potentials are usually found using supervised machine learning techniques. We apply a similar pipeline; however, the primary difference to normal semantic segmentation is that we compute pixel priors based on occlusion detection and pairwise priors based on image statistics, as opposed to labeled training data. In @cite_0 , a review of the field of semantic segmentation is given. In recent years, the field of deep learning has received massive attention due to impressive results, across the board, for supervised machine learning tasks. In @cite_2 , a review of the application of deep learning to semantic segmentation is given. | {
"cite_N": [
"@cite_0",
"@cite_2"
],
"mid": [
"2273666108",
"2609077090"
],
"abstract": [
"This survey gives an overview of different techniques used for pixel-level semantic segmentation. Metrics and datasets for the evaluation of segmentation algorithms and traditional approaches for segmentation such as unsupervised methods, Decision Forests and SVMs are described and pointers to the relevant papers are given. Recently published approaches with convolutional neural networks are mentioned and typical problematic situations for segmentation algorithms are examined. A taxonomy of segmentation algorithms is given.",
"Image semantic segmentation is of more and more interest to computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review on deep learning methods for semantic segmentation applied to various application areas. Firstly, we describe the terminology of this field as well as mandatory background concepts. Next, the main datasets and challenges are exposed to help researchers decide which are the ones that best suit their needs and their targets. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. Finally, quantitative results are given for the described methods and the datasets in which they were evaluated, following up with a discussion of the results. At last, we point out a set of promising future works and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state of the art on a challenging real-world dataset. | In @cite_17 a system for lifelong object discovery is presented, based on the structured aggregation of multiple different sources of object segmentation information, such as objects placed on planar surfaces and repeating appearance. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2028952506"
],
"abstract": [
"Our long-term goal is to develop a general solution to the lifelong robotic object discovery (LROD) problem: to discover new objects in the environment while the robot operates, for as long as the robot operates. In this paper, we consider the first step towards LROD: we automatically process the raw data stream of an entire workday of a robotic agent to discover objects. Our key contribution to achieve this goal is to incorporate domain knowledge (robotic metadata) in the discovery process, in addition to visual data. We propose a general graph-based formulation for LROD in which generic domain knowledge is encoded as constraints. To make long-term object discovery feasible, we encode into our formulation the natural constraints and non-visual sensory information in service robotics. A key advantage of our generic formulation is that we can add, modify, or remove sources of domain knowledge dynamically, as they become available or as conditions change. In our experiments, we show that by adding domain knowledge we discover 2.7× more objects and decrease processing time 190 times. With our optimized implementation, HerbDisc, we show for the first time a system that processes a video stream of 6 h 20 min of continuous exploration in cluttered human environments (and over half a million images) in 18 min 34 s, to discover 206 new objects with their 3D models."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state of the art on a challenging real-world dataset. | In @cite_14 a system for detecting changes in NDT representations based on color images and lidar range scanners is presented. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2093593306"
],
"abstract": [
"This paper presents a system for autonomous change detection with a security patrol robot. In an initial step a reference model of the environment is created and changes are then detected with respect to the reference model as differences in coloured 3D point clouds, which are obtained from a 3D laser range scanner and a CCD camera. The suggested approach introduces several novel aspects, including a registration method that utilizes local visual features to determine point correspondences (thus essentially working without an initial pose estimate) and the 3D-NDT representation with adaptive cell size to efficiently represent both the spatial and colour aspects of the reference model. Apart from a detailed description of the individual parts of the difference detection system, a qualitative experimental evaluation in an indoor lab environment is presented, which demonstrates that the suggested system is able to register and detect changes in spatial 3D data and also to detect changes that occur in colour space and are not observable using range values only."
]
} |
1710.06929 | 2765639270 | In this paper we introduce a system for unsupervised object discovery and segmentation of RGBD-images. The system models the sensor noise directly from data, allowing accurate segmentation without sensor-specific hand tuning of measurement noise models, making use of the recently introduced Statistical Inlier Estimation (SIE) method. Through a fully probabilistic formulation, the system is able to apply probabilistic inference, enabling reliable segmentation in previously challenging scenarios. In addition, we introduce new methods for filtering out false positives, significantly improving the signal-to-noise ratio. We show that the system significantly outperforms the state of the art on a challenging real-world dataset. | In @cite_22 a system for automatic object segmentation for RGBD cameras is presented. The system takes as input two maps of an environment in the form of point clouds. The point clouds are then compared and the difference is computed based on nearest-neighbor matching. Segments are then created by clustering the remaining components, and finally filtering is applied to remove false positives. Segments which do not cause sufficient free-space violations or are too small are filtered out as false positives. Objects are merged based on feature appearance. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2082761562"
],
"abstract": [
"In this paper, we present a system for automatically learning segmentations of objects given changes in dense RGB-D maps over the lifetime of a robot. Using recent advances in RGB-D mapping to construct multiple dense maps, we detect changes between mapped regions from multiple traverses by performing a 3-D difference of the scenes. Our method takes advantage of the free space seen in each map to account for variability in how the maps were created. The resulting changes from the 3-D difference are our discovered objects, which are then used to train multiple segmentation algorithms in the original map. The final objects can then be matched in other maps given their corresponding features and learned segmentation method. If the same object is discovered multiple times in different contexts, the features and segmentation method are refined, incorporating all instances to better learn objects over time. We verify our approach with multiple objects in numerous and varying maps."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | Visual speech recognition requires a series of steps to process the video and extract relevant features. First of all, a region of interest (ROI) around the mouth, which contains the largest amount of information about the utterance, has to be extracted @cite_17. This can be done by hand or with the help of a face tracker. The latter is more common nowadays, even though manual corrections are still sometimes applied. The ROI is later used to extract the features. | {
"cite_N": [
"@cite_17"
],
"mid": [
"142803501"
],
"abstract": [
"We have made significant progress in automatic speech recognition (ASR) for well-defined applications like dictation and medium vocabulary transaction processing tasks in relatively controlled environments. However, ASR performance has yet to reach the level required for speech to become a truly pervasive user interface. Indeed, even in “clean” acoustic environments, and for a variety of tasks, state of the art ASR system performance lags human speech perception by up to an order of magnitude (Lippmann, 1997). In addition, current systems are quite sensitive to channel, environment, and style of speech variations. A number of techniques for improving ASR robustness have met limited success in severely degraded environments, mismatched to system training (Ghitza, 1986; , 1989; Juang, 1991; , 1993; Hermansky and Morgan, 1994; Neti, 1994; Gales, 1997; , 2001). Clearly, novel, non-traditional approaches, that use orthogonal sources of information to the acoustic input, are needed to achieve ASR performance closer to the human speech perception level, and robust enough to be deployable in field applications. Visual speech is the most promising source of additional speech information, and it is obviously not affected by the acoustic environment and noise."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | In general, three types of features are used: texture-based features, shape-based features, or a combination of both @cite_17 @cite_10. Texture-based features exploit the pixel values in an ROI, usually closely around the mouth or including the jaws @cite_17. Typically, this is done by applying a transformation such as the discrete cosine transform (DCT) and/or a dimensionality reduction technique such as linear discriminant analysis (LDA) to the ROI, possibly in combination with a principal component analysis (PCA) or a maximum-likelihood linear transform (MLLT) @cite_17. A common feature post-processing technique involves a chain of LDAs and MLLTs on concatenated frames, the so-called HiLDA @cite_21. | {
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_17"
],
"mid": [
"2096391593",
"1992790156",
"142803501"
],
"abstract": [
"Visual speech information from the speaker's mouth region has been successfully shown to improve noise robustness of automatic speech recognizers, thus promising to extend their usability in the human computer interface. In this paper, we review the main components of audiovisual automatic speech recognition (ASR) and present novel contributions in two main areas: first, the visual front-end design, based on a cascade of linear image transforms of an appropriate video region of interest, and subsequently, audiovisual speech integration. On the latter topic, we discuss new work on feature and decision fusion combination, the modeling of audiovisual speech asynchrony, and incorporating modality reliability estimates to the bimodal recognition process. We also briefly touch upon the issue of audiovisual adaptation. We apply our algorithms to three multisubject bimodal databases, ranging from small- to large-vocabulary recognition tasks, recorded in both visually controlled and challenging environments. Our experiments demonstrate that the visual modality improves ASR over all conditions and data considered, though less so for visually challenging environments and large vocabulary tasks.",
"Human lip-readers are increasingly being presented as useful in the gathering of forensic evidence but, like all humans, suffer from unreliability. Here we report the results of a long-term study in automatic lip-reading with the objective of converting video-to-text (V2T). The V2T problem is surprising in that some aspects that look tricky, such as real-time tracking of the lips on poor-quality interlaced video from hand-held cameras, prove to be relatively tractable, whereas the problem of speaker-independent lip-reading is very demanding due to unpredictable variations between people. Here we review the problem of automatic lip-reading for crime fighting and identify the critical parts of the problem.",
"We have made significant progress in automatic speech recognition (ASR) for well-defined applications like dictation and medium vocabulary transaction processing tasks in relatively controlled environments. However, ASR performance has yet to reach the level required for speech to become a truly pervasive user interface. Indeed, even in “clean” acoustic environments, and for a variety of tasks, state of the art ASR system performance lags human speech perception by up to an order of magnitude (Lippmann, 1997). In addition, current systems are quite sensitive to channel, environment, and style of speech variations. A number of techniques for improving ASR robustness have met limited success in severely degraded environments, mismatched to system training (Ghitza, 1986; , 1989; Juang, 1991; , 1993; Hermansky and Morgan, 1994; Neti, 1994; Gales, 1997; , 2001). Clearly, novel, non-traditional approaches, that use orthogonal sources of information to the acoustic input, are needed to achieve ASR performance closer to the human speech perception level, and robust enough to be deployable in field applications. Visual speech is the most promising source of additional speech information, and it is obviously not affected by the acoustic environment and noise."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | Shape-based features, on the other hand, try to extract information about the shape of the mouth. This can be done, for example, with the help of snakes, taking into account the outer contours of the mouth, or by computing the geometrical distances between certain points of interest around the mouth @cite_17. In recent works, these feature points are generally extracted with the help of a face or mouth tracker. Some researchers also directly use these points or shapes and extract information by applying a PCA to them. This is the case, for example, for active appearance models (AAMs) @cite_10 @cite_15. | {
"cite_N": [
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"1273351261",
"1992790156",
"142803501"
],
"abstract": [
"This paper presents a phonetic and visemic information-based audio-visual speech recognizer (AVSR). Active appearance model (AAM) is used to extract the visual features as it finely represents the shape and appearance information extracted from the jaw and lip region. Consideration of visual features along with traditional acoustic features has been found to be promising in the complex auditory environment. However, most of the existing AVSR systems rarely faced the visual domain problems. In this work, a real world multiple camera corpus audio visual in car (AVICAR) is used for the speech recognition experiment. Texas Instruments and Massachusetts Institute of Technology (TIMIT) corpus sentence portion is used to study the performance of the bimodal audio-visual speech recognizer. To consider the \"McGurk\" effect, acoustic and visual models are trained according to phonetic and visemic information, respectively. The phonetic-visemic AVSR system shows significant improvement over the phonetic AVSR system.",
"Human lip-readers are increasingly being presented as useful in the gathering of forensic evidence but, like all humans, suffer from unreliability. Here we report the results of a long-term study in automatic lip-reading with the objective of converting video-to-text (V2T). The V2T problem is surprising in that some aspects that look tricky, such as real-time tracking of the lips on poor-quality interlaced video from hand-held cameras, but prove to be relatively tractable. Whereas the problem of speaker independent lip-reading is very demanding due to unpredictable variations between people. Here we review the problem of automatic lip-reading for crime fighting and identify the critical parts of the problem.",
"We have made significant progress in automatic speech recognition (ASR) for well-defined applications like dictation and medium vocabulary transaction processing tasks in relatively controlled environments. However, ASR performance has yet to reach the level required for speech to become a truly pervasive user interface. Indeed, even in “clean” acoustic environments, and for a variety of tasks, state of the art ASR system performance lags human speech perception by up to an order of magnitude (Lippmann, 1997). In addition, current systems are quite sensitive to channel, environment, and style of speech variations. A number of techniques for improving ASR robustness have met limited success in severely degraded environments, mismatched to system training (Ghitza, 1986; , 1989; Juang, 1991; , 1993; Hermansky and Morgan, 1994; Neti, 1994; Gales, 1997; , 2001). Clearly, novel, non-traditional approaches, that use orthogonal sources of information to the acoustic input, are needed to achieve ASR performance closer to the human speech perception level, and robust enough to be deployable in field applications. Visual speech is the most promising source of additional speech information, and it is obviously not affected by the acoustic environment and noise."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | The next step in the recognition system is the classification of the utterance, traditionally performed through a system composed of Hidden Markov Models (HMMs) with Gaussian Mixture Models (GMMs). The GMMs model the acoustics, i.e., the phonemes (or visemes in visual speech), while the states of an HMM model the time evolution within a phoneme and the overall evolution within and between words @cite_17. | {
"cite_N": [
"@cite_17"
],
"mid": [
"142803501"
],
"abstract": [
"We have made significant progress in automatic speech recognition (ASR) for well-defined applications like dictation and medium vocabulary transaction processing tasks in relatively controlled environments. However, ASR performance has yet to reach the level required for speech to become a truly pervasive user interface. Indeed, even in “clean” acoustic environments, and for a variety of tasks, state of the art ASR system performance lags human speech perception by up to an order of magnitude (Lippmann, 1997). In addition, current systems are quite sensitive to channel, environment, and style of speech variations. A number of techniques for improving ASR robustness have met limited success in severely degraded environments, mismatched to system training (Ghitza, 1986; , 1989; Juang, 1991; , 1993; Hermansky and Morgan, 1994; Neti, 1994; Gales, 1997; , 2001). Clearly, novel, non-traditional approaches, that use orthogonal sources of information to the acoustic input, are needed to achieve ASR performance closer to the human speech perception level, and robust enough to be deployable in field applications. Visual speech is the most promising source of additional speech information, and it is obviously not affected by the acoustic environment and noise."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | Even though there are still many studies working on texture-based features such as DCT, DCT-HiLDA, or scattering @cite_14 and, similarly, many researchers still work with GMM-HMM recognition systems, more focus has recently been put on deep learning techniques. These networks are widely used in both audio speech recognition and visual recognition tasks to extract features, construct acoustic models, or replace the complete recognition chain. | {
"cite_N": [
"@cite_14"
],
"mid": [
"1503933356"
],
"abstract": [
"In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of 41% under clean conditions on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of 35.83%, demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of 34.03%."
]
} |
1710.07161 | 2602044573 | Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because of the lower amount of information in the visual articulations compared to the audible utterance. In this work, principal component analysis is applied to the image patches, extracted from the video data, to learn the weights of a two-stage convolutional network. Block histograms are then extracted as the unsupervised learning features. These features are employed to learn a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method has outperformed the baseline techniques applied to the OuluVS2 audiovisual database for phrase recognition, with the frontal-view cross-validation and testing sentence correctness reaching 79% and 73%, respectively, as compared to the baseline of 74% on cross-validation. | In the recent literature, deep-network-based approaches have consistently shown superior performance over traditional methods. Deep Boltzmann machines have been used as stacked autoencoders for feature extraction @cite_0 or for post-processing of local binary patterns from three orthogonal planes (LBP-TOP) @cite_2. These features are then classified using Support Vector Machines (SVMs) @cite_0, where all utterance lengths have to be normalized, or using a tandem system @cite_23, where the features are passed into a GMM-HMM recognizer @cite_13 @cite_12 @cite_2. Similarly, feature extraction has been performed by convolutional neural networks (CNNs) @cite_12 @cite_3 and deep belief networks (DBNs) @cite_13.
The outputs of these networks can be used as an acoustic model in the so-called hybrid approach, where the posterior probability outputs are passed directly to the HMM @cite_11. Finally, the recognition system itself can be replaced by DNNs, either in the form of bilinear networks @cite_14 or recurrent neural networks @cite_7. In the former case, DNNs are used to classify texture-based features, while in the latter case the whole processing chain is replaced by an LSTM network. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"1503933356",
"2267805933",
"2243738093",
"2184188583",
"2165712214",
"2208724044",
"2022799064",
"2076462394",
"811578723"
],
"abstract": [
"In this paper, we present methods in deep multimodal learning for fusing speech and visual modalities for Audio-Visual Automatic Speech Recognition (AV-ASR). First, we study an approach where uni-modal deep networks are trained separately and their final hidden layers fused to obtain a joint feature space in which another deep network is built. While the audio network alone achieves a phone error rate (PER) of 41% under clean conditions on the IBM large vocabulary audio-visual studio dataset, this fusion model achieves a PER of 35.83%, demonstrating the tremendous value of the visual channel in phone classification even in audio with high signal to noise ratio. Second, we present a new deep network architecture that uses a bilinear softmax layer to account for class specific correlations between modalities. We show that combining the posteriors from the bilinear networks with those from the fused model mentioned above results in a further significant phone error rate reduction, yielding a final PER of 34.03%.",
"Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feedforward and recurrent neural network layers (namely Long Short-Term Memory; LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (11.6% improvement over the best feature-based solution evaluated).",
"This paper deals with robust modelling of mouth shapes in the context of sign language recognition using deep convolutional neural networks. Sign language mouth shapes are difficult to annotate and thus hardly any publicly available annotations exist. As such, this work exploits related information sources as weak supervision. Humans mainly look at the face during sign language communication, where mouth shapes play an important role and constitute natural patterns with large variability. However, most scientific research on sign language recognition still disregards the face. Hardly any works explicitly focus on mouth shapes. This paper presents our advances in the field of sign language recognition. We contribute in the following areas: We present a scheme to learn a convolutional neural network in a weakly supervised fashion without explicit frame labels. We propose a way to incorporate neural network classifier outputs into an HMM approach. Finally, we achieve a significant improvement in classification performance of mouth shapes over the current state of the art.",
"Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.",
"Hidden Markov model speech recognition systems typically use Gaussian mixture models to estimate the distributions of decorrelated acoustic feature vectors that correspond to individual subword units. By contrast, hybrid connectionist-HMM systems use discriminatively-trained neural networks to estimate the probability distribution among subword units given the acoustic observations. In this work we show a large improvement in word recognition performance by combining neural-net discriminative feature processing with Gaussian-mixture distribution modeling. By training the network to generate the subword probability posteriors, then using transformations of these estimates as the base features for a conventionally-trained Gaussian-mixture based system, we achieve relative error rate reductions of 35 or more on the multicondition Aurora noisy continuous digits task.",
"This paper presents a novel feature learning method for visual speech recognition using Deep Boltzmann Machines (DBM). Unlike all existing visual feature extraction techniques which solely extracts features from video sequences, our method is able to explore both acoustic information and visual information to learn a better visual feature representation in the training stage. During the test stage, instead of using both audio and visual signals, only the videos are used for generating the missing audio feature, and both the given visual and given audio features are used to obtain a joint representation. We carried out our experiments on a large scale audio-visual data corpus, and experimental results show that our proposed techniques outperforms the performance of the hadncrafted features and features learned by other commonly used deep learning techniques.",
"Deep belief networks (DBN) have shown impressive improvements over Gaussian mixture models for automatic speech recognition. In this work we use DBNs for audio-visual speech recognition; in particular, we use deep learning from audio and visual features for noise robust speech recognition. We test two methods for using DBNs in a multimodal setting: a conventional decision fusion method that combines scores from single-modality DBNs, and a novel feature fusion method that operates on mid-level features learned by the single-modality DBNs. On a continuously spoken digit recognition task, our experiments show that these methods can reduce word error rate by as much as 21 relative over a baseline multi-stream audio-visual GMM HMM system.",
"Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for reliable speech recognition, particularly when the audio is corrupted by noise. However, cautious selection of sensory features is crucial for attaining high recognition performance. In the machine-learning community, deep learning approaches have recently attracted increasing attention because deep neural networks can effectively extract robust latent features that enable various recognition algorithms to demonstrate revolutionary generalization capabilities under diverse application conditions. This study introduces a connectionist-hidden Markov model (HMM) system for noise-robust AVSR. First, a deep denoising autoencoder is utilized for acquiring noise-robust audio features. By preparing the training data for the network with pairs of consecutive multiple steps of deteriorated audio features and the corresponding clean features, the network is trained to output denoised audio features from the corresponding features deteriorated by noise. Second, a convolutional neural network (CNN) is utilized to extract visual features from raw mouth area images. By preparing the training data for the CNN as pairs of raw images and the corresponding phoneme label outputs, the network is trained to predict phoneme labels from the corresponding mouth area input images. Finally, a multi-stream HMM (MSHMM) is applied for integrating the acquired audio and visual HMMs independently trained with the respective features. By comparing the cases when normal and denoised mel-frequency cepstral coefficients (MFCCs) are utilized as audio features to the HMM, our unimodal isolated word recognition results demonstrate that approximately 65 word recognition rate gain is attained with denoised MFCCs under 10 dB signal-to-noise-ratio (SNR) for the audio signal input. Moreover, our multimodal isolated word recognition results utilizing MSHMM with denoised MFCCs and acquired visual features demonstrate that an additional word recognition rate gain is attained for the SNR conditions below 10 dB.",
"Keywords: speech Reference EPFL-CONF-82487 Record created on 2006-03-10, modified on 2017-05-10"
]
} |
1710.07107 | 2767006446 | Studying IP traffic is crucial for many applications. We focus here on the detection of (structurally and temporally) dense sequences of interactions, that may indicate botnets or coordinated network scans. More precisely, we model a MAWI capture of IP traffic as a link stream, i.e. a sequence of interactions @math meaning that devices @math and @math exchanged packets from time @math to time @math . This traffic is captured on a single router and so has a bipartite structure: links occur only between nodes in two disjoint sets. We design a method for finding interesting bipartite cliques in such link streams, i.e. two sets of nodes and a time interval such that all nodes in the first set are linked to all nodes in the second set throughout the time interval. We then explore the bipartite cliques present in the considered trace. Comparison with the MAWILab classification of anomalous IP addresses shows that the found cliques succeed in detecting anomalous network activity. | Graph-based approaches are however limited in their ability to capture temporal information, crucial for traffic analysis. Indeed, they generally rely on splitting data into time slices, and then aggregate the traffic occurring in each slice into a (possibly weighted, directed, and/or bipartite) graph. One obtains in this way a sequence of graphs, and one may study the evolution of their properties, see for instance @cite_0 . However, choosing small time slices leads to almost empty graphs and brings little information. Conversely, large slices lead to an important loss of information as the dynamics within each slice is ignored. As a consequence, choosing appropriate sizes for time slices is extremely difficult and is a research topic in itself @cite_6 . There is currently an important interdisciplinary effort to solve these issues by defining formalisms able to deal with both the structure and dynamics of such data.
The link stream approach is one of them @cite_9 , as are temporal networks and time-varying graphs @cite_7 @cite_12 . To the best of our knowledge, these other approaches have not yet been applied to network traffic analysis. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_12"
],
"mid": [
"1741689439",
"1890592509",
"2531419941",
"2070461682",
"1937334562"
],
"abstract": [
"Graph-based models form a fundamental aspect of data representation in Data Sciences and play a key role in modeling complex networked systems. In particular, recently there is an ever-increasing interest in modeling dynamic complex networks, i.e. networks in which the topological structure (nodes and edges) may vary over time. In this context, we propose a novel model for representing finite discrete Time-Varying Graphs (TVGs), which are typically used to model dynamic complex networked systems. We analyze the data structures built from our proposed model and demonstrate that, for most practical cases, the asymptotic memory complexity of our model is in the order of the cardinality of the set of edges. Further, we show that our proposal is an unifying model that can represent several previous (classes of) models for dynamic networks found in the recent literature, which in general are unable to represent each other. In contrast to previous models, our proposal is also able to intrinsically model cyclic (i.e. periodic) behavior in dynamic networks. These representation capabilities attest the expressive power of our proposed unifying model for TVGs. We thus believe our unifying model for TVGs is a step forward in the theoretical foundations for data analysis of complex networked systems.",
"We introduce delta-cliques, that generalize graph cliques to link streams time-varying graphs.We provide a greedy algorithm to compute all delta-cliques of a link stream.Implementation available on http: www.github.com JordanV delta-cliques. A link stream is a collection of triplets ( t , u , v ) indicating that an interaction occurred between u and v at time t. We generalize the classical notion of cliques in graphs to such link streams: for a given Δ, a Δ-clique is a set of nodes and a time interval such that all pairs of nodes in this set interact at least once during each sub-interval of duration Δ. We propose an algorithm to enumerate all maximal (in terms of nodes or time interval) cliques of a link stream, and illustrate its practical relevance to a real-world contact trace.",
"Many dynamic networks coming from real-world contexts are link streams, i.e. a finite collection of triplets (u,v,t) where u and v are two nodes having a link between them at time t. A great number of studies on these objects start by aggregating the data on disjoint time windows of length Δ in order to obtain a series of graphs on which are made all subsequent analyses. Here we are concerned with the impact of the chosen Δ on the obtained graph series. We address the fundamental question of knowing whether a series of graphs formed using a given Δ faithfully describes the original link stream. We answer the question by showing that such dynamic networks exhibit a threshold for Δ, which we call the saturation scale, beyond which the properties of propagation of the link stream are altered, while they are mostly preserved before. We design an automatic method to determine the saturation scale of any link stream, which we apply and validate on several real-world datasets.",
"Detecting events such as major routing changes or congestions in the dynamics of the internet topology is an important but challenging task. We explore here an empirical approach based on a notion of statistically significant events. It consists in identifying properties of graph dynamics which exhibit a homogeneous distribution with outliers, corresponding to events. We apply this approach to ego-centered measurements of the internet topology (views obtained from a single monitor) and show that it succeeds in detecting meaningful events. Finally, we give some hints for the interpretation of detected events in terms of network operations.",
"The power of any kind of network approach lies in the ability to simplify a complex system so that one can better understand its function as a whole. Sometimes it is beneficial, however, to include more information than in a simple graph of only nodes and links. Adding information about times of interactions can make predictions and mechanistic understanding more accurate. The drawback, however, is that there are not so many methods available, partly because temporal networks is a relatively young field, partly because it is more difficult to develop such methods compared to for static networks. In this colloquium, we review the methods to analyze and model temporal networks and processes taking place on them, focusing mainly on the last three years. This includes the spreading of infectious disease, opinions, rumors, in social networks; information packets in computer networks; various types of signaling in biology, and more. We also discuss future directions."
]
} |
1710.06993 | 2766820688 | Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. There has been considerable research on generating efficient image representation via deep-network-based hashing methods. However, the issue of efficient searching in the deep representation space remains largely unsolved. To this end, we propose a simple yet efficient deep-network-based multi-index hashing method for simultaneously learning powerful image representations and efficient searching. To achieve these two goals, we introduce the multi-index hashing (MIH) mechanism into the proposed deep architecture, which divides the binary codes into multiple substrings. Because non-uniformly distributed codes result in inefficient searching, we add two balanced constraints at the feature level and instance level, respectively. Extensive evaluations on several benchmark image retrieval datasets show that the learned balanced binary codes bring dramatic speedups and achieve comparable performance to the existing baselines. | Learning-to-hash methods learn hash functions from the training data to generate better binary representations. Representative methods include Iterative Quantization (ITQ) @cite_26 , Kernelized LSH (KLSH) @cite_20 , Anchor Graph Hashing (AGH) @cite_12 , Spectral Hashing (SH) @cite_1 , Semi-Supervised Hashing (SSH) @cite_17 , Kernel-based Supervised Hashing (KSH) @cite_14 , Minimal Loss Hashing (MLH) @cite_37 , Binary Reconstructive Embedding (BRE) @cite_23 , and so on. A comprehensive survey can be found in @cite_25 . | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_14",
"@cite_1",
"@cite_23",
"@cite_20",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2221852422",
"2084363474",
"1992371516",
"",
"2164338181",
"2171790913",
"2411707397",
"2251864938",
"2044195942"
],
"abstract": [
"We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.",
"Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 .",
"",
"Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.",
"Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.",
"Nearest neighbor search is a problem of finding the data points from the database such that the distances from them to the query point are the smallest. Learning to hash is one of the major solutions to this problem and has been widely studied recently. In this paper, we present a comprehensive survey of the learning to hash algorithms, categorize them according to the manners of preserving the similarities into: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, as well as quantization, and discuss their relations. We separate quantization from pairwise similarity preserving as the objective function is very different though quantization, as we show, can be derived from preserving the pairwise similarities. In addition, we present the evaluation protocols, and the general performance analysis, and point out that the quantization algorithms perform superiorly in terms of search accuracy, search time cost, and space cost. Finally, we introduce a few emerging topics.",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.",
"Large scale image search has recently attracted considerable attention due to easy availability of huge amounts of data. Several hashing methods have been proposed to allow approximate but highly efficient search. Unsupervised hashing methods show good performance with metric distances but, in image search, semantic similarity is usually given in terms of labeled pairs of images. There exist supervised hashing methods that can handle such semantic similarity but they are prone to overfitting when labeled data is small or noisy. Moreover, these methods are usually very slow to train. In this work, we propose a semi-supervised hashing method that is formulated as minimizing empirical error on the labeled data while maximizing variance and independence of hash bits over the labeled and unlabeled data. The proposed method can handle both metric as well as semantic similarity. The experimental results on two large datasets (up to one million samples) demonstrate its superior performance over state-of-the-art supervised and unsupervised methods."
]
} |
1710.06993 | 2766820688 | Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. There has been considerable research on generating efficient image representation via deep-network-based hashing methods. However, the issue of efficient searching in the deep representation space remains largely unsolved. To this end, we propose a simple yet efficient deep-network-based multi-index hashing method for simultaneously learning powerful image representations and efficient searching. To achieve these two goals, we introduce the multi-index hashing (MIH) mechanism into the proposed deep architecture, which divides the binary codes into multiple substrings. Because non-uniformly distributed codes result in inefficient searching, we add two balanced constraints at the feature level and instance level, respectively. Extensive evaluations on several benchmark image retrieval datasets show that the learned balanced binary codes bring dramatic speedups and achieve comparable performance to the existing baselines. | Although deep learning-to-hash methods obtain powerful image representations, existing works often do not consider fast searching in the learned code space. Multi-index hashing @cite_0 @cite_31 is an efficient method for finding all @math -neighbors of a query by dividing the binary codes into multiple substrings. However, binary codes learned from deep networks are often not uniformly distributed in practice, e.g., all images with the same label are indexed with a similar key as shown in Figure , which costs much time in checking many candidate codes. In this paper, we solve this problem by adding two balanced constraints to our network, and learn more uniformly distributed binary codes. | {
"cite_N": [
"@cite_0",
"@cite_31"
],
"mid": [
"1729998890",
"2041878876"
],
"abstract": [
"We describe a technique for building hash indices for a large dictionary of strings. This technique permits robust retrieval of strings from the dictionary even when the query pattern has a significant number of errors. This technique is closely related to the classical Turan problem for hypergraphs. We propose a general method of multi-index construction by generalizing certain Turan hypergraphs. We also develop an accompanying theory for analyzing such hashing schemes. The resulting algorithms have been implemented and can be applied to a wide variety of recognition and retrieval problems. >",
"There is growing interest in representing image data and feature descriptors using compact binary codes for fast near neighbor search. Although binary codes are motivated by their use as direct indices (addresses) into a hash table, codes longer than 32 bits are not being used as such, as it was thought to be ineffective. We introduce a rigorous way to build multiple hash tables on binary code substrings that enables exact k-nearest neighbor search in Hamming space. The approach is storage efficient and straight-forward to implement. Theoretical analysis shows that the algorithm exhibits sub-linear run-time behavior for uniformly distributed codes. Empirical results show dramatic speedups over a linear scan baseline for datasets of up to one billion codes of 64, 128, or 256 bits."
]
} |
1710.06925 | 2765944859 | We present an interactive visualization system for exploring the coverage in sensor networks with uncertain sensor locations. We consider a simple case of uncertainty where the location of each sensor is confined to a discrete number of points sampled uniformly at random from a region with a fixed radius. Employing techniques from topological data analysis, we model and visualize network coverage by quantifying the uncertainty defined on its simplicial complex representations. We demonstrate the capabilities and effectiveness of our tool via the exploration of randomly distributed sensor networks. | Deterministic models of coverage using topological methods. In their seminal work on sensor networks, de Silva and Ghrist @cite_22 @cite_35 @cite_4 consider the determination of coverage with minimal sensing capabilities without coordinate information. They demonstrate that, given a minimal set of assumptions, one can compute coverage over a compact domain through the use of simplicial complexes and persistent homology @cite_9 . Their model, while based on unknown node location, nevertheless assumes those locations are deterministic. @cite_26 recently generalized the assumptions on the boundaries to make the results applicable to general domains. @cite_7 extend this concept further to consider a time-varying network. They utilize zigzag persistent homology @cite_32 (a variation of persistent homology) to identify holes in the coverage area. Adams and Carlsson @cite_28 use a similar approach to determine if evasion paths exist within a time-varying sensor network, that is, if a moving intruder can avoid detection in a time-varying setting. | {
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_32"
],
"mid": [
"",
"2568738501",
"",
"2110541499",
"2003885320",
"2125650083",
"",
"2158407067"
],
"abstract": [
"",
"In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it's possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered. Here, coverage means that every point in the domain (except possibly those very near the boundary) has a nearby sensor. More generally, their algorithm takes a pair of nested neighborhood graphs along with a labeling of vertices as either boundary or interior and computes the relative homology of a simplicial complex induced by the graphs. This approach, called the Topological Coverage Criterion (TCC), requires some assumptions about the underlying geometric domain as well as some assumptions about the relationship of the input graphs to the domain. The goal of this paper is to generalize these assumptions and show how the TCC can be applied to both much more general domains as well as very weak assumptions on the input. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects, allowing us to provide a coverage condition that supports thick boundaries, k-coverage, and weighted coverage, in which sensors have varying radii.",
"",
"Tools from computational homology are introduced to verify coverage in an idealized sensor network. These methods are unique in that, while they are coordinate-free and assume no localization or orientation capabilities for the nodes, there are also no probabilistic assumptions. The key ingredient is the theory of homology from algebraic topology. The robustness of these tools is demonstrated by adapting them to a variety of settings, including static planar coverage, 3-D barrier coverage, and time-dependent sweeping coverage. Results are also given on hole repair, error tolerance, optimal coverage, and variable radii. An overview of implementation is given.",
"In the study of sensor networks, many applications require topological analysis, and for some problems topological information is even sufficient. Here, we review how algebraic topology (and specifically simplicial homology theory) can be used as a general framework for detection of coverage holes in a coordinate-free sensor network. Extensions to distributed processing and localization algorithms are also reviewed, before progressing into discussion of a new way to apply algebraic topological methods to the analysis of coverage properties in dynamic sensor networks. Zigzag persistent homology is a recently developed method to track homological features (such as holes) over a sequence of spaces. This paper demonstrates the promise of this method for the identification of coverage holes in a time-varying coordinate-free sensor network, as well as the designation of coverage holes as significant or not, based on the length of time they are present in the sequence.",
"Suppose that ball-shaped sensors wander in a bounded domain. A sensor does not know its location but does know when it overlaps a nearby sensor. We say that an evasion path exists in this sensor network if a moving intruder can avoid detection. In 'Coordinate-free coverage in sensor networks with controlled boundaries via homology', Vin de Silva and Robert Ghrist give a necessary condition, depending only on the time-varying connectivity data of the sensors, for an evasion path to exist. Using zigzag persistent homology, we provide an equivalent condition that moreover can be computed in a streaming fashion. However, no method with time-varying connectivity data as input can give necessary and sufficient conditions for the existence of an evasion path. Indeed, we show that the existence of an evasion path depends not only on the fibrewise homotopy type of the region covered by sensors but also on its embedding in spacetime. For planar sensors that also measure weak rotation and distance information, we provide necessary and sufficient conditions for the existence of an evasion path.",
"",
"We study the problem of computing zigzag persistence of a sequence of homology groups and study a particular sequence derived from the levelsets of a real-valued function on a topological space. The result is a local, symmetric interval descriptor of the function. Our structural results establish a connection between the zigzag pairs in this sequence and extended persistence, and in the process resolve an open question associated with the latter. Our algorithmic results not only provide a way to compute zigzag persistence for any sequence of homology groups, but combined with our structural results give a novel algorithm for computing extended persistence. This algorithm is easily parallelizable and uses (asymptotically) less memory."
]
} |
1710.07132 | 2765274629 | We study a certain relaxation of the classic vertex coloring problem, namely, a coloring of vertices of undirected, simple graphs, such that there are no monochromatic triangles. We give the first classification of the problem in terms of classic and parametrized algorithms. Several computational complexity results are also presented, which improve on the previous results found in the literature. We propose the new structural parameter for undirected, simple graphs -- the triangle-free chromatic number @math . We bound @math by other known structural parameters. We also present two classes of graphs with interesting coloring properties, that play pivotal role in proving useful observation about our problem. We give/ask several conjectures/questions throughout this paper to encourage new research in the area of graph coloring. | Some researchers have already considered coloring problems that are similar to our variation. The class of planar graphs has been of particular interest, for example, Angelini and Frati @cite_14 study planar graphs that admit an acyclic 3-coloring -- a proper coloring in which every 2-chromatic subgraph is acyclic. Algorithms for acyclic coloring can be used to solve/approximate a triangle-free coloring, although we do not explore this possibility in this paper. Another result is of Kaiser and Škrekovski @cite_1 , where they prove that every planar graph has a 2-coloring such that no cycle of length 3 or 4 is monochromatic. Thomassen @cite_2 , on the other hand, considers list-coloring of planar graphs without monochromatic triangles. Few hardness results for our problem are known -- Karpiński @cite_20 showed that verifying whether a graph admits a 2-coloring without monochromatic cycles of fixed length is @math -complete.
His proof was then simplified by Shitov @cite_12 , who also proposed and proved the hardness of an extension of our problem, where an additional restriction is imposed on the coloring in the form of a set of polar edges -- edges that must not be monochromatic in the resulting coloring. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_2",
"@cite_20",
"@cite_12"
],
"mid": [
"2089322292",
"1914122257",
"2082317855",
"2542720522",
"2588373402"
],
"abstract": [
"In this paper we study the acyclic 3-colorability of some subclasses of planar graphs. First, we show that there exist infinite classes of cubic planar graphs that are not acyclically 3-colorable. Then, we show that every planar graph has a subdivision with one vertex per edge that is acyclically 3-colorable and provide a linear-time coloring algorithm. Finally, we characterize the series-parallel graphs for which every 3-coloring is acyclic and provide a linear-time recognition algorithm for such graphs.",
"It is well known that every planar graph G is 2-colorable in such a way that no 3-cycle of G is monochromatic. In this paper, we prove that G has a 2-coloring such that no cycle of length 3 or 4 is monochromatic. The complete graph K5 does not admit such a coloring. On the other hand, we extend the result to K5-minor-free graphs. There are planar graphs with the property that each of their 2-colorings has a monochromatic cycle of length 3, 4, or 5. In this sense, our result is best possible. © 2004 Wiley Periodicals, Inc. J Graph Theory 46: 25–38, 2004",
"We prove that, for every list-assignment of two colors to every vertex of any planar graph, there is a list-coloring such that there is no monochromatic triangle. This proves and extends a conjecture of B. Mohar and R. Skrekovski and a related conjecture of A. Kundgen and R. Ramamurthi.",
"In this paper we study a problem of vertex two-coloring of an undirected graph such that there is no monochromatic cycle of the given length. We show that this problem is hard to solve. We give a proof by presenting a reduction from the variation of satisfiability (SAT) problem. We show the nice properties of coloring cliques with two colors which plays pivotal role in the reduction construction.",
"For any integer k ≥ 3, we consider the following decision problem. Given a simple graph, does there exist a partition of its vertices into two disjoint sets such that every simple k-cycle of G contains vertices in both of these sets? This problem is NP-hard because it admits a polynomial reduction from NAE 3-SAT. We construct a reduction that is polynomial both in the length of the instance and in k, which answers a recent question of Karpiński."
]
} |
1710.07210 | 2765742186 | Multi-task learning in text classification leverages implicit correlations among related tasks to extract common features and yield performance gains. However, most previous works treat labels of each task as independent and meaningless one-hot vectors, which cause a loss of potential information and makes it difficult for these models to jointly learn three or more tasks. In this paper, we propose Multi-Task Label Embedding to convert labels in text classification into semantic vectors, thereby turning the original tasks into vector matching tasks. We implement unsupervised, supervised and semi-supervised models of Multi-Task Label Embedding, all utilizing semantic correlations among tasks and making it particularly convenient to scale and transfer as more tasks are involved. Extensive experiments on five benchmark datasets for text classification show that our models can effectively improve performances of related tasks with semantic representations of labels and additional information from each other. | @cite_7 utilizes a shared lookup layer for common features, followed by task-specific layers for several traditional NLP tasks including part-of-speech tagging and semantic parsing. They use a fixed-size window to solve the problem of variable-length input sequences, which can be better addressed by RNN. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2117130368"
],
"abstract": [
"We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel form of semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in state-of-the-art-performance."
]
} |
1710.07035 | 2765811365 | Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this by deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image superresolution, and classification. The aim of this review article is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application. | What other comparisons can be made between GANs and the standard tools of signal processing? For PCA, ICA, Fourier and wavelet representations, the latent space of GANs is, by analogy, the coefficient space of what we commonly refer to as transform space. What sets GANs apart from these standard tools of signal processing is the level of complexity of the models that map vectors from latent space to image space. Because the generator networks contain non-linearities, and can be of almost arbitrary depth, this mapping -- as with many other deep learning approaches -- can be extraordinarily complex. With regard to deep image-based models, modern approaches to generative image modelling can be grouped into explicit density models and implicit density models. Explicit density models are either tractable (change of variables models, autoregressive models) or intractable (directed models trained with variational inference, undirected models trained using Markov chains). Implicit density models capture the statistical distribution of the data through a generative process which makes use of either ancestral sampling @cite_36 or Markov chain-based sampling. 
GANs fall into the directed implicit model category. A more detailed overview and relevant papers can be found in Ian Goodfellow's NIPS 2016 tutorial @cite_5 . | {
"cite_N": [
"@cite_36",
"@cite_5"
],
"mid": [
"2953267151",
"2962775818"
],
"abstract": [
"Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized auto-encoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or using other forms of corruption process and reconstruction errors. Another issue is the mathematical justification which is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).",
"We introduce the \"Energy-based Generative Adversarial Network\" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions. Similar to the probabilistic GANs, a generator is seen as being trained to produce contrastive samples with minimal energies, while the discriminator is trained to assign high energies to these generated samples. Viewing the discriminator as an energy function allows to use a wide variety of architectures and loss functionals in addition to the usual binary classifier with logistic output. Among them, we show one instantiation of EBGAN framework as using an auto-encoder architecture, with the energy being the reconstruction error, in place of the discriminator. We show that this form of EBGAN exhibits more stable behavior than regular GANs during training. We also show that a single-scale architecture can be trained to generate high-resolution images."
]
} |
1710.07346 | 2757508077 | We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model "redresses" the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN . | Generative Adversarial Networks (GAN) @cite_6 have shown impressive results generating new images, faces @cite_17 , indoor scenes @cite_12 , fine-grained objects like birds @cite_14 , or clothes @cite_10 . Training GANs based on conditions incorporates further information to guide the generation process. Existing works have explored various conditions, from category labels @cite_4 , text @cite_14 to an encoded feature vector @cite_10 .
Different from the studies above, our study aims at generating the target by using the spatial configuration of the input images as a condition. The spatial configuration is carefully formulated so that it is agnostic to the clothing worn in the original image, and only captures information about the user's body. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_6",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2949999304",
"2963464195",
"2099471712",
"",
"2298992465",
"2173520492"
],
"abstract": [
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right---similar to why we study the human brain---and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization, which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network. The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images).",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"",
"Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
1710.07346 | 2757508077 | We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model "redresses" the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN . | There exist several studies that transfer an input image into a new one. Ledig et al. @cite_16 apply the GAN framework to super-resolve a low-resolution image. Zhu et al. @cite_13 use a conditional GAN to transfer across the image domains, from edge maps to real images, or from daytime images to night-time. Isola et al. @cite_13 change the viewing angle of an existing object. Johnson et al. @cite_9 apply GANs to neural style transfer.
All these studies share a common feature - the image is transformed on the texture level but is not region-specific. In this study, we explore a new compositional mapping method that allows region-specific texture generation, which provides richer textures for different body regions. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_13"
],
"mid": [
"2950689937",
"2523714292",
"2552465644"
],
"abstract": [
"We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either."
]
} |
1710.07346 | 2757508077 | We present a novel and effective approach for generating new clothing on a wearer through generative adversarial learning. Given an input image of a person and a sentence describing a different outfit, our model "redresses" the person as desired, while at the same time keeping the wearer and her/his pose unchanged. Generating new outfits with precise regions conforming to a language description while retaining wearer's body structure is a new challenging task. Existing generative adversarial networks are not ideal in ensuring global coherence of structure given both the input photograph and language description as conditions. We address this challenge by decomposing the complex generative process into two conditional stages. In the first stage, we generate a plausible semantic segmentation map that obeys the wearer's pose as a latent spatial arrangement. An effective spatial constraint is formulated to guide the generation of this semantic segmentation map. In the second stage, a generative model with a newly proposed compositional mapping layer is used to render the final image with precise regions and textures conditioned on this map. We extended the DeepFashion dataset [8] by collecting sentence descriptions for 79K images. We demonstrate the effectiveness of our approach through both quantitative and qualitative evaluations. A user study is also conducted. The codes and the data are available at this http URL edu.hk projects FashionGAN . | There are several recent studies that explore improved image generation by stacking GANs. Our work is somewhat similar in spirit to @cite_12 @cite_2 -- our idea is to have the first stage create the basic composition, and the second stage add the necessary refinements to the image generated in the first stage. However, the proposed FashionGAN differs from S @math GAN @cite_12 in that the latter aims at synthesizing a surface map from a random vector in its first stage.
In contrast, our goal is to generate a plausible mask whose structure conforms to a given photograph and language description, which requires us to design additional spatial constraints and design coding as conditions. Furthermore, these two conditions should not contradict each other. Similarly, our work requires additional constraints that are not explored in @cite_1 . Compositional mapping is not explored in the aforementioned studies either. | {
"cite_N": [
"@cite_1",
"@cite_12",
"@cite_2"
],
"mid": [
"2964024144",
"2298992465",
""
],
"abstract": [
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.",
"Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.",
""
]
} |
1710.07072 | 2766483774 | Understanding user preference is essential to the optimization of recommender systems. As a feedback of user's taste, rating scores can directly reflect the preference of a given user to a given product. Uncovering the latent components of user ratings is thus of significant importance for learning user interests. In this paper, a new recommendation approach, called LCR, was proposed by investigating the latent components of user ratings. The basic idea is to decompose an existing rating into several components via a cost-sensitive learning strategy. Specifically, each rating is assigned to several latent factor models and each model is updated according to its predictive errors. Afterwards, these accumulated predictive errors of models are utilized to decompose a rating into several components, each of which is treated as an independent part to retrain the latent factor models. Finally, all latent factor models are combined linearly to estimate predictive ratings for users. In contrast to existing methods, LCR provides an intuitive preference modeling strategy via multiple component analysis at an individual perspective. Meanwhile, it is verified by the experimental results on several benchmark datasets that the proposed method is superior to the state-of-the-art methods in terms of recommendation accuracy. | Motivated by the multi-criteria technique, we decomposed each single-criteria rating into several components @cite_31 @cite_8 . Therefore, our work is related to both the single-criteria and multi-criteria approaches. In this section, the related works are reviewed. | {
"cite_N": [
"@cite_31",
"@cite_8"
],
"mid": [
"2084127140",
"2110325612"
],
"abstract": [
"Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metr rics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time."
]
} |
1710.07072 | 2766483774 | Understanding user preference is essential to the optimization of recommender systems. As feedback of a user's taste, rating scores can directly reflect the preference of a given user for a given product. Uncovering the latent components of user ratings is thus of significant importance for learning user interests. In this paper, a new recommendation approach, called LCR, was proposed by investigating the latent components of user ratings. The basic idea is to decompose an existing rating into several components via a cost-sensitive learning strategy. Specifically, each rating is assigned to several latent factor models and each model is updated according to its predictive errors. Afterwards, these accumulated predictive errors of the models are utilized to decompose a rating into several components, each of which is treated as an independent part to retrain the latent factor models. Finally, all latent factor models are combined linearly to estimate predictive ratings for users. In contrast to existing methods, LCR provides an intuitive preference modeling strategy via multiple component analysis at an individual perspective. Meanwhile, it is verified by the experimental results on several benchmark datasets that the proposed method is superior to the state-of-the-art methods in terms of recommendation accuracy. | Compared to single-criteria recommender systems, multi-criteria recommender systems contain more information, including ratings of item attributes. The complexity of the algorithms is increased by the additional information, but in most cases the quality of recommendation can also be improved by incorporating the auxiliary data @cite_31 @cite_45 @cite_16 @cite_11 . To the best of our knowledge, examples of multi-criteria recommender systems include Zagat's Guide, Buy.com and Yahoo! Movies @cite_38 . 
To exploit the information in multi-criteria ratings, a commonly used method is to extend the computation of similarity from single-criteria ratings to multi-criteria ratings @cite_45 @cite_16 . For example, within a user-based collaborative filtering approach, the similarity of two users is computed from their single-criteria ratings, while in a multi-criteria recommender system the similarity is computed on each criterion separately and the results are then averaged over all criteria. Lee @cite_49 extended the concept of single-criterion ratings to multi-criteria ones and utilized a skyline query algorithm to find candidate items. Jannach @cite_45 made use of support vector regression to determine the relative importance of multi-criteria ratings and combined user-based and item-based regression models in a weighted way. | {
"cite_N": [
"@cite_38",
"@cite_45",
"@cite_49",
"@cite_31",
"@cite_16",
"@cite_11"
],
"mid": [
"1966373856",
"2089115299",
"2111309847",
"2084127140",
"2026962484",
"2114433479"
],
"abstract": [
"Many websites provide visitors with the possibility to evaluate each item on more than one criterion. A commonly used rating scale is the one to five-star rating system or similar linguistic scales. Such scales are ordinal but the symbolic or lexical semantics convey information about the strength of user preferences in addition to the order of rated items. We refer to such scales as discrete ordered scales. We present AHP-Rec, a method that treats user ratings as interval scale data and uses a multi-criteria approach for deriving predictions for user ratings. We use the data provided by Yahoo! Movies to demonstrate and evaluate the AHP-Rec recommender method. AHP-Rec takes as input the ratings each user gives to movies, calculates weights for each scale item that are personal for each user and provides its recommendation by aggregating preferences of similar users. Our method provides improved results over the state-of-the-art single-criterion method SVD++ and the multi-criteria method UTARec.",
"Recommender systems (RS) have shown to be valuable tools on e-commerce sites which help the customers identify the most relevant items within large product catalogs. In systems that rely on collaborative filtering, the generation of the product recommendations is based on ratings provided by the user community. While in many domains users are only allowed to attach an overall rating to the items, increasingly more online platforms allow their customers to evaluate the available items along different dimensions. Previous work has shown that these criteria ratings contain valuable information that can be exploited in the recommendation process. In this work we present new methods to leverage information derived from multi-dimensional ratings to improve the predictive accuracy of such multi-criteria recommender systems. In particular, we propose to use Support Vector regression to determine the relative importance of the individual criteria ratings and suggest to combine user- and item-based regression models in a weighted approach. Beside the automatic adjustment and optimization of the combination weights, we also explore different feature selection strategies to further improve the quality of the recommendations. An experimental analysis on two real-world rating datasets reveals that our method outperforms both recent single-rating algorithms based on matrix factorization as well as previous methods based on multi-criteria ratings in terms of the predictive accuracy. We therefore see the usage of multi-criteria customer ratings as a promising opportunity for e-commerce sites to improve the quality and precision of their online recommendation services.",
"Recommendation systems apply information techniques to the problem of helping users find the items they would like. Example applications include the recommendation systems for movies, books, CDs and many others. As recommendation systems emerge as an independent research area, the rating structure plays a critical role in recent studies. Among many alternatives, the collaborative filtering algorithms are generally accepted to be successful to estimate user ratings of unseen items and then to derive proper recommendations. In this paper, we extend the concept of single criterion ratings to multi-criteria ones, i.e., an item can be evaluated in many different aspects. Since there are usually conflicts among different criteria, the recommendation problem cannot be formulated as an optimization problem any more. Instead, we propose to use data query techniques to solve this multi-criteria recommendation problem.",
"Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"The rapid development of Internet technologies in recent decades has imposed a heavy information burden on users. This has led to the popularity of recommender systems, which provide advice to users about items they may like to examine. Collaborative Filtering (CF) is the most promising technique in recommender systems, providing personalized recommendations to users based on their previously expressed preferences and those of other similar users. This paper introduces a CF framework based on Fuzzy Association Rules and Multiple-level Similarity (FARAMS). FARAMS extended existing techniques by using fuzzy association rule mining, and takes advantage of product similarities in taxonomies to address data sparseness and nontransitive associations. Experimental results show that FARAMS improves prediction quality, as compared to similar approaches.",
"Research in recommender systems is now starting to recognise the importance of multiple selection criteria to improve the recommendation output. In this paper, we present a novel approach to multi-criteria recommendation, based on the idea of clustering users in \"preference lattices\" (partial orders) according to their criteria preferences. We assume that some selection criteria for an item (product or a service) will dominate the overall ranking, and that these dominant criteria will be different for different users. Following this assumption, we cluster users based on their criteria preferences, creating a \"preference lattice\". The recommendation output for a user is then based on ratings by other users from the same or close clusters. Having introduced the general approach of clustering, we proceed to formulate three alternative recommendation methods instantiating the approach: (a) using the aggregation function of the criteria, (b) using the overall item ratings, and (c) combining clustering with collaborative filtering. We then evaluate the accuracy of the three methods using a set of experiments on a service ranking dataset, and compare them with a conventional collaborative filtering approach extended to cover multiple criteria. The results indicate that our third method, which combines clustering and extended collaborative filtering, produces the highest accuracy."
]
} |
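The criterion-wise similarity scheme described in this row's related_work field (compute a single-criterion similarity per criterion, then average over all criteria) can be sketched in a few lines. This is a minimal illustration, not the cited papers' implementation; the toy ratings and the choice of cosine similarity are assumptions made for the example.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two single-criterion rating vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def multicriteria_similarity(u, v):
    """Average the single-criterion similarities over all k criteria.

    u, v: arrays of shape (n_items, k_criteria) holding two users'
    ratings of the same items on each criterion.
    """
    k = u.shape[1]
    return sum(cosine_sim(u[:, c], v[:, c]) for c in range(k)) / k

# Two users rating 3 common items on 2 criteria (e.g. story, visuals).
u = np.array([[5, 4], [3, 2], [4, 4]], dtype=float)
v = np.array([[4, 5], [2, 3], [5, 4]], dtype=float)
print(multicriteria_similarity(u, v))
```

A weighted variant, with per-criterion weights learned for instance by support vector regression as in the Jannach reference above, would simply replace the plain mean with a weighted mean.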
1710.06555 | 2750966862 | Person Re-identification (ReID) aims to identify the same person across different cameras. It is a challenging task due to the large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and is still an open problem today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over the full body and body parts, which can capture the local context knowledge well by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can relieve some difficulties, e.g. pose variations and background clutter, in part-based representation. Finally, we integrate the representation learning processes of the full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on current challenging large-scale person ReID datasets, including the image-based Market1501 and CUHK03 and the sequence-based MARS datasets, show that the proposed method achieves state-of-the-art results. | Deep learning approaches for person ReID tend to learn the person representation and the similarity (distance) metric jointly. Given a pair of person images, previous deep learning approaches learn each person's features followed by a deep matching function from the convolutional features @cite_17 @cite_12 @cite_11 @cite_44 or the Fully Connected (FC) features @cite_53 @cite_50 @cite_46 . In addition to deep metric learning, some work directly learns the image representation through a pair-wise contrastive loss or a triplet ranking loss, and uses the Euclidean metric for comparison @cite_56 @cite_20 @cite_31 @cite_24 . | {
"cite_N": [
"@cite_31",
"@cite_53",
"@cite_17",
"@cite_56",
"@cite_44",
"@cite_24",
"@cite_50",
"@cite_46",
"@cite_20",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"1982925187",
"1971955426",
"",
"",
"2336626189",
"2519373641",
"",
"1928419358",
""
],
"abstract": [
"",
"",
"Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13,164 images of 1,360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"Identifying the same individual across different scenes is an important yet difficult task in intelligent video surveillance. Its main difficulty lies in how to preserve similarity of the same person against large appearance and structure variation while discriminating different individuals. In this paper, we present a scalable distance driven feature learning framework based on the deep neural network for person re-identification, and demonstrate its effectiveness to handle the existing challenges. Specifically, given the training images with the class labels (person IDs), we first produce a large number of triplet units, each of which contains three images, i.e. one person with a matched reference and a mismatched reference. Treating the units as the input, we build the convolutional neural network to generate the layered representations, and follow with the L2 distance metric. By means of parameter optimization, our framework tends to maximize the relative distance between the matched pair and the mismatched pair for each triplet unit. Moreover, a nontrivial issue arising with the framework is that the triplet organization cubically enlarges the number of training triplets, as one image can be involved in several triplet units. To overcome this problem, we develop an effective triplet generation scheme and an optimized gradient descent algorithm, making the computational load mainly depend on the number of original images instead of the number of triplets. On several challenging databases, our approach achieves very promising results and outperforms other state-of-the-art approaches. Highlights: We present a novel feature learning framework for person re-identification. Our framework is based on the maximum relative distance comparison. The learning algorithm is scalable to process large amounts of data. We demonstrate superior performance over other state-of-the-art approaches.",
"",
"",
"The past decade has witnessed the rapid development of feature representation learning and distance metric learning, whereas the two steps are often discussed separately. To explore their interaction, this work proposes an end-to-end learning framework called DARI, i.e. Distance metric And Representation Integration, and validates the effectiveness of DARI in the challenging task of person verification. Given the training images annotated with the labels, we first produce a large number of triplet units, and each one contains three images, i.e. one person together with a matched and a mismatched reference. For each triplet unit, the distance disparity between the matched pair and the mismatched pair tends to be maximized. We solve this objective by building a deep architecture of convolutional neural networks. In particular, the Mahalanobis distance matrix is naturally factorized as one top fully-connected layer that is seamlessly integrated with other bottom layers representing the image feature. The image feature and the distance metric can be thus simultaneously optimized via the one-shot backward propagation. On several public datasets, DARI shows very promising performance on re-identifying individuals across cameras against various challenges, and outperforms other state-of-the-art approaches.",
"Person re-identification is challenging due to the large variations of pose, illumination, occlusion and camera view. Owing to these variations, the pedestrian data is distributed as highly-curved manifolds in the feature space, despite the current convolutional neural networks (CNN)’s capability of feature extraction. However, the distribution is unknown, so it is difficult to use the geodesic distance when comparing two samples. In practice, the current deep embedding methods use the Euclidean distance for the training and test. On the other hand, the manifold learning methods suggest to use the Euclidean distance in the local range, combining with the graphical relationship between samples, for approximating the geodesic distance. From this point of view, selecting suitable positive (i.e. intra-class) training samples within a local range is critical for training the CNN embedding, especially when the data has large intra-class variations. In this paper, we propose a novel moderate positive sample mining method to train robust CNN for person re-identification, dealing with the problem of large variation. In addition, we improve the learning by a metric weight constraint, so that the learned metric has a better generalization ability. Experiments show that these two strategies are effective in learning robust deep metrics for person re-identification, and accordingly our deep model significantly outperforms the state-of-the-art methods on several benchmarks of person re-identification. Therefore, the study presented in this paper may be useful in inspiring new designs of deep models for person re-identification.",
"",
"In this work, we propose a method for simultaneously learning features and a corresponding similarity metric for person re-identification. We present a deep convolutional architecture with layers specially designed to address the problem of re-identification. Given a pair of images as input, our network outputs a similarity value indicating whether the two input images depict the same person. Novel elements of our architecture include a layer that computes cross-input neighborhood differences, which capture local relationships between the two input images based on mid-level features from each input image. A high-level summary of the outputs of this layer is computed by a layer of patch summary features, which are then spatially integrated in subsequent layers. Our method significantly outperforms the state of the art on both a large data set (CUHK03) and a medium-sized data set (CUHK01), and is resistant to over-fitting. We also demonstrate that by initially training on an unrelated large data set before fine-tuning on a small target data set, our network can achieve results comparable to the state of the art even on a small data set (VIPeR).",
""
]
} |
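The triplet ranking objective mentioned in this row's related_work field (pull the matched pair closer than the mismatched pair under a Euclidean metric) reduces to a simple hinge loss on distances. The sketch below is illustrative only: the embedding vectors and the margin value are made-up placeholders, and in the cited work the loss is backpropagated through a CNN rather than applied to fixed NumPy arrays.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet ranking loss on embedding vectors.

    Encourages the matched (anchor, positive) pair to be closer than
    the mismatched (anchor, negative) pair by at least `margin`,
    measured with squared Euclidean distance.
    """
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.10, 0.90])  # embedding of a query image
p = np.array([0.15, 0.85])  # same identity
n = np.array([0.90, 0.10])  # different identity
print(triplet_loss(a, p, n))  # → 0.0, this triplet already satisfies the margin
```

The pair-wise contrastive loss mentioned alongside it is the two-sample analogue: it penalizes distance for matched pairs and hinge-penalizes closeness for mismatched ones.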
1710.06555 | 2750966862 | Person Re-identification (ReID) aims to identify the same person across different cameras. It is a challenging task due to the large variations in person pose, occlusion, background clutter, etc. How to extract powerful features is a fundamental problem in ReID and is still an open problem today. In this paper, we design a Multi-Scale Context-Aware Network (MSCAN) to learn powerful features over the full body and body parts, which can capture the local context knowledge well by stacking multi-scale convolutions in each layer. Moreover, instead of using predefined rigid parts, we propose to learn and localize deformable pedestrian parts using Spatial Transformer Networks (STN) with novel spatial constraints. The learned body parts can relieve some difficulties, e.g. pose variations and background clutter, in part-based representation. Finally, we integrate the representation learning processes of the full body and body parts into a unified framework for person ReID through multi-class person identification tasks. Extensive evaluations on current challenging large-scale person ReID datasets, including the image-based Market1501 and CUHK03 and the sequence-based MARS datasets, show that the proposed method achieves state-of-the-art results. | With the increasing sample sizes of ReID datasets, the IDE feature, which is learned through multi-class person identification tasks, has shown great potential on current large-scale person ReID datasets. Xiao et al. @cite_32 propose domain guided dropout to learn features over multiple datasets simultaneously with an identity classification loss. Zheng et al. @cite_37 learn the IDE feature for video-based person re-identification. Xiao et al. @cite_4 and Zheng et al. @cite_21 learn the IDE feature to jointly solve the pedestrian detection and person ReID tasks. Schumann et al. @cite_15 learn the IDE feature for domain-adaptive person ReID. A similar phenomenon has also been validated in face recognition @cite_33 . | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_15"
],
"mid": [
"",
"2339827301",
"1998808035",
"2337600727",
"2342611082",
"2533124984"
],
"abstract": [
"",
"Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks---pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a large-scale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes this http URL . We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.",
"This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces.",
"We present a novel large-scale dataset and comprehensive baselines for end-to-end pedestrian detection and person recognition in raw video frames. Our baselines address three issues: the performance of various combinations of detectors and recognizers, mechanisms for pedestrian detection to help improve overall re-identification accuracy and assessing the effectiveness of different detectors for re-identification. We make three distinct contributions. First, a new dataset, PRW, is introduced to evaluate Person Re-identification in the Wild, using videos acquired through six synchronized cameras. It contains 932 identities and 11,816 frames in which pedestrians are annotated with their bounding box positions and identities. Extensive benchmarking results are presented on this dataset. Second, we show that pedestrian detection aids re-identification through two simple yet effective improvements: a discriminatively trained ID-discriminative Embedding (IDE) in the person subspace using convolutional neural network (CNN) features and a Confidence Weighted Similarity (CWS) metric that incorporates detection scores into similarity measurement. Third, we derive insights in evaluating detector performance for the particular scenario of accurate person re-identification.",
"Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform state-of-the-art methods on multiple datasets by large margins.",
"Person re-identification (re-id) is the task of matching multiple occurrences of the same person from different cameras, poses, lighting conditions, and a multitude of other factors which alter the visual appearance. Typically, this is achieved by learning either optimal features or matching metrics which are adapted to specific pairs of camera views dictated by the pairwise labelled training datasets. In this work, we formulate a deep learning based novel approach to automatic prototype-domain discovery for domain perceptive (adaptive) person re-id (rather than camera pair specific learning) for any camera views scalable to new unseen scenes without training data. We learn a separate re-id model for each of the discovered prototype-domains and during model deployment, use the person probe image to select automatically the model of the closest prototype domain. Our approach requires neither supervised nor unsupervised domain adaptation learning, i.e. no data available from the target domains. We evaluate extensively our model under realistic re-id conditions using automatically detected bounding boxes with low-resolution and partial occlusion. We show that our approach outperforms most of the state-of-the-art supervised and unsupervised methods on the latest CUHK-SYSU and PRW benchmarks."
]
} |
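The IDE (ID-discriminative Embedding) idea recurring in this row's related_work field is to train the feature with a softmax cross-entropy loss over person identities, then discard the classifier head and keep the embedding for matching. A schematic sketch, with the feature dimension, identity count, and random weights as illustrative placeholders rather than any cited paper's actual settings:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over identity logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def id_loss(feature, W, person_id):
    """Cross-entropy of a linear identity classifier on an embedding.

    One class per person identity; minimizing this loss shapes the
    feature itself, which is what is kept for ReID matching.
    """
    probs = softmax(W @ feature)
    return float(-np.log(probs[person_id]))

rng = np.random.default_rng(0)
feature = rng.normal(size=128)           # embedding from some CNN backbone
W = 0.01 * rng.normal(size=(751, 128))   # classifier head over 751 placeholder identities
print(id_loss(feature, W, person_id=7))
```

At test time the head W is dropped and gallery/query embeddings are compared directly, e.g. by Euclidean or cosine distance.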
1710.06637 | 2765291116 | Finding hot topics in scholarly fields can help researchers to keep up with the latest concepts, trends, and inventions in their field of interest. Due to the rarity of complete large-scale scholarly data, earlier studies target this problem based on manual topic extraction from a limited number of domains, with their focus solely on a single feature such as coauthorship, citation relations, etc. Given the compromised effectiveness of such predictions, in this paper we use a real scholarly dataset from Microsoft Academic Graph, which provides more than 12000 topics in the field of Computer Science (CS), including 1200 venues, 14.4 million authors, 30 million papers and their citation relations over the period from 1950 till now. Aiming to find the topics that will trend in the CS area, we innovatively formalize a hot topic prediction problem where, with joint consideration of both inter- and intra-topical influence, 17 different scientific features are extracted for comprehensive description of topic status. By leveraging all those 17 features, we observe good accuracy of topic scale forecasting after 5 and 10 years with R2 values of 0.9893 and 0.9646, respectively. Interestingly, our prediction suggests that the maximum value matters in finding hot topics in scholarly fields, primarily from three aspects: (1) the maximum value of each factor, such as authors' maximum h-index and largest citation number, provides three times as much information as the average value in prediction; (2) the mutual influence between the most correlated topics serves as the most telling factor in long-term topic trend prediction, interpreting that those currently exhibiting the maximum growth rates will drive the correlated topics to be hot in the future; (3) we predict in the next 5 years the top 100 fastest growing (maximum growth rate) topics that will potentially receive major attention in the CS area. 
| Traditionally, topic trend prediction has focused on topics extracted from the texts of small sets of papers. Hurtado et al. @cite_14 extracted topics from a collection of documents and forecasted topic trends. Some work proposed evolving models to predict a topic's future trend: Qian et al. @cite_12 proposed a model based on the relations of papers within one topic and predicted the core group's life cycle. However, these approaches are limited by the quantity of data and their generality is insufficient. Due to the rarity of datasets that contain papers' topic information, along with the fact that obtaining the time series of all the features of scholarly topics is a huge workload, there have been very few prior works that predict academic topic trends at such a large scale. | {
"cite_N": [
"@cite_14",
"@cite_12"
],
"mid": [
"2338305720",
"2077353658"
],
"abstract": [
"Finding topics from a collection of documents, such as research publications, patents, and technical reports, is helpful for summarizing large scale text collections and the world wide web. It can also help forecast topic trends in the future. This can be beneficial for many applications, such as modeling the evolution of the direction of research and forecasting future trends of the IT industry. In this paper, we propose using association analysis and ensemble forecasting to automatically discover topics from a set of text documents and forecast their evolving trend in a near future. In order to discover meaningful topics, we collect publications from a particular research area, data mining and machine learning, as our data domain. An association analysis process is applied to the collected data to first identify a set of topics, followed by a temporal correlation analysis to help discover correlations between topics, and identify a network of topics and communities. After that, an ensemble forecasting approach is proposed to predict the popularity of research topics in the future. Our experiments and validations on data with 9 years of publication records validate the effectiveness of the proposed design.",
"Recent years have witnessed increased interests in topic detection and tracking (TDT). However, existing work mainly focuses on overall trend analysis, and is not developed for understanding the evolving process of topics. To this end, this paper aims to reveal the underlying process and reasons for topic formation and development (TFD). Along this line, based on community partitioning in social networks, a core-group model is proposed to explain the dynamics and to segment topic development. This model is inspired by the cell division mechanism in biology. Furthermore, according to the division phase and interphase in the life cycle of a core group, a topic is separated into four states including birth state, extending state, saturation state and shrinkage state. In this paper, we mainly focus our studies on scientific topic formation and development using the citation network structure among scientific papers. Experimental results on two real-world data sets show that the division of a core group brings on the generation of a new scientific topic. The results also reveal that the progress of an entire scientific topic is closely correlated to the growth of a core group during its interphase. Finally, we demonstrate the effectiveness of the proposed method in several real-life scenarios."
]
} |
1710.06839 | 2765858647 | The City of Detroit maintains an active fleet of over 2500 vehicles, spending an annual average of over @math 7.7 million on maintaining this fleet. Understanding the existence of patterns and trends in this data could be useful to a variety of stakeholders, particularly as Detroit emerges from Chapter 9 bankruptcy, but the patterns in such data are often complex and multivariate and the city lacks dedicated resources for detailed analysis of this data. This work, a data collaboration between the Michigan Data Science Team (this http URL) and the City of Detroit's Operations and Infrastructure Group, seeks to address this unmet need by analyzing data from the City of Detroit's entire vehicle fleet from 2010-2017. We utilize tensor decomposition techniques to discover and visualize unique temporal patterns in vehicle maintenance; apply differential sequence mining to demonstrate the existence of common and statistically unique maintenance sequences by vehicle make and model; and, after showing these time-dependencies in the dataset, demonstrate an application of a predictive Long Short Term Memory (LSTM) neural network model to predict maintenance sequences. Our analysis shows both the complexities of municipal vehicle fleet data and useful techniques for mining and modeling such data. | Tensor representations and various tensor decompositions have found wide applications in a variety of domains, including psychometrics @cite_15 and brain imaging @cite_3 (where many core techniques, such as the PARAFAC decomposition used here, were developed), the evolution of chatroom @cite_18 and email @cite_17 conversations over time, modeling web search @cite_22 , epidemiology @cite_26 , and anomaly detection @cite_10 . Tensor representation is useful in a variety of problem domains because it allows for multi-way analysis of data containing multidimensional patterns. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_22",
"@cite_3",
"@cite_15",
"@cite_10",
"@cite_17"
],
"mid": [
"1529395305",
"",
"2111363262",
"2030699645",
"2000215628",
"2070113647",
"67471658"
],
"abstract": [
"This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling, and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in time dimension from multiple servers and that of obtained from a single server are insignificant. Second, we show using the real data that collective data analysis of 3-way data arrays (users x keywords x time) known as high order tensors is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective constructions and analysis of high order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.",
"",
"As the competition of Web search market increases, there is a high demand for personalized Web search to conduct retrieval incorporating Web users' information needs. This paper focuses on utilizing clickthrough data to improve Web search. Since millions of searches are conducted everyday, a search engine accumulates a large volume of clickthrough data, which records who submits queries and which pages he she clicks on. The clickthrough data is highly sparse and contains different types of objects (user, query and Web page), and the relationships among these objects are also very complicated. By performing analysis on these data, we attempt to discover Web users' interests and the patterns that users locate information. In this paper, a novel approach CubeSVD is proposed to improve Web search. The clickthrough data is represented by a 3-order tensor, on which we perform 3-mode analysis using the higher-order singular value decomposition technique to automatically capture the latent factors that govern the relations among these multi-type objects: users, queries and Web pages. A tensor reconstructed based on the CubeSVD analysis reflects both the observed interactions among these objects and the implicit associations among them. Therefore, Web search activities can be carried out based on CubeSVD analysis. Experimental evaluations using a real-world data set collected from an MSN search engine show that CubeSVD achieves encouraging search results in comparison with some standard methods.",
"Models for decomposing averaged event-related potentials in component functions are discussed. Biophysical considerations motivate a sample model, which is shown to lead to unique identifiable components, thereby overcoming a major drawback of the customary approach by principal components analysis.",
"An individual differences model for multidimensional scaling is outlined in which individuals are assumed differentially to weight the several dimensions of a common “psychological space”. A corresponding method of analyzing similarities data is proposed, involving a generalization of “Eckart-Young analysis” to decomposition of three-way (or higher-way) tables. In the present case this decomposition is applied to a derived three-way table of scalar products between stimuli for individuals. This analysis yields a stimulus by dimensions coordinate matrix and a subjects by dimensions matrix of weights. This method is illustrated with data on auditory stimuli and on perception of nations.",
"How can we spot anomalies in large, time-evolving graphs? When we have multi-aspect data, e.g. who published which paper on which conference and on what year, how can we combine this information, in order to obtain good summaries thereof and unravel hidden anomalies and patterns? Such multi-aspect data, including time-evolving graphs, can be successfully modelled using Tensors. In this paper, we show that when we have multiple dimensions in the dataset, then tensor analysis is a powerful and promising tool. Our method TENSORSPLAT, at the heart of which lies the \"PARAFAC\" decomposition method, can give good insights about the large networks that are of interest nowadays, and contributes to spotting micro-clusters, changes and, in general, anomalies. We report extensive experiments on a variety of datasets (co-authorship network, time-evolving DBLP network, computer network and Facebook wall posts) and show how tensors can be proved useful in detecting \"strange\" behaviors.",
""
]
} |
1710.06839 | 2765858647 | The City of Detroit maintains an active fleet of over 2500 vehicles, spending an annual average of over @math 7.7 million on maintaining this fleet. Understanding the existence of patterns and trends in this data could be useful to a variety of stakeholders, particularly as Detroit emerges from Chapter 9 bankruptcy, but the patterns in such data are often complex and multivariate and the city lacks dedicated resources for detailed analysis of this data. This work, a data collaboration between the Michigan Data Science Team (this http URL) and the City of Detroit's Operations and Infrastructure Group, seeks to address this unmet need by analyzing data from the City of Detroit's entire vehicle fleet from 2010-2017. We utilize tensor decomposition techniques to discover and visualize unique temporal patterns in vehicle maintenance; apply differential sequence mining to demonstrate the existence of common and statistically unique maintenance sequences by vehicle make and model; and, after showing these time-dependencies in the dataset, demonstrate an application of a predictive Long Short Term Memory (LSTM) neural network model to predict maintenance sequences. Our analysis shows both the complexities of municipal vehicle fleet data and useful techniques for mining and modeling such data. | For a more detailed overview of tensor decompositions, their mechanics, and their applications, we refer the interested reader to @cite_11 . We describe the decompositions that are relevant to our data analysis in . | {
"cite_N": [
"@cite_11"
],
"mid": [
"2024165284"
],
"abstract": [
"This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or @math -way array. Decompositions of higher-order tensors (i.e., @math -way arrays with @math ) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors."
]
} |
1710.06501 | 2751746637 | Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data. | have also been used to visualize selected responses for single samples both in the input space @cite_34 and in the class space @cite_3 . utilizes a @math response map to show activation patterns within certain neuron groups @cite_28 . Nevertheless, these maps are not designed to provide a comprehensive overview of the responses or to reveal group-level response patterns, a key focus of . | {
"cite_N": [
"@cite_28",
"@cite_34",
"@cite_3"
],
"mid": [
"2343061342",
"70975097",
"2963149653"
],
"abstract": [
"Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
"The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.",
"Deep networks have produced significant gains for various visual recognition problems, leading to high impact academic and commercial applications. Recent work in deep networks highlighted that it is easy to generate images that humans would never classify as a particular object class, yet networks classify such images with high confidence as that given class – deep networks are easily fooled with images humans do not consider meaningful. The closed set nature of deep networks forces them to choose from one of the known classes leading to such artifacts. Recognition in the real world is open set, i.e. the recognition system should reject unknown unseen classes at test time. We present a methodology to adapt deep networks for open set recognition, by introducing a new model layer, OpenMax, which estimates the probability of an input being from an unknown class. A key element of estimating the unknown probability is adapting Meta-Recognition concepts to the activation patterns in the penultimate layer of the network. OpenMax allows rejection of \"fooling\" and unrelated open set images presented to the system; OpenMax greatly reduces the number of obvious errors made by a deep network. We prove that the OpenMax concept provides bounded open space risk, thereby formally providing an open set recognition solution. We evaluate the resulting open set deep networks using pre-trained networks from the Caffe Model-zoo on ImageNet 2012 validation data, and thousands of fooling and open set images. The proposed OpenMax model significantly outperforms open set recognition accuracy of basic deep networks as well as deep networks with thresholding of SoftMax probabilities."
]
} |
1710.06513 | 2773305749 | In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes 2D pose as input and learns a generalized 2D-3D mapping function. The proposed model consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNN) on the top to explicitly incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry, motor coordination). The proposed model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator to augment training samples in virtual camera views, which further improves our model generalizability. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter difficulty under such setting while our method can well handle such challenges. | . In literature, methods solving this task can be roughly classified into two frameworks: i) directly learning 3D pose structures from 2D images, ii) a cascaded framework of first performing 2D pose estimation and then reconstructing 3D pose from the estimated 2D joints. Specifically, for the first framework, @cite_7 proposed a multi-task convolutional network that simultaneously learns pose regression and part detection. @cite_15 first learned an auto-encoder that describes 3D pose in high dimensional space then mapped the input image to that space using CNN. @cite_18 represented 3D joints as points in a discretized 3D space and proposed a coarse-to-fine approach for iterative refinement. @cite_13 mixed 2D and 3D data and trained an unified network with two-stage cascaded structure. These methods heavily relies on well-labeled image and 3D ground-truth pairs, since they need to learn depth information from images. | {
"cite_N": [
"@cite_13",
"@cite_15",
"@cite_18",
"@cite_7"
],
"mid": [
"2756050327",
"2404595106",
"",
"2293220651"
],
"abstract": [
"In this paper, we study the task of 3D human pose estimation in the wild. This task is challenging due to lack of training data, as existing datasets are either in the wild images with 2D pose or in the lab images with 3D pose. We propose a weakly-supervised transfer learning method that uses mixed 2D and 3D labels in a unified deep neural network that presents a two-stage cascaded structure. Our network augments a state-of-the-art 2D pose estimation sub-network with a 3D depth regression sub-network. Unlike previous two stage approaches that train the two sub-networks sequentially and separately, our training is end-to-end and fully exploits the correlation between the 2D pose and depth estimation sub-tasks. The deep features are better learnt through shared representations. In doing so, the 3D pose labels in controlled lab environments are transferred to in the wild images. In addition, we introduce a 3D geometric constraint to regularize the 3D pose prediction, which is effective in the absence of ground truth depth labels. Our method achieves competitive results on both 2D and 3D benchmarks.",
"Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from image to 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and account for joint dependencies. We demonstrate that our approach outperforms state-of-the-art ones both in terms of structure preservation and prediction accuracy.",
"",
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations."
]
} |
1710.06513 | 2773305749 | In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes 2D pose as input and learns a generalized 2D-3D mapping function. The proposed model consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNN) on the top to explicitly incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry, motor coordination). The proposed model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator to augment training samples in virtual camera views, which further improves our model generalizability. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter difficulty under such setting while our method can well handle such challenges. | To avoid this limitation, some work @cite_16 @cite_31 @cite_29 tried to address this problem in a two step manner. For example, in @cite_29 , the authors proposed an exemplar-based method to retrieve the nearest 3D pose in the 3D pose library using the estimated 2D pose. Recently, @cite_28 proposed a network that directly regresses 3D keypoints from 2D joint detections and achieves state-of-the-art performance. Our work takes a further step towards a unified 2D-to-3D reconstruction network that integrates the learning power of deep learning and the domain-specific knowledge represented by hierarchy grammar model. The proposed method would offer a deep insight into the rationale behind this problem. | {
"cite_N": [
"@cite_29",
"@cite_28",
"@cite_31",
"@cite_16"
],
"mid": [
"2963013806",
"2612706635",
"2105041273",
"2152926413"
],
"abstract": [
"One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and is even competitive when the skeleton structure of the two sources differ substantially.",
"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3-dimensional positions. With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"We propose a novel exemplar based method to estimate 3D human poses from single images by using only the joint correspondences. Due to the inherent depth ambiguity, estimating 3D poses from a monocular view is a challenging problem. We solve the problem by searching through millions of exemplars for optimal poses. Compared with traditional parametric schemes, our method is able to handle very large pose database, relieves parameter tweaking, is easier to train and is more effective for complex pose 3D reconstruction. The proposed method estimates upper body poses and lower body poses sequentially, which implicitly squares the size of the exemplar database and enables us to reconstruct unconstrained poses efficiently. Our implementation based on the kd-tree achieves real-time performance. The experiments on a variety of images show that the proposed method is efficient and effective.",
"Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images."
]
} |
1710.06513 | 2773305749 | In this paper, we propose a pose grammar to tackle the problem of 3D human pose estimation. Our model directly takes 2D pose as input and learns a generalized 2D-3D mapping function. The proposed model consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bi-directional RNNs (BRNN) on the top to explicitly incorporate a set of knowledge regarding human body configuration (i.e., kinematics, symmetry, motor coordination). The proposed model thus enforces high-level constraints over human poses. In learning, we develop a pose sample simulator to augment training samples in virtual camera views, which further improves our model generalizability. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization capability of different methods. We empirically observe that most state-of-the-art methods encounter difficulty under such setting while our method can well handle such challenges. | . This track receives long-lasting endorsement due to its interpretability and effectiveness in modeling diverse tasks @cite_10 @cite_30 @cite_19 . @cite_11 , the authors approached the problem of image parsing using a stochastic grammar model. After that, grammar models have been used in @cite_9 @cite_2 for 2D human body parsing. @cite_23 proposed a phrase structure, dependency and attribute grammar for 2D human body, representing decomposition and articulation of body parts. Notably, @cite_22 represented human body as a set of simplified kinematic grammar and learn their relations with LSTM. In this paper, our representation can be analogized as a hierarchical attributed grammar model, with similar hierarchical structures, BRNNS as probabilistic grammar. The difference lies in that our model is fully recursive and without semantics in middle levels. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_9",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2781181706",
"",
"2473532709",
"",
"2085161844",
"1992971572",
"2139381543"
],
"abstract": [
"",
"This paper aims at estimating full-body 3D human poses from monocular images, of which the biggest challenge is the inherent ambiguity introduced by lifting the 2D pose into 3D space. We propose a novel framework focusing on reducing this ambiguity by predicting the depth of human joints based on 2D human joint locations and body part images. Our approach is built on a two-level hierarchy of Long Short-Term Memory (LSTM) Networks which can be trained end-to-end. The first level consists of two components: 1) a skeleton-LSTM which learns the depth information from global human skeleton features; 2) a patch-LSTM which utilizes the local image evidence around joint locations. Both networks have tree structure defined on the kinematic relation of human skeleton, thus the information at different joints is broadcast through the whole skeleton in a top-down fashion. The two networks are first pre-trained separately on different data sources and then aggregated in the second layer for final depth prediction. The empirical evaluation on Human3.6M and HHOI dataset demonstrates the advantage of combining global 2D skeleton and local image patches for depth prediction, and our superior quantitative and qualitative performance relative to state-of-the-art methods.",
"",
"This paper presents a hierarchical composition approach for multi-view object tracking. The key idea is to adaptively exploit multiple cues in both 2D and 3D, e.g., ground occupancy consistency, appearance similarity, motion coherence etc., which are mutually complementary while tracking the humans of interests over time. While feature online selection has been extensively studied in the past literature, it remains unclear how to effectively schedule these cues for the tracking purpose especially when encountering various challenges, e.g. occlusions, conjunctions, and appearance variations. To do so, we propose a hierarchical composition model and re-formulate multi-view multi-object tracking as a problem of compositional structure optimization. We setup a set of composition criteria, each of which corresponds to one particular cue. The hierarchical composition process is pursued by exploiting different criteria, which impose constraints between a graph node and its offsprings in the hierarchy. We learn the composition criteria using MLE on annotated data and efficiently construct the hierarchical graph by an iterative greedy pursuit algorithm. In the experiments, we demonstrate superior performance of our approach on three public datasets, one of which is newly created by us to test various challenges in multi-view multi-object tracking.",
"",
"This paper presents a novel framework for a multimedia search task: searching a person in a scene using human body appearance. Existing works mostly focus on two independent problems related to this task, i.e., people detection and person re-identification. However, a sequential combination of these two components does not solve the person search problem seamlessly for two reasons: 1) the errors in people detection are carried into person re-identification unavoidably; 2) the setting of person re-identification is different from that of person search which is essentially a verification problem. To bridge this gap, we propose a unified framework which jointly models the commonness of people (for detection) and the uniqueness of a person (for identification). We demonstrate superior performance of our approach on public benchmarks compared with the sequential combination of the state-of-the-art detection and identification algorithms.",
"Growing numbers of 3D scenes in online repositories provide new opportunities for data-driven scene understanding, editing, and synthesis. Despite the plethora of data now available online, most of it cannot be effectively used for data-driven applications because it lacks consistent segmentations, category labels, and or functional groupings required for co-analysis. In this paper, we develop algorithms that infer such information via parsing with a probabilistic grammar learned from examples. First, given a collection of scene graphs with consistent hierarchies and labels, we train a probabilistic hierarchical grammar to represent the distributions of shapes, cardinalities, and spatial relationships of semantic objects within the collection. Then, we use the learned grammar to parse new scenes to assign them segmentations, labels, and hierarchies consistent with the collection. During experiments with these algorithms, we find that: they work effectively for scene graphs for indoor scenes commonly found online (bedrooms, classrooms, and libraries); they outperform alternative approaches that consider only shape similarities and or spatial relationships without hierarchy; they require relatively small sets of training data; they are robust to moderate over-segmentation in the inputs; and, they can robustly transfer labels from one data set to another. As a result, the proposed algorithms can be used to provide consistent hierarchies for large collections of scenes within the same semantic class.",
"This paper presents a simple attribute graph grammar as a generative representation for man-made scenes, such as buildings, hallways, kitchens, and living rooms, and studies an effective top-down/bottom-up inference algorithm for parsing images in the process of maximizing a Bayesian posterior probability or equivalently minimizing a description length (MDL). Given an input image, the inference algorithm computes (or constructs) a parse graph, which includes a parse tree for the hierarchical decomposition and a number of spatial constraints. In the inference algorithm, the bottom-up step detects an excessive number of rectangles as weighted candidates, which are sorted in a certain order and activate top-down predictions of occluded or missing components through the grammar rules. In the experiment, we show that the grammar and top-down inference can largely improve the performance of bottom-up detection."
]
} |
1710.06298 | 2765799879 | Generating graphs that are similar to real ones is an open problem, while the similarity notion is quite elusive and hard to formalize. In this paper, we focus on sparse digraphs and propose SDG, an algorithm that aims at generating graphs similar to real ones. Since real graphs are evolving and this evolution is important to study in order to understand the underlying dynamical system, we tackle the problem of generating series of graphs. We propose SEDGE, an algorithm meant to generate series of graphs similar to a real series. SEDGE is an extension of SDG. We consider graphs that are representations of software programs and show experimentally that our approach outperforms other existing approaches. Experiments show the performance of both algorithms. | We have shown that the in-degree and the out-degree distributions of the graphs generated by exhibit a power law. This may come as a surprise to the reader, well aware of earlier works, such as @cite_8 . Indeed, our graph is not growing, keeping a set of @math nodes, connecting them along the iterations of the algorithm. However, the departure from a power law is expected when the number of iterations is approximately @math , that is when the graph gets dense. However, as we emphasized it earlier, we only consider sparse graphs, and the number of iterations, hence the number of edges, remains @math , hence much less than @math . | {
"cite_N": [
"@cite_8"
],
"mid": [
"2008620264"
],
"abstract": [
"Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems."
]
} |
1710.06034 | 2767133776 | Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) to model-free policy gradient to significantly improve the sample-efficiency. The SVRG estimation is incorporated into a trust-region Newton conjugate gradient framework for the policy optimization. On several Mujoco tasks, our method achieves significantly better performance compared to the state-of-the-art model-free policy gradient methods in robotic continuous control such as trust region policy optimization (TRPO) | In reinforcement learning @cite_8 , policy search (or policy optimization) is to find the optimal policy parameterized with linear function approximation or highly non-linear functions such as neural networks. It has wide applications in robotic learning @cite_31 @cite_15 with continuous action space and high-dimensional state space, for example from robotics locomotion @cite_6 @cite_9 @cite_33 to manipulation @cite_24 @cite_21 , and robust policy search for safe vehicle navigation @cite_0 , model based policy search for robot control @cite_28 @cite_26 , multi-robot coordination policy search @cite_4 and so on. Our work is also inspired by the stochastic variance reduction for policy evaluation @cite_17 . | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_15",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_31",
"@cite_17"
],
"mid": [
"",
"2568597750",
"",
"",
"1515851193",
"1967736575",
"",
"2964161785",
"2139053308",
"1969074599",
"2739340005",
"2012587148",
"2950556355"
],
"abstract": [
"",
"We introduce a principled method for multi-robot coordination based on a general model termed a MacDec-POMDP of multi-robot cooperative planning in the presence of stochasticity, uncertain sensing, and communication limitations. A new MacDec-POMDP planning algorithm is presented that searches over policies represented as finite-state controllers, rather than the previous policy tree representation. Finite-state controllers can be much more concise than trees, are much easier to interpret, and can operate over an infinite horizon. The resulting policy search algorithm requires a substantially simpler simulator that models only the outcomes of executing a given set of motor controllers, not the details of the executions themselves and can solve significantly larger problems than existing MacDec-POMDP planners. We demonstrate significant performance improvements over previous methods and show that our method can be used for actual multi-robot systems through experiments on a cooperative multi-robot bartending domain.",
"",
"",
"From the Publisher: In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.",
"© 2014 IEEE. In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially reinforcement learning, usually requires sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (Pilco), can be tailored to cope with the case of sparse data to speed up learning. The basic idea is to include further prior knowledge into the learning process. As Pilco is built on the probabilistic Gaussian processes framework, additional system knowledge can be incorporated by defining appropriate prior distributions, e.g., a linear mean Gaussian prior. The resulting Pilco formulation remains in closed form and analytically tractable. The proposed approach is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. The results show that by including prior knowledge, policy learning can be sped up in the presence of sparse data.",
"",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"This paper presents a machine learning approach to optimizing a quadrupedal trot gait for forward speed. Given a parameterized walk designed for a specific robot, we propose using a form of policy gradient reinforcement learning to automatically search the set of possible parameters with the goal of finding the fastest possible walk. We implement and test our approach on a commercially available quadrupedal robot platform, namely the Sony Aibo robot. After about three hours of learning, all on the physical robots and with no human intervention other than to change the batteries, the robots achieved a gait faster than any previously known gait known for the Aibo, significantly outperforming a variety of existing hand-coded and learned solutions.",
"Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics. Training individual policies for every single potential task is often impractical, especially for continuous task variations, requiring more principled approaches to share and transfer knowledge among similar tasks. We present a novel approach for learning a nonlinear feedback policy that generalizes across multiple tasks. The key idea is to define a parametrized policy as a function of both the state and the task, which allows learning a single policy that generalizes across multiple known and unknown tasks. Applications of our novel approach to reinforcement and imitation learning in realrobot experiments are shown.",
"This work studies the design of reliable control laws of robotic systems operating in uncertain environments. We introduce a new approach to stochastic policy optimization based on probably approximately correct (PAC) bounds on the expected performance of control policies. An algorithm is constructed which directly minimizes an upper confidence bound on the expected cost of trajectories instead of employing a standard approach based on the expected cost itself. This algorithm thus has built-in robustness to uncertainty, since the bound can be regarded as a certificate for guaranteed future performance. The approach is evaluated on two challenging robot control scenarios in simulation: a car with side slip and a quadrotor navigating through obstacle-ridden environments. We show that the bound accurately predicts future performance and results in improved robustness measured by lower average cost and lower probability of collision. The performance of the technique is studied empirically and compared to several existing policy search algorithms.",
"Policy search is a subfield in reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning.Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory, it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot's dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.",
"Policy evaluation is a crucial step in many reinforcement-learning procedures, which estimates a value function that predicts states' long-term value under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods for solving the problem. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods."
]
} |
1710.06034 | 2767133776 | Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) to model-free policy gradient to significantly improve the sample-efficiency. The SVRG estimation is incorporated into a trust-region Newton conjugate gradient framework for the policy optimization. On several Mujoco tasks, our method achieves significantly better performance compared to the state-of-the-art model-free policy gradient methods in robotic continuous control such as trust region policy optimization (TRPO) | Optimization methods @cite_7 @cite_23 @cite_25 play a key role in the policy search, especially for nonlinear policies in continuous high-dimensional parameter space. For example, the well-known @cite_2 is simply a (stochastic) gradient descent method. To accelerate the convergence rates, Fisher information is adopted in Natural Gradient @cite_1 @cite_27 and TRPO @cite_19 . Stochastic Variance Reduction @cite_14 is proposed under the mechanics of control variates @cite_36 to accelerate the convergence of SGD by dramatic variance reduction. Recently, second order statistics and stochastic curvature information are adopted @cite_29 @cite_3 @cite_18 @cite_35 to improve the convergence while achieving the good trade-off between computations and accuracy for the large-scale machine learning problems. The stochastic and approximated curvature information is also useful to further accelerate the variance reduction methods @cite_11 @cite_10 . | {
"cite_N": [
"@cite_35",
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_36",
"@cite_29",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_2",
"@cite_10",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"",
"2107438106",
"",
"2006722592",
"1991083751",
"",
"",
"2949608212",
"",
"",
"2119717200",
"",
"",
"2722088290"
],
"abstract": [
"",
"",
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"",
"We present two improvements on the technique of importance sampling. First, we show that importance sampling from a mixture of densities, using those densities as control variates, results in a useful upper bound on the asymptotic variance. That bound is a small multiple of the asymptotic variance of importance sampling from the best single component density. This allows one to benefit from the great variance reductions obtainable by importance sampling, while protecting against the equally great variance increases that might take the practitioner by surprise. The second improvement is to show how importance sampling from two or more densities can be used to approach a zero sampling variance even for integrands that take both positive and negative values.",
"This paper describes how to incorporate sampled curvature information in a Newton-CG method and in a limited memory quasi-Newton method for statistical learning. The motivation for this work stems from supervised machine learning applications involving a very large number of training points. We follow a batch approach, also known in the stochastic optimization literature as a sample average approximation approach. Curvature information is incorporated in two subsampled Hessian algorithms, one based on a matrix-free inexact Newton iteration and one on a preconditioned limited memory BFGS iteration. A crucial feature of our technique is that Hessian-vector multiplications are carried out with a significantly smaller sample size than is used for the function and gradient. The efficiency of the proposed methods is illustrated using a machine learning application involving speech recognition.",
"",
"",
"We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.",
"",
"",
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"",
"",
"We present novel minibatch stochastic optimization methods for empirical risk minimization problems, the methods efficiently leverage variance reduced first-order and sub-sampled higher-order information to accelerate the convergence speed. For quadratic objectives, we prove improved iteration complexity over state-of-the-art under reasonable assumptions. We also provide empirical evidence of the advantages of our method compared to existing approaches in the literature."
]
} |
1710.05772 | 2765252922 | Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, cheap, lightweight and versatile sensors, and being decentralized, it does not rely on communication to a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage a compact full-image descriptor is deterministically sent to only one robot. In the second stage, which is only executed if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used. It exchanges a minimum amount of data which is linear with trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data and we provide open access to the code. | Distributed estimation in multi-robot systems is an active field of research, with special attention being paid to communication constraints @cite_14 , heterogeneity @cite_1 @cite_33 , consistency @cite_29 , and robust data association @cite_43 . 
The literature offers distributed implementations of different estimation techniques, including Kalman filters @cite_17 , information filters @cite_41 , particle filters @cite_13 @cite_38 , and distributed smoothers @cite_22 @cite_26 | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_26",
"@cite_33",
"@cite_22",
"@cite_41",
"@cite_29",
"@cite_1",
"@cite_43",
"@cite_13",
"@cite_17"
],
"mid": [
"1996361697",
"",
"",
"2026115431",
"",
"",
"2130418717",
"2170049866",
"1574773379",
"2170229019",
"1501672519"
],
"abstract": [
"In this paper we investigate the problem of Simultaneous Localization and Mapping (SLAM) for a multi-robot system. Relaxing some assumptions that characterize related work, we propose an application of Rao-Blackwellized Particle Filters (RBPF) for the purpose of cooperatively estimating the SLAM posterior. We consider a realistic setup in which the robots start from unknown initial poses (relative locations are unknown too), and travel in the environment in order to build a shared representation of the latter. The robots are required to exchange a small amount of information only when a rendezvous event occurs and to measure relative poses during the meeting. As a consequence the approach also applies when using an unreliable wireless channel or short range communication technologies (Bluetooth, RFID, etc.). Moreover it allows taking into account the uncertainty in relative pose measurements. The proposed technique, which constitutes a distributed solution to the multi-robot SLAM problem, is further validated through simulations and experimental tests.",
"",
"",
"Cooperative navigation (CN) enables a group of cooperative robots to reduce their individual navigation errors. For a general multi-robot (MR) measurement model that involves both inertial navigation data and other onboard sensor readings, taken at different time instances, the various sources of information become correlated. Thus, this correlation should be solved for in the process of information fusion to obtain consistent state estimation. The common approach for obtaining the correlation terms is to maintain an augmented covariance matrix. This method would work for relative pose measurements, but is impractical for a general MR measurement model, because the identities of the robots involved in generating the measurements, as well as the measurement time instances, are unknown a priori. In the current work, a new consistent information fusion method for a general MR measurement model is developed. The proposed approach relies on graph theory. It enables explicit on-demand calculation of the required correlation terms. The graph is locally maintained by every robot in the group, representing all of the MR measurement updates. The developed method calculates the correlation terms in the most general scenarios of MR measurements while properly handling the involved process and measurement noise. A theoretical example and a statistical study are provided, demonstrating the performance of the method for vision-aided navigation based on a three-view measurement model. The method is compared, in a simulated environment, with a fixed-lag centralized smoothing approach. The method is also validated in an experiment that involved real imagery and navigation data. Computational complexity estimates show that the newly developed method is computationally efficient.",
"",
"",
"In cooperative navigation, teams of mobile robots obtain range and/or angle measurements to each other and dead-reckoning information to help each other navigate more accurately. One typical approach is moving baseline navigation, in which multiple Autonomous Underwater Vehicles (AUVs) exchange range measurements using acoustic modems to perform mobile trilateration. While the sharing of information between vehicles can be highly beneficial, exchanging measurements and state estimates can also be dangerous because of the risk of measurements being used by a vehicle more than once; such data re-use leads to inconsistent (overconfident) estimates, making data association and outlier rejection more difficult and divergence more likely. In this paper, we present a technique for the consistent cooperative localization of multiple AUVs performing mobile trilateration. Each AUV establishes a bank of filters, performing careful bookkeeping to track the origins of measurements and prevent the use of any of the measurements more than once. The multiple estimates are combined in a consistent manner, yielding conservative covariance estimates. The technique is illustrated using simulation results. The new method is compared side-by-side with a naive approach that does not keep track of the origins of measurements, illustrating that the new method keeps conservative covariance bounds whereas state estimates obtained with the naive approach become overconfident and diverge.",
"This paper presents a distributed algorithm for performing joint localisation of a team of robots. The mobile robots have heterogeneous sensing capabilities, with some having high quality inertial and exteroceptive sensing, while others have only low quality sensing or none at all. By sharing information, a combined estimate of all robot poses is obtained. Inter-robot range-bearing measurements provide the mechanism for transferring pose information from well-localised vehicles to those less capable. In our proposed formulation, high frequency egocentric data (e.g., odometry, IMU, GPS) is fused locally on each platform. This is the distributed part of the algorithm. Inter-robot measurements, and accompanying state estimates, are communicated to a central server, which generates an optimal minimum mean-squared estimate of all robot poses. This server is easily duplicated for full redundant decentralisation. Communication and computation are efficient due to the sparseness properties of the information-form Gaussian representation. A team of three indoor mobile robots equipped with lasers, odometry and inertial sensing provides experimental verification of the algorithm's effectiveness in combining location information.",
"We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.",
"This paper describes an on-line algorithm for multi-robot simultaneous localization and mapping (SLAM). The starting point is the single-robot Rao-Blackwellized particle filter described by , and three key generalizations are made. First, the particle filter is extended to handle multi-robot SLAM problems in which the initial pose of the robots is known (such as occurs when all robots start from the same location). Second, an approximation is introduced to solve the more general problem in which the initial pose of robots is not known a priori (such as occurs when the robots start from widely separated locations). In this latter case, it is assumed that pairs of robots will eventually encounter one another, thereby determining their relative pose. This relative attitude is used to initialize the filter, and subsequent observations from both robots are combined into a common map. Third and finally, a method is introduced to integrate observations collected prior to the first robot encounter, using the notion of a virtual robot travelling backwards in time. This novel approach allows one to integrate all data from all robots into a single common map.",
"This paper presents a new approach to the cooperative localization problem, namely distributed multi-robot localization. A group of M robots is viewed as a single system composed of robots that carry, in general, different sensors and have different positioning capabilities. A single Kalman filter is formulated to estimate the position and orientation of all the members of the group. This centralized schema is capable of fusing information provided by the sensors distributed on the individual robots while accommodating independencies and interdependencies among the collected data. In order to allow for distributed processing, the equations of the centralized Kalman filter are treated so that this filter can be decomposed into M modified Kalman filters each running on a separate robot. The distributed localization algorithm is applied to a group of 3 robots and the improvement in localization accuracy is presented."
]
} |
1710.05772 | 2765252922 | Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, cheap, lightweight and versatile sensors, and being decentralized, it does not rely on communication to a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage a compact full-image descriptor is deterministically sent to only one robot. In the second stage, which is only executed if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used. It exchanges a minimum amount of data which is linear with trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data and we provide open access to the code. | In multi-robot systems, maximum-likelihood trajectory estimation can be performed by collecting all measurements at a centralized inference engine, which performs the optimization @cite_25 @cite_5 @cite_1 @cite_10 @cite_43 . However, it is not practical to collect all measurements at a single inference engine since it requires a large communication bandwidth. 
Furthermore, solving trajectory estimation over a large team of robots can be too demanding for a single computational unit. | {
"cite_N": [
"@cite_1",
"@cite_43",
"@cite_5",
"@cite_10",
"@cite_25"
],
"mid": [
"2170049866",
"1574773379",
"2011897632",
"",
"2125543527"
],
"abstract": [
"This paper presents a distributed algorithm for performing joint localisation of a team of robots. The mobile robots have heterogeneous sensing capabilities, with some having high quality inertial and exteroceptive sensing, while others have only low quality sensing or none at all. By sharing information, a combined estimate of all robot poses is obtained. Inter-robot range-bearing measurements provide the mechanism for transferring pose information from well-localised vehicles to those less capable. In our proposed formulation, high frequency egocentric data (e.g., odometry, IMU, GPS) is fused locally on each platform. This is the distributed part of the algorithm. Inter-robot measurements, and accompanying state estimates, are communicated to a central server, which generates an optimal minimum mean-squared estimate of all robot poses. This server is easily duplicated for full redundant decentralisation. Communication and computation are efficient due to the sparseness properties of the information-form Gaussian representation. A team of three indoor mobile robots equipped with lasers, odometry and inertial sensing provides experimental verification of the algorithms effectiveness in combining location information.",
"We demonstrate distributed, online, and real-time cooperative localization and mapping between multiple robots operating throughout an unknown environment using indirect measurements. We present a novel Expectation Maximization (EM) based approach to efficiently identify inlier multi-robot loop closures by incorporating robot pose uncertainty, which significantly improves the trajectory accuracy over long-term navigation. An EM and hypothesis based method is used to determine a common reference frame. We detail a 2D laser scan correspondence method to form robust correspondences between laser scans shared amongst robots. The implementation is experimentally validated using teams of aerial vehicles, and analyzed to determine its accuracy, computational efficiency, scalability to many robots, and robustness to varying environments. We demonstrate through multiple experiments that our method can efficiently build maps of large indoor and outdoor environments in a distributed, online, and real-time setting.",
"This paper describes a new algorithm for cooperative and persistent simultaneous localization and mapping (SLAM) using multiple robots. Recent pose graph representations have proven very successful for single robot mapping and localization. Among these methods, incremental smoothing and mapping (iSAM) gives an exact incremental solution to the SLAM problem by solving a full nonlinear optimization problem in real-time. In this paper, we present a novel extension to iSAM to facilitate online multi-robot mapping based on multiple pose graphs. Our main contribution is a relative formulation of the relationship between multiple pose graphs that avoids the initialization problem and leads to an efficient solution when compared to a completely global formulation. The relative pose graphs are optimized together to provide a globally consistent multi-robot solution. Efficient access to covariances at any time for relative parameters is provided through iSAM, facilitating data association and loop closing. The performance of the technique is illustrated on various data sets including a publicly available multi-robot data set. Further evaluation is performed in a collaborative helicopter and ground robot experiment.",
"",
"This paper presents collaborative smoothing and mapping (C-SAM) as a viable approach to the multi-robot map- alignment problem. This method enables a team of robots to build joint maps with or without initial knowledge of their relative poses. To accomplish the simultaneous localization and mapping this method uses square root information smoothing (SRIS). In contrast to traditional extended Kalman filter (EKF) methods the smoothing does not exclude any information and is therefore also better equipped to deal with non-linear process and measurement models. The method proposed does not require the collaborative robots to have initial correspondence. The key contribution of this work is an optimal smoothing algorithm for merging maps that are created by different robots independently or in groups. The method not only joins maps from different robots, it also recovers the complete robot trajectory for each robot involved in the map joining. It is also shown how data association between duplicate features is done and how this reduces uncertainty in the complete map. Two simulated scenarios are presented where the C-SAM algorithm is applied on two individually created maps. One basically joins two maps resulting in a large map while the other shows a scenario where sensor extension is carried out."
]
} |
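The centralized maximum-likelihood trajectory estimation mentioned in the related-work entry above reduces, in the linear-Gaussian case, to one least-squares problem over all robots' poses. A minimal 1-D sketch (invented poses and measurements, not from any cited system):

```python
import numpy as np

# Two robots, three 1-D poses each, stacked into one state vector:
# [r1_p0, r1_p1, r1_p2, r2_p0, r2_p1, r2_p2].
n = 6

# Each measurement constrains x[j] - x[i] = z (odometry or inter-robot).
measurements = [
    (0, 1, 1.0), (1, 2, 1.1),   # robot 1 odometry
    (3, 4, 0.9), (4, 5, 1.0),   # robot 2 odometry
    (0, 3, 5.0), (2, 5, 4.8),   # inter-robot relative measurements
]

rows, rhs = [], []
# Anchor robot 1's first pose at the origin to remove gauge freedom.
a = np.zeros(n); a[0] = 1.0
rows.append(a); rhs.append(0.0)
for i, j, z in measurements:
    a = np.zeros(n)
    a[j], a[i] = 1.0, -1.0
    rows.append(a); rhs.append(z)

A, b = np.array(rows), np.array(rhs)
x_ml, *_ = np.linalg.lstsq(A, b, rcond=None)  # centralized ML estimate
print(np.round(x_ml, 2))
```

Collecting all rows of `A` at one node is exactly the communication cost that the decentralized approaches in the next entry try to avoid.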
1710.05772 | 2765252922 | Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, cheap, lightweight and versatile sensors, and being decentralized, it does not rely on communication to a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage a compact full-image descriptor is deterministically sent to only one robot. In the second stage, which is only executed if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used. It exchanges a minimum amount of data which is linear with trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data and we provide open access to the code. | These reasons triggered interest towards decentralized approaches, in which the robots only exploit local communication, in order to reach a consensus on the trajectory estimate @cite_20 @cite_8 @cite_15 @cite_4 @cite_2 . Recently, @cite_2 @cite_22 used Gaussian elimination, and developed an approach, called DDF-SAM, in which robots exchange Gaussian marginals over the separators (i.e., the variables observed by multiple robots). | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_2",
"@cite_15",
"@cite_20"
],
"mid": [
"2010331587",
"",
"2130343868",
"1964591239",
"2097064869",
"2146702612"
],
"abstract": [
"We propose a distributed algorithm for collaborative localization of multiple autonomous robots that fuses inter-robot relative measurements with odometry measurements to improve upon dead reckoning estimates. It is an extension of our previous work [6], in which a method for fusing inter-robot pose measurements was presented. In this paper we extend the method to fuse any type of inter-robot measurements (distance, bearing, relative position, relative orientation, and any combination thereof), thus increasing the applicability of the method. The proposed method is posed as an optimization problem in a product Riemannian manifold; and is solved by gradient descent without performing a parameterization of the orientations. The proposed distributed algorithm allows each robot to compute its own pose estimate based on local measurements and communication with its neighbors. Simulations show that the proposed distributed algorithm significantly improves localization accuracy over the case of no-collaboration. Simulations show that, in some situations, the proposed distributed algorithm outperforms two competing methods - an Extended Kalman Filter-based algorithm as well as a distributed pose graph optimization method that relies on an Euclidean parameterization of orientations.",
"",
"In this paper a novel approach to the problem of decentralized agreement toward a common point in space in a multi-agent system is proposed. Our method allows the agents to agree on the relative location of the network centroid respect to themselves, on a common reference frame and therefore on a common heading. Using this information a global positioning system for the agents using only local measurements can be achieved. In the proposed scenario, an agent is able to sense the distance between itself and its neighbors and the direction in which it sees its neighbors with respect to its local reference frame. Furthermore only point-to-point asynchronous communications between neighboring agents are allowed thus achieving robustness against random communication failures. The proposed algorithms can be thought as general tools to locally retrieve global information usually not available to the agents.",
"We address the problem of multi-robot distributed SLAM with an extended Smoothing and Mapping (SAM) approach to implement Decentralized Data Fusion (DDF). We present DDF-SAM, a novel method for efficiently and robustly distributing map information across a team of robots, to achieve scalability in computational cost and in communication bandwidth and robustness to node failure and to changes in network topology. DDF-SAM consists of three modules: (1) a local optimization module to execute single-robot SAM and condense the local graph; (2) a communication module to collect and propagate condensed local graphs to other robots, and (3) a neighborhood graph optimizer module to combine local graphs into maps describing the neighborhood of a robot. We demonstrate scalability and robustness through a simulated example, in which inference is consistently faster than a comparable naive approach.",
"In this paper we address the problem of estimating the poses of a team of agents when they do not share any common reference frame. Each agent is capable of measuring the relative position and orientation of its neighboring agents, however these measurements are not exact but they are corrupted with noises. The goal is to compute the pose of each agent relative to an anchor node. We present a strategy where, first of all, the agents compute their orientations relative to the anchor. After that, they update the relative position measurements according to these orientations, to finally compute their positions. As contribution we discuss the proposed strategy, that has the interesting property that can be executed in a distributed fashion. The distributed implementation allows each agent to recover its pose using exclusively local information and local interactions with its neighbors. This algorithm has a low memory load, since it only requires each node to maintain an estimate of its own orientation and position.",
"This paper presents a distributed Maximum A Posteriori (MAP) estimator for multi-robot Cooperative Localization (CL). As opposed to centralized MAP-based CL, the proposed algorithm reduces the memory and processing requirements by distributing data and computations amongst the robots. Specifically, a distributed data-allocation scheme is presented that enables robots to simultaneously process and update their local data. Additionally, a distributed Conjugate Gradient algorithm is employed that reduces the cost of computing the MAP estimates, while utilizing all available resources in the team and increasing robustness to single-point failures. Finally, a computationally efficient distributed marginalization of past robot poses is introduced for limiting the size of the optimization problem. The communication and computational complexity of the proposed algorithm is described in detail, while extensive simulation studies are presented for validating the performance of the distributed MAP estimator and comparing its accuracy to that of existing approaches."
]
} |
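The marginals exchanged in DDF-SAM-style summarization can be computed by a Schur complement in information form: a robot eliminates its purely local variables and transmits only the Gaussian over the separator variables. A hypothetical sketch (the function name and the consistency check are illustrative, not taken from the cited papers):

```python
import numpy as np

def marginal_over_separator(Lambda, eta, sep_idx):
    """Marginalize a Gaussian in information form (Lambda, eta) onto the
    separator variables sep_idx via the Schur complement."""
    n = Lambda.shape[0]
    loc_idx = [i for i in range(n) if i not in sep_idx]
    Lss = Lambda[np.ix_(sep_idx, sep_idx)]
    Lsl = Lambda[np.ix_(sep_idx, loc_idx)]
    Lll = Lambda[np.ix_(loc_idx, loc_idx)]
    Lll_inv = np.linalg.inv(Lll)
    Lambda_m = Lss - Lsl @ Lll_inv @ Lsl.T   # Schur complement
    eta_m = eta[sep_idx] - Lsl @ Lll_inv @ eta[loc_idx]
    return Lambda_m, eta_m

# Consistency check: the marginal mean must equal the corresponding
# slice of the full joint mean.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Lambda = A @ A.T + 4 * np.eye(4)   # a valid (SPD) information matrix
eta = rng.standard_normal(4)
Lm, em = marginal_over_separator(Lambda, eta, sep_idx=[0, 3])
full_mean = np.linalg.solve(Lambda, eta)
marg_mean = np.linalg.solve(Lm, em)
print(np.allclose(marg_mean, full_mean[[0, 3]]))  # True
```

Only the (small) separator-sized `Lambda_m, eta_m` needs to be communicated, which is the point of exchanging marginals rather than full maps.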
1710.05772 | 2765252922 | Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. Being visual, it relies on cameras, cheap, lightweight and versatile sensors, and being decentralized, it does not rely on communication to a central ground station. In this work, we integrate state-of-the-art decentralized SLAM components into a new, complete decentralized visual SLAM system. To allow for data association and co-optimization, existing decentralized visual SLAM systems regularly exchange the full map data between all robots, incurring large data transfers at a complexity that scales quadratically with the robot count. In contrast, our method performs efficient data association in two stages: in the first stage a compact full-image descriptor is deterministically sent to only one robot. In the second stage, which is only executed if the first stage succeeded, the data required for relative pose estimation is sent, again to only one robot. Thus, data association scales linearly with the robot count and uses highly compact place representations. For optimization, a state-of-the-art decentralized pose-graph optimization method is used. It exchanges a minimum amount of data which is linear with trajectory overlap. We characterize the resulting system and identify bottlenecks in its components. The system is evaluated on publicly available data and we provide open access to the code. | In order to solve decentralized SLAM, measurements that relate poses between different robots ( inter-robot measurements ) need to be established. A popular type of inter-robot measurement is a direct measurement of the other robot @cite_42 , such as time-of-flight distance measurements @cite_14 or vision-based relative pose estimation @cite_5 . The latter is typically aided with markers that are deployed on the robots.
To the best of our knowledge, most types of direct measurements require specialized hardware (which can precisely measure time-of-flight for example, or visual markers). Furthermore, many types of direct measurements require the robots to be in line of sight, which, in many environments, imposes a major limitation on the set of relative poses that can be established, see fig:loslimit . Limiting relative measurements in this way translates into limitations for higher-level applications that would like to use decentralized SLAM as a tool. | {
"cite_N": [
"@cite_14",
"@cite_5",
"@cite_42"
],
"mid": [
"",
"2011897632",
"2168110171"
],
"abstract": [
"",
"This paper describes a new algorithm for cooperative and persistent simultaneous localization and mapping (SLAM) using multiple robots. Recent pose graph representations have proven very successful for single robot mapping and localization. Among these methods, incremental smoothing and mapping (iSAM) gives an exact incremental solution to the SLAM problem by solving a full nonlinear optimization problem in real-time. In this paper, we present a novel extension to iSAM to facilitate online multi-robot mapping based on multiple pose graphs. Our main contribution is a relative formulation of the relationship between multiple pose graphs that avoids the initialization problem and leads to an efficient solution when compared to a completely global formulation. The relative pose graphs are optimized together to provide a globally consistent multi-robot solution. Efficient access to covariances at any time for relative parameters is provided through iSAM, facilitating data association and loop closing. The performance of the technique is illustrated on various data sets including a publicly available multi-robot data set. Further evaluation is performed in a collaborative helicopter and ground robot experiment.",
"This paper presents a new approach to the multi-robot map-alignment problem that enables teams of robots to build joint maps without initial knowledge of their relative poses. The key contribution of this work is an optimal algorithm for merging (not necessarily overlapping) maps that are created by different robots independently. Relative pose measurements between pairs of robots are processed to compute the coordinate transformation between any two maps. Noise in the robot-to-robot observations, propagated through the map-alignment process, increases the error in the position estimates of the transformed landmarks, and reduces the overall accuracy of the merged map. When there is overlap between the two maps, landmarks that appear twice provide additional information, in the form of constraints, which increases the alignment accuracy. Landmark duplicates are identified through a fast nearest-neighbor matching algorithm. In order to reduce the computational complexity of this search process, a kd-tree is used to represent the landmarks in the original map. The criterion employed for matching any two landmarks is the Mahalanobis distance. As a means of validation, we present experimental results obtained from two robots mapping an area of 4,800 m2"
]
} |
1710.06084 | 2765858924 | This thesis proposes a combinatorial generalization of a nilpotent operator on a vector space. The resulting object is highly natural, with basic connections to a variety of fields in pure mathematics, engineering, and the sciences. For the purpose of exposition we focus the discussion of applications on homological algebra and computation, with additional remarks in lattice theory, linear algebra, and abelian categories. For motivation, we recall that the methods of algebraic topology have driven remarkable progress in the qualitative study of large, noisy bodies of data over the past 15 years. A primary tool in Topological Data Analysis [TDA] is the homological persistence module, which leverages categorical structure to compare algebraic shape descriptors across multiple scales of measurement. Our principal application to computation is a novel algorithm to calculate persistent homology which, in certain cases, improves the state of the art by several orders of magnitude. Included are novel results in discrete, spectral, and algebraic Morse theory, and on the strong maps of matroid theory. The defining theme throughout is interplay between the combinatorial theory of matroids and the algebraic theory of categories. The nature of these interactions is remarkably simple, but their consequences in homological algebra, quiver theory, and combinatorial optimization represent new and widely open fields for interaction between the disciplines. | The standard algorithm to compute persistent homology was introduced for coefficients in the two-element field by Edelsbrunner, Letscher, and Zomorodian in @cite_72 . The adaptation of this algorithm for arbitrary field coefficients was presented by Carlsson and Zomorodian in @cite_28 . The standard algorithm is known to have worst case cubic complexity, a bound that was shown to be sharp by Morozov in @cite_80 . Under certain sparsity conditions, the complexity of this algorithm is less than cubic.
An algorithm by Milosavljević, Morozov, and Skraba has been shown to perform the same computation in @math time, where @math is the matrix-multiplication exponent @cite_3 . | {
"cite_N": [
"@cite_28",
"@cite_80",
"@cite_72",
"@cite_3"
],
"mid": [
"2144044408",
"",
"1957046493",
"2154477220"
],
"abstract": [
"We show that the persistent homology of a filtered d-dimensional simplicial complex is simply the standard homology of a particular graded module over a polynomial ring. Our analysis establishes the existence of a simple description of persistent homology groups over arbitrary fields. It also enables us to derive a natural algorithm for computing persistent homology of spaces in arbitrary dimension over any field. This result generalizes and extends the previously known algorithm that was restricted to subcomplexes of S3 and Z2 coefficients. Finally, our study implies the lack of a simple classification over non-fields. Instead, we give an algorithm for computing individual persistent homology groups over an arbitrary principal ideal domain in any dimension.",
"",
"Topological data analysis provides a multiscale description of the geometry and topology of quantitative data. The persistence landscape is a topological summary that can be easily combined with tools from statistics and machine learning. We give efficient algorithms for calculating persistence landscapes, their averages, and distances between such averages. We discuss an implementation of these algorithms and some related procedures. These are intended to facilitate the combination of statistics and machine learning with topological data analysis. We present an experiment showing that the low-dimensional persistence landscapes of points sampled from spheres (and boxes) of varying dimensions differ.",
"We present a new algorithm for computing zigzag persistent homology, an algebraic structure which encodes changes to homology groups of a simplicial complex over a sequence of simplex additions and deletions. Provided that there is an algorithm that multiplies two n×n matrices in M(n) time, our algorithm runs in O(M(n) + n2 log2 n) time for a sequence of n additions and deletions. In particular, the running time is O(n2.376), by result of Coppersmith and Winograd. The fastest previously known algorithm for this problem takes O(n3) time in the worst case."
]
} |
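The standard reduction algorithm discussed in the related-work entry above can be sketched in a few lines over the two-element field: columns of the filtered boundary matrix are stored as sets of row indices, and each column is reduced by symmetric difference (addition mod 2) until its lowest nonzero row index ("pivot") is unique. This is a generic illustration, not code from any cited paper:

```python
def reduce_boundary(columns):
    """columns[j] = set of row indices of simplex j's boundary, in
    filtration order.  Returns the persistence pairs (birth, death)."""
    low = {}     # lowest row index -> column that owns it as a pivot
    pairs = []
    for j, col in enumerate(columns):
        col = set(col)
        while col and max(col) in low:
            col ^= columns[low[max(col)]]   # add earlier column mod 2
        if col:
            low[max(col)] = j
            columns[j] = col
            pairs.append((max(col), j))
    return pairs

# Filtered triangle: vertices 0,1,2, then edges 01,02,12, then the face.
cols = [set(), set(), set(),        # vertices have empty boundary
        {0, 1}, {0, 2}, {1, 2},     # edges
        {3, 4, 5}]                  # triangle on edges 3,4,5
print(reduce_boundary(cols))        # [(1, 3), (2, 4), (5, 6)]
```

The pair (5, 6) records the 1-cycle born when edge 5 closes the triangle and killed when the face enters; the nested while-loop is the source of the worst-case cubic bound.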
1710.06084 | 2765858924 | This thesis proposes a combinatorial generalization of a nilpotent operator on a vector space. The resulting object is highly natural, with basic connections to a variety of fields in pure mathematics, engineering, and the sciences. For the purpose of exposition we focus the discussion of applications on homological algebra and computation, with additional remarks in lattice theory, linear algebra, and abelian categories. For motivation, we recall that the methods of algebraic topology have driven remarkable progress in the qualitative study of large, noisy bodies of data over the past 15 years. A primary tool in Topological Data Analysis [TDA] is the homological persistence module, which leverages categorical structure to compare algebraic shape descriptors across multiple scales of measurement. Our principal application to computation is a novel algorithm to calculate persistent homology which, in certain cases, improves the state of the art by several orders of magnitude. Included are novel results in discrete, spectral, and algebraic Morse theory, and on the strong maps of matroid theory. The defining theme throughout is interplay between the combinatorial theory of matroids and the algebraic theory of categories. The nature of these interactions is remarkably simple, but their consequences in homological algebra, quiver theory, and combinatorial optimization represent new and widely open fields for interaction between the disciplines. | A number of closely related algorithms share the cubic worst-case bound while demonstrating dramatic improvements in performance empirically. These include the algorithm of Chen and Kerber @cite_6 , and the algorithm of de Silva, Morozov, and Vejdemo-Johansson @cite_73 @cite_84 . Some parallel algorithms for shared memory systems include the algorithm @cite_19 , and the algorithm @cite_65 .
Algorithms for distributed computation include the @cite_10 and the spectral sequence algorithm of Lipsky, Skraba, and Vejdemo-Johansson @cite_49 . | {
"cite_N": [
"@cite_65",
"@cite_84",
"@cite_6",
"@cite_19",
"@cite_49",
"@cite_73",
"@cite_10"
],
"mid": [
"2951695255",
"1967991235",
"2187925611",
"2784708638",
"2269879476",
"2102692026",
"2950069512"
],
"abstract": [
"We present a parallelizable algorithm for computing the persistent homology of a filtered chain complex. Our approach differs from the commonly used reduction algorithm by first computing persistence pairs within local chunks, then simplifying the unpaired columns, and finally applying standard reduction on the simplified matrix. The approach generalizes a technique by Günther, which uses discrete Morse Theory to compute persistence; we derive the same worst-case complexity bound in a more general context. The algorithm employs several practical optimization techniques which are of independent interest. Our sequential implementation of the algorithm is competitive with state-of-the-art methods, and we improve the performance through parallelized computation.",
"Nonlinear dimensionality reduction (NLDR) algorithms such as Isomap, LLE and Laplacian Eigenmaps address the problem of representing high-dimensional nonlinear data in terms of low-dimensional coordinates which represent the intrinsic structure of the data. This paradigm incorporates the assumption that real-valued coordinates provide a rich enough class of functions to represent the data faithfully and efficiently. On the other hand, there are simple structures which challenge this assumption: the circle, for example, is one-dimensional but its faithful representation requires two real coordinates. In this work, we present a strategy for constructing circle-valued functions on a statistical data set. We develop a machinery of persistent cohomology to identify candidates for significant circle-structures in the data, and we use harmonic smoothing and integration to obtain the circle-valued coordinate functions themselves. We suggest that this enriched class of coordinate functions permits a precise NLDR analysis of a broader range of realistic data sets.",
"The persistence diagram of a filtered simplicial complex is usually computed by reducing the boundary matrix of the complex. We introduce a simple optimization technique: by processing the simplices of the complex in decreasing dimension, we can “kill” columns (i.e., set them to zero) without reducing them. This technique completely avoids reduction on roughly half of the columns. We demonstrate that this idea significantly improves the running time of the reduction algorithm in practice. We also give an output-sensitive complexity analysis for the new algorithm which yields to sub-cubic asymptotic bounds under certain assumptions.",
"Combining concepts from topology and algorithms, this book delivers what its title promises: an introduction to the field of computational topology. Starting with motivating problems in both mathematics and computer science and building up from classic topics in geometric and algebraic topology, the third part of the text advances to persistent homology. This point of view is critically important in turning a mostly theoretical field of mathematics into one that is relevant to a multitude of disciplines in the sciences and engineering. The main approach is the discovery of topology through algorithms. The book is ideal for teaching a graduate or advanced undergraduate course in computational topology, as it develops all the background of both the mathematical and algorithmic aspects of the subject from first principles. Thus the text could serve equally well in a course taught in a mathematics department or computer science department.",
"We approach the problem of the computation of persistent homology for large datasets by a divide-and-conquer strategy. Dividing the total space into separate but overlapping components, we are able to limit the total memory residency for any part of the computation, while not degrading the overall complexity much. Locally computed persistence information is then merged from the components and their intersections using a spectral sequence generalizing the Mayer-Vietoris long exact sequence. We describe the Mayer-Vietoris spectral sequence and give details on how to compute with it. This allows us to merge local homological data into the global persistent homology. Furthermore, we detail how the classical topology constructions inherent in the spectral sequence adapt to a persistence perspective, as well as describe the techniques from computational commutative algebra necessary for this extension. The resulting computational scheme suggests a parallelization scheme, and we discuss the communication steps involved in this scheme. Furthermore, the computational scheme can also serve as a guideline for which parts of the boundary matrix manipulation need to co-exist in primary memory at any given time allowing for stratified memory access in single-core computation. The spectral sequence viewpoint also provides easy proofs of a homology nerve lemma as well as a persistent homology nerve lemma. In addition, the algebraic tools we develop to approch persistent homology provide a purely algebraic formulation of kernel, image and cokernel persistence (D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and D. Morozov. Persistent homology for kernels, images, and cokernels. In Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1011-1020. Society for Industrial and Applied Mathematics, 2009.)",
"We consider sequences of absolute and relative homology and cohomology groups that arise naturally for a filtered cell complex. We establish algebraic relationships between their persistence modules, and show that they contain equivalent information. We explain how one can use the existing algorithm for persistent homology to process any of the four modules, and relate it to a recently introduced persistent cohomology algorithm. We present experimental evidence for the practical efficiency of the latter algorithm.",
"Persistent homology is a popular and powerful tool for capturing topological features of data. Advances in algorithms for computing persistent homology have reduced the computation time drastically -- as long as the algorithm does not exhaust the available memory. Following up on a recently presented parallel method for persistence computation on shared memory systems, we demonstrate that a simple adaption of the standard reduction algorithm leads to a variant for distributed systems. Our algorithmic design ensures that the data is distributed over the nodes without redundancy; this permits the computation of much larger instances than on a single machine. Moreover, we observe that the parallelism at least compensates for the overhead caused by communication between nodes, and often even speeds up the computation compared to sequential and even parallel shared memory algorithms. In our experiments, we were able to compute the persistent homology of filtrations with more than a billion (10^9) elements within seconds on a cluster with 32 nodes using less than 10GB of memory per node."
]
} |
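The "killing" (clearing, or twist) optimization quoted in the abstracts above has a compact sketch: processing simplices in decreasing dimension, whenever a column reduces to pivot row i, the column of simplex i is set to zero without ever being reduced, since a pivot simplex is known to be paired. This is an illustrative toy over the two-element field, not an implementation from the cited systems:

```python
def reduce_with_clearing(columns, dims):
    """Column reduction mod 2 with the clearing optimization.
    columns[j] = boundary of simplex j as a set of row indices,
    dims[j] = its dimension.  Returns sorted (birth, death) pairs."""
    low, pairs = {}, []
    for d in sorted(set(dims), reverse=True):       # decreasing dimension
        for j in [k for k in range(len(columns)) if dims[k] == d]:
            col = set(columns[j])
            while col and max(col) in low:
                col ^= columns[low[max(col)]]
            if col:
                low[max(col)] = j
                columns[j] = col
                columns[max(col)] = set()   # clearing: never reduce pivot col
                pairs.append((max(col), j))
    return sorted(pairs)

# Same filtered triangle as in the standard-algorithm sketch.
cols = [set(), set(), set(), {0, 1}, {0, 2}, {1, 2}, {3, 4, 5}]
dims = [0, 0, 0, 1, 1, 1, 2]
print(reduce_with_clearing(cols, dims))   # [(1, 3), (2, 4), (5, 6)]
```

Here the face pairs with edge 5 first, so edge 5's column is cleared and roughly half of the reduction work is skipped, which is the empirical speedup these papers report at scale.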
1710.05627 | 2766207971 | How can a delivery robot navigate reliably to a destination in a new office building, with minimal prior information? To tackle this challenge, this paper introduces a two-level hierarchical approach, which integrates model-free deep learning and model-based path planning. At the low level, a neural-network motion controller, called the intention-net, is trained end-to-end to provide robust local navigation. The intention-net maps images from a single monocular camera and "intentions" directly to robot controls. At the high level, a path planner uses a crude map, e.g., a 2-D floor plan, to compute a path from the robot's current location to the goal. The planned path provides intentions to the intention-net. Preliminary experiments suggest that the learned motion controller is robust against perceptual uncertainty and by integrating with a path planner, it generalizes effectively to new environments and goals. | Deep learning has been immensely successful in many domains @cite_17 @cite_1 @cite_14 @cite_13 @cite_19 @cite_16 @cite_10 . In robot navigation, one use of deep learning is to learn a flight controller that maps perceptual inputs directly to control for local collision-free maneuver of a drone @cite_16 . It addresses the issue of local collision avoidance, but not that of goal-directed global navigation. Another use is to train a system end-to-end for autonomous driving, using monocular camera images @cite_17 , but the system drives along a fixed route and cannot reach arbitrary goals. Some recent work explores model-free end-to-end learning for goal-directed navigation by incorporating the goal as part of the perceptual inputs @cite_9 @cite_0 . One may improve the learning efficiency and navigation performance by adding auxiliary objectives, such as local depth prediction and loop closure classification @cite_21 . 
Without a model, these approaches cannot exploit the sequential decision nature of global navigation effectively and have difficulty in generalizing to complex new environments. To tackle this challenge, our proposed two-level architecture integrates model-free deep learning for local collision avoidance and model-based global path planning, using a crude map. | {
"cite_N": [
"@cite_14",
"@cite_10",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2257979135",
"2524241275",
"2952791429",
"2953248129",
"2522340145",
"2964161785",
"2565902248",
"2160815625",
"2342840547"
],
"abstract": [
"",
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.",
"Learning from demonstration for motion planning is an ongoing research topic. In this paper we present a model that is able to learn the complex mapping from raw 2D-laser range findings and a target position to the required steering commands for the robot. To our best knowledge, this work presents the first approach that learns a target-oriented end-to-end navigation model for a robotic platform. The supervised model training is based on expert demonstrations generated in simulation with an existing motion planner. We demonstrate that the learned navigation model is directly transferable to previously unseen virtual and, more interestingly, real-world environments. It can safely navigate the robot through obstacle-cluttered environments to reach the provided targets. We present an extensive qualitative and quantitative evaluation of the neural network-based motion planner, and compare it to a grid-based global approach, both in simulation and in real-world experiments.",
"Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks leveraging multimodal sensory inputs. In particular we consider jointly learning the goal-driven reinforcement learning problem with auxiliary depth prediction and loop closure classification tasks. This approach can learn to navigate from raw sensory input in complicated 3D mazes, approaching human-level performance even under conditions where the goal location changes frequently. We provide detailed analysis of the agent behaviour, its ability to localise, and its network activity dynamics, showing that the agent implicitly learns key navigation abilities.",
"Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recording from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.",
"Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new target goals, and (2) data inefficiency i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to the task of target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows to better generalize. To address the second issue, we propose AI2-THOR framework, which provides an environment with high-quality 3D scenes and physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and across scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. The supplementary video can be accessed at the following link: this https URL",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them into the real world to achieve real-world flight without a single real training image? We propose a learning method that we call CAD @math RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our method uses single RGB images from a monocular camera, without needing to explicitly reconstruct the 3D geometry of the environment or perform explicit motion planning. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. This policy is trained entirely on simulated images, with a Monte Carlo policy evaluation algorithm that directly optimizes the network's ability to produce collision-free flight. By highly randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world, without requiring the simulator to be particularly realistic or high-fidelity. We evaluate our method by flying a real quadrotor through indoor environments, and further evaluate the design choices in our simulator through a series of ablation studies on depth prediction. For supplementary video see: this https URL",
"Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.",
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)."
]
} |