aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1702.04173 | 2594373924 | The study of canonically complete attribute-based access control (ABAC) languages is relatively new. A canonically complete language is useful as it is functionally complete and provides a "normal form" for policies. However, previous work on canonically complete ABAC languages requires that the set of authorization decisions is totally ordered, which does not accurately reflect the intuition behind the use of the allow, deny and not-applicable decisions in access control. A number of recent ABAC languages use a fourth value and the set of authorization decisions is partially ordered. In this paper, we show how canonical completeness in multi-valued logics can be extended to the case where the set of truth values forms a lattice. This enables us to investigate the canonical completeness of logics having a partially ordered set of truth values, such as Belnap logic, and show that ABAC languages based on Belnap logic, such as PBel, are not canonically complete. We then construct a canonically complete four-valued logic using connections between the generators of the symmetric group (defined over the set of decisions) and unary operators in a canonically suitable logic. Finally, we propose a new authorization language @math , an extension of , which incorporates a lattice-ordered decision set and is canonically complete. We then discuss how the advantages of @math can be leveraged within the framework of XACML. | A function @math is completely specified by a truth table containing @math columns and @math rows. However, not every truth table can be represented by a formula in a given logic @math . @math is said to be functionally complete if for every function @math , there is a formula @math of arity @math whose evaluation corresponds to the truth table. In , we explain why we may regard a tree-structured authorization language as a logic defined by a set of decisions and the set of policy-combining operators. 
In this sense, XACML is not functionally complete @cite_4 , while @cite_6 and PBel are @cite_17 . | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_17"
],
"mid": [
"2412660217",
"1577867985",
"2140730933"
],
"abstract": [
"Attribute-based access control (ABAC) has attracted considerable interest in recent years, resulting in an extensive literature on the subject, including the standardized XML-based language XACML. ABAC policies written in languages like XACML have a tree-like structure in which leaf nodes are associated with authorization decisions and non-leaf nodes are associated with decision-combining algorithms. In this paper, we consider the expressive power of the rule- and policy-combining algorithms defined by the XACML standard. In particular, we identify unexpected dependencies between the combining algorithms and demonstrate that there exist useful combining algorithms that cannot be expressed by any combination of XACML combining algorithms. We briefly discuss the decision operators defined in the PTaCL language, an abstract language for defining ABAC policies, and the advantages of replacing the XACML combining algorithms with the PTaCL operators. Following this, we review results in the literature on multi-valued logic and introduce the notion of canonically complete policy languages. We discuss important practical advantages of canonically complete policy languages, primarily in simplifying policy specification and providing efficiently enforceable policies. Finally, we propose a new policy authorization language PTaCL which is canonically complete and show it is capable of expressing any arbitrary policy in a normal form and discuss the advantages of using PTaCL over existing policy languages such as XACML and PTaCL.",
"Many languages and algebras have been proposed in recent years for the specification of authorization policies. For some proposals, such as XACML, the main motivation is to address real-world requirements, typically by providing a complex policy language with somewhat informal evaluation methods; others try to provide a greater degree of formality --- particularly with respect to policy evaluation --- but support far fewer features. In short, there are very few proposals that combine a rich set of language features with a well-defined semantics, and even fewer that do this for authorization policies for attribute-based access control in open environments. In this paper, we decompose the problem of policy specification into two distinct sub-languages: the policy target language (PTL) for target specification, which determines when a policy should be evaluated; and the policy composition language (PCL) for building more complex policies from existing ones. We define syntax and semantics for two such languages and demonstrate that they can be both simple and expressive. PTaCL, the language obtained by combining the features of these two sub-languages, supports the specification of a wide range of policies. However, the power of PTaCL means that it is possible to define policies that could produce unexpected results. We provide an analysis of how PTL should be restricted and how policies written in PCL should be evaluated to minimize the likelihood of undesirable results.",
"Access control to IT systems increasingly relies on the ability to compose policies. Hence there is benefit in any framework for policy composition that is intuitive, formal (and so “analyzable” and “implementable”), expressive, independent of specific application domains, and yet able to be extended to create domain-specific instances. Here we develop such a framework based on Belnap logic. An access-control policy is interpreted as a four-valued predicate that maps access requests to either grant, deny, conflict, or unspecified -- the four values of the Belnap bilattice. We define an expressive access-control policy language PBel, having composition operators based on the operators of Belnap logic. Natural orderings on policies are obtained by lifting the truth and information orderings of the Belnap bilattice. These orderings lead to a query language in which policy analyses, for example, conflict freedom, can be specified. Policy analysis is supported through a reduction of the validity of policy queries to the validity of propositional formulas on predicates over access requests. We evaluate our approach through firewall policy and RBAC policy examples, and discuss domain-specific and generic extensions of our policy language."
]
} |
1702.04173 | 2594373924 | The study of canonically complete attribute-based access control (ABAC) languages is relatively new. A canonically complete language is useful as it is functionally complete and provides a "normal form" for policies. However, previous work on canonically complete ABAC languages requires that the set of authorization decisions is totally ordered, which does not accurately reflect the intuition behind the use of the allow, deny and not-applicable decisions in access control. A number of recent ABAC languages use a fourth value and the set of authorization decisions is partially ordered. In this paper, we show how canonical completeness in multi-valued logics can be extended to the case where the set of truth values forms a lattice. This enables us to investigate the canonical completeness of logics having a partially ordered set of truth values, such as Belnap logic, and show that ABAC languages based on Belnap logic, such as PBel, are not canonically complete. We then construct a canonically complete four-valued logic using connections between the generators of the symmetric group (defined over the set of decisions) and unary operators in a canonically suitable logic. Finally, we propose a new authorization language @math , an extension of , which incorporates a lattice-ordered decision set and is canonically complete. We then discuss how the advantages of @math can be leveraged within the framework of XACML. | Jobe defined a canonically complete @math -valued logic @cite_8 . The operators and the construction of the unary selection operators using these operators are given in Appendix . The expression for @math above could be expressed in normal form, providing we could find suitable unary operators for a @math -valued logic. In we explain how to produce a suitable set of unary operators for an @math -valued logic. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2086979258"
],
"abstract": [
"This paper examines the questions of functional completeness and canonical completeness in many-valued logics, offering proofs for several theorems on these topics. A skeletal description of the domain for these theorems is as follows. We are concerned with a proper logic L, containing a denumerably infinite class of propositional symbols, P, Q, R, …, a finite set of unary operations, U_1, U_2, …, U_b, and a finite set of binary operations, B_1, B_2, …, B_c. Well-formed formulas in L are recursively defined by the conventional set of rules. With L there is associated an integer, M ≧ 2, and the integers m, where (1 ≦ m ≦ M), are the truth values of L."
]
} |
1702.04173 | 2594373924 | The study of canonically complete attribute-based access control (ABAC) languages is relatively new. A canonically complete language is useful as it is functionally complete and provides a "normal form" for policies. However, previous work on canonically complete ABAC languages requires that the set of authorization decisions is totally ordered, which does not accurately reflect the intuition behind the use of the allow, deny and not-applicable decisions in access control. A number of recent ABAC languages use a fourth value and the set of authorization decisions is partially ordered. In this paper, we show how canonical completeness in multi-valued logics can be extended to the case where the set of truth values forms a lattice. This enables us to investigate the canonical completeness of logics having a partially ordered set of truth values, such as Belnap logic, and show that ABAC languages based on Belnap logic, such as PBel, are not canonically complete. We then construct a canonically complete four-valued logic using connections between the generators of the symmetric group (defined over the set of decisions) and unary operators in a canonically suitable logic. Finally, we propose a new authorization language @math , an extension of , which incorporates a lattice-ordered decision set and is canonically complete. We then discuss how the advantages of @math can be leveraged within the framework of XACML. | There are several tree-structured ABAC languages in the literature, including the OASIS standard XACML, PBel and PTaCL @cite_16 @cite_6 @cite_17 . A number of policy algebras have also been defined, which have some similarities with tree-structured languages. The semantics of a policy are defined in terms of sets of authorized and denied requests @cite_3 @cite_13 @cite_7 @cite_1 , and policy operators are defined in terms of set operations such as intersection and union. 
These languages differ to some extent in the choices of @math and the set of operators that are used. XACML, for example, defines several rule- and policy-combining algorithms (which may be regarded as binary operators), but no unary operators. An XACML rule is equivalent to an atomic policy. PBel and PTaCL prefer to define a rather small set of operators: PTaCL defines a single binary operator and two unary operators, whereas PBel defines two binary operators and a single unary operator. XACML and PTaCL use a three-valued decision set comprising @math , @math and @math , to which PBel adds @math , which represents "conflict". | {
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_6",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2069728203",
"2057345152",
"1577867985",
"5684571",
"2048056473",
"2140730933"
],
"abstract": [
"",
"Collaborative and distributed applications, such as dynamic coalitions and virtualized grid computing, often require integrating access control policies of collaborating parties. Such an integration must be able to support complex authorization specifications and the fine-grained integration requirements that the various parties may have. In this paper, we introduce an algebra for fine-grained integration of sophisticated policies. The algebra, which consists of three binary and two unary operations, is able to support the specification of a large variety of integration constraints. To assess the expressive power of our algebra, we introduce a notion of completeness and prove that our algebra is complete with respect to this notion. We then propose a framework that uses the algebra for the fine-grained integration of policies expressed in XACML. We also present a methodology for generating the actual integrated XACML policy, based on the notion of Multi-Terminal Binary Decision Diagrams.",
"Despite considerable advancements in the area of access control and authorization languages, current approaches to enforcing access control are all based on monolithic and complete specifications. This assumption is too restrictive when access control restrictions to be enforced come from the combination of different policy specifications, each possibly under the control of independent authorities, and where the specifics of some component policies may not even be known apriori. Turning individual specifications into a coherent policy to be fed into the access control system requires a nontrivial combination and translation process. This article addresses the problem of combining authorization specifications that may be independently stated, possibly in different languages and according to different policies. We propose an algebra of security policies together with its formal semantics and illustrate how to formulate complex policies in the algebra and reason about them. A translation of policy expressions into equivalent logic programs is illustrated, which provides the basis for the implementation of the algebra. The algebra's expressiveness is analyzed through a comparison with first-order logic.",
"Many languages and algebras have been proposed in recent years for the specification of authorization policies. For some proposals, such as XACML, the main motivation is to address real-world requirements, typically by providing a complex policy language with somewhat informal evaluation methods; others try to provide a greater degree of formality --- particularly with respect to policy evaluation --- but support far fewer features. In short, there are very few proposals that combine a rich set of language features with a well-defined semantics, and even fewer that do this for authorization policies for attribute-based access control in open environments. In this paper, we decompose the problem of policy specification into two distinct sub-languages: the policy target language (PTL) for target specification, which determines when a policy should be evaluated; and the policy composition language (PCL) for building more complex policies from existing ones. We define syntax and semantics for two such languages and demonstrate that they can be both simple and expressive. PTaCL, the language obtained by combining the features of these two sub-languages, supports the specification of a wide range of policies. However, the power of PTaCL means that it is possible to define policies that could produce unexpected results. We provide an analysis of how PTL should be restricted and how policies written in PCL should be evaluated to minimize the likelihood of undesirable results.",
"",
"Security-sensitive environments protect their information resources against unauthorized use by enforcing access control mechanisms driven by access control policies. Due to the need to compare, contrast, and compose such protected information resources, access control policies regulating their manipulation need to be compared, contrasted, and composed. An algebra for manipulating such access control policies at a higher (propositional) level, where the operations of the algebra are abstracted from their specification details, is the subject of this paper. This algebra is applicable to policies that have controlled nondeterminism and all or nothing assignments of access privileges in their specification. These requirements reflect current practices in discretionary and role-based access control models. Therefore, the proposed algebra can be used to reason about role-based access control policies combined with other forms of discretionary policies. We show how to use algebraic identities to reason about consistency, completeness, and determinacy of composed policies using similar properties of their constituents.",
"Access control to IT systems increasingly relies on the ability to compose policies. Hence there is benefit in any framework for policy composition that is intuitive, formal (and so “analyzable” and “implementable”), expressive, independent of specific application domains, and yet able to be extended to create domain-specific instances. Here we develop such a framework based on Belnap logic. An access-control policy is interpreted as a four-valued predicate that maps access requests to either grant, deny, conflict, or unspecified -- the four values of the Belnap bilattice. We define an expressive access-control policy language PBel, having composition operators based on the operators of Belnap logic. Natural orderings on policies are obtained by lifting the truth and information orderings of the Belnap bilattice. These orderings lead to a query language in which policy analyses, for example, conflict freedom, can be specified. Policy analysis is supported through a reduction of the validity of policy queries to the validity of propositional formulas on predicates over access requests. We evaluate our approach through firewall policy and RBAC policy examples, and discuss domain-specific and generic extensions of our policy language."
]
} |
1702.04173 | 2594373924 | The study of canonically complete attribute-based access control (ABAC) languages is relatively new. A canonically complete language is useful as it is functionally complete and provides a "normal form" for policies. However, previous work on canonically complete ABAC languages requires that the set of authorization decisions is totally ordered, which does not accurately reflect the intuition behind the use of the allow, deny and not-applicable decisions in access control. A number of recent ABAC languages use a fourth value and the set of authorization decisions is partially ordered. In this paper, we show how canonical completeness in multi-valued logics can be extended to the case where the set of truth values forms a lattice. This enables us to investigate the canonical completeness of logics having a partially ordered set of truth values, such as Belnap logic, and show that ABAC languages based on Belnap logic, such as PBel, are not canonically complete. We then construct a canonically complete four-valued logic using connections between the generators of the symmetric group (defined over the set of decisions) and unary operators in a canonically suitable logic. Finally, we propose a new authorization language @math , an extension of , which incorporates a lattice-ordered decision set and is canonically complete. We then discuss how the advantages of @math can be leveraged within the framework of XACML. | The main difference between existing languages, however, is the extent to which they are complete in the senses defined in @cite_4 . We summarize these differences in Table , where CS, FC and CC denote canonically suitable, functionally complete and canonically complete, respectively. In , we prove that PBel is canonically suitable but not canonically complete. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2412660217"
],
"abstract": [
"Attribute-based access control (ABAC) has attracted considerable interest in recent years, resulting in an extensive literature on the subject, including the standardized XML-based language XACML. ABAC policies written in languages like XACML have a tree-like structure in which leaf nodes are associated with authorization decisions and non-leaf nodes are associated with decision-combining algorithms. In this paper, we consider the expressive power of the rule- and policy-combining algorithms defined by the XACML standard. In particular, we identify unexpected dependencies between the combining algorithms and demonstrate that there exist useful combining algorithms that cannot be expressed by any combination of XACML combining algorithms. We briefly discuss the decision operators defined in the PTaCL language, an abstract language for defining ABAC policies, and the advantages of replacing the XACML combining algorithms with the PTaCL operators. Following this, we review results in the literature on multi-valued logic and introduce the notion of canonically complete policy languages. We discuss important practical advantages of canonically complete policy languages, primarily in simplifying policy specification and providing efficiently enforceable policies. Finally, we propose a new policy authorization language PTaCL which is canonically complete and show it is capable of expressing any arbitrary policy in a normal form and discuss the advantages of using PTaCL over existing policy languages such as XACML and PTaCL."
]
} |
1702.04173 | 2594373924 | The study of canonically complete attribute-based access control (ABAC) languages is relatively new. A canonically complete language is useful as it is functionally complete and provides a "normal form" for policies. However, previous work on canonically complete ABAC languages requires that the set of authorization decisions is totally ordered, which does not accurately reflect the intuition behind the use of the allow, deny and not-applicable decisions in access control. A number of recent ABAC languages use a fourth value and the set of authorization decisions is partially ordered. In this paper, we show how canonical completeness in multi-valued logics can be extended to the case where the set of truth values forms a lattice. This enables us to investigate the canonical completeness of logics having a partially ordered set of truth values, such as Belnap logic, and show that ABAC languages based on Belnap logic, such as PBel, are not canonically complete. We then construct a canonically complete four-valued logic using connections between the generators of the symmetric group (defined over the set of decisions) and unary operators in a canonically suitable logic. Finally, we propose a new authorization language @math , an extension of , which incorporates a lattice-ordered decision set and is canonically complete. We then discuss how the advantages of @math can be leveraged within the framework of XACML. | For example, defines three policy operators @math and @math . To express XACML's deny- and permit-overrides in requires significant effort. For convenience, we introduce the operator @math : [ d ∨ d' ≝ ¬(¬d ∧ ¬d'). ] It is then possible to show that the operators @math and @math are equivalent to the permit- and deny-overrides policy-combining algorithms in XACML. As can be seen, the definitions of these operators in terms of the operators are complex, and, more generally, it is a non-trivial task to derive such formulae. 
Disjunctive normal form in propositional logic makes it trivial to write down a logical formula, using only conjunction, disjunction and negation, that is equivalent to an arbitrary Boolean function expressed in the form of a truth table. Similarly, a canonically complete ABAC language, such as @cite_4 , makes it possible to write down a policy in normal form from its decision table. In this paper, we show that there exist @math -valued canonically complete logics in which the set of truth values forms a lattice. We discuss why and how this can simplify policy generation in . | {
"cite_N": [
"@cite_4"
],
"mid": [
"2412660217"
],
"abstract": [
"Attribute-based access control (ABAC) has attracted considerable interest in recent years, resulting in an extensive literature on the subject, including the standardized XML-based language XACML. ABAC policies written in languages like XACML have a tree-like structure in which leaf nodes are associated with authorization decisions and non-leaf nodes are associated with decision-combining algorithms. In this paper, we consider the expressive power of the rule- and policy-combining algorithms defined by the XACML standard. In particular, we identify unexpected dependencies between the combining algorithms and demonstrate that there exist useful combining algorithms that cannot be expressed by any combination of XACML combining algorithms. We briefly discuss the decision operators defined in the PTaCL language, an abstract language for defining ABAC policies, and the advantages of replacing the XACML combining algorithms with the PTaCL operators. Following this, we review results in the literature on multi-valued logic and introduce the notion of canonically complete policy languages. We discuss important practical advantages of canonically complete policy languages, primarily in simplifying policy specification and providing efficiently enforceable policies. Finally, we propose a new policy authorization language PTaCL which is canonically complete and show it is capable of expressing any arbitrary policy in a normal form and discuss the advantages of using PTaCL over existing policy languages such as XACML and PTaCL."
]
} |
1702.04174 | 2951082072 | The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges. | Common binary classifiers applied to this problem include Artificial Neural Networks (ANN), Boosting techniques, and Support Vector Machines (SVM). ANNs were the most popular method in earlier works (e.g. @cite_8 , @cite_31 ). Boosting algorithms, such as AdaBoost and GentleBoost, have been a common choice for AU recognition (e.g. @cite_3 , @cite_26 ). Boosting algorithms are simple and quick to train. They have fewer parameters than SVM or ANN, and can be less prone to overfitting. They implicitly perform feature selection, which is desirable for handling high-dimensional data and speeding up inference, and can handle multiclass classification. SVMs are currently the most popular choice (e.g. @cite_15 , @cite_5 , @cite_1 ). 
SVMs provide good performance and can be non-linear; parameter optimisation is relatively easy, since efficient implementations are readily available, and the choice of kernel function offers considerable design flexibility. | {
"cite_N": [
"@cite_26",
"@cite_15",
"@cite_8",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_31"
],
"mid": [
"2017405810",
"2011723974",
"2167015621",
"2132723029",
"2006217447",
"",
"2161117158"
],
"abstract": [
"It is well known that how to extract dynamic features is a key issue for video-based face analysis. In this paper, we present a novel approach of facial expression recognition based on the encoded dynamic features. In order to capture the dynamic characteristics of facial events, we design the dynamic Haar-like features to represent the temporal variations of facial appearance. Inspired by the binary pattern coding, we further encode the dynamic features into the binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally, AdaBoost is performed to learn a set of discriminating encoded dynamic features for facial expression recognition. We conduct the experiments on the CMU expression database, and the experiment result shows the power of the proposed method. We also extend this method to action unit (AU) recognition, and get a promising performance.",
"For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a “dense” form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a “coarse” form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances to a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This begs the question: Aside from countering against illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment (subject-dependent active appearance models versus subject-independent CLMs) on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.",
"Previous work suggests that Gabor-wavelet-based methods can achieve high sensitivity and specificity for emotion-specified expressions (e.g., happy, sad) and single action units (AUs) of the Facial Action Coding System (FACS). This paper evaluates a Gabor-wavelet-based method to recognize AUs in image sequences of increasing complexity. A recognition rate of 83% is obtained for three single AUs when image sequences contain homogeneous subjects and are without observable head motion. The accuracy of AU recognition decreases to 32% when the number of AUs increases to nine and the image sequences consist of AU combinations, head motion, and non-homogeneous subjects. For comparison, an average recognition rate of 87.6% is achieved for the geometry-feature-based method. The best recognition is a rate of 92.7% obtained by combining Gabor wavelets and geometry features.",
"This paper presents a framework to automatically measure the intensity of naturally occurring facial actions. Naturalistic expressions are non-posed spontaneous actions. The facial action coding system (FACS) is the gold standard technique for describing facial expressions, which are parsed as comprehensive, nonoverlapping action units (Aus). AUs have intensities ranging from absent to maximal on a six-point metric (i.e., 0 to 5). Despite the efforts in recognizing the presence of non-posed action units, measuring their intensity has not been studied comprehensively. In this paper, we develop a framework to measure the intensity of AU12 (lip corner puller) and AU6 (cheek raising) in videos captured from infant-mother live face-to-face communications. The AU12 and AU6 are the most challenging case of infant's expressions (e.g., low facial texture in infant's face). One of the problems in facial image analysis is the large dimensionality of the visual data. Our approach for solving this problem is to utilize the spectral regression technique to project high dimensionality facial images into a low dimensionality space. Represented facial images in the low dimensional space are utilized to train support vector machine classifiers to predict the intensity of action units. Analysis of 18 minutes of captured video of non-posed facial expressions of several infants and mothers shows significant agreement between a human FACS coder and our approach, which makes it an efficient approach for automated measurement of the intensity of non-posed facial action units.",
"Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity.",
"",
"This work describes a new pre-processing step to classify facial expression. Previous works suggest that Gabor wavelets applied to recognize facial expression images subtracted from neutral face from the same subject could achieve a good recognition rate under controlled conditions such as eye and mouth alignment. We propose a recognition system where the Gabor kernels are applied on facial expression subtracted from an averaged neutral face. A fast pre-processing technique that generates a small dimension output data is also proposed. A correct recognition rate of 86.6% is obtained in a 7 upper face actions and 81.6% in a 7 lower face actions detection problem using a neural network based classifier. The performance is evaluated in a heterogeneous subject database with head motion and lighting variations."
]
} |
1702.04174 | 2951082072 | The field of Automatic Facial Expression Analysis has grown rapidly in recent years. However, despite progress in new approaches as well as benchmarking efforts, most evaluations still focus on either posed expressions, near-frontal recordings, or both. This makes it hard to tell how existing expression recognition approaches perform under conditions where faces appear in a wide range of poses (or camera views), displaying ecologically valid expressions. The main obstacle for assessing this is the availability of suitable data, and the challenge proposed here addresses this limitation. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) extends FERA 2015 to the estimation of Action Units occurrence and intensity under different camera views. In this paper we present the third challenge in automatic recognition of facial expressions, to be held in conjunction with the 12th IEEE conference on Face and Gesture Recognition, May 2017, in Washington, United States. Two sub-challenges are defined: the detection of AU occurrence, and the estimation of AU intensity. In this work we outline the evaluation protocol, the data used, and the results of a baseline method for both sub-challenges. | Some approaches use the confidence of a (binary) frame-based AU activation classifier to estimate AU intensity. The rationale is that the lower the intensity is, the harder the classification will be. For example, used the distance of the test sample to the SVM separating hyperplane @cite_6 , while used the confidence of the decision given by AdaBoost @cite_3 . | {
"cite_N": [
"@cite_3",
"@cite_6"
],
"mid": [
"2006217447",
"2159668072"
],
"abstract": [
"Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity.",
"Spontaneous facial expressions differ from posed expressions in both which muscles are moved, and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user independent fully automatic system for real time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and coded each frame with respect to 20 Action units. The approach applies machine learning methods such as support vector machines and AdaBoost, to texture-based image representations. The output margin for the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics which were previously intractable by human coding."
]
} |
1702.04121 | 2587899849 | Over the past decade there has been considerable interest in spectral algorithms for learning Predictive State Representations (PSRs). Spectral algorithms have appealing theoretical guarantees; however, the resulting models do not always perform well on inference tasks in practice. One reason for this behavior is the mismatch between the intended task (accurate filtering or prediction) and the loss function being optimized by the algorithm (estimation error in model parameters). A natural idea is to improve performance by refining PSRs using an algorithm such as EM. Unfortunately it is not obvious how to apply an EM style algorithm in the context of PSRs as the Log Likelihood is not well defined for all PSRs. We show that it is possible to overcome this problem using ideas from Predictive State Inference Machines. We combine spectral algorithms for PSRs as a consistent and efficient initialization with PSIM-style updates to refine the resulting model parameters. By combining these two ideas we develop Inference Gradients, a simple, fast, and robust method for practical learning of PSRs. Inference Gradients performs gradient descent in the PSR parameter space to optimize an inference-based loss function like PSIM. Because Inference Gradients uses a spectral initialization we get the same consistency benefits as PSRs. We show that Inference Gradients outperforms both PSRs and PSIMs on real and synthetic data sets. | @cite_3 propose a gradient descent algorithm for improving the performance of PSRs where they optimize a proxy to the log likelihood. Given a PSR @math and a sequence of observations @math , the negative log likelihood is: Because the log likelihood is not well defined for an arbitrary PSR, choose to optimize a related loss function which rectifies and re-normalizes each predicted observation probability distribution: @math where @math is the space of all observation sequences with the same length as @math .
This yields the gradient. This expression is analytically intractable, so they propose a stochastic gradient descent procedure where they approximate this expression using contrastive divergence. They show that this approach can be used to significantly improve the performance of a PSR initialized via spectral techniques. This approach allows for gradient descent on a surrogate to the log loss; however, the resulting algorithm has a complex update rule and can be slow in practice. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2566786680"
],
"abstract": [
"Predictive state representations (PSRs) model dynamical systems using appropriately chosen predictions about future observations as a representation of the current state. In contrast to the hidden states posited by HMMs or RNNs, PSR states are directly observable in the training data; this gives rise to a moment-matching spectral algorithm for learning PSRs that is computationally efficient and statistically consistent when the model complexity matches that of the true system generating the data. In practice, however, model mismatch is inevitable and while spectral learning remains appealingly fast and simple it may fail to find optimal models. To address this problem, we investigate the use of gradient methods for improving spectrally-learned PSRs. We show that only a small amount of additional gradient optimization can lead to significant performance gains, and moreover that initializing gradient methods with the spectral learning solution yields better models in significantly less time than starting from scratch."
]
} |
1702.04121 | 2587899849 | Over the past decade there has been considerable interest in spectral algorithms for learning Predictive State Representations (PSRs). Spectral algorithms have appealing theoretical guarantees; however, the resulting models do not always perform well on inference tasks in practice. One reason for this behavior is the mismatch between the intended task (accurate filtering or prediction) and the loss function being optimized by the algorithm (estimation error in model parameters). A natural idea is to improve performance by refining PSRs using an algorithm such as EM. Unfortunately it is not obvious how to apply an EM style algorithm in the context of PSRs as the Log Likelihood is not well defined for all PSRs. We show that it is possible to overcome this problem using ideas from Predictive State Inference Machines. We combine spectral algorithms for PSRs as a consistent and efficient initialization with PSIM-style updates to refine the resulting model parameters. By combining these two ideas we develop Inference Gradients, a simple, fast, and robust method for practical learning of PSRs. Inference Gradients performs gradient descent in the PSR parameter space to optimize an inference-based loss function like PSIM. Because Inference Gradients uses a spectral initialization we get the same consistency benefits as PSRs. We show that Inference Gradients outperforms both PSRs and PSIMs on real and synthetic data sets. | @cite_8 propose a two-pass algorithm for learning a dynamical system model, where they first learn a PSR using a spectral algorithm, then subsequently convert the PSR into a valid HMM using an exterior point method. Their approach is different from ours as they produce an HMM as the final model rather than a PSR. Additionally, their algorithm focuses on model parameter optimization rather than optimal performance on inference tasks. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2402992405"
],
"abstract": [
"Probabilistic latent-variable models are a fundamental tool in statistics and machine learning. Despite their widespread use, identifying the parameters of basic latent variable models continues to be an extremely challenging problem. Traditional maximum likelihood-based learning algorithms find valid parameters, but suffer from high computational cost, slow convergence, and local optima. In contrast, recently developed spectral algorithms are computationally efficient and provide strong statistical guarantees, but are not guaranteed to find valid parameters. In this work, we introduce a two-stage learning algorithm for latent variable models. We first use a spectral method of moments algorithm to find a solution that is close to the optimal solution but not necessarily in the valid set of model parameters. We then incrementally refine the solution via an exterior point method until a local optima that is arbitrarily near the valid set of parameters is found. We perform several experiments on synthetic and real-world data and show that our approach is more accurate than previous work, especially when training data is limited."
]
} |
1702.03654 | 2950607315 | Agglutinative languages such as Turkish, Finnish and Hungarian require morphological disambiguation before further processing due to the complex morphology of words. A morphological disambiguator is used to select the correct morphological analysis of a word. Morphological disambiguation is important because it generally is one of the first steps of natural language processing and its performance affects subsequent analyses. In this paper, we propose a system that uses deep learning techniques for morphological disambiguation. Many of the state-of-the-art results in computer vision, speech recognition and natural language processing have been obtained through deep learning models. However, applying deep learning techniques to morphologically rich languages is not well studied. In this work, while we focus on Turkish morphological disambiguation, we also present results for French and German in order to show that the proposed architecture achieves high accuracy with no language-specific feature engineering or additional resource. In the experiments, we achieve 84.12%, 88.35% and 93.78% morphological disambiguation accuracy among the ambiguous words for Turkish, German and French respectively. | In a natural language processing pipeline morphological disambiguation can be considered at the same level as POS tagging. In order to perform POS tagging in English, various approaches such as rule-based models @cite_12 , statistical models @cite_0 , maximum entropy models @cite_1 , HMMs @cite_34 , CRFs @cite_8 and decision trees @cite_13 are proposed. However, morphological disambiguation is a much harder problem in general due to the fact that it requires the classification of both roots, suffixes and the corresponding labels. Moreover, compared to an agglutinative language such as Turkish, English words can take on a limited number of word forms and part-of-speech tags.
Yüret and Türe ( Yuret:2006:LMD:1220835.1220877 ) observe that more than ten thousand tag types exist in a corpus comprised of a million Turkish words. Thus, due to the high number of possible tags and the number of possible analyses in languages with productive morphology, morphological disambiguation is quite different from part-of-speech tagging in English. | {
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_34",
"@cite_13",
"@cite_12"
],
"mid": [
"2147880316",
"1773803948",
"2117400858",
"2046224275",
"2114822977",
"2157693466"
],
"abstract": [
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"This paper presents a statistical model which trains from a corpus annotated with Part Of Speech tags and assigns them to previously unseen text with state of the art accuracy. The model can be classified as a Maximum Entropy model and simultaneously uses many contextual features to predict the POS tag. Furthermore, this paper demonstrates the use of specialized features to model difficult tagging decisions, discusses the corpus consistency problems discovered during the implementation of these features, and proposes a training strategy that mitigates these problems",
"Recently, there has been a rebirth of empiricism in the field of natural language processing. Manual encoding of linguistic information is being challenged by automated corpus-based learning as a method of providing a natural language processing system with linguistic knowledge. Although corpus-based approaches have been successful in many different areas of natural language processing, it is often the case that these methods capture the linguistic information they are modelling indirectly in large opaque tables of statistics. This can make it difficult to analyze, understand and improve the ability of these approaches to model underlying linguistic behavior. In this paper, we will describe a simple rule-based approach to automated learning of linguistic knowledge. This approach has been shown for a number of tasks to capture information in a clearer and more direct fashion without a compromise in performance. We present a detailed case study of this learning method applied to part-of-speech tagging.",
"We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96 . We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment.",
"",
"Automatic part of speech tagging is an area of natural language processing where statistical techniques have been more successful than rule-based methods. In this paper, we present a simple rule-based part of speech tagger which automatically acquires its rules and tags with accuracy comparable to stochastic taggers. The rule-based tagger has many advantages over these taggers, including: a vast reduction in stored information required, the perspicuity of a small set of meaningful rules, ease of finding and implementing improvements to the tagger, and better portability from one tag set, corpus genre or language to another. Perhaps the biggest contribution of this work is in demonstrating that the stochastic method is not the only viable method for part of speech tagging. The fact that a simple rule-based tagger that automatically learns its rules can perform so well should offer encouragement for researchers to further explore rule-based tagging, searching for a better and more expressive set of rule templates and other variations on the simple but effective theme described below."
]
} |
1702.03644 | 2951115266 | Kernel regression is an essential and ubiquitous tool for non-parametric data analysis, particularly popular among time series and spatial data. However, the central operation which is performed many times, evaluating a kernel on the data set, takes linear time. This is impractical for modern large data sets. In this paper we describe coresets for kernel regression: compressed data sets which can be used as proxy for the original data and have provably bounded worst case error. The size of the coresets are independent of the raw number of data points, rather they only depend on the error guarantee, and in some cases the size of domain and amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that they incur negligible error, can be constructed extremely efficiently, and allow for great computational gains. | This is the first work to address sample complexity and coreset size for Nadaraya-Watson kernel regression. There is an enormous body of work on other types of coresets, see the recent survey on coresets @cite_23 , including many for parametric regression variants like least-square regression @cite_19 and @math regression @cite_7 . | {
"cite_N": [
"@cite_19",
"@cite_7",
"@cite_23"
],
"mid": [
"1992065791",
"2010371250",
"2230030897"
],
"abstract": [
"We study the (constrained) least-squares regression as well as multiple response least-squares regression and ask the question of whether a subset of the data, a coreset, suffices to compute a good approximate solution to the regression. We give deterministic, low-order polynomial-time algorithms to construct such coresets with approximation guarantees, together with lower bounds indicating that there is not much room for improvement upon our results.",
"The @math regression problem takes as input a matrix @math , a vector @math , and a number @math , and it returns as output a number @math and a vector @math such that @math . In this paper, we construct coresets and obtain an efficient two-stage sampling-based approximation algorithm for the very overconstrained ( @math ) version of this classical problem, for all @math . The first stage of our algorithm nonuniformly samples @math rows of @math and the corresponding elements of @math , and then it solves the @math regression problem on the sample; we prove this is an 8-approximation. The second stage of our algorithm uses the output of the first stage to resample @math constraints, and then it solves the @math regression problem on the new sample; we prove this is a @math -approximation. Our algorithm unifies, improves upon, and extends the existing algorithms for special cases of @math regression, namely, @math [K. L. Clarkson, in Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, SIAM, Philadelphia, 2005, pp. 257-266; P. Drineas, M. W. Mahoney, and S. Muthukrishnan, in Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms, ACM, New York, SIAM, Philadelphia, 2006, pp. 1127-1136]. In the course of proving our result, we develop two concepts—well-conditioned bases and subspace-preserving sampling—that are of independent interest.",
"Geometric data summarization has become an essential tool in both geometric approximation algorithms and where geometry intersects with big data problems. In linear or near-linear time large data sets can be compressed into a summary, and then more intricate algorithms can be run on the summaries whose results approximate those of the full data set. Coresets and sketches are the two most important classes of these summaries. We survey five types of coresets and sketches: shape-fitting, density estimation, high-dimensional vectors, high-dimensional point sets matrices, and clustering."
]
} |
1702.03644 | 2951115266 | Kernel regression is an essential and ubiquitous tool for non-parametric data analysis, particularly popular among time series and spatial data. However, the central operation which is performed many times, evaluating a kernel on the data set, takes linear time. This is impractical for modern large data sets. In this paper we describe coresets for kernel regression: compressed data sets which can be used as proxy for the original data and have provably bounded worst case error. The size of the coresets are independent of the raw number of data points, rather they only depend on the error guarantee, and in some cases the size of domain and amount of smoothing. We evaluate our methods on very large time series and spatial data, and demonstrate that they incur negligible error, can be constructed extremely efficiently, and allow for great computational gains. | The only non-parametric regression coreset we are aware of is a form of kernel regression @cite_24 related to the smallest enclosing ball. It predicts the value at a point @math as @math with loss function @math , for a parameter @math . Then it finds a set of @math non-zero @math parameters (corresponding with points in the coreset) so many points satisfy @math . No implementations were attempted. | {
"cite_N": [
"@cite_24"
],
"mid": [
"1497808028"
],
"abstract": [
"A rigid coreset minimum enclosing ball training machine for kernel regression estimation was proposed. First, it transfers the kernel regression estimation machine problem into a center-constrained minimum enclosing ball representation form, and subsequently trains the kernel methods using the proposed MEB algorithm. The primal variables of the kernel methods are recovered via KKT conditions. Then, detailed theoretical analysis and main theoretical results of our new algorithm are given. It can be concluded that our proposed MEB training algorithm is independent of sample dimension and the time complexity is linear in sample numbers, which greatly cuts down the complexity level and is expected to speed up the learning process obviously. Finally, comments about the future development directions are discussed."
]
} |
1702.03767 | 2952653199 | Non-technical losses (NTL) occur during the distribution of electricity in power grids and include, but are not limited to, electricity theft and faulty meters. In emerging countries, they may range up to 40% of the total electricity distributed. In order to detect NTLs, machine learning methods are used that learn irregular consumption patterns from customer data and inspection results. The Big Data paradigm followed in modern machine learning reflects the desire of deriving better conclusions from simply analyzing more data, without the necessity of looking at theory and models. However, the sample of inspected customers may be biased, i.e. it does not represent the population of all customers. As a consequence, machine learning models trained on these inspection results are biased as well and therefore lead to unreliable predictions of whether customers cause NTL or not. In machine learning, this issue is called covariate shift and has not been addressed in the literature on NTL detection yet. In this work, we present a novel framework for quantifying and visualizing covariate shift. We apply it to a commercial data set from Brazil that consists of 3.6M customers and 820K inspection results. We show that some features have a stronger covariate shift than others, making predictions less reliable. In particular, previous inspections were focused on certain neighborhoods or customer classes and that they were not sufficiently spread among the population of customers. This framework is about to be deployed in a commercial product for NTL detection. | From an electrical engineering perspective, energy balance methods @cite_8 can be applied to the detection of NTL. However, this requires topological information of the distribution network and does not reflect a change of network. In practice, the network topology undergoes rapid changes in emerging countries, i.e. the countries in which NTL is a particular issue. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1984585304"
],
"abstract": [
"This paper aims at presenting a new method for the evaluation of technical (demand and energy) losses in electrical power distribution systems. A computational tool was developed and implemented at Eletropaulo, the largest distribution company in Brazil. The methodology divides the distribution system into eight different segments, namely: energy meters, customer connections to the network, low voltage network, distribution transformers, medium voltage network, distribution substations, subtransmission system and other technical losses. The latter segment includes equipment losses in capacitors, voltage regulators, connectors, insulators and so forth. The computational tool comprises two modules. The first one determines technical losses in specific networks in a hierarchical way. From the evaluation of losses in a representative part of the distribution system, per unit loss indices for each segment are readily computed. Such indices are transferred to a second module, which is responsible for the assessment of a global energy balance for the overall distribution system."
]
} |
1702.03307 | 2951796156 | A generative model based on training deep architectures is proposed. The model consists of K networks that are trained together to learn the underlying distribution of a given data set. The process starts with dividing the input data into K clusters and feeding each of them into a separate network. After few iterations of training networks separately, we use an EM-like algorithm to train the networks together and update the clusters of the data. We call this model Mixture of Networks. The provided model is a platform that can be used for any deep structure and be trained by any conventional objective function for distribution modeling. As the components of the model are neural networks, it has high capability in characterizing complicated data distributions as well as clustering data. We apply the algorithm on MNIST hand-written digits and Yale face datasets. We also demonstrate the clustering ability of the model using some real-world and toy examples. | A fundamental work on deep generative models has been done by hinton2006 , where they introduced a fast algorithm for unsupervised training of deep belief networks (DBNs), which are deep graphical models. In a recent work by Salakhutdinov salakhutdinov2015learning , a comprehensive review over this model is presented. Built upon this model, lee2009 presented a similar network with convolutional layers. They introduced probabilistic max-pooling technique and constructed a translation-invariant model. In @cite_21 , another generative model based on DBNs was presented, which was used for image feature extraction. Unsupervised deep representation learning techniques have been used in @cite_5 to build a generative model that can exploit high-level features to generate high-quality samples. | {
"cite_N": [
"@cite_5",
"@cite_21"
],
"mid": [
"2952742172",
"1995997122"
],
"abstract": [
"It has previously been hypothesized, and supported with some experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. We study the following related conjecture: better representations, in the sense of better disentangling, can be exploited to produce faster-mixing Markov chains. Consequently, mixing would be more efficient at higher levels of representation. To better understand why and how this is happening, we propose a secondary conjecture: the higher-level samples fill more uniformly the space they occupy and the high-density manifolds tend to unfold when represented at higher levels. The paper discusses these hypotheses and tests them experimentally through visualization and measurements of mixing and interpolating between samples.",
"The most popular way to use probabilistic models in vision is first to extract some descriptors of small image patches or object parts using well-engineered features, and then to use statistical learning tools to model the dependencies among these features and eventual labels. Learning probabilistic models directly on the raw pixel values has proved to be much more difficult and is typically only used for regularizing discriminative methods. In this work, we use one of the best, pixel-level, generative models of natural images–a gated MRF–as the lowest level of a deep belief network (DBN) that has several hidden layers. We show that the resulting DBN is very good at coping with occlusion when predicting expression categories from face images, and it can produce features that perform comparably to SIFT descriptors for discriminating different types of scene. The generative ability of the model also makes it easy to see what information is captured and what is lost at each level of representation."
]
} |
1702.03307 | 2951796156 | A generative model based on training deep architectures is proposed. The model consists of K networks that are trained together to learn the underlying distribution of a given data set. The process starts with dividing the input data into K clusters and feeding each of them into a separate network. After few iterations of training networks separately, we use an EM-like algorithm to train the networks together and update the clusters of the data. We call this model Mixture of Networks. The provided model is a platform that can be used for any deep structure and be trained by any conventional objective function for distribution modeling. As the components of the model are neural networks, it has high capability in characterizing complicated data distributions as well as clustering data. We apply the algorithm on MNIST hand-written digits and Yale face datasets. We also demonstrate the clustering ability of the model using some real-world and toy examples. | Two important and recent classes of deep generative models are generative adversarial networks (GANs) and variational auto-encoders (VAE) @cite_20 . GANs are trained based on solving a minimax problem to generate samples that are not distinguishable from the samples in the training sets. Based on the variational inference concept, VAEs are designed for fast training and having explicit expression for posterior probability of the latent variable. Many recent advancements in the area of deep generative models are based on these two models . | {
"cite_N": [
"@cite_20"
],
"mid": [
"2399306074"
],
"abstract": [
"Variational Autoencoders are powerful models for unsupervised learning. However deep models with several layers of dependent stochastic variables are difficult to train which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state of the art predictive log-likelihood and tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper more distributed hierarchy of latent variables. Finally, we observe that batch normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers."
]
} |
1702.03307 | 2951796156 | A generative model based on training deep architectures is proposed. The model consists of K networks that are trained together to learn the underlying distribution of a given data set. The process starts with dividing the input data into K clusters and feeding each of them into a separate network. After few iterations of training networks separately, we use an EM-like algorithm to train the networks together and update the clusters of the data. We call this model Mixture of Networks. The provided model is a platform that can be used for any deep structure and be trained by any conventional objective function for distribution modeling. As the components of the model are neural networks, it has high capability in characterizing complicated data distributions as well as clustering data. We apply the algorithm on MNIST hand-written digits and Yale face datasets. We also demonstrate the clustering ability of the model using some real-world and toy examples. | Different types of neural networks have been used to work as a generative model for different applications. In @cite_3 , inspired by a human vision system, Recurrent Neural Networks (RNNs) are trained for generating images. @cite_8 proposed a method for training Convolutional Neural Networks (CNNs) for this purpose. In @cite_0 , authors trained a deep neural network in a supervised way to be able to generate images of different objects given their names, locations, and angles of view. | {
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_8"
],
"mid": [
"1893585201",
"1850742715",
"1936878994"
],
"abstract": [
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"The convolutional neural networks (CNNs) have proven to be a powerful tool for discriminative learning. Recently researchers have also started to show interest in the generative aspects of CNNs in order to gain a deeper understanding of what they have learned and how to further improve them. This paper investigates generative modeling of CNNs. The main contributions include: (1) We construct a generative model for the CNN in the form of exponential tilting of a reference distribution. (2) We propose a generative gradient for pre-training CNNs by a non-parametric importance sampling scheme, which is fundamentally different from the commonly used discriminative gradient, and yet has the same computational architecture and cost as the latter. (3) We propose a generative visualization method for the CNNs by sampling from an explicit parametric image distribution. The proposed visualization method can directly draw synthetic samples for any given node in a trained CNN by the Hamiltonian Monte Carlo (HMC) algorithm, without resorting to any extra hold-out images. Experiments on the challenging ImageNet benchmark show that the proposed generative gradient pre-training consistently helps improve the performances of CNNs, and the proposed generative visualization method generates meaningful and varied samples of synthetic images from a large-scale deep CNN."
]
} |
1702.03431 | 2738493602 | State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth maps. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90 FPS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks. | Recent works have extended the VAE @cite_34 @cite_43 @cite_26 and the GAN @cite_14 @cite_42 @cite_27 @cite_11 from unsupervised to semi-supervised settings, though only for classification tasks. These works assume a multi-modal distribution in the latent space; while fitting for classification, this assumption does not hold for real-valued structured prediction, as is the case for hand pose estimation. Other works @cite_38 @cite_24 @cite_12 @cite_36 @cite_11 modify the generation model to improve synthesis. For example, the methodology in @cite_12 @cite_11 stabilized the training process of the GAN, resulting in higher quality synthetic samples. 
We use the fully convolutional network as proposed in @cite_12 as the GAN architecture and the feature matching strategy proposed in @cite_11 . | {
"cite_N": [
"@cite_38",
"@cite_14",
"@cite_26",
"@cite_36",
"@cite_42",
"@cite_24",
"@cite_43",
"@cite_27",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"1850742715",
"2125389028",
"2469266052",
"2298992465",
"2178768799",
"2951523806",
"2188365844",
"2412510955",
"2949416428",
"2173520492",
"2432004435"
],
"abstract": [
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.",
"Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.",
"In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to the deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complementary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of Labeled Faces in the Wild dataset.",
"We extend Generative Adversarial Networks (GANs) to the semi-supervised context by forcing the discriminator network to output class labels. We train a generative model G and a discriminator D on a dataset with inputs belonging to one of N classes. At training time, D is made to predict which of N+1 classes the input belongs to, where an extra class is added to correspond to the outputs of G. We show that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.",
"The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes."
]
} |
1702.03431 | 2738493602 | State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth maps. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90 FPS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks. | Since it is not possible to estimate the posterior on the GAN, @cite_46 @cite_21 @cite_4 have extended the GAN to be bidirectional. Our proposed network most resembles @cite_4 , which also formulates posterior estimation as multi-task learning. However, instead of only estimating a subvector of the latent variable and leaving the rest as random noise as in @cite_4 , we learn the entire posterior. Some other works extend the GAN to cover multiple domains, and synthesize images from text @cite_19 @cite_35 or from another image domain @cite_29 @cite_40 . We tackle a far more challenging case of synthesizing depth maps from given poses. 
The synthesized depth maps need to be very accurate to correspond to the given pose parameters, and indeed they are, as we are even able to use synthesized images for training. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_19",
"@cite_40",
"@cite_46"
],
"mid": [
"",
"2434741482",
"2471149695",
"2411541852",
"2949999304",
"2553897675",
"2412320034"
],
"abstract": [
"",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation.",
"We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspections of model samples and reconstructions and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image model- ing, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
"We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T, such that the output of a given function f, which accepts inputs in either domains, would remain unchanged. Other than the function f, the training data is unsupervised and consist of a set of samples from each domain. The Domain Transfer Network (DTN) we present employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves. We apply our method to visual domains including digits and face images and demonstrate its ability to generate convincing novel images of previously unseen entities, while preserving their identity.",
"The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning."
]
} |
1702.03431 | 2738493602 | State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth maps. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90 FPS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks. | Hand pose estimation generally falls into two camps, model-based tracking and frame-wise discriminative estimation. Conventional methods need either manually designed energy functions to measure the difference between synthesized samples and observations in model-based tracking @cite_9 @cite_53 @cite_49 @cite_39 @cite_1 @cite_0 @cite_18 or hand-crafted local @cite_37 @cite_10 @cite_15 @cite_44 @cite_39 or holistic @cite_32 features for discriminative estimation. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_9",
"@cite_53",
"@cite_1",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_44",
"@cite_49",
"@cite_15",
"@cite_10"
],
"mid": [
"2469784314",
"2110619642",
"2100642335",
"1990947293",
"2520346623",
"",
"2218414108",
"2227547437",
"2518780600",
"",
"1928739709",
"2093414253"
],
"abstract": [
"Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.",
"This paper presents the first semi-supervised transductive algorithm for real-time articulated hand pose estimation. Noisy data and occlusions are the major challenges of articulated hand pose estimation. In addition, the discrepancies among realistic and synthetic pose data undermine the performances of existing approaches that use synthetic data extensively in training. We therefore propose the Semi-supervised Transductive Regression (STR) forest which learns the relationship between a small, sparsely labelled realistic dataset and a large synthetic dataset. We also design a novel data-driven, pseudo-kinematic technique to refine noisy or occluded joints. Our contributions include: (i) capturing the benefits of both realistic and synthetic data via transductive learning, (ii) showing accuracies can be improved by considering unlabelled data, and (iii) introducing a pseudo-kinematic technique to refine articulations efficiently. Experimental results show not only the promising performance of our method with respect to noise and occlusions, but also its superiority over state-of-the-arts in accuracy, robustness and speed.",
"We present a novel solution to the problem of recovering and tracking the 3D position, orientation and full articulation of a human hand from markerless visual observations obtained by a Kinect sensor. We treat this as an optimization problem, seeking for the hand model parameters that minimize the discrepancy between the appearance and 3D structure of hypothesized instances of a hand model and actual hand observations. This optimization problem is effectively solved using a variant of Particle Swarm Optimization (PSO). The proposed method does not require special markers and or a complex image acquisition setup. Being model based, it provides continuous solutions to the problem of tracking hand articulations. Extensive experiments with a prototype GPU-based implementation of the proposed method demonstrate that accurate and robust 3D tracking of hand articulations can be achieved in near real-time (15Hz).",
"We present a realtime hand tracking system using a depth sensor. It tracks a fully articulated hand under large viewpoints in realtime (25 FPS on a desktop without using a GPU) and with high accuracy (error below 10 mm). To our knowledge, it is the first system that achieves such robustness, accuracy, and speed simultaneously, as verified on challenging real data. Our system is made of several novel techniques. We model a hand simply using a number of spheres and define a fast cost function. Those are critical for realtime performance. We propose a hybrid method that combines gradient based and stochastic optimization methods to achieve fast convergence and good accuracy. We present new finger detection and hand initialization methods that greatly enhance the robustness of tracking.",
"Real-time simultaneous tracking of hands manipulating and interacting with external objects has many potential applications in augmented reality, tangible computing, and wearable computing. However, due to difficult occlusions, fast motions, and uniform hand appearance, jointly tracking hand and object pose is more challenging than tracking either of the two separately. Many previous approaches resort to complex multi-camera setups to remedy the occlusion problem and often employ expensive segmentation and optimization steps which makes real-time tracking impossible. In this paper, we propose a real-time solution that uses a single commodity RGB-D camera. The core of our approach is a 3D articulated Gaussian mixture alignment strategy tailored to hand-object tracking that allows fast pose optimization. The alignment energy uses novel regularizers to address occlusions and hand-object contacts. For added robustness, we guide the optimization with discriminative part classification of the hand and segmentation of the object. We conducted extensive experiments on several existing datasets and introduce a new annotated hand-object dataset. Quantitative and qualitative results show the key advantages of our method: speed, accuracy, and robustness.",
"",
"We address the problem of hand pose estimation, formulated as an inverse problem. Typical approaches optimize an energy function over pose parameters using a 'black box' image generation procedure. This procedure knows little about either the relationships between the parameters or the form of the energy function. In this paper, we show that we can significantly improve upon black box optimization by exploiting high-level knowledge of the structure of the parameters and using a local surrogate energy function. Our new framework, hierarchical sampling optimization, consists of a sequence of predictors organized into a kinematic hierarchy. Each predictor is conditioned on its ancestors, and generates a set of samples over a subset of the pose parameters. The highly-efficient surrogate energy is used to select among samples. Having evaluated the full hierarchy, the partial pose samples are concatenated to generate a full-pose hypothesis. Several hypotheses are generated using the same procedure, and finally the original full energy function selects the best result. Experimental evaluation on three publicly available datasets shows that our method is particularly impressive in low-compute scenarios where it significantly outperforms all other state-of-the-art methods.",
"Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.",
"We present a hierarchical regression framework for estimating hand joint positions from single depth images based on local surface normals. The hierarchical regression follows the tree structured topology of hand from wrist to finger tips. We propose a conditional regression forest, i.e. the Frame Conditioned Regression Forest (FCRF) which uses a new normal difference feature. At each stage of the regression, the frame of reference is established from either the local surface normal or previously estimated hand joints. By making the regression with respect to the local frame, the pose estimation is more robust to rigid transformations. We also introduce a new efficient approximation to estimate surface normals. We verify the effectiveness of our method by conducting experiments on two challenging real-world datasets and show consistent improvements over previous discriminative pose estimation methods.",
"",
"We extends the previous 2D cascaded object pose regression work [9] in two aspects so that it works better for 3D articulated objects. Our first contribution is 3D pose-indexed features that generalize the previous 2D parameterized features and achieve better invariance to 3D transformations. Our second contribution is a principled hierarchical regression that is adapted to the articulated object structure. It is therefore more accurate and faster. Comprehensive experiments verify the state-of-the-art accuracy and efficiency of the proposed approach on the challenging 3D hand pose estimation problem, on a public dataset and our new dataset.",
"In this paper we present the Latent Regression Forest (LRF), a novel framework for real-time, 3D hand pose estimation from a single depth image. In contrast to prior forest-based methods, which take dense pixels as input, classify them independently and then estimate joint positions afterwards, our method can be considered as a structured coarse-to-fine search, starting from the centre of mass of a point cloud until locating all the skeletal joints. The searching process is guided by a learnt Latent Tree Model which reflects the hierarchical topology of the hand. Our main contributions can be summarised as follows: (i) Learning the topology of the hand in an unsupervised, data-driven manner. (ii) A new forest-based, discriminative framework for structured search in images, as well as an error regression step to avoid error accumulation. (iii) A new multi-view hand pose dataset containing 180K annotated images from 10 different subjects. Our experiments show that the LRF out-performs state-of-the-art methods in both accuracy and efficiency."
]
} |
1702.03431 | 2738493602 | State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth maps. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90 FPS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks. | Most recent works @cite_52 @cite_28 @cite_50 @cite_23 @cite_41 @cite_33 @cite_22 @cite_16 @cite_13 apply convolutional neural networks (CNNs) and combine feature extraction and discriminative estimation into an end-to-end learning framework. Since CNNs require large amounts of labeled training data, semi-automatic annotation methods have recently been proposed @cite_31 @cite_51, but these still demand considerable effort. On the other hand, few works have considered utilizing more easily accessible unlabeled depth maps to learn better representations. In that sense, our work resembles @cite_37, which tries to correlate unlabelled depth maps.
While @cite_37 takes a discriminative approach to learn a transductive random forest, our generative approach is able to capture the distribution of unlabeled depth maps. | {
"cite_N": [
"@cite_37",
"@cite_33",
"@cite_22",
"@cite_28",
"@cite_41",
"@cite_52",
"@cite_23",
"@cite_50",
"@cite_51",
"@cite_31",
"@cite_16",
"@cite_13"
],
"mid": [
"2110619642",
"2952561223",
"",
"1702419847",
"2473634362",
"2075156252",
"2466332079",
"2210697964",
"2606965392",
"",
"2514214752",
"2737305288"
],
"abstract": [
"This paper presents the first semi-supervised transductive algorithm for real-time articulated hand pose estimation. Noisy data and occlusions are the major challenges of articulated hand pose estimation. In addition, the discrepancies among realistic and synthetic pose data undermine the performances of existing approaches that use synthetic data extensively in training. We therefore propose the Semi-supervised Transductive Regression (STR) forest which learns the relationship between a small, sparsely labelled realistic dataset and a large synthetic dataset. We also design a novel data-driven, pseudo-kinematic technique to refine noisy or occluded joints. Our contributions include: (i) capturing the benefits of both realistic and synthetic data via transductive learning, (ii) showing accuracies can be improved by considering unlabelled data, and (iii) introducing a pseudo-kinematic technique to refine articulations efficiently. Experimental results show not only the promising performance of our method with respect to noise and occlusions, but also its superiority over state-of-the-arts in accuracy, robustness and speed.",
"Discriminative methods often generate hand poses kinematically implausible, then generative methods are used to correct (or verify) these results in a hybrid method. Estimating 3D hand pose in a hierarchy, where the high-dimensional output space is decomposed into smaller ones, has been shown effective. Existing hierarchical methods mainly focus on the decomposition of the output space while the input space remains almost the same along the hierarchy. In this paper, a hybrid hand pose estimation method is proposed by applying the kinematic hierarchy strategy to the input space (as well as the output space) of the discriminative method by a spatial attention mechanism and to the optimization of the generative method by hierarchical Particle Swarm Optimization (PSO). The spatial attention mechanism integrates cascaded and hierarchical regression into a CNN framework by transforming both the input(and feature space) and the output space, which greatly reduces the viewpoint and articulation variations. Between the levels in the hierarchy, the hierarchical PSO forces the kinematic constraints to the results of the CNNs. The experimental results show that our method significantly outperforms four state-of-the-art methods and three baselines on three public benchmarks.",
"",
"We introduce and evaluate several architectures for Convolutional Neural Networks to predict the 3D joint locations of a hand given a depth map. We first show that a prior on the 3D pose can be easily introduced and significantly improves the accuracy and reliability of the predictions. We also show how to use context efficiently to deal with ambiguities between fingers. These two contributions allow us to significantly outperform the state-of-the-art on several challenging benchmarks, both in terms of accuracy and computation times.",
"Articulated hand pose estimation plays an important role in human-computer interaction. Despite the recent progress, the accuracy of existing methods is still not satisfactory, partially due to the difficulty of embedded high-dimensional and non-linear regression problem. Different from the existing discriminative methods that regress for the hand pose with a single depth image, we propose to first project the query depth image onto three orthogonal planes and utilize these multi-view projections to regress for 2D heat-maps which estimate the joint positions on each plane. These multi-view heat-maps are then fused to produce final 3D hand pose estimation with learned pose priors. Experiments show that the proposed method largely outperforms state-of-the-art on a challenging dataset. Moreover, a cross-dataset experiment also demonstrates the good generalization ability of the proposed method.",
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
"We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low dimensional activation feature given a depth map. This activation feature vector is representative of the global or local joint angle parameters of a hand pose. We efficiently identify 'spatial' nearest neighbors to the activation feature, from a database of features corresponding to synthetic depth maps, and store some 'temporal' neighbors from previous frames. Our matrix completion algorithm uses these 'spatio-temporal' activation features and the corresponding known pose parameter values to estimate the unknown pose parameters of the input feature vector. Our database of activation features supplements large viewpoint coverage and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real time performance (≈ 32 FPS) on a standard computer.",
"We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU.",
"In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty to annotate them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joints hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses compared to existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance. We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.",
"",
"We propose a method for extracting very accurate masks of hands in egocentric views. Our method is based on a novel Deep Learning architecture: In contrast with current Deep Learning methods, we do not use upscaling layers applied to a low-dimensional representation of the input image. Instead, we extract features with convolutional layers and map them directly to a segmentation mask with a fully connected layer. We show that this approach, when applied in a multi-scale fashion, is both accurate and efficient enough for real-time. We demonstrate it on a new dataset made of images captured in various environments, from the outdoors to offices.",
"We propose a simple, yet effective approach for real-time hand pose estimation from single depth images using three-dimensional Convolutional Neural Networks (3D CNNs). Image based features extracted by 2D CNNs are not directly suitable for 3D hand pose estimation due to the lack of 3D spatial information. Our proposed 3D CNN taking a 3D volumetric representation of the hand depth image as input can capture the 3D spatial structure of the input and accurately regress full 3D hand pose in a single pass. In order to make the 3D CNN robust to variations in hand sizes and global orientations, we perform 3D data augmentation on the training data. Experiments show that our proposed 3D CNN based approach outperforms state-of-the-art methods on two challenging hand pose datasets, and is very efficient as our implementation runs at over 215 fps on a standard computer with a single GPU."
]
} |
1702.03431 | 2738493602 | State-of-the-art methods for 3D hand pose estimation from depth images require large amounts of annotated training data. We propose to model the statistical relationships of 3D hand poses and corresponding depth images using two deep generative models with a shared latent space. By design, our architecture allows for learning from unlabeled image data in a semi-supervised manner. Assuming a one-to-one mapping between a pose and a depth map, any given point in the shared latent space can be projected into both a hand pose and a corresponding depth map. Regressing the hand pose can then be done by learning a discriminator to estimate the posterior of the latent pose given some depth maps. To improve generalization and to better exploit unlabeled depth maps, we jointly train a generator and a discriminator. At each iteration, the generator is updated with the back-propagated gradient from the discriminator to synthesize realistic depth maps of the articulated hand, while the discriminator benefits from an augmented training set of synthesized and unlabeled samples. The proposed discriminator network architecture is highly efficient and runs at 90 FPS on the CPU with accuracies comparable or better than state-of-art on 3 publicly available benchmarks. | Our work is inspired by @cite_48 @cite_2, which learned a shared manifold for observations and pose parameters based on the Gaussian process latent variable model (GPLVM). Another similar line of work is @cite_30 @cite_3, which learns a shared latent space between pose and gait, also based on the GPLVM. The GPLVM is a non-parametric model, whereas our generative model takes the form of a neural network, which makes it possible to learn the generative models together with the posterior estimation in an end-to-end manner.
"cite_N": [
"@cite_30",
"@cite_48",
"@cite_3",
"@cite_2"
],
"mid": [
"2088801701",
"1541825479",
"2343952097",
"2113469957"
],
"abstract": [
"We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.",
"We describe a method for recovering 3D human body pose from silhouettes. Our model is based on learning a latent space using the Gaussian Process Latent Variable Model (GP-LVM) [1] encapsulating both pose and silhouette features Our method is generative, this allows us to model the ambiguities of a silhouette representation in a principled way. We learn a dynamical model over the latent space which allows us to disambiguate between ambiguous silhouettes by temporal consistency. The model has only two free parameters and has several advantages over both regression approaches and other generative methods. In addition to the application shown in this paper the suggested model is easily extended to multiple observation spaces without constraints on type.",
"We study two fundamental issues about video-based human walking estimation, where the goal is to estimate 3D gait kinematics (i.e., joint positions) from 2D gait appearances (i.e., silhouettes). One is how to model the gait kinematics from different walking styles, and the other is how to represent the gait appearances captured under different views and from individuals of distinct walking styles and body shapes. Our research is conducted in three steps. First, we propose the idea of joint gait-pose manifold (JGPM), which represents gait kinematics by coupling two nonlinear variables, pose (a specific walking stage) and gait (a particular walking style) in a unified latent space. We extend the Gaussian process latent variable model (GPLVM) for JGPM learning, where two heuristic topological priors, a torus and a cylinder, are considered and several JGPMs of different degrees of freedom (DoFs) are introduced for comparative analysis. Second, we develop a validation technique and a series of benchmark tests to evaluate multiple JGPMs and recent GPLVMs in terms of their performance for gait motion modeling. It is shown that the toroidal prior is slightly better than the cylindrical one, and the JGPM of 4 DoFs that balances the toroidal prior with the intrinsic data structure achieves the best performance. Third, a JGPM-based visual gait generative model (JGPM-VGGM) is developed, where JGPM plays a central role to bridge the gap between the gait appearances and the gait kinematics. Our proposed JGPM-VGGM is learned from Carnegie Mellon University MoCap data and tested on the HumanEva-I and HumanEva-II data sets. Our experimental results demonstrate the effectiveness and competitiveness of our algorithms compared with existing algorithms.",
"Many computer vision tasks may be expressed as the problem of learning a mapping between image space and a parameter space. For example, in human body pose estimation, recent research has directly modelled the mapping from image features (z) to joint angles (thetas). Fitting such models requires training data in the form of labelled (z, thetas) pairs, from which are learned the conditional densities p(thetas ). Inference is then simple: given test image features z, the conditional p(thetas ) is immediately computed. However large amounts of training data are required to fit the models, particularly in the case where the spaces are high dimensional. We show how the use of unlabelled data-samples from the marginal distributions p(z) and p(thetas)-may be used to improve fitting. This is valuable because it is often significantly easier to obtain unlabelled than labelled samples. We use a Gaussian process latent variable model to learn the mapping from a shared latent low-dimensional manifold to the feature and parameter spaces. This extends existing approaches to (a) use unlabelled data, and (b) represent one-to-many mappings. Experiments on synthetic and real problems demonstrate how the use of unlabelled data improves over existing techniques. In our comparisons, we include existing approaches that are explicitly semi-supervised as well as those which implicitly make use of unlabelled examples."
]
} |
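The shared-latent-space idea discussed in the row above can be sketched in a few lines: a single latent point is decoded into both a pose vector and a depth-map representation. This is a minimal illustration only, assuming linear decoders and made-up dimensions; the cited works use GPLVMs or deep neural networks for these mappings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from any cited paper):
# an 8-D shared latent space, a 63-D pose vector (21 joints x 3),
# and a 1024-D flattened depth-map feature.
latent_dim, pose_dim, depth_dim = 8, 63, 1024

# Two linear "decoders" sharing one latent space: any point z can be
# projected into both the pose space and the depth space.
W_pose = rng.normal(size=(pose_dim, latent_dim))
W_depth = rng.normal(size=(depth_dim, latent_dim))

def decode(z):
    """Project a shared latent point into both output spaces."""
    return W_pose @ z, W_depth @ z

z = rng.normal(size=latent_dim)
pose, depth = decode(z)
print(pose.shape, depth.shape)
```

In the actual models, the decoders are learned jointly so that the one-to-one pose/depth correspondence holds; here the weights are random purely to show the shared-latent structure.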
1702.03519 | 2591621978 | Entity extraction is fundamental to many text mining tasks such as organisation name recognition. A popular approach to entity extraction is based on matching sub-string candidates in a document against a dictionary of entities. To handle spelling errors and name variations of entities, usually the matching is approximate and edit or Jaccard distance is used to measure dissimilarity between sub-string candidates and the entities. For approximate entity extraction from free text, existing work considers solely character-based or solely token-based similarity and hence cannot simultaneously deal with minor variations at token level and typos. In this paper, we address this problem by considering both character-based similarity and token-based similarity (i.e. two-level similarity). Measuring one-level (e.g. character-based) similarity is computationally expensive, and measuring two-level similarity is dramatically more expensive. By exploiting the properties of the two-level similarity and the weights of tokens, we develop novel techniques to significantly reduce the number of sub-string candidates that require computation of two-level similarity against the dictionary of entities. A comprehensive experimental study on real world datasets show that our algorithm can efficiently extract entities from documents and produce a high F1 score in the range of [0.91, 0.97]. | : proposed an Adaboost based approach for named entity extraction @cite_5 . Their key idea is to extract entities using two classifiers: a local classifier for detecting if a token belongs to a named entity; a global classifier for detecting if a sub-string candidate is a named entity. Jain and Pennacchiotti @cite_10 proposed an approach using heuristics (e.g. tokens with first letter capitalised) to extract entities from query log, and then the extracted entities are grouped into different clusters and assigned labels accordingly. 
Cohen and Sarawagi @cite_8 designed an algorithm using the Markov model for entity extraction. The algorithm has two main phases. First, a label (e.g. person name) is assigned to each token based on dictionaries and heuristics. Second, the Markov model is trained and used to predict the entity probability for each sub-string candidate based on the token labels. One major limitation of the above-mentioned approaches is that they require a significant amount of human effort to collect training datasets and/or to tune heuristics. | {
"cite_N": [
"@cite_5",
"@cite_10",
"@cite_8"
],
"mid": [
"",
"2113618948",
"2048468185"
],
"abstract": [
"",
"In this paper we propose a completely unsupervised method for open-domain entity extraction and clustering over query logs. The underlying hypothesis is that classes defined by mining search user activity may significantly differ from those typically considered over web documents, in that they better model the user space, i.e. users' perception and interests. We show that our method outperforms state of the art (semi-)supervised systems based either on web documents or on query logs (16 gain on the clustering task). We also report evidence that our method successfully supports a real world application, namely keyword generation for sponsored search.",
"We consider the problem of improving named entity recognition (NER) systems by using external dictionaries---more specifically, the problem of extending state-of-the-art NER systems by incorporating information about the similarity of extracted entities to entities in an external dictionary. This is difficult because most high-performance named entity recognition systems operate by sequentially classifying words as to whether or not they participate in an entity name; however, the most useful similarity measures score entire candidate names. To correct this mismatch we formalize a semi-Markov extraction process, which is based on sequentially classifying segments of several adjacent words, rather than single words. In addition to allowing a natural way of coupling high-performance NER methods and high-performance similarity functions, this formalism also allows the direct use of other useful entity-level features, and provides a more natural formulation of the NER problem than sequential word classification. Experiments in multiple domains show that the new model can substantially improve extraction performance over previous methods for using external dictionaries in NER."
]
} |
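The first, dictionary-based labeling phase described in the row above can be sketched as follows. The mini-dictionary, label set, and sentence are hypothetical; real systems combine large gazetteers with tuned heuristics before the Markov model scores sub-string candidates.

```python
# Hypothetical mini-dictionary mapping lowercased tokens to entity
# labels; real systems use large gazetteers plus heuristics.
dictionary = {"acme": "ORG", "alice": "PER", "bob": "PER"}

def label_tokens(tokens):
    """Phase one: assign a label to each token by dictionary lookup,
    defaulting to 'O' (outside any entity)."""
    return [dictionary.get(t.lower(), "O") for t in tokens]

print(label_tokens("Alice visited Acme yesterday".split()))
# ['PER', 'O', 'ORG', 'O']
```

The resulting per-token labels are what the second phase would consume when estimating an entity probability for each candidate span.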
1702.03519 | 2591621978 | Entity extraction is fundamental to many text mining tasks such as organisation name recognition. A popular approach to entity extraction is based on matching sub-string candidates in a document against a dictionary of entities. To handle spelling errors and name variations of entities, usually the matching is approximate and edit or Jaccard distance is used to measure dissimilarity between sub-string candidates and the entities. For approximate entity extraction from free text, existing work considers solely character-based or solely token-based similarity and hence cannot simultaneously deal with minor variations at token level and typos. In this paper, we address this problem by considering both character-based similarity and token-based similarity (i.e. two-level similarity). Measuring one-level (e.g. character-based) similarity is computationally expensive, and measuring two-level similarity is dramatically more expensive. By exploiting the properties of the two-level similarity and the weights of tokens, we develop novel techniques to significantly reduce the number of sub-string candidates that require computation of two-level similarity against the dictionary of entities. A comprehensive experimental study on real world datasets show that our algorithm can efficiently extract entities from documents and produce a high F1 score in the range of [0.91, 0.97]. | : The approximate entity extraction problem can be viewed as the approximate string matching problem which is a well-studied problem. Navarro gives a nice survey for the approximate string matching problem @cite_9 . Here, we focus on some recent work in entity extraction. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2001496424"
],
"abstract": [
"We survey the current techniques to cope with the problem of string matching that allows errors. This is becoming a more and more relevant issue for many fast growing areas such as information retrieval and computational biology. We focus on online searching and mostly on edit distance, explaining the problem and its relevance, its statistical behavior, its history and current developments, and the central ideas of the algorithms and their complexities. We present a number of experiments to compare the performance of the different algorithms and show which are the best choices. We conclude with some directions for future work and open problems."
]
} |
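The record above repeatedly refers to edit (Levenshtein) distance and Jaccard distance as the two standard dissimilarity measures for approximate matching. A minimal sketch of both, using whitespace tokenisation as an illustrative assumption (the function names are ours, not from the cited papers):

```python
def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance via dynamic programming.

    prev[j] holds the distance between the current prefix of a
    and b[:j]; each step considers deletion, insertion, substitution.
    """
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[-1] + 1,               # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def jaccard_distance(a: str, b: str) -> float:
    """Token-level Jaccard distance over whitespace-separated tokens."""
    ta, tb = set(a.split()), set(b.split())
    return 1.0 - len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```

Note how the two operate at different granularities: edit distance catches typos within a token ("Mcrosoft" vs "Microsoft"), while Jaccard distance catches token-level variation ("IBM Corp" vs "IBM Corporation"), which is exactly the mismatch the two-level similarity of this paper targets.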
1702.03519 | 2591621978 | Entity extraction is fundamental to many text mining tasks such as organisation name recognition. A popular approach to entity extraction is based on matching sub-string candidates in a document against a dictionary of entities. To handle spelling errors and name variations of entities, usually the matching is approximate and edit or Jaccard distance is used to measure dissimilarity between sub-string candidates and the entities. For approximate entity extraction from free text, existing work considers solely character-based or solely token-based similarity and hence cannot simultaneously deal with minor variations at token level and typos. In this paper, we address this problem by considering both character-based similarity and token-based similarity (i.e. two-level similarity). Measuring one-level (e.g. character-based) similarity is computationally expensive, and measuring two-level similarity is dramatically more expensive. By exploiting the properties of the two-level similarity and the weights of tokens, we develop novel techniques to significantly reduce the number of sub-string candidates that require computation of two-level similarity against the dictionary of entities. A comprehensive experimental study on real world datasets show that our algorithm can efficiently extract entities from documents and produce a high F1 score in the range of [0.91, 0.97]. | developed a dictionary-based algorithm for entity extraction @cite_11 . However, their algorithm aims to extract sub-string candidates that exactly match entities in the dictionary from short documents (e.g. tweets). @cite_23 proposed a memory-efficient indexing approach for string matching using character-based similarity. Their index is memory-friendly because it reuses position information of @math -grams through a two-level scheme. proposed an approximate entity extraction algorithm using neighbourhood generation @cite_13 . @cite_2 designed an efficient algorithm for approximate entity extraction based on a trie-tree index. Kim and Shim proposed an algorithm that finds the top- @math sub-string candidates in a document that are most similar to a given entity @cite_25 . A more recent study @cite_15 presents techniques to find duplicated text segments between two documents using token-level similarity. All these algorithms use one-level, i.e. character-based or token-based, similarity to find similar entities (or text segments) in documents. | {
"cite_N": [
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2055261195",
"2430378630",
"2099370490",
"2157233202",
""
],
"abstract": [
"",
"Dictionary-based entity extraction has attracted much attention from the database community recently, which locates sub strings in a document into predefined entities (e.g., person names or locations). To improve extraction recall, a recent trend is to provide approximate matching between sub strings of the document and entities by tolerating minor errors. In this paper we study dictionary-based approximate entity extraction with edit-distance constraints. Existing methods have several limitations. First, they need to tune many parameters to achieve high performance. Second, they are inefficient for large edit-distance thresholds. We propose a trie-based method to address these problems. We first partition each entity into a set of segments, and then use a trie structure to index segments. To extract similar entities, we search segments from the document, and extend the matching segments in both entities and the document to find similar pairs. We develop an extension-based method to efficiently find similar string pairs by extending the matching segments. We optimize our partition scheme and select the best partition strategy to improve the extraction performance. Experimental results show that our method achieves much higher performance compared with state-of-the-art studies.",
"With the growing popularity of electronic documents, replication can occur for many reasons. People may copy text segments from various sources and make modifications. In this paper, we study the problem of local similarity search to find partially replicated text. Unlike existing studies on similarity search which find entirely duplicated documents, our target is to identify documents that approximately share a pair of sliding windows which differ by no more than τ tokens. Our problem is technically challenging because for sliding windows the tokens to be indexed are less selective than entire documents, rendering set similarity join-based algorithms less efficient. Our proposed method is based on enumerating token combinations to obtain signatures with high selectivity. In order to strike a balance between signature and candidate generation, we partition the token universe and for different partitions we generate combinations composed of different numbers of tokens. A cost-aware algorithm is devised to find a good partitioning of the token universe. We also propose to leverage the overlap between adjacent windows to share computation and thus speed up query processing. In addition, we develop the techniques to support the large thresholds. Experiments on real datasets demonstrate the efficiency of our method against alternative solutions.",
"Named entity recognition aims at extracting named entities from unstructured text. A recent trend of named entity recognition is finding approximate matches in the text with respect to a large dictionary of known entities, as the domain knowledge encoded in the dictionary helps to improve the extraction performance. In this paper, we study the problem of approximate dictionary matching with edit distance constraints. Compared to existing studies using token-based similarity constraints, our problem definition enables us to capture typographical or orthographical errors, both of which are common in entity extraction tasks yet may be missed by token-based similarity constraints. Our problem is technically challenging as existing approaches based on q-gram filtering have poor performance due to the existence of many short entities in the dictionary. Our proposed solution is based on an improved neighborhood generation method employing novel partitioning and prefix pruning techniques. We also propose an efficient document processing algorithm that minimizes unnecessary comparisons and enumerations and hence achieves good scalability. We have conducted extensive experiments on several publicly available named entity recognition datasets. The proposed algorithm outperforms alternative approaches by up to an order of magnitude.",
"There is a wide range of applications that require to query a large database of texts to search for similar strings or substrings. Traditional approximate substring matching requests a user to specify a similarity threshold. Without top-k approximate substring matching, users have to try repeatedly different maximum distance threshold values when the proper threshold is unknown in advance. In our paper, we first propose the efficient algorithms for finding the top-k approximate substring matches with a given query string in a set of data strings. To reduce the number of expensive distance computations, the proposed algorithms utilize our novel filtering techniques which take advantages of q-grams and inverted q-gram indexes available. We conduct extensive experiments with real-life data sets. Our experimental results confirm the effectiveness and scalability of our proposed algorithms.",
""
]
} |
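The two-level similarity this record argues for can be illustrated with a toy measure (a hypothetical simplification of ours, not the paper's actual definition or its candidate-pruning machinery): token-level Jaccard in which two tokens are treated as equal when their character-level edit distance is within a small budget, so that token variation and typos are tolerated at the same time:

```python
def _edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[-1] + 1,               # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def two_level_sim(candidate: str, entity: str, max_edits: int = 1) -> float:
    """Token-level Jaccard in which two tokens count as equal whenever
    their character-level edit distance is at most max_edits.
    Greedy one-to-one matching of tokens; case-insensitive."""
    ta, tb = candidate.lower().split(), entity.lower().split()
    used = [False] * len(tb)
    matched = 0
    for t in ta:
        for k, u in enumerate(tb):
            if not used[k] and _edit_distance(t, u) <= max_edits:
                used[k] = True
                matched += 1
                break
    union = len(ta) + len(tb) - matched
    return matched / union if union else 1.0
```

With `max_edits=0` this degenerates to plain token Jaccard and misses typo'd tokens; with `max_edits=1` a candidate like "Mcrosoft Corp" still matches the entity "Microsoft Corp" perfectly, which is the behaviour one-level measures cannot deliver.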
1702.03126 | 2922329567 | Likelihood-free methods, such as approximate Bayesian computation, are powerful tools for practical inference problems with intractable likelihood functions. Markov chain Monte Carlo and sequential Monte Carlo variants of approximate Bayesian computation can be effective techniques for sampling posterior distributions without likelihoods. However, the efficiency of these methods depends crucially on the proposal kernel used to generate proposal posterior samples, and a poor choice can lead to extremely low efficiency. We propose a new method for likelihood-free Bayesian inference based upon ideas from multilevel Monte Carlo. Our method is accurate and does not require proposal kernels, thereby overcoming a key obstacle in the use of likelihood-free approaches in real-world situations. | MLMC has also recently been considered in an ABC context. @cite_1 extend the work of @cite_8 by replacing the Metropolis-Hastings acceptance probability in a similar way to the MCMC-ABC method. The MLSMC method is exploited to achieve coupling in an ABC context by @cite_5 . | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_8"
],
"mid": [
"2032616735",
"2093470850",
"1989307896"
],
"abstract": [
"Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).",
"A realistic understanding of how a biological system arises from interactions between its parts increasingly depends on quantitative mathematical and statistical modelling. This Review explains how statistical inferences and stochastic modelling are the best tools we have for describing heterogeneous biological systems.",
"In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates."
]
} |
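As background for the ABC record above, the simplest likelihood-free scheme, plain rejection ABC (the baseline that the MCMC-ABC, SMC and MLMC variants improve upon), can be sketched as follows; the Gaussian toy model, uniform prior, tolerance, and function names here are illustrative choices of ours, not taken from the cited papers:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, n=2000, eps=0.1):
    """Rejection ABC: keep a parameter draw whenever the summary statistic
    of its simulated data set falls within eps of the observed summary,
    so no likelihood evaluation is ever needed."""
    s_obs = statistics.fmean(observed)
    accepted = []
    for _ in range(n):
        theta = prior_sample()                     # propose from the prior
        s_sim = statistics.fmean(simulate(theta))  # summarise simulated data
        if abs(s_sim - s_obs) <= eps:              # tolerance check
            accepted.append(theta)
    return accepted

random.seed(0)
data = [random.gauss(1.0, 1.0) for _ in range(50)]  # "observed" data, true theta = 1
posterior = abc_rejection(
    data,
    simulate=lambda t: [random.gauss(t, 1.0) for _ in range(50)],
    prior_sample=lambda: random.uniform(-3.0, 3.0),
)
```

The inefficiency the abstract above points at is visible here: most prior draws are rejected, and MCMC/SMC variants replace the blind prior proposal with a kernel, which is exactly the tuning burden the multilevel approach seeks to remove.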
1702.03407 | 2950202237 | When integrating computational tools such as automatic segmentation into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data, and in particular, to detect when an automatic method fails. However, this is difficult to achieve due to absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross-validation because validation data is often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared to a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA we take the predicted segmentation from a new image to train a reverse classifier which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as part of large-scale image analysis studies. | Meanwhile, unsupervised methods @cite_3 @cite_1 aim to estimate the segmentation accuracy directly from the images and labelmaps using, for example, information-theoretic and geometrical features. While unsupervised methods can be applied to scenarios where the main purpose of segmentation is to yield visually consistent results that are meaningful to a human observer, their applicability in medical settings remains unclear. | {
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2008043556",
"2092572597"
],
"abstract": [
"Image segmentation is an important processing step in many image, video and computer vision applications. Extensive research has been done in creating many different approaches and algorithms for image segmentation, but it is still difficult to assess whether one algorithm produces more accurate segmentations than another, whether it be for a particular image or set of images, or more generally, for a whole class of images. To date, the most common method for evaluating the effectiveness of a segmentation method is subjective evaluation, in which a human visually compares the image segmentation results for separate segmentation algorithms, which is a tedious process and inherently limits the depth of evaluation to a relatively small number of segmentation comparisons over a predetermined set of images. Another common evaluation alternative is supervised evaluation, in which a segmented image is compared against a manually-segmented or pre-processed reference image. Evaluation methods that require user assistance, such as subjective evaluation and supervised evaluation, are infeasible in many vision applications, so unsupervised methods are necessary. Unsupervised evaluation enables the objective comparison of both different segmentation methods and different parameterizations of a single method, without requiring human visual comparisons or comparison with a manually-segmented or pre-processed reference image. Additionally, unsupervised methods generate results for individual images and images whose characteristics may not be known until evaluation time. Unsupervised methods are crucial to real-time segmentation evaluation, and can furthermore enable self-tuning of algorithm parameters based on evaluation results. In this paper, we examine the unsupervised objective evaluation methods that have been proposed in the literature. An extensive evaluation of these methods are presented. The advantages and shortcomings of the underlying design mechanisms in these methods are discussed and analyzed through analytical evaluation and empirical evaluation. Finally, possible future directions for research in unsupervised evaluation are proposed.",
"We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of art of unsupervised evaluation, and then, we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images."
]
} |
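The RCA idea described in the abstract above can be made concrete with a deliberately toy instantiation (our own simplification: a one-feature intensity classifier as the "reverse classifier" and plain Dice overlap; the paper itself uses real classifiers over multi-organ segmentations):

```python
def dice(a, b):
    """Dice overlap between two binary masks given as 0/1 lists."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def train_reverse_classifier(image, predicted_mask):
    """Fit a trivial classifier to the (new image, predicted mask) pair:
    a pixel is foreground if its intensity is closer to the foreground
    mean than to the background mean under the predicted mask."""
    fg = [v for v, m in zip(image, predicted_mask) if m]
    bg = [v for v, m in zip(image, predicted_mask) if not m]
    mu_fg, mu_bg = sum(fg) / len(fg), sum(bg) / len(bg)
    return lambda img: [int(abs(v - mu_fg) <= abs(v - mu_bg)) for v in img]

def rca_score(image, predicted_mask, references):
    """RCA proxy: the best Dice the reverse classifier attains on any
    reference image with available ground truth. High score suggests
    the predicted segmentation was of good quality."""
    clf = train_reverse_classifier(image, predicted_mask)
    return max(dice(clf(ref_img), ref_gt) for ref_img, ref_gt in references)
```

A good predicted mask yields a reverse classifier that segments the reference images well (score near 1), while an inverted or random mask trains a classifier that fails on them, which is exactly the hypothesis the paper validates.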
1702.03115 | 2593496609 | Texture characterization is a key problem in image understanding and pattern recognition. In this paper, we present a flexible shape-based texture representation using shape co-occurrence patterns. More precisely, texture images are first represented by a tree of shapes, each of which is associated with several geometrical and radiometric attributes. Then, four typical kinds of shape co-occurrence patterns based on the hierarchical relationships among the shapes in the tree are learned as codewords. Three different coding methods are investigated for learning the codewords, which can be used to encode any given texture image into a descriptive vector. In contrast with existing works, the proposed approach not only inherits the shape-based method’s strong ability to capture geometrical aspects of textures and high robustness to variations in imaging conditions but also provides a flexible way to consider shape relationships and to compute high-order statistics on the tree. To the best of our knowledge, this is the first time that co-occurrence patterns of explicit shapes have been used as a tool for texture analysis. Experiments on various texture and scene data sets demonstrate the efficiency of the proposed approach. | Over the past decades, texture analysis has been investigated extensively, see e.g. @cite_30 @cite_3 @cite_37 @cite_46 @cite_0 @cite_13 @cite_1 @cite_2 ; an active topic is the development of texture models that efficiently capture both the statistical and the geometrical aspects of textures while remaining robust to variations in imaging conditions. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_46",
"@cite_13"
],
"mid": [
"1484228140",
"",
"2144170305",
"2127006916",
"2129771860",
"2167383966",
"1978563198",
"1965953026"
],
"abstract": [
"We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"",
"We argue for the importance of explicit semantic modelling in human-centred texture analysis tasks such as retrieval, annotation, synthesis, and zero-shot learning. To this end, low-level attributes are selected and used to define a semantic space for texture. 319 texture classes varying in illumination and rotation are positioned within this semantic space using a pair wise relative comparison procedure. Low-level visual features used by existing texture descriptors are then assessed in terms of their correspondence to the semantic space. Textures with strong presence of attributes connoting randomness and complexity are shown to be poorly modelled by existing descriptors. In a retrieval experiment semantic descriptors are shown to outperform visual descriptors. Semantic modelling of texture is thus shown to provide considerable value in both feature selection and in analysis tasks.",
"We present a universal statistical model for texture images in the context of an overcomplete complex wavelet transform. The model is parameterized by a set of statistics computed on pairs of coefficients corresponding to basis functions at adjacent spatial locations, orientations, and scales. We develop an efficient algorithm for synthesizing random images subject to these constraints, by iteratively projecting onto the set of images satisfying each constraint, and we use this to test the perceptual validity of the model. In particular, we demonstrate the necessity of subgroups of the parameter set by showing examples of texture synthesis that fail when those parameters are removed from the set. We also demonstrate the power of our model by successfully synthesizing examples drawn from a diverse collection of artificial and natural textures.",
"A texture descriptor based on the shape index and the accompanying curvedness measure is proposed, and it is evaluated for the automated analysis of astronomical image data. A representative sample of images of low-red shift galaxies from the Sloan Digital Sky Survey (SDSS) serves as a test bed. The goal of applying texture descriptors to these data is to extract novel information about galaxies, information which is often lost in more traditional analysis. In this study, we build a regression model for predicting a spectroscopic quantity, the specific star-formation rate (sSFR). As texture features we consider multi-scale gradient orientation histograms as well as multi-scale shape index histograms, which lead to a new descriptor. Our results show that we can successfully predict spectroscopic quantities from the texture in optical multi-band images. We successfully recover the observed bi-modal distribution of galaxies into quiescent and star-forming. The state-of-the-art for predicting the sSFR is a color-based physical model. We significantly improve its accuracy by augmenting the model with texture information. This study is the first step towards enabling the quantification of physical galaxy properties from imaging data alone.",
"An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.",
"This paper introduces a new texture analysis scheme, which is invariant to local geometric and radiometric changes. The proposed methodology relies on the topographic map of images, obtained from the connected components of level sets. This morphological tool, providing a multi-scale and contrast-invariant representation of images, is shown to be well suited to texture analysis. We first make use of invariant moments to extract geometrical information from the topographic map. This yields features that are invariant to local similarities or local affine transformations. These features are invariant to any local contrast change. We then relax this invariance by computing additional features that are invariant to local affine contrast changes and investigate the resulting analysis scheme by performing classification and retrieval experiments on three texture databases. The obtained experimental results outperform the current state of the art in locally invariant texture analysis.",
"Sensing surface textures by touch is a valuable capability for robots. Until recently it was difficult to build a compliant sensor with high sensitivity and high resolution. The GelSight sensor is compliant and offers sensitivity and resolution exceeding that of the human fingertips. This opens the possibility of measuring and recognizing highly detailed surface textures. The GelSight sensor, when pressed against a surface, delivers a height map. This can be treated as an image, and processed using the tools of visual texture analysis. We have devised a simple yet effective texture recognition system based on local binary patterns, and enhanced it by the use of a multi-scale pyramid and a Hellinger distance metric. We built a database with 40 classes of tactile textures using materials such as fabric, wood, and sandpaper. Our system can correctly categorize materials from this database with high accuracy. This suggests that the GelSight sensor can be useful for material recognition by robots."
]
} |
1702.03456 | 2592468867 | Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. This trend motivated several research efforts to employ drones as standalone systems or to assist legacy deployments. However, several fundamental challenges remain unsolved including: 1) Adequate coverage of sizable targets; 2) Target orientation that render coverage effective only from certain view points; 3) Occlusion by elements in the environment, including other targets. In this paper, we present Argus, a system that provides coverage of wide and oriented targets, using camera-mounted drones, taking into account the challenges stated above. Argus relies on a geometric model that captures both target shapes and coverage constraints. With drones being the scarcest resource in Argus, we study the problem of minimizing the number of drones required to cover a set of such targets and derive a best-possible approximation algorithm. Furthermore, we present a sampling heuristic that yields a comparable performance, yet is up to 100x faster than the approximation algorithm. We build a prototype of Argus to demonstrate and evaluate the proposed coverage algorithms as part of a complete autonomous surveillance system. We evaluate the proposed algorithms using simulations to compare their performance at scale under various conditions. Finally, we present extensions and discuss open problems related to the studied problem. | Full-view coverage is a variant of area coverage with an extra objective of ensuring that any target is covered from all angles @cite_45 . @cite_43 studies the necessary conditions for full-view coverage for static camera deployments, and @cite_26 studies full-view coverage for heterogeneous mobile camera scenarios. The related problem of barrier coverage was studied in @cite_55 , where the objective is to detect any targets crossing the barrier into an area of interest. Full-view barrier coverage was then introduced in @cite_48 and further extended to accommodate stochastic deployments in @cite_14 . | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_48",
"@cite_55",
"@cite_43",
"@cite_45"
],
"mid": [
"1997547017",
"1506634844",
"",
"2045870157",
"",
"2153282579"
],
"abstract": [
"In camera sensor networks (CSNs), full view cov- erage, in which any direction of any point in the operational region is covered by at least one camera sensor, is of great significance since image shot at the frontal viewpoint considerably increases the possibility to recognize the object. However, finding the critical condition to achieve full view coverage in mobile heterogeneous CSNs remains an open question. In this paper, we analyze both the static and mobile random deployed camera sensor networks. A centralized parameter - equivalent sensing radius (ESR) - is defined to evaluate the critical requirement for asymptotic full view coverage in heterogeneous CSNs. We derive the critical sensing range for full view coverage under static model, 2-dimensional random walk mobility model, 1- dimensional random walk mobility model and random rotating model. We then discuss the impact of various mobility patterns on sensing energy consumption and study the relationship between ESR and percentage of full view coverage, and show that random walk mobility model can decrease the sensing energy consumption under certain delay tolerance. To our knowledge, our work is the very first that derive the critical condition to achieve full view coverage in mobile heterogeneous CSNs.",
"Barrier coverage in visual camera sensor networks (visual barrier coverage) has important real-world applications like battlefield surveillance, environmental monitoring, and protection of government property. Cost-effective deployment, a fundamental issue of visual barrier coverage, considers how to deploy the fewest camera sensors along the barrier to detect intruders (e.g., capture faces) with desirable performance. Existing visual barrier coverage approaches like full-view coverage require numerous camera sensors for capturing intruders' faces deterministically for any trajectory and facing angle. However, intruders' trajectories and facing angles are bounded and deterministic intruder detection requires many camera sensors for rare intrusion cases. Certain practical applications can tolerate limited intrusion mis-detection given budget limitations. This paper proposes local face-view barrier coverage, a novel concept that achieves statistical barrier coverage in camera sensor networks leveraging intruders' trajectory lengths l along the barrier and head rotation angles δ. Using (l, δ) and other parameters, we derive a rigorous probability bound for intruder detection for local face-view barrier coverage via a feasible deployment pattern. Our detection probability bound and deployment pattern can guide practical camera sensor network deployments with camera sensor budgets. Extensive evaluations show that local face-view barrier coverage requires up to 50 fewer camera sensors than full-view barrier coverage.",
"",
"When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor. It is enough if the penetrating objects are detected at some point in their trajectory. If a sensor network guarantees that every penetrating object will be detected by at least k distinct sensors before it crosses the barrier of wireless sensors, we say the network provides k-barrier coverage. In this paper, we develop theoretical foundations for k-barrier coverage. We propose efficient algorithms using which one can quickly determine, after deploying the sensors, whether the deployment region is k-barrier covered. Next, we establish the optimal deployment pattern to achieve k-barrier coverage when deploying sensors deterministically. Finally, we consider barrier coverage with high probability when sensors are deployed randomly. The major challenge, when dealing with probabilistic barrier coverage, is to derive critical conditions using which one can compute the minimum number of sensors needed to ensure barrier coverage with high probability. Deriving critical conditions for k-barrier coverage is, however, still an open problem. We derive critical conditions for a weaker notion of barrier coverage, called weak k-barrier coverage.",
"",
"Camera sensors are different from traditional scalar sensors as different cameras from different positions can form distinct views of the object. However, traditional disk sensing model does not consider this intrinsic property of camera sensors. To this end, we propose a novel model called full-view coverage. An object is considered to be full-view covered if for any direction from 0 to 2π (object's facing direction), there is always a sensor such that the object is within the sensor's range and more importantly the sensor's viewing direction is sufficiently close to the object's facing direction. With this model, we propose an efficient method for full-view coverage detection in any given camera sensor networks. We also derive a sufficient condition on the sensor density needed for full-view coverage in a random uniform deployment. Finally, we show a necessary and sufficient condition on the sensor density for full-view coverage in a triangular lattice based deployment."
]
} |
1702.03456 | 2592468867 | Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. This trend motivated several research efforts to employ drones as standalone systems or to assist legacy deployments. However, several fundamental challenges remain unsolved, including: 1) Adequate coverage of sizable targets; 2) Target orientation that renders coverage effective only from certain viewpoints; 3) Occlusion by elements in the environment, including other targets. In this paper, we present Argus, a system that provides coverage of wide and oriented targets, using camera-mounted drones, taking into account the challenges stated above. Argus relies on a geometric model that captures both target shapes and coverage constraints. With drones being the scarcest resource in Argus, we study the problem of minimizing the number of drones required to cover a set of such targets and derive a best-possible approximation algorithm. Furthermore, we present a sampling heuristic that yields a comparable performance, yet is up to 100x faster than the approximation algorithm. We build a prototype of Argus to demonstrate and evaluate the proposed coverage algorithms as part of a complete autonomous surveillance system. We evaluate the proposed algorithms using simulations to compare their performance at scale under various conditions. Finally, we present extensions and discuss open problems related to the studied problem. | The goal of area coverage algorithms is to detect any activity of interest within a certain area in a sensor network deployment, or to ensure that all clients can communicate within an area for wireless communication. Several approaches to area coverage have been studied, including static randomly deployed sensors @cite_32 and strategically placed mobile sensors @cite_62 , using either isotropic @cite_60 or anisotropic sensors @cite_40 . Full-view coverage algorithms are another example of area coverage algorithms.
Target coverage algorithms are used to cover a number of targets within an area, and have been studied for randomly deployed sensors @cite_51 @cite_29 @cite_57 and for strategically placed directional sensors or antennas @cite_4 @cite_1 @cite_38 . Target coverage under quality constraints using randomly placed Pan-Tilt-Zoom (PTZ) cameras was shown to be NP-hard, and a 2-approximation algorithm was presented @cite_29 . A 3-approximation algorithm was developed for antenna placement to cover targets such that the bandwidth demand on each antenna stays within a certain limit @cite_4 . | {
"cite_N": [
"@cite_38",
"@cite_62",
"@cite_4",
"@cite_60",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_57",
"@cite_40",
"@cite_51"
],
"mid": [
"2058040501",
"2539096940",
"2091629531",
"2108662033",
"",
"1980394886",
"",
"",
"1992859463",
"2163807729"
],
"abstract": [
"Recent advancements in manufacturing low-cost wireless battery operated cameras have made their application in Wireless Video Sensor Networks (WVSN) increasingly more feasible and affordable. The application of robotic sensing agents equipped with cameras in WVSNs seems particularly promising in performing coverage tasks for ad hoc surveillance. Their application in this context can be specifically useful for surveying areas with little to no available or affordable infrastructure, or where quick deployment is necessary. In this paper, we address the target coverage problem for finding the minimum number of cameras, their placement, and orientation to cover arbitrarily located targets in an area of interest. We propose a computationally light-weight heuristic, where the number of used mobile cameras is close to those found by near-optimal algorithms. Specifically, we address this problem for non-uniform target distributions that naturally form clusters. Having light-weight heuristics will be particularly useful when the application is required to adapt to target mobility and/or is implemented in embedded systems. Our simulation study shows that when clusters are sufficiently separated, the required number of cameras found by our proposed method is very close to those acquired by the near-optimal algorithm, whereas the computational complexity of our algorithm is about ten times less. We also deploy our algorithm on a drone testbed using off-the-shelf components to verify its feasibility.",
"In this paper, we address the problem of persistently covering an environment with a group of mobile robots. In contrast to traditional coverage, in our scenario the coverage level of the environment is always changing. For this reason, the robots have to continually move to maintain a desired coverage level. In this context, our contribution is a complete approach to the problem, including distributed estimation of the coverage and control of the motion of the robots. First, we present an algorithm that allows every robot to estimate the global coverage function only with local information. We pay special attention to the characterization of the algorithm, establishing bounds on the estimation error, and we demonstrate that the algorithm guarantees a perfect estimation in particular areas. Second, we introduce a new function to determine the possible improvement of the coverage at each point of the environment. Upon this metric, we build a motion control strategy that drives the robots to the points of the highest improvement while following the direction of the gradient of the function. Finally, we simulate the proposal to test its correctness and performance.",
"Motivated by the widespread proliferation of wireless networks employing directional antennas, we study some capacitated covering problems arising in these networks. Geometrically, the area covered by a directional antenna with parameters α, ρ, R is a set of points with polar coordinates (r, θ) such that r ≤ R and α ≤ θ ≤ α + ρ. Given a set of customers, their positions on the plane and their bandwidth demands, the capacitated covering problem considered here is to cover all the customers with the minimum number of directional antennas such that the demands of customers assigned to an antenna stay within a bound. We consider two settings of this capacitated cover problem arising in wireless networks. In the first setting, where the antennas have variable angular range, we present an approximation algorithm with ratio 3. In the setting where the angular range of antennas is fixed, we improve this approximation ratio to 1.5. These results also apply to a related problem of bin packing with deadlines. In this problem we are given a set of items, each with a weight, arrival time and deadline, and we want to pack each item into a bin after it arrives but before its deadline. The objective is to minimize the number of bins used. We present a 3-approximation algorithm for this problem, and a 1.5-approximation algorithm for the special case when each difference between a deadline and the corresponding arrival time is the same.",
"Distributed algorithms for (re)configuring mobile sensors to cover a given area are important for autonomous multi-robot operations in application areas such as surveillance and environmental monitoring. Depending on the assumptions about the choice of the environment, the sensor models, the coverage metric, and the motion models of sensor nodes, there are different versions of the problem that have been formulated and studied. In this paper, we consider a system of holonomic mobile robots equipped with anisotropic sensors (e.g., limited field of view cameras) that are required to cover a polygonal region with polygonal obstacles to detect interesting events. We assume a given probability distribution of the events over a region. Motivated by scenarios where the sensing performance not only depends on the resolution of sensing but also on the relative orientation between the sensing axis and the event, we assume that the probability of detection of an event depends on both sensing parameters and the orientation of observation. We present a distributed gradient-ascent algorithm for reconfiguring the system of mobile robots so that the joint probability of detection of events over the whole region is maximized (i.e., positioning the mobile robots and determining their sensor parameters). As an example case study, we use a system of mobile robots equipped with limited field of view cameras with pan and zoom capabilities. We present simulation results demonstrating the performance of our algorithm.",
"",
"Wireless sensor and actuator networks have been extensively deployed for enhancing industrial control processes and supply-chains, and many forms of surveillance and environmental monitoring. The availability of low-cost mobile robots equipped with a variety of sensors in addition to communication and computational capabilities makes them particularly promising in target coverage tasks for ad hoc surveillance, where quick, low-cost or non-lasting visual sensing solutions are required, e.g. in border protection and disaster recovery. In this paper, we consider the problem of low complexity placement and orientation of mobile cameras to cover arbitrary targets. We tackle this problem by clustering proximal targets, while estimating the camera location and direction for each cluster separately through our cover-set coverage method. Our proposed solutions provide extremely computationally efficient heuristics with only a small increase in number of cameras used, and a small decrease in number of covered targets.",
"",
"",
"Multiple cameras are used to track targets moving amongst obstacles. Surveillance video streamed from a top-view camera is processed to control the orientation of multiple pan-tilt-zoom cameras to cover as many targets as possible at high resolutions. The problem of maximizing the number of covered targets with a set of cameras has been shown to be computationally expensive and hence, several approximations have been suggested in the literature. We develop our own ones, compare them to some existing approaches by extensive simulation and show their superiority. Our new heuristics make an attempt at continuous panning that is needed when moving to real world experimentation to achieve seamless target tracking.",
"We study a novel “coverage by directional sensors” problem with tunable orientations on a set of discrete targets. We propose a Maximum Coverage with Minimum Sensors (MCMS) problem in which coverage in terms of the number of targets to be covered is maximized whereas the number of sensors to be activated is minimized. We present its exact Integer Linear Programming (ILP) formulation and an approximate (but computationally efficient) centralized greedy algorithm (CGA) solution. These centralized solutions are used as baselines for comparison. Then we provide a distributed greedy algorithm (DGA) solution. By incorporating a measure of the sensors residual energy into DGA, we further develop a Sensing Neighborhood Cooperative Sleeping (SNCS) protocol which performs adaptive scheduling on a larger time scale. Finally, we evaluate the properties of the proposed solutions and protocols in terms of providing coverage and maximizing network lifetime through extensive simulations. Moreover, for the case of circular coverage, we compare against the best known existing coverage algorithm."
]
} |
1702.03456 | 2592468867 | Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. This trend motivated several research efforts to employ drones as standalone systems or to assist legacy deployments. However, several fundamental challenges remain unsolved, including: 1) Adequate coverage of sizable targets; 2) Target orientation that renders coverage effective only from certain viewpoints; 3) Occlusion by elements in the environment, including other targets. In this paper, we present Argus, a system that provides coverage of wide and oriented targets, using camera-mounted drones, taking into account the challenges stated above. Argus relies on a geometric model that captures both target shapes and coverage constraints. With drones being the scarcest resource in Argus, we study the problem of minimizing the number of drones required to cover a set of such targets and derive a best-possible approximation algorithm. Furthermore, we present a sampling heuristic that yields a comparable performance, yet is up to 100x faster than the approximation algorithm. We build a prototype of Argus to demonstrate and evaluate the proposed coverage algorithms as part of a complete autonomous surveillance system. We evaluate the proposed algorithms using simulations to compare their performance at scale under various conditions. Finally, we present extensions and discuss open problems related to the studied problem. | There is significant interest in using drones and drone swarms for surveillance and video capture @cite_41 . The challenges of capturing video using drones have been studied in @cite_8 , including low-quality footage due to distance and the need to adapt to target mobility. Several proposals have been made for mobile target tracking using a single drone @cite_25 and for persistent tracking by handing over targets between multiple drones @cite_39 .
Such applications focus on keeping the target in view without considering the angle from which the target is viewed. Another direction of using drones for video capture is autonomous cinematography, where not only having and maintaining targets in view is important, but the aesthetic quality also plays a role in the selection of footage @cite_46 . In earlier work, we developed several target coverage algorithms for targets represented as points and deployed them on our testbed @cite_1 @cite_38 @cite_18 . | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_8",
"@cite_41",
"@cite_1",
"@cite_39",
"@cite_46",
"@cite_25"
],
"mid": [
"2058040501",
"2557827245",
"2046286451",
"2034287380",
"1980394886",
"2092401564",
"2529317733",
""
],
"abstract": [
"Recent advancements in manufacturing low-cost wireless battery operated cameras have made their application in Wireless Video Sensor Networks (WVSN) increasingly more feasible and affordable. The application of robotic sensing agents equipped with cameras in WVSNs seems particularly promising in performing coverage tasks for ad hoc surveillance. Their application in this context can be specifically useful for surveying areas with little to no available or affordable infrastructure, or where quick deployment is necessary. In this paper, we address the target coverage problem for finding the minimum number of cameras, their placement, and orientation to cover arbitrarily located targets in an area of interest. We propose a computationally light-weight heuristic, where the number of used mobile cameras is close to those found by near-optimal algorithms. Specifically, we address this problem for non-uniform target distributions that naturally form clusters. Having light-weight heuristics will be particularly useful when the application is required to adapt to target mobility and/or is implemented in embedded systems. Our simulation study shows that when clusters are sufficiently separated, the required number of cameras found by our proposed method is very close to those acquired by the near-optimal algorithm, whereas the computational complexity of our algorithm is about ten times less. We also deploy our algorithm on a drone testbed using off-the-shelf components to verify its feasibility.",
"The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets, cooperative search, acquisition, and track, cooperative tracking, and multirobot pursuit evasion. We also identify the five major elements that characterize this problem, namely, the coordination method, the environment, the target, the robot and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is useful also for related multirobot applications.",
"Drones, also known as unmanned aerial vehicles (UAVs), are aircraft which can perform autonomous flight. They can easily reach locations which are too difficult to reach or dangerous for human beings and collect images from bird's-eye view through aerial photography. Enabling drones to identify people on the ground is important for a variety of applications, such as surveillance, people search, and remote monitoring. Since faces are part of inherent identities of people, how well face recognition technologies can be used by drones becomes essential for future development of the above applications. In this paper, we conduct empirical studies to evaluate several factors that may influence the performance of face detection and recognition techniques on drones. Our findings show that the current face recognition technologies are capable of recognizing faces on drones with some limits in distance and angle, especially when drones take pictures at high altitudes and the face image is taken from a long distance and with a large angle of depression. We also find that augmenting face models with 3D information may help to boost recognition performance in the case of large angles of depression.",
"The use of miniature Unmanned Aerial Vehicles (UAVs), e.g. quadrocopters, has gained great popularity over the last years. Some complex application scenarios for micro UAVs call for the formation of swarms of multiple drones. In this paper a platform for the creation of such swarms is presented. It consists of commercial quadrocopters enhanced with on-board processing and communication units enabling autonomy of individual drones. Furthermore, a generic ground control station has been realized. Different co-operation strategies for teams of UAVs are currently evaluated with an agent based simulation tool. Finally, complex application scenarios for multiple micro UAVs are presented.",
"Wireless sensor and actuator networks have been extensively deployed for enhancing industrial control processes and supply-chains, and many forms of surveillance and environmental monitoring. The availability of low-cost mobile robots equipped with a variety of sensors in addition to communication and computational capabilities makes them particularly promising in target coverage tasks for ad hoc surveillance, where quick, low-cost or non-lasting visual sensing solutions are required, e.g. in border protection and disaster recovery. In this paper, we consider the problem of low complexity placement and orientation of mobile cameras to cover arbitrary targets. We tackle this problem by clustering proximal targets, while estimating the camera location and direction for each cluster separately through our cover-set coverage method. Our proposed solutions provide extremely computationally efficient heuristics with only a small increase in number of cameras used, and a small decrease in number of covered targets.",
"We study the problem of tracking mobile targets using a team of aerial robots. Each robot carries a camera to detect targets moving on the ground. The overall goal is to plan for the trajectories of the robots in order to track the most number of targets, and accurately estimate the target locations using the images. The two objectives can conflict since a robot may fly to a higher altitude and potentially cover a larger number of targets at the expense of accuracy. We start by showing that k ≥ 3 robots may not be able to track all n targets while maintaining a constant factor approximation of the optimal quality of tracking at all times. Next, we study the problem of choosing robot trajectories to maximize either the number of targets tracked or the quality of tracking. We formulate this problem as the weighted version of a combinatorial optimization problem known as the Maximum Group Coverage (MGC) problem. A greedy algorithm yields a 1/2 approximation for the weighted MGC problem. Finally, we evaluate the algorithm and the sensing model through simulations and preliminary experiments.",
"We present a system to capture video footage of human subjects in the real world. Our system leverages a quadrotor camera to automatically capture well-composed video of two subjects. Subjects are tracked in a large-scale outdoor environment using RTK GPS and IMU sensors. Then, given the tracked state of our subjects, our system automatically computes static shots based on well-established visual composition principles and canonical shots from cinematography literature. To transition between these static shots, we calculate feasible, safe, and visually pleasing transitions using a novel real-time trajectory planning algorithm. We evaluate the performance of our tracking system, and experimentally show that RTK GPS significantly outperforms conventional GPS in capturing a variety of canonical shots. Lastly, we demonstrate our system guiding a consumer quadrotor camera autonomously capturing footage of two subjects in a variety of use cases. This is the first end-to-end system that enables people to leverage the mobility of quadrotors, as well as the knowledge of expert filmmakers, to autonomously capture high-quality footage of people in the real world.",
""
]
} |
1702.02748 | 2592843547 | High penetration of renewable energy sources makes microgrids (MGs) environmentally friendly. However, the stochastic input from renewable energy resources brings difficulty in balancing energy supply and demand. Purchasing extra energy from the macrogrid to deal with energy shortage will increase MG energy cost. To mitigate the intermittent nature of renewable energy, energy trading and energy storage, which can exploit the diversity of renewable energy generation across space and time, are efficient and cost-effective methods. However, current energy storage control actions impact future control actions, which brings a challenge to energy management. In addition, since MGs participate in energy trading as prosumers, an efficient trading mechanism is called for. Therefore, this paper focuses on the problem of MG energy management and trading. The energy trading problem is formulated as a stochastic optimization one with both individual profit and social welfare maximization. First, a Lyapunov optimization based algorithm is developed to solve the stochastic problem. Second, a double-auction based mechanism is provided to attract truthful MG bidding for buying and selling energy. Through theoretical analysis, we demonstrate that an individual MG can achieve a time average energy cost close to the offline optimum, with a tradeoff between storage capacity and energy trading cost. Meanwhile, social welfare is also asymptotically maximized under the double auction. Simulation results based on real world data show the effectiveness of our algorithm. | There has been some recent research on energy trading between MGs. References @cite_7 and @cite_4 consider individual MG operation with energy storage and renewable energy sources. However, an MG operated without cooperation will waste some renewable energy due to limited energy storage capacity. In @cite_6 , cooperation allows an MG to borrow energy from other MGs which have extra energy.
Iterative double auction is introduced into trading in @cite_5 and @cite_2 . The auction mechanism ensures that participants benefit from cooperation and trading when the auctioneer has complete information about the current MG status. Reference @cite_0 adapts the double auction mechanism for another scenario called inter-cloud trading. The algorithm can reach asymptotic social welfare maximization with limited information, which protects individual cloud privacy. Nevertheless, an energy trading framework applied within MGs should be developed, in which MGs attend the trading while considering renewable energy, energy storage and customer demands. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2068375961",
"2118249393",
"2005147352",
"1595829918",
"2017228564",
"1552903754"
],
"abstract": [
"In this paper, we investigate the minimization of the total energy cost of multiple residential households in a smart grid neighborhood sharing a load serving entity. Specifically, each household may have renewable generation, energy storage as well as inelastic and elastic energy loads, and the load serving entity attempts to coordinate the energy consumption of these households in order to minimize the total energy cost within this neighborhood. The renewable generation, the energy demand arrival, and the energy cost function are all stochastic processes and evolve according to some, possibly unknown, probabilistic laws. We develop an online control algorithm, called Lyapunov-based cost minimization algorithm (LCMA), which jointly considers the energy management and demand management decisions. LCMA only needs to keep track of the current values of the underlying stochastic processes without requiring any knowledge of their statistics. Moreover, a decentralized algorithm to implement LCMA is also developed, which can preserve the privacy of individual household owners. Numerical results based on real-world trace data show that our control algorithm can effectively reduce the total energy cost in the neighborhood.",
"Decentralized energy management is of paramount importance in smart micro grids with renewables for various reasons including environmental friendliness, reduced communication overhead, and resilience to failures. In this context, the present work deals with distributed economic dispatch and demand response initiatives for grid-connected micro grids with high-penetration of wind power. To cope with the challenge of the wind's intrinsically stochastic availability, a novel energy planning approach involving the actual wind energy as well as the energy traded with the main grid, is introduced. A stochastic optimization problem is formulated to minimize the micro grid net cost, which includes conventional generation cost as well as the expected transaction cost incurred by wind uncertainty. To bypass the prohibitively high-dimensional integration involved, an efficient sample average approximation method is utilized to obtain a solver with guaranteed convergence. Leveraging the special infrastructure of the micro grid, a decentralized algorithm is further developed via the alternating direction method of multipliers. Case studies are tested to corroborate the merits of the novel approaches.",
"One of the most important challenges in smart grid systems is the integration of renewable energy resources into its design. In this paper, two different techniques to mitigate the time-varying and intermittent nature of renewable energy generation are considered. The first one is the use of storage, which smooths out the fluctuations in the renewable energy generation across time. The second technique is the concept of distributed generation combined with cooperation by exchanging energy among the distributed sources. This technique averages out the variation in energy production across space. This paper analyzes the tradeoff between these two techniques. The problem is formulated as a stochastic optimization problem with the objective of minimizing the time average cost of energy exchange within the grid. First, an analytical model of the optimal cost is provided by investigating the steady state of the system for some specific scenarios. Then, an algorithm to solve the cost minimization problem using the technique of Lyapunov optimization is developed, and results for the performance of the algorithm are provided. These results show that in the presence of limited storage devices, the grid can benefit greatly from cooperation, whereas in the presence of large storage capacity, cooperation does not yield much benefit. Further, it is observed that most of the gains from cooperation can be obtained by exchanging energy only among a few energy-harvesting sources.",
"By sharing resources among different cloud providers, the paradigm of federated clouds exploits temporal availability of resources and geographical diversity of operational costs for efficient job service. While interoperability issues across different cloud platforms in a cloud federation have been extensively studied, fundamental questions on cloud economics remain: When and how should a cloud trade resources (e.g., virtual machines) with others, such that its net profit is maximized over the long run, while a close-to-optimal social welfare in the entire federation can also be guaranteed? To answer this question, a number of important, interrelated decisions, including job scheduling, server provisioning, and resource pricing, should be dynamically and jointly made, while the long-term profit optimality is pursued. In this work, we design efficient algorithms for intercloud virtual machine (VM) trading and scheduling in a cloud federation. For VM transactions among clouds, we design a double-auction-based mechanism that is strategy-proof, individual-rational, ex-post budget-balanced, and efficient to execute over time. Closely combined with the auction mechanism is a dynamic VM trading and scheduling algorithm, which carefully decides the true valuations of VMs in the auction, optimally schedules stochastic job arrivals with different service level agreements (SLAs) onto the VMs, and judiciously turns on and off servers based on the current electricity prices. Through rigorous analysis, we show that each individual cloud, by carrying out the dynamic algorithm in the online double auction, can achieve a time-averaged profit arbitrarily close to the offline optimum. Asymptotic optimality in social welfare is also achieved under homogeneous cloud settings. We carry out simulations to verify the effectiveness of our algorithms, and examine the achievable social welfare under heterogeneous cloud settings, as driven by the real-world Google cluster usage traces.",
"The emergence of cloud computing has established a trend towards building energy-hungry and geographically distributed data centers. Due to their enormous energy consumption, data centers are expected to have major impact on the electric power grid by significantly increasing the load at locations where they are built. Dynamic energy pricing policies in the recently proposed smart grid technology can incentivize the cloud computing controller to shift their computation load towards data centers in regions with cheaper electricity. On the other hand, distributed data centers also provide opportunities to help the smart grid to improve load balancing and robustness. To shed some light into these opportunities, this paper considers an interaction system of the smart grid, the cloud computing system, and other load devices. A nested two stage game based formulation is proposed based on the location-dependent real-time pricing policy of the smart grid. The leading player in this game is the smart grid controller that announces the relationship between the electricity price at each power bus and the total load demand at that bus. In the second stage, the cloud computing controller performs resource allocation as response to the pricing functions, whereas the other load devices perform demand side management. The objective of the smart grid controller is to maximize its own profit and perform load balancing among power buses, whereas the objective of the cloud computing controller is to maximize its own profit with respect to the location-dependent pricing functions. The optimal strategies are derived based on the backward induction principle for the smart grid controller, the cloud computing controller, and the other load devices, using convex optimization and heuristic search.",
"Mobile data offloading through complementary network technologies such as WiFi and femtocell can significantly alleviate network congestion and enhance users' QoS. In this paper we consider a market where mobile network operators (MNOs) lease third-party deployed WiFi or femtocell access points (APs) to dynamically offload the traffic of their mobile users. We assume that each MNO can employ multiple APs and each AP can concurrently serve traffic from multiple MNOs. We design an iterative double auction mechanism that ensures the efficient operation of the market, where MNOs maximize their offloading benefits and APs minimize their offloading costs. Such a mechanism incorporates the special characteristics of the wireless network, such as the coupling of MNOs' offloading decisions and APs' capacity constraints. The proposed market scheme does not require full information about the MNOs and APs, incurs minimum communication overhead, and creates non-negative revenue for the market broker."
]
} |
1702.02512 | 2952120591 | This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline which estimates the pose of the sensor by registering the projection of a 3D semi-dense map of the reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour fields under the Euclidean distance criterion, which permits the use of compact Gauss-Newton updates in the optimization. The registration is formulated as a maximum a posteriori problem to deal with outliers and sensor noise, and consequently the equivalent weighted least squares problem is solved by the iteratively reweighted least squares method. A variety of robust weight functions are tested and the optimum is determined based on the characteristics of the sensor model. Extensive evaluation on publicly available RGB-D datasets shows that the proposed method predominantly outperforms existing state-of-the-art methods. | Lines are alternative features to points and have been widely used in many VO and SLAM frameworks such as @cite_2 @cite_13 . One reason is that lines are abundant in man-made structures and environments, and do not depend on the presence of rich texture. Another reason is that line features are easily parametrized and included in a bundle adjustment (BA) pipeline @cite_2 @cite_3 for the purpose of global optimization. However, straight lines are still not a fully general feature because object contours can be arbitrary curves in 3D space. Therefore, Nurutdinova et al. present a method that uses parametric curves as landmarks for motion estimation and BA @cite_17 . Furthermore, Engel et al. apply the direct method to semi-dense regions @cite_24 @cite_23 , which fully utilizes the photometric information around all boundaries, edges and contours.
The most relevant work to ours is @cite_20 , which presents a direct edge alignment approach for 6-DOF tracking. They address the non-differentiability of their Distance Transform (DT)-based cost function by using a sub-gradient method. In contrast, we improve the differentiability of the cost function intrinsically and achieve more accurate results at a comparable computational cost. | {
"cite_N": [
"@cite_3",
"@cite_24",
"@cite_23",
"@cite_2",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1661995841",
"2140599684",
"612478963",
"2121308169",
"2198745846",
"2410084394",
"2198628454"
],
"abstract": [
"The ability to localise a camera moving in a previously unknown environment is desirable for a wide range of applications. In computer vision this problem is studied as monocular SLAM. Recent years have seen improvements to the usability and scalability of monocular SLAM systems to the point that they may soon find uses outside of laboratory conditions. However, the robustness of these systems to rapid camera motions (we refer to this quality as agility) still lags behind that of tracking systems which use known object models. In this paper we attempt to remedy this. We present two approaches to improving the agility of a keyframe-based SLAM system: Firstly, we add edge features to the map and exploit their resilience to motion blur to improve tracking under fast motion. Secondly, we implement a very simple inter-frame rotation estimator to aid tracking when the camera is rapidly panning --- and demonstrate that this method also enables a trivially simple yet effective relocalisation method. Results show that a SLAM system combining points, edge features and motion initialisation allows highly agile tracking at a moderate increase in processing time.",
"We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking - which does not depend on visual features - while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is of large practical value for robotics and augmented reality applications.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on Sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"While many visual simultaneous localization and mapping (SLAM) systems use point features as landmarks, few take advantage of the edge information in images. Those SLAM systems that do observe edge features do not consider edges with all degrees of freedom. Edges are difficult to use in vision SLAM because of selection, observation, initialization and data association challenges. A map that includes edge features, however, contains higher-order geometric information useful both during and after SLAM. We define a well-localized edge landmark and present an efficient algorithm for selecting such landmarks. Further, we describe how to initialize new landmarks, observe mapped landmarks in subsequent images, and address the data association challenges of edges. Our methods, implemented in a particle-filter SLAM system, operate at frame rate on live video sequences.",
"Large lighting variation challenges all visual odometry methods, even with RGB-D cameras. Here we propose a line segment-based RGB-D indoor odometry algorithm robust to lighting variation. We know line segments are abundant indoors and less sensitive to lighting change than point features. However, depth data are often noisy, corrupted or even missing for line segments which are often found on object boundaries where significant depth discontinuities occur. Our algorithm samples depth data along line segments, and uses a random sample consensus approach to identify correct depth and estimate 3D line segments. We analyze 3D line segment uncertainties and estimate camera motion by minimizing the Mahalanobis distance. In experiments we compare our method with two state-of-the-art methods including a keypoint-based approach and a dense visual odometry algorithm, under both constant and varying lighting. Our method demonstrates superior robustness to lighting change by outperforming the competing methods on 6 out of 8 long indoor sequences under varying lighting. Meanwhile our method also achieves improved accuracy even under constant lighting when tested using public data.",
"There has been a paradigm shifting trend towards feature-less methods due to their elegant formulation, accuracy and ever increasing computational power. In this work, we present a direct edge alignment approach for 6-DOF tracking. We argue that photo-consistency based methods are plagued by a much smaller convergence basin and are extremely sensitive to noise, changing illumination and fast motion. We propose to use the Distance Transform in the energy formulation which can significantly extend the influence of the edges for tracking. We address the problem of non-differentiability of our cost function and of the previous methods by use of a sub-gradient method. Through extensive experiments we show that the proposed method gives comparable performance to the previous method under nominal conditions and is able to run at 30 Hz in single threaded mode. In addition, under large motion we demonstrate our method outperforms previous methods using the same runtime configuration for our method.",
"Modern structure from motion (SfM) remains dependent on point features to recover camera positions, meaning that reconstruction is severely hampered in low-texture environments, for example scanning a plain coffee cup on an uncluttered table. We show how 3D curves can be used to refine camera position estimation in challenging low-texture scenes. In contrast to previous work, we allow the curves to be partially observed in all images, meaning that for the first time, curve-based SfM can be demonstrated in realistic scenes. The algorithm is based on bundle adjustment, so needs an initial estimate, but even a poor estimate from a few point correspondences can be substantially improved by including curves, suggesting that this method would benefit many existing systems."
]
} |
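The DT-based edge alignment discussed in the related-work text above can be illustrated with a small self-contained sketch: build a Euclidean distance transform of a reference edge map, then score candidate poses by the summed distance of the warped current edges. This is only a toy (brute-force DT, pure translation search over integer shifts); the cited trackers use efficient nearest-neighbour fields and 6-DOF Gauss-Newton updates, and every name below is invented for the example.

```python
import numpy as np

def distance_transform(edge_mask):
    """Brute-force Euclidean distance transform of a small binary edge map
    (a toy stand-in for the efficient nearest-neighbour fields used for speed)."""
    H, W = edge_mask.shape
    ys, xs = np.nonzero(edge_mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    gy, gx = np.mgrid[0:H, 0:W]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
    # distance from every pixel to its nearest edge pixel
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(H, W)

def alignment_cost(dt, edge_pts, shift):
    """Sum of distances from shifted edge points to the nearest reference
    edge -- the quantity a DT-based tracker drives towards zero."""
    p = np.round(edge_pts + shift).astype(int)
    p = np.clip(p, 0, np.array(dt.shape) - 1)
    return dt[p[:, 0], p[:, 1]].sum()

# reference edge map: a vertical line at column 5
ref = np.zeros((16, 16), dtype=bool)
ref[:, 5] = True
dt = distance_transform(ref)

# current edges: the same line observed 3 pixels to the right (column 8)
cur_pts = np.stack([np.arange(16), np.full(16, 8)], axis=1)

# exhaustive search over horizontal shifts recovers the true offset of -3
costs = {dx: alignment_cost(dt, cur_pts, np.array([0, dx])) for dx in range(-5, 6)}
best = min(costs, key=costs.get)
```

Because the DT is piecewise linear in the pixel grid, its gradient is undefined exactly on edge crests, which is the non-differentiability the sub-gradient method in @cite_20 works around.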
1702.02512 | 2952120591 | This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline which estimates the pose of the sensor by registering the projection of a 3D semi-dense map of the reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour fields under the Euclidean distance criterion, which permits the use of compact Gauss-Newton updates in the optimization. The registration is formulated as a maximum a posteriori problem to deal with outliers and sensor noise, and consequently the equivalent weighted least squares problem is solved by the iteratively reweighted least squares method. A variety of robust weight functions are tested and the optimum is determined based on the characteristics of the sensor model. Extensive evaluation on publicly available RGB-D datasets shows that the proposed method predominantly outperforms existing state-of-the-art methods. | The ICP algorithm and its close derivatives @cite_16 @cite_9 @cite_6 @cite_8 still represent the methods of choice for real-time LIDAR tracking, though expensive computational resources such as GPUs are sometimes necessary. The advent of RGB-D cameras has, however, led to a new generation of 2D-3D registration algorithms that make hybrid use of both depth and RGB information. For instance, Steinbrücker et al. use the depth information along with the optimized relative transformation to warp one RGB-D image to the next @cite_0 , thus permitting direct and dense photometric error minimization. A similar idea is applied in @cite_19 @cite_24 @cite_23 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_24",
"@cite_23",
"@cite_16"
],
"mid": [
"1716229439",
"1953793983",
"",
"2091226544",
"2021930164",
"2140599684",
"612478963",
"2127045032"
],
"abstract": [
"In this paper we present an extension to the KinectFusion algorithm that permits dense mesh-based mapping of extended scale environments in real-time. This is achieved through (i) altering the original algorithm such that the region of space being mapped by the KinectFusion algorithm can vary dynamically, (ii) extracting a dense point cloud from the regions that leave the KinectFusion volume due to this variation, and, (iii) incrementally adding the resulting points to a triangular mesh representation of the environment. The system is implemented as a set of hierarchical multi-threaded components which are capable of operating in real-time. The architecture facilitates the creation and integration of new modules with minimal impact on the performance on the dense volume tracking and surface reconstruction modules. We provide experimental results demonstrating the system’s ability to map areas considerably beyond the scale of the original KinectFusion algorithm including a two story apartment and an extended sequence taken from a car at night. In order to overcome failure of the iterative closest point (ICP) based odometry in areas of low geometric features we have evaluated the Fast Odometry from Vision (FOVIS) system as an alternative. We provide a comparison between the two approaches where we show a trade off between the reduced drift of the visual odometry approach and the higher local mesh quality of the ICP-based approach. Finally we present ongoing work on incorporating full simultaneous localisation and mapping (SLAM) pose-graph optimisation.",
"Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because ICP has many variants whose performances depend on the environment and the sensor, hundreds of variations have been published. However, no comparison frameworks are available, leading to an arduous selection of an appropriate variant for particular experimental conditions. The first contribution of this paper consists of a protocol that allows for a comparison between ICP variants, taking into account a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications, while being modular enough to ease comparison of multiple solutions. This paper presents two examples of these field applications. The last contribution is the comparison of two baseline ICP variants using data sets that cover a rich variety of environments. Besides demonstrating the need for improved ICP methods for natural, unstructured and information-deprived environments, these baseline variants also provide a solid basis to which novel solutions could be compared. The combination of our protocol, software, and baseline results demonstrate convincingly how open-source software can push forward the research in mapping and navigation.",
"",
"We present an energy-based approach to visual odometry from RGB-D images of a Microsoft Kinect camera. To this end we propose an energy function which aims at finding the best rigid body motion to map one RGB-D image into another one, assuming a static scene filmed by a moving camera. We then propose a linearization of the energy function which leads to a 6×6 normal equation for the twist coordinates representing the rigid body motion. To allow for larger motions, we solve this equation in a coarse-to-fine scheme. Extensive quantitative analysis on recently proposed benchmark datasets shows that the proposed solution is faster than a state-of-the-art implementation of the iterative closest point (ICP) algorithm by two orders of magnitude. While ICP is more robust to large camera motion, the proposed method gives better results in the regime of small displacements which are often the case in camera tracking applications.",
"The goal of our work is to provide a fast and accurate method to estimate the camera motion from RGB-D images. Our approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. We estimate the camera motion using non-linear minimization in combination with a coarse-to-fine scheme. To allow for noise and outliers in the image data, we propose to use a robust error function that reduces the influence of large residuals. Furthermore, our formulation allows for the inclusion of a motion model which can be based on prior knowledge, temporal filtering, or additional sensors like an IMU. Our method is attractive for robots with limited computational resources as it runs in real-time on a single CPU core and has a small, constant memory footprint. In an extensive set of experiments carried out both on a benchmark dataset and synthetic data, we demonstrate that our approach is more accurate and robust than previous methods. We provide our software under an open source license.",
"We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking - which does not depend on visual features - while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is of large practical value for robotics and augmented reality applications.",
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on Sim(3), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"The increasing number of ICP variants leads to an explosion of algorithms and parameters. This renders difficult the selection of the appropriate combination for a given application. In this paper, we propose a state-of-the-art, modular, and efficient implementation of an ICP library. We took advantage of the recent availability of fast depth cameras to demonstrate one application example: a 3D pose tracker running at 30 Hz. For this application, we show the modularity of our ICP library by optimizing the use of lean and simple descriptors in order to ease the matching of 3D point clouds. This tracker is then evaluated using datasets recorded along a ground truth of millimeter accuracy. We provide both source code and datasets to the community in order to accelerate further comparisons in this field."
]
} |
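The direct photometric minimization behind the warping-based methods cited in the related-work text above can be illustrated in one dimension: estimate a sub-pixel shift between two intensity signals by Gauss-Newton on the photometric residual. This is a sketch of the principle only (the real systems warp full RGB-D images over 6-DOF poses with coarse-to-fine schemes and robust weighting); the function names and the synthetic signal are invented for the example.

```python
import numpy as np

def estimate_shift(I_ref, I_cur, n_iter=20):
    """Gauss-Newton on the 1D photometric residual r(s) = I_cur(x + s) - I_ref(x),
    a toy analogue of direct (dense) image alignment."""
    x = np.arange(len(I_ref), dtype=float)
    g = np.gradient(I_cur)                     # image gradient, the Jacobian source
    s = 0.0
    for _ in range(n_iter):
        xw = x + s
        valid = (xw >= 0) & (xw <= len(I_cur) - 1)
        Iw = np.interp(xw[valid], x, I_cur)    # warped current signal
        J = np.interp(xw[valid], x, g)         # dr/ds at the warped positions
        r = Iw - I_ref[valid]                  # photometric residual
        s -= (J @ r) / (J @ J)                 # 1-parameter normal-equation update
    return s

# a smooth signal and a copy shifted by 3.4 samples
idx = np.arange(200, dtype=float)
f = lambda t: np.sin(2.0 * np.pi * t / 50.0)
I_ref, I_cur = f(idx), f(idx - 3.4)
s_hat = estimate_shift(I_ref, I_cur)           # converges to ~3.4
```

The same structure scales to images: the scalar shift becomes a 6-DOF twist, `np.interp` becomes bilinear warping through the depth map, and the 1x1 normal equation becomes the 6x6 system solved per iteration.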
1702.02512 | 2952120591 | This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline which estimates the pose of the sensor by registering the projection of a 3D semi-dense map of the reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour fields under the Euclidean distance criterion, which permits the use of compact Gauss-Newton updates in the optimization. The registration is formulated as a maximum a posteriori problem to deal with outliers and sensor noise, and consequently the equivalent weighted least squares problem is solved by the iteratively reweighted least squares method. A variety of robust weight functions are tested and the optimum is determined based on the characteristics of the sensor model. Extensive evaluation on publicly available RGB-D datasets shows that the proposed method predominantly outperforms existing state-of-the-art methods. | When system noise and outliers are taken into account, M-estimators are popular choices for re-weighting the naïve least squares problem. The earliest tutorial on using different M-estimators, in the application of conic fitting, was given in @cite_12 . More recently, Aftab et al. investigated the full range of robust M-estimators that are amenable to IRLS @cite_1 . Given the success of IRLS and M-estimators in motion estimation works such as @cite_18 @cite_19 @cite_24 , we utilize them in our work as well. | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_12"
],
"mid": [
"",
"1967023569",
"2140599684",
"2021930164",
"2058009001"
],
"abstract": [
"",
"This paper presents a way of using the Iteratively Reweighted Least Squares (IRLS) method to minimize several robust cost functions such as the Huber function, the Cauchy function and others. It is known that IRLS (otherwise known as Weiszfeld) techniques are generally more robust to outliers than the corresponding least squares methods, but the full range of robust M-estimators that are amenable to IRLS has not been investigated. In this paper we address this question and show that IRLS methods can be used to minimize most common robust M-estimators. An exact condition is given and proved for decrease of the cost, from which convergence follows. In addition to the advantage of increased robustness, the proposed algorithm is far simpler than the standard L1 Weiszfeld algorithm. We show the applicability of the proposed algorithm to the rotation averaging, triangulation and point cloud alignment problems.",
"We propose a fundamentally novel approach to real-time visual odometry for a monocular camera. It allows to benefit from the simplicity and accuracy of dense tracking - which does not depend on visual features - while running in real-time on a CPU. The key idea is to continuously estimate a semi-dense inverse depth map for the current frame, which in turn is used to track the motion of the camera using dense image alignment. More specifically, we estimate the depth of all pixels which have a non-negligible image gradient. Each estimate is represented as a Gaussian probability distribution over the inverse depth. We propagate this information over time, and update it with new measurements as new images arrive. In terms of tracking accuracy and computational speed, the proposed method compares favorably to both state-of-the-art dense and feature-based visual odometry and SLAM algorithms. As our method runs in real-time on a CPU, it is of large practical value for robotics and augmented reality applications.",
"The goal of our work is to provide a fast and accurate method to estimate the camera motion from RGB-D images. Our approach registers two consecutive RGB-D frames directly upon each other by minimizing the photometric error. We estimate the camera motion using non-linear minimization in combination with a coarse-to-fine scheme. To allow for noise and outliers in the image data, we propose to use a robust error function that reduces the influence of large residuals. Furthermore, our formulation allows for the inclusion of a motion model which can be based on prior knowledge, temporal filtering, or additional sensors like an IMU. Our method is attractive for robots with limited computational resources as it runs in real-time on a single CPU core and has a small, constant memory footprint. In an extensive set of experiments carried out both on a benchmark dataset and synthetic data, we demonstrate that our approach is more accurate and robust than previous methods. We provide our software under an open source license.",
"Almost all problems in computer vision are related in one form or another to the problem of estimating parameters from noisy data. In this tutorial, we present what is probably the most commonly used techniques for parameter estimation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormalization; Kalman filtering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention has been devoted to discussions about the choice of appropriate minimization criteria and the robustness of the different techniques. Their application to conic fitting is described."
]
} |
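The IRLS machinery with an M-estimator weight, as discussed in the related-work text above, reduces to repeatedly re-solving weighted normal equations until the weights stabilise. The sketch below fits a line under gross outliers using Huber weights; it is a generic illustration (the paper applies the same scheme to pose residuals, not line fitting), and the constants and names are illustrative only.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber M-estimator weight: 1 for small residuals, k/|r| beyond k."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def irls(A, b, n_iter=30, k=1.345):
    """Iteratively reweighted least squares for min_x sum_i rho(a_i . x - b_i)
    with the Huber rho; each iteration solves (A^T W A) x = A^T W b."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialisation
    for _ in range(n_iter):
        r = A @ x - b
        w = huber_weight(r, k)
        Aw = A * w[:, None]                     # rows of A scaled by their weight
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

# line fit y = 2 + 3 t with 10% of the points pushed far off the line
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * t + 0.01 * rng.standard_normal(50)
y[::10] += 20.0                                 # gross outliers
A = np.column_stack([np.ones_like(t), t])
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]     # badly biased by the outliers
x_irls = irls(A, y)                             # close to (2, 3)
```

Swapping `huber_weight` for a Cauchy or Tukey weight changes only the reweighting line, which is why IRLS is a convenient harness for comparing robust weight functions.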
1702.02540 | 2587529872 | Although deep learning models have proven effective at solving problems in natural language processing, the mechanism by which they come to their conclusions is often unclear. As a result, these models are generally treated as black boxes, yielding no insight into the underlying learned patterns. In this paper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new approach for tracking the importance of a given input to the LSTM for a given output. By identifying consistently important patterns of words, we are able to distill state-of-the-art LSTMs on sentiment analysis and question answering into a set of representative phrases. This representation is then quantitatively validated by using the extracted phrases to construct a simple, rule-based classifier which approximates the output of the LSTM. | Another approach that has emerged in the literature @cite_15 @cite_0 @cite_2 is to examine, for each word in the document, the norm of the derivative of the loss function with respect to the embedding parameters for that word. This bridges the gap between the high-dimensional cell state and low-dimensional outputs. These techniques are general: they are applicable to visualizing the importance of sets of input coordinates to output coordinates of any differentiable function. In this work, we describe techniques that are designed around the structure of LSTMs, and show that they can give better results in that setting. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_2"
],
"mid": [
"",
"2440757793",
"2510317721"
],
"abstract": [
"",
"Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text's score. Using Long-Short Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent results over similar approaches. In an attempt to make our results more interpretable, and inspired by recent advances in visualizing neural networks, we introduce a novel method for identifying the regions of the text that the model has found more discriminative.",
"In a variety of application domains the content to be recommended to users is associated with text. This includes research papers, movies with associated plot summaries, news articles, blog posts, etc. Recommendation approaches based on latent factor models can be extended naturally to leverage text by employing an explicit mapping from text to factors. This enables recommendations for new, unseen content, and may generalize better, since the factors for all items are produced by a compactly-parametrized model. Previous work has used topic models or averages of word embeddings for this mapping. In this paper we present a method leveraging deep recurrent neural networks to encode the text sequence into a latent vector, specifically gated recurrent units (GRUs) trained end-to-end on the collaborative filtering task. For the task of scientific paper recommendation, this yields models with significantly higher accuracy. In cold-start scenarios, we beat the previous state-of-the-art, all of which ignore word order. Performance is further improved by multi-task learning, where the text encoder network is trained for a combination of content recommendation and item metadata prediction. This regularizes the collaborative filtering model, ameliorating the problem of sparsity of the observed rating matrix."
]
} |
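The gradient-norm importance measure described in the related-work text above can be demonstrated on a model small enough to differentiate by hand; the cited works compute the same quantity by backpropagating through a full LSTM. The toy classifier, its pooling scheme, and all names below are invented purely for illustration.

```python
import numpy as np

def loss(E, w, y):
    """Logistic loss of a toy classifier: each word embedding e_i contributes
    tanh(w . e_i), and the contributions are summed into a single logit."""
    z = np.tanh(E @ w).sum()
    p = 1.0 / (1.0 + np.exp(-z))
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def word_saliency(E, w, y):
    """Per-word saliency ||dL/dE_i||, differentiated by hand via the chain
    rule: dL/dz = p - y and dz/dE_i = (1 - tanh^2(w . e_i)) w."""
    a = np.tanh(E @ w)
    p = 1.0 / (1.0 + np.exp(-a.sum()))
    dE = (p - y) * (1.0 - a ** 2)[:, None] * w[None, :]
    return np.linalg.norm(dE, axis=1)

rng = np.random.default_rng(1)
E = rng.standard_normal((4, 5))   # 4 "words", 5-dimensional embeddings
w = rng.standard_normal(5)
sal = word_saliency(E, w, 1.0)    # one importance score per word
```

Ranking words by `sal` is the gradient-based visualization idea in miniature; note that strongly saturated words can receive small gradients, one of the known caveats of this family of methods.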
1702.02363 | 2589025090 | Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content-specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC). | As research on NER has been pushing the limits of automated systems performing named-entity recognition, the need for annotated datasets and benchmarks is also increasing. Knowledge bases are important for NLP research, since they provide a structured schema of topics that can be used to annotate entities with fine-grained types and/or categorize raw texts into related domains. The authors of @cite_24 published benchmark evaluations that compare three datasets that use semantic information from KBs: DBpedia Spotlight @cite_12 , KORE50 @cite_8 @cite_19 and the Wikilinks Corpus @cite_9 . These datasets are in English and constructed with the aim of evaluating the performance of NER systems. The authors present the statistics of each dataset and baseline performances of various algorithms. There are other methodologies which leverage KBs for named-entity extraction and linking; however, most of them are not publicly available @cite_8 @cite_16 . | {
"cite_N": [
"@cite_8",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_16",
"@cite_12"
],
"mid": [
"11298561",
"2099982145",
"2399640664",
"167809298",
"2189149111",
"2104583100"
],
"abstract": [
"Disambiguating named entities in natural-language text maps mentions of ambiguous names onto canonical entities like people or places, registered in a knowledge base such as DBpedia or YAGO. This paper presents a robust method for collective disambiguation, by harnessing context from knowledge bases and using a new form of coherence graph. It unifies prior approaches into a comprehensive framework that combines three measures: the prior probability of an entity being mentioned, the similarity between the contexts of a mention and a candidate entity, as well as the coherence among candidate entities for all mentions together. The method builds a weighted graph of mentions and candidate entities, and computes a dense subgraph that approximates the best joint mention-entity mapping. Experiments show that the new method significantly outperforms prior methods in terms of accuracy, with robust behavior across a variety of inputs.",
"Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction. For large collections, it is clearly impractical to consider all possible groupings of mentions into distinct entities. To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference. To evaluate these ideas, we constructed a labeled corpus of 1.5 million disambiguated mentions in Web pages by selecting link anchors referring to Wikipedia entities. We show that the combination of the hierarchical model with distributed inference quickly obtains high accuracy (with error reduction of 38 ) on this large dataset, demonstrating the scalability of our approach.",
"In the last years, various tools for automatic semantic annotation of textual information have emerged. The main challenge of all approaches is to solve ambiguity of natural language and assign unique semantic entities according to the present context. To compare the different approaches a ground truth namely an annotated benchmark is essential. But, besides the actual disambiguation approach the achieved evaluation results are also dependent on the characteristics of the benchmark dataset and the expressiveness of the dictionary applied to determine entity candidates. This paper presents statistical analyses and mapping experiments on different benchmarks and dictionaries to identify characteristics and structure of the respective datasets.",
"We present AIDA, a framework and online tool for entity detection and disambiguation. Given a natural-language text or a Web table, we map mentions of ambiguous names onto canonical entities like people or places, registered in a knowledge base like DBpedia, Freebase, or YAGO. AIDA is a robust framework centred around collective disambiguation exploiting the prominence of entities, similarity between the context of the mention and its candidates, and the coherence among candidate entities for all mentions. We have developed a Web-based online interface for AIDA where different formats of inputs can be processed on the fly, returning proper entities and showing intermediate steps of the disambiguation process.",
"The past decade has seen the emergence of web-scale structured and linked semantic knowledge resources (e.g., Freebase, DBPedia). These semantic knowledge graphs provide a scalable “schema for the web”, representing a significant opportunity for the spoken language understanding (SLU) research community. This paper leverages these resources to bootstrap a web-scale semantic parser with no requirement for semantic schema design, no data collection, and no manual annotations. Our approach is based on an iterative graph crawl algorithm. From an initial seed node (entity-type), the method learns the related entity-types from the graph structure, and automatically annotates documents that can be linked to the node (e.g., Wikipedia articles, web search documents). Following the branches, the graph is crawled and the procedure is repeated. The resulting collection of annotated documents is used to bootstrap webscale conditional random field (CRF) semantic parsers. Finally, we use a maximum-a-posteriori (MAP) unsupervised adaptation technique on sample data from a specific domain to refine the parsers. The scale of the unsupervised parsers is on the order of thousands of domains and entity-types, millions of entities, and hundreds of millions of relations. The precision-recall of the semantic parsers trained with our unsupervised method approaches those trained with supervised annotations.",
"Interlinking text documents with Linked Open Data enables the Web of Data to be used as background knowledge within document-oriented applications such as search and faceted browsing. As a step towards interconnecting the Web of Documents with the Web of Data, we developed DBpedia Spotlight, a system for automatically annotating text documents with DBpedia URIs. DBpedia Spotlight allows users to configure the annotations to their specific needs through the DBpedia Ontology and quality measures such as prominence, topical pertinence, contextual ambiguity and disambiguation confidence. We compare our approach with the state of the art in disambiguation, and evaluate our results in light of three baselines and six publicly available annotation systems, demonstrating the competitiveness of our system. DBpedia Spotlight is shared as open source and deployed as a Web Service freely available for public use."
]
} |
1702.02363 | 2589025090 | Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content-specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC). | Constructing a comprehensive dataset for TC is harder than for NER, since there is no limit on the number of categories that such sets can represent. In general, there are many TC datasets available in English for many different problems, such as sentiment analysis @cite_31 and gender categorization @cite_1 . The largest and most popular dataset among them is Reuters Corpus Volume 1 (RCV1), which consists of 800K manually categorized news stories with over 100 sub-categories under three different main categories @cite_17 . This version of the dataset has problems with document categories and suffers from a lack of documentation. An improved version of this dataset with fewer categorization mistakes and better documentation has since been proposed @cite_2 . | {
"cite_N": [
"@cite_2",
"@cite_31",
"@cite_1",
"@cite_17"
],
"mid": [
"2150102617",
"2234079371",
"",
"191422183"
],
"abstract": [
"Reuters Corpus Volume I (RCV1) is an archive of over 800,000 manually categorized newswire stories recently made available by Reuters, Ltd. for research purposes. Use of this data for research on text categorization requires a detailed understanding of the real world constraints under which the data was produced. Drawing on interviews with Reuters personnel and access to Reuters documentation, we describe the coding policy and quality control procedures used in producing the RCV1 data, the intended semantics of the hierarchical category taxonomies, and the corrections necessary to remove errorful data. We refer to the original data as RCV1-v1, and the corrected data as RCV1-v2. We benchmark several widely used supervised learning methods on RCV1-v2, illustrating the collection's properties, suggesting new directions for research, and providing baseline results for future studies. We make available detailed, per-category experimental results, as well as corrected versions of the category assignments and taxonomy structures, via online appendices.",
"Aspect extraction aims to extract fine-grained opinion targets from opinion texts. Recent work has shown that the syntactical approach, which employs rules about grammar dependency relations between opinion words and aspects, performs quite well. This approach is highly desirable in practice because it is unsupervised and domain independent. However, the rules need to be carefully selected and tuned manually so as not to produce too many errors. Although it is easy to evaluate the accuracy of each rule automatically, it is not easy to select a set of rules that produces the best overall result due to the overlapping coverage of the rules. In this paper, we propose a novel method to select an effective set of rules. To our knowledge, this is the first work that selects rules automatically. Our experiment results show that the proposed method can select a subset of a given rule set to achieve significantly better results than the full rule set and the existing state-of-the-art CRF-based supervised method.",
"",
"Reuters, the global information, news and technology group, has for the first time made available free of charge, large quantities of archived Reuters news stories for use by research communities around the world. The Reuters Corpus Volume 1 (RCV1) includes over 800,000 news stories typical of the annual English language news output of Reuters. This paper describes the origins of RCV1, the motivations behind its creation, and how it differs from previous corpora. In addition we discuss the system of category coding, whereby each story is annotated for topic, region and industry sector. We also discuss the process by which these codes were applied, and examine the issues involved in maintaining quality and consistency of coding in an operational, commercial environment."
]
} |
1702.02363 | 2589025090 | Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content-specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC). | Research on Turkish NER and TC is very limited compared to English and several other languages. The main reason is the lack of accessible and usable Turkish NER and TC datasets. The most popular Turkish NER dataset was introduced by @cite_25 . This dataset contains newspaper articles, approximately 500K words, and is manually annotated with coarse-grained entity types. Tatar and Çiçekli propose another coarse-grained NER dataset @cite_6 ; however, it contains only 55K words, which makes it less preferable than the former. More recent studies focus on Turkish NER in social media texts @cite_3 @cite_30 @cite_4 @cite_5 . Due to this research focus, several Twitter-based coarse-grained NER datasets have been published @cite_30 @cite_18 @cite_29 . To our knowledge, there is no published work on fine-grained NER in Turkish. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_4",
"@cite_29",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_25"
],
"mid": [
"2250619872",
"2950075826",
"2051791742",
"",
"",
"2111082276",
"1982611008",
"2147777361"
],
"abstract": [
"Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in FMeasure from 91 to 19 when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.",
"Social media texts are significant information sources for several application areas including trend analysis, event monitoring, and opinion mining. Unfortunately, existing solutions for tasks such as named entity recognition that perform well on formal texts usually perform poorly when applied to social media texts. In this paper, we report on experiments that have the purpose of improving named entity recognition on Turkish tweets, using two different annotated data sets. In these experiments, starting with a baseline named entity recognition system, we adapt its recognition rules and resources to better fit Twitter language by relaxing its capitalization constraint and by diacritics-based expansion of its lexical resources, and we employ a simplistic normalization scheme on tweets to observe the effects of these on the overall named entity recognition performance on Turkish tweets. The evaluation results of the system with these different settings are provided with discussions of these results.",
"In this paper, we addressed the Named Entity Recognition (NER) problem for morphologically rich languages by employing a semi-supervised learning approach based on neural networks. We adopted a fast unsupervised method for learning continuous vector representations of words, and used these representations along with language independent features to develop a NER system. We evaluated our system for the highly inflectional Turkish and Czech languages. We improved the state-of-the-art F-score obtained for Turkish without using gazetteers by 2.26 and for Czech by 1.53 . Unlike the previous state-of-the-art systems developed for these languages, our system does not make use of any language dependent features. Therefore, we believe it can easily be applied to other morphologically rich languages.",
"",
"",
"Named entity recognition NER is one of the basic tasks in automatic extraction of information from natural language texts. In this paper, we describe an automatic rule learning method that exploits different features of the input text to identify the named entities located in the natural language texts. Moreover, we explore the use of morphological features for extracting named entities from Turkish texts. We believe that the developed system can also be used for other agglutinative languages. The paper also provides a comprehensive overview of the field by reviewing the NER research literature. We conducted our experiments on the TurkIE dataset, a corpus of articles collected from different Turkish newspapers. Our method achieved an average F-score of 91.08 on the dataset. The results of the comparative experiments demonstrate that the developed technique is successfully applicable to the task of automatic NER and exploiting morphological features can significantly improve the NER from Turkish, an agglutinative language.",
"Named Entity Recognition (NER) is a well-studied area in natural language processing (NLP) and the reported results in the literature are generally very high ( > 95) for most of the languages. Today, the focus area of most practical natural language applications (i.e. web mining, sentiment analysis, machine translation) is real natural language data such as Web2.0 or speech data. Nevertheless, the NER task is rarely investigated on this type of data which differs severely from formal written text. In this paper, we present 3 new Turkish data sets from different domains (on this focused area; namely from Twitter, a Speech-to-Text Interface and a Hardware Forum) annotated specifically for NER and report our first results on them. We believe, the paper draws light to the difficulty of these new domains for NER and the possible future work.",
"This paper presents the results of a study on information extraction from unrestricted Turkish text using statistical language processing methods. In languages like English, there is a very small number of possible word forms with a given root word. However, languages like Turkish have very productive agglutinative morphology. Thus, it is an issue to build statistical models for specific tasks using the surface forms of the words, mainly because of the data sparseness problem. In order to alleviate this problem, we used additional syntactic information, i.e. the morphological structure of the words. We have successfully applied statistical methods using both the lexical and morphological information to sentence segmentation, topic segmentation, and name tagging tasks. For sentence segmentation, we have modeled the final inflectional groups of the words and combined it with the lexical model, and decreased the error rate to 4.34 , which is 21 better than the result obtained using only the surface forms of the words. For topic segmentation, stems of the words (especially nouns) have been found to be more effective than using the surface forms of the words and we have achieved 10.90 segmentation error rate on our test set according to the weighted TDT-2 segmentation cost metric. This is 32 better than the word-based baseline model. For name tagging, we used four different information sources to model names. Our first information source is based on the surface forms of the words. Then we combined the contextual cues with the lexical model, and obtained some improvement. After this, we modeled the morphological analyses of the words, and finally we modeled the tag sequence, and reached an F-Measure of 91.56 , according to the MUC evaluation criteria. Our results are important in the sense that, using linguistic information, i.e. morphological analyses of the words, and a corpus large enough to train a statistical model significantly improves these basic information extraction tasks for Turkish."
]
} |
1702.02363 | 2589025090 | Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content-specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC). | In general, Turkish TC researchers tend to construct their own case-specific datasets @cite_14 . Newspapers are the main text source for such studies, since they are easy to obtain and to classify manually @cite_15 @cite_13 @cite_7 . Considering the amount of annotated data required to train state-of-the-art learning algorithms, the aforementioned Turkish datasets are too small. The main bottlenecks are the required human effort and time constraints, which limit the size and scope of the constructed datasets. In contrast, our aim is to provide larger, more comprehensive and insightful Turkish datasets for both NER and TC by using knowledge bases to create large-scale gazetteers and by eliminating the human factor from the annotation process. | {
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_13",
"@cite_7"
],
"mid": [
"2252093051",
"2585199385",
"2119184120",
""
],
"abstract": [
"Morphologically rich languages such as Turkish may benefit from morphological analysis in natural language tasks. In this study, we examine the effects of morphological analysis on text categorization task in Turkish. We use stems and word categories that are extracted with morphological analysis as main features and compare them with fixed length stemmers in a bag of words approach with several learning algorithms. We aim to show the effects of using varying degrees of morphological information.",
"In this study, a first comprehensive text classification using n-gram model has been realized for Turkish. We worked in 3 different areas such as determining the identification of a Turkish document's author, classifying documents according to text's genre and identifying a gender of an author, automatically. Naive Bayes, Support Vector Machine, C 4.5 and Random Forest were used as classification methods and the results were given comparatively. The success in determining the author of the text, genre of the text and gender of the author was obtained as 83 , 93 and 96 , respectively.",
"In news portals, text category information is needed for news presentation. However, for many news stories the category information is unavailable, incorrectly assigned or too generic. This makes the text categorization a necessary tool for news portals. Automated text categorization (ATC) is a multifaceted difficult process that involves decisions regarding tuning of several parameters, term weighting, word stemming, word stopping, and feature selection. In this study we aim to find a categorization setup that will provide highly accurate results in ATC for Turkish news portals. We also examine some other aspects such as the effects of training dataset set size and robustness issues. Two Turkish test collections with different characteristics are created using Bilkent News Portal. Experiments are conducted with four classification methods: C4.5, KNN, Naive Bayes, and SVM (using polynomial and rbf kernels). Our results recommends a text categorization template for Turkish news portals and provides some future research pointers.",
""
]
} |
1702.02371 | 2529136610 | In this paper we study the problem of increasing the decoding success probability of random linear fountain codes over GF(2) for the small packet lengths used in delay-intolerant applications such as multimedia streaming. Such codes over GF(2) are attractive as they have lower decoding complexity than codes over larger field sizes, but suffer from high transmission redundancy. In our proposed coding scheme we construct a codeword which is not a linear combination of any γ previously transmitted codewords, to mitigate such transmission redundancy. We then note that the probability of receiving a linearly dependent codeword is highest when the receiver has received k − 1 linearly independent codewords. We propose using the BlockACK frame so that the codeword received after k − 1 linearly independent codewords is always linearly independent, which reduces the expected redundancy by a factor of three. | Blasco and Liva have proposed a concatenated (15,10) RS code and RLNC @math generator matrix @cite_11 . Their results show that by concatenating the RLNC with an MDS code such as an RS code, the decoding failure probability can be reduced by a factor of up to four, assuming low-to-moderate channel erasure rates. However, their work assumes that coding is performed over a non-binary Galois field @math , and hence does not address the high decoding computational complexity associated with non-binary coding coefficients. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2110464368"
],
"abstract": [
"The performance of a novel fountain coding scheme based on maximum distance separable (MDS) codes constructed over Galois fields of order q>=2 is investigated. Upper and lower bounds on the decoding failure probability under maximum likelihood decoding are developed. Differently from Raptor codes (which are based on a serial concatenation of a high-rate outer block code, and an inner Luby-transform code), the proposed coding scheme can be seen as a parallel concatenation of an outer MDS code and an inner random linear fountain code, both operating on the same Galois field. A performance assessment is performed on the gain provided by MDS based fountain coding over linear random fountain coding in terms of decoding failure probability vs. overhead. It is shown how, for example, the concatenation of a (15,10) Reed-Solomon code and a linear random fountain code over F16 brings to a decoding failure probability 4 orders of magnitude lower than the linear random fountain code for the same overhead in a channel with a packet loss probability of epsilon=0.05. Moreover, it is illustrated how the performance of the concatenated fountain code approaches that of an idealized fountain code for higher-order Galois fields and moderate packet loss probabilities. The scheme introduced is of special interest for the distribution of data using small block sizes."
]
} |
1702.02371 | 2529136610 | In this paper we study the problem of increasing the decoding success probability of random linear fountain codes over GF(2) for the small packet lengths used in delay-intolerant applications such as multimedia streaming. Such codes over GF(2) are attractive as they have lower decoding complexity than codes over larger field sizes, but suffer from high transmission redundancy. In our proposed coding scheme we construct a codeword which is not a linear combination of any γ previously transmitted codewords, to mitigate such transmission redundancy. We then note that the probability of receiving a linearly dependent codeword is highest when the receiver has received k − 1 linearly independent codewords. We propose using the BlockACK frame so that the codeword received after k − 1 linearly independent codewords is always linearly independent, which reduces the expected redundancy by a factor of three. | Sorensen et al. have proposed using overlapping generations to minimize decoding complexity @cite_10 . The general idea of encoding over overlapping generations is to divide the @math input packets into smaller sub-generations, each containing a smaller number of input packets. Reducing the value of @math also reduces the decoding complexity; however, this approach adversely affects the network throughput @cite_2 , and the use of overlapping sub-generations has been shown to improve throughput. In a multicast network this approach may lead to redundant codewords being received by some receivers, as some receivers are satisfied before others. | {
"cite_N": [
"@cite_10",
"@cite_2"
],
"mid": [
"2006799289",
"2100134469"
],
"abstract": [
"Traditionally, the idea of overlapping generations in network coding research has focused on reducing the complexity of decoding large data files while maintaining the delay performance expected of a system that combines all data packets. However, the effort for encoding and decoding individual generations can still be quite high compared to other sparse coding approaches. This paper focuses on an inherently different approach that combines (i) sparsely coded generations configured on-the- fly based on (ii) controllable and infrequent feedback that allows the system to remove some original packets from the pool of packets to be mixed in the linear combinations. The latter is key to maintain a high impact of the coded packets received during the entire process while maintaining very sparsely coded generations. Interestingly, our proposed approach naturally bridges the idea of overlapping generations with that of tunable sparse network coding, thus providing the system with a seamless and adaptive strategy to balance complexity and delay performance. We analyze two families of strategies focused on these ideas. We also compare them to other standard approaches both in terms of delay performance and complexity as well as providing measurements in commercial devices to support our conclusions. Our results show that a judicious choice of the overlapping of the generations provides close-to-optimal delay performance, while reducing the decoding complexity by up to an order of magnitude with respect to other schemes.",
"To reduce computational complexity and delay in randomized network coded content distribution, and for some other practical reasons, coding is not performed simultaneously over all content blocks, but over much smaller, possibly overlapping subsets of these blocks, known as generations. A penalty of this strategy is throughput reduction. To analyze the throughput loss, we model coding over generations with random generation scheduling as a coupon collector's brotherhood problem. This model enables us to derive the expected number of coded packets needed for successful decoding of the entire content as well as the probability of decoding failure (the latter only when generations do not overlap) and further, to quantify the tradeoff between computational complexity and throughput. Interestingly, with a moderate increase in the generation size, throughput quickly approaches link capacity. Overlaps between generations can further improve throughput substantially for relatively small generation sizes."
]
} |
1702.02371 | 2529136610 | In this paper we study the problem of increasing the decoding success probability of random linear fountain codes over GF(2) for the small packet lengths used in delay-intolerant applications such as multimedia streaming. Such codes over GF(2) are attractive as they have lower decoding complexity than codes over larger field sizes, but suffer from high transmission redundancy. In our proposed coding scheme we construct a codeword which is not a linear combination of any γ previously transmitted codewords, to mitigate such transmission redundancy. We then note that the probability of receiving a linearly dependent codeword is highest when the receiver has received k − 1 linearly independent codewords. We propose using the BlockACK frame so that the codeword received after k − 1 linearly independent codewords is always linearly independent, which reduces the expected redundancy by a factor of three. | Cruces et al. have derived the exact probability of decoding the codewords once a receiver has received @math , @math , codewords @cite_7 . Based on this result, they derived the expected number of excess codewords a receiver needs to receive before decoding can be performed. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2137266817"
],
"abstract": [
"In this letter, we compute the exact probability that a receiver obtains N linearly independent packets among K ≥ N received packets, when the sender(s) use(s) random linear network coding over a Galois Field of size q. Such a condition maps to the receiver's capability to decode the original information, and its mathematical characterization helps to design the coding so as to guarantee the correctness of the transmission. Our formulation represents an improvement over the current upper bound for the decoding probability, and provides theoretical grounding to simulative results in the literature."
]
} |
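The full-rank probability underlying this exact-decoding result rests on a standard counting identity, so it can be sketched directly. The code below is a hedged illustration: the function names are mine, and this is the textbook formula for a random matrix over GF(q) having full column rank, not the authors' exact expressions.

```python
def p_decode(K, N, q=2):
    """Probability that a random K x N matrix over GF(q) has full column
    rank N, i.e. that K random coded packets contain N linearly
    independent ones:  prod_{i=0}^{N-1} (1 - q**(i - K))."""
    p = 1.0
    for i in range(N):
        p *= 1.0 - float(q) ** (i - K)
    return p

def expected_overhead(N, q=2):
    """Expected number of extra packets (beyond N) a receiver collects
    before holding N linearly independent ones: with i independent
    packets in hand, the next packet is useful with probability
    1 - q**(i - N), so the wait is a sum of geometric expectations."""
    return sum(1.0 / (1.0 - float(q) ** (i - N)) for i in range(N)) - N
```

For GF(2) the expected overhead approaches about 1.6 packets as N grows, which is the redundancy that schemes like the one above try to shave off; over larger fields it quickly becomes negligible.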
1702.02535 | 2586858536 | A fundamental advantage of neural models for NLP is their ability to learn representations from scratch. However, in practice this often means ignoring existing external linguistic resources, e.g., WordNet or domain specific ontologies such as the Unified Medical Language System (UMLS). We propose a general, novel method for exploiting such resources via weight sharing. Prior work on weight sharing in neural networks has considered it largely as a means of model compression. In contrast, we treat weight sharing as a flexible mechanism for incorporating prior knowledge into neural models. We show that this approach consistently yields improved performance on classification tasks compared to baseline strategies that do not exploit weight sharing. | Neural Models for NLP . Recently there has been enormous interest in neural models for NLP generally @cite_28 @cite_13 . Most relevant to this work, simple CNN based models (which we have built on here) have proven extremely effective for text categorization @cite_14 @cite_8 . | {
"cite_N": [
"@cite_28",
"@cite_14",
"@cite_13",
"@cite_8"
],
"mid": [
"2952230511",
"2949541494",
"2963042536",
"2216973458"
],
"abstract": [
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including: part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.",
"We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.",
"Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models started to be applied also to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.",
"Convolutional Neural Networks (CNNs) have recently achieved remarkably strong performance on the practically important task of sentence classification (kim 2014, kalchbrenner 2014, johnson 2014). However, these models require practitioners to specify an exact model architecture and set accompanying hyperparameters, including the filter region size, regularization parameters, and so on. It is currently unknown how sensitive model performance is to changes in these configurations for the task of sentence classification. We thus conduct a sensitivity analysis of one-layer CNNs to explore the effect of architecture components on model performance; our aim is to distinguish between important and comparatively inconsequential design decisions for sentence classification. We focus on one-layer CNNs (to the exclusion of more complex models) due to their comparative simplicity and strong empirical performance, which makes it a modern standard baseline method akin to Support Vector Machine (SVMs) and logistic regression. We derive practical advice from our extensive empirical results for those interested in getting the most out of CNNs for sentence classification in real world settings."
]
} |
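As a rough illustration of the one-layer CNN pattern these text-classification results rest on (embedding lookup, fixed-width filters, ReLU, max-over-time pooling), here is a minimal NumPy sketch. All sizes, identifiers, and weights are invented for demonstration, and nothing is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def text_cnn_features(token_ids, emb, filters, width=3):
    """One convolutional layer in the Kim-style text-CNN pattern:
    embed the tokens, slide each filter over every window of `width`
    consecutive word vectors, apply ReLU, then max-over-time pool
    down to one scalar per filter."""
    x = emb[token_ids]                          # (seq_len, d)
    n_windows = len(token_ids) - width + 1
    feats = []
    for w in filters:                           # each w has shape (width, d)
        acts = [np.maximum(0.0, np.sum(w * x[i:i + width]))
                for i in range(n_windows)]      # convolution + ReLU
        feats.append(max(acts))                 # max-over-time pooling
    return np.array(feats)

# Toy, untrained instantiation: random embeddings and filters.
vocab, d, n_filters = 50, 8, 4
emb = rng.normal(size=(vocab, d))
filters = [rng.normal(size=(3, d)) for _ in range(n_filters)]
sentence = [3, 17, 42, 8, 5, 21]
feats = text_cnn_features(sentence, emb, filters)  # one feature per filter
```

The resulting fixed-length feature vector (one entry per filter, regardless of sentence length) is what a final softmax layer would classify.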
1702.02364 | 2546520612 | In this poster paper we present a data dissemination transmission abstraction for over-the-air programming (OAP) protocols which is fundamentally different from the previous hop-by-hop transmission protocols. Instead of imposing the greedy requirement that at least one node in the ith hop receives all packets before transmitting packets to the next hop and its neighbours, we take advantage of the spatial diversity and broadcast nature of wireless transmission to adopt a cooperative approach in which each node broadcasts whatever packets it has received, with the expectation that it will recover the lost packets with high probability by overhearing the broadcast transmissions of its neighbours. The use of coded transmissions ensures that this does not lead to the broadcast storm problem. We validate the improved performance of our proposed transmission scheme with respect to the previous state-of-the-art OAP protocols on a proof-of-concept two-hop TelosB wireless sensor network testbed. | While the advantages of multiple-input multiple-output (MIMO) are well known for improving network throughput, its adoption in hardware- and battery-constrained devices such as smartphones and sensor motes is not practical. The use of cooperative communication, where nodes "share" their antennas to form a virtual MIMO system without each node having multiple antennas, has been proposed as a viable technique for such devices @cite_4 . In such a cooperative transmission scheme only a small subset of nodes in the network are source(s) and receiver(s), which relaxes the transmission constraint of delivering @math packets to all nodes of the network as demanded in OAP protocols. | {
"cite_N": [
"@cite_4"
],
"mid": [
"10175432"
],
"abstract": [
"Opportunistic routing (OR) is an upcoming and promising routing technique for wireless multihop networks. The key concepts behind OR are overhearing and cooperation among relaying nodes; therefore, OR works well in wireless multihop networks with higher node density, such as mesh or sensor networks. OR enables multiple routes and dynamic relay selection; thus, it obtains higher link reliability and larger transmission range. This paper reviews the basic concepts and describes components of OR with examples. Current trends, issues and challenges of OR are also discussed."
]
} |
1702.02447 | 2587626861 | Hand pose estimation from monocular depth images is an important and challenging problem for human-computer interaction. Recently, deep convolutional networks (ConvNets) with sophisticated designs have been employed to address it, but the improvement over traditional methods is not so apparent. To promote the performance of direct 3D coordinate regression, we propose a tree-structured Region Ensemble Network (REN), which partitions the convolution outputs into regions and integrates the results from multiple regressors on each region. Compared with a multi-model ensemble, our model is trained completely end-to-end. The experimental results demonstrate that our approach achieves the best performance among state-of-the-art methods on two public datasets. | Recently, deep ConvNets have been applied to hand pose estimation from depth images @cite_4 @cite_18 . @cite_20 use ConvNets to produce 2D heat maps and infer the 3D hand pose with inverse kinematics. @cite_11 directly regress the 3D positions with multi-stage ConvNets using a linear layer as pose prior. In @cite_14 , a feedback loop is employed to iteratively correct the mistakes, in which 3 ConvNets are used for initialization, synthesis and pose updating. @cite_19 employ 3 ConvNets to separately regress 2D heat maps for each view with depth projections and fuse them to produce 3D hand pose. In @cite_8 , physical joint constraints are incorporated into a forward-kinematics-based layer in a ConvNet. Similarly, @cite_3 embeds a skeletal manifold into ConvNets and trains the model end-to-end to render sequential prediction. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_19",
"@cite_20",
"@cite_11"
],
"mid": [
"2076139799",
"2210697964",
"",
"2466381304",
"2560502266",
"2473634362",
"2075156252",
""
],
"abstract": [
"This paper proposes a novel high-accuracy stereo matching scheme based on adaptive ground control points (AdaptGCP). Different from traditional fixed GCP-based methods, we consider color dissimilarity, spatial relation, and the pixel-matching reliability to select GCP adaptively in each local support window. To minimize the global energy, we propose a practical solution, named as alternating updating scheme of disparity and confidence map, which can effectively eliminate the redundant and interfering information of unreliable pixels. The disparity values of those unreliable pixels are reassigned with the information provided by local plane model, which is fitted with GCPs. Then, the confidence map is updated according to the disparity reassignment and the left—right consistency. Finally, the disparity map is refined by multistep filers. Quantitative evaluations demonstrate the effectiveness of our AdaptGCP scheme for regularizing the ill-posed matching problem. The top ranks on Middlebury benchmark with different error thresholds show that our algorithm achieves the state-of-the-art performance among the latest stereo matching algorithms. This paper provides a new insight toward high-accuracy stereo matching.",
"We propose an entirely data-driven approach to estimating the 3D pose of a hand given a depth image. We show that we can correct the mistakes made by a Convolutional Neural Network trained to predict an estimate of the 3D pose by using a feedback loop. The components of this feedback loop are also Deep Networks, optimized using training data. They remove the need for fitting a 3D model to the input data, which requires both a carefully designed fitting function and algorithm. We show that our approach outperforms state-of-the-art methods, and is efficient as our implementation runs at over 400 fps on a single GPU.",
"",
"Previous learning-based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model-fitting step to generate valid hand poses. Such post-processing is inconvenient and sub-optimal. In this work, we propose a model-based deep learning approach that adopts a forward-kinematics-based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance.",
"This paper focuses on the challenging problem of 3D pose estimation of a diverse spectrum of articulated objects from single depth images. A novel structured prediction approach is considered, where 3D poses are represented as skeletal models that naturally operate on manifolds. Given an input depth image, the problem of predicting the most proper articulation of the underlying skeletal model is thus formulated as sequentially searching for the optimal skeletal configuration. This is subsequently addressed by convolutional neural nets trained end-to-end to render sequential prediction of the joint locations as regressing a set of tangent vectors of the underlying manifolds. Our approach is examined on various articulated objects including human hand, mouse, and fish benchmark datasets. Empirically it is shown to deliver highly competitive performance with respect to the state of the art, while operating in real-time (over 30 FPS).",
"Articulated hand pose estimation plays an important role in human-computer interaction. Despite the recent progress, the accuracy of existing methods is still not satisfactory, partially due to the difficulty of embedded high-dimensional and non-linear regression problem. Different from the existing discriminative methods that regress for the hand pose with a single depth image, we propose to first project the query depth image onto three orthogonal planes and utilize these multi-view projections to regress for 2D heat-maps which estimate the joint positions on each plane. These multi-view heat-maps are then fused to produce final 3D hand pose estimation with learned pose priors. Experiments show that the proposed method largely outperforms state-of-the-art on a challenging dataset. Moreover, a cross-dataset experiment also demonstrates the good generalization ability of the proposed method.",
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
""
]
} |
1702.02447 | 2587626861 | Hand pose estimation from monocular depth images is an important and challenging problem for human-computer interaction. Recently, deep convolutional networks (ConvNets) with sophisticated designs have been employed to address it, but the improvement over traditional methods is not so apparent. To promote the performance of direct 3D coordinate regression, we propose a tree-structured Region Ensemble Network (REN), which partitions the convolution outputs into regions and integrates the results from multiple regressors on each region. Compared with a multi-model ensemble, our model is trained completely end-to-end. The experimental results demonstrate that our approach achieves the best performance among state-of-the-art methods on two public datasets. | Traditional ensemble learning means training multiple individual models and combining their outputs via averaging or weighted fusion, which is widely adopted in recognition competitions @cite_0 . In addition to bagging @cite_0 @cite_7 , boosting has also been introduced for people counting @cite_15 . However, using multiple ConvNets requires substantial memory and time, which is not practical for applications. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_7"
],
"mid": [
"",
"2520826941",
"1976948919"
],
"abstract": [
"",
"In this paper, we address the task of object counting in images. We follow modern learning approaches in which a density map is estimated directly from the input image. We employ CNNs and incorporate two significant improvements over state-of-the-art methods: layered boosting and selective sampling. As a result, we manage both to increase the counting accuracy and to reduce processing time. Moreover, we show that the proposed method is effective, even in the presence of labeling errors. Extensive experiments on five different datasets demonstrate the efficacy and robustness of our approach. Mean absolute error was reduced by 20% to 35%. At the same time, the training time of each CNN was reduced by 50%.",
"We propose a new approach for estimation of the positions of facial key points with three-level carefully designed convolutional networks. At each level, the outputs of multiple networks are fused for robust and accurate estimation. Thanks to the deep structures of convolutional networks, global high-level features are extracted over the whole face region at the initialization stage, which help to locate high accuracy key points. There are two folds of advantage for this. First, the texture context information over the entire face is utilized to locate each key point. Second, since the networks are trained to predict all the key points simultaneously, the geometric constraints among key points are implicitly encoded. The method therefore can avoid local minimum caused by ambiguity and data corruption in difficult image samples due to occlusions, large pose variations, and extreme lightings. The networks at the following two levels are trained to locally refine initial predictions and their inputs are limited to small regions around the initial predictions. Several network structures critical for accurate and robust facial point detection are investigated. Extensive experiments show that our approach outperforms state-of-the-art methods in both detection accuracy and reliability."
]
} |
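To make the contrast concrete, the classic "train several models, average their outputs" ensemble that this related-work paragraph describes can be sketched in a few lines. This is a toy bootstrap-aggregated least-squares example with invented data, not the ConvNet setting of the cited work; all names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def bagged_predict(models, X):
    """Output-averaging ensemble: every model predicts, and the
    ensemble returns the mean of the individual predictions."""
    return np.mean([m(X) for m in models], axis=0)

# Toy regression data with a known linear signal plus noise.
X = rng.uniform(-1, 1, size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

# Bagging: fit each least-squares model on a bootstrap resample.
models = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    models.append(lambda Z, w=w: Z @ w)

pred = bagged_predict(models, X)
```

The memory/time objection raised above is visible even here: the ensemble stores and evaluates every member, which is what end-to-end designs like REN avoid.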
1702.02516 | 2950190142 | There is a growing interest in the use of video sensor networks in surveillance applications in order to detect intruders at low cost. The essential concern of such networks is whether or not a specified target can pass through or intrude into the monitored region without being detected. This concern forms a serious challenge to wireless video sensor networks of weak computation and battery power. In this paper, our aim is to prolong the whole network lifetime while fulfilling the surveillance application needs. We present a novel scheduling algorithm where only a subset of video nodes contribute significantly to detect intruders and prevent a malicious attacker from predicting the behavior of the network prior to intrusion. Our approach is chaos-based, where every node, based on its last detection, a hash value and some pseudo-random numbers, easily computes a decision function to go to sleep or active mode. We validate the efficiency of our approach through theoretical analysis and demonstrate the benefits of our scheduling algorithm by simulations. Results show that in addition to being able to increase the whole network lifetime and to present comparable results against random attacks (low stealth time), our scheme is also able to withstand malicious attacks due to its fully unpredictable behavior. | Most of the previous works have considered the known-targets coverage problem @cite_5 @cite_16 @cite_11 @cite_17 . The objective is to ensure at all times the coverage of some targets with known locations that are deployed in a two-dimensional plane. For example, the authors in @cite_17 organize sensor nodes into mutually exclusive subsets that are activated successively, where the size of each subset is restricted and not all of the targets need to be covered by the sensors in one subset. In @cite_16 , a directional sensor model is proposed, where a sensor is allowed to work in several directions.
The idea behind this is to find a minimal set of directions that can cover the maximum number of targets. It is different from the approach described in @cite_5 that aims to find a group of non-disjoint cover sets, each set covering all the targets to maximize the network lifetime. | {
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_17",
"@cite_11"
],
"mid": [
"2156776663",
"2163807729",
"1960247298",
"2123306176"
],
"abstract": [
"Unlike conventional omni-directional sensors that always have an omni-angle of sensing range, directional sensors may have a limited angle of sensing range due to technical constraints or cost considerations. A directional sensor network consists of a number of directional sensors, which can switch to several directions to extend their sensing ability to cover all the targets in a given area. Power conservation is still an important issue in such directional sensor networks. In this paper, we address the multiple directional cover sets problem (MDCS) of organizing the directions of sensors into a group of non-disjoint cover sets to extend the network lifetime. One cover set, in which the directions cover all the targets, is activated at one time. We prove the MDCS to be NP-complete and propose three heuristic algorithms for the MDCS. Simulation results are also presented to demonstrate the performance of these algorithms.",
"We study a novel “coverage by directional sensors” problem with tunable orientations on a set of discrete targets. We propose a Maximum Coverage with Minimum Sensors (MCMS) problem in which coverage in terms of the number of targets to be covered is maximized whereas the number of sensors to be activated is minimized. We present its exact Integer Linear Programming (ILP) formulation and an approximate (but computationally efficient) centralized greedy algorithm (CGA) solution. These centralized solutions are used as baselines for comparison. Then we provide a distributed greedy algorithm (DGA) solution. By incorporating a measure of the sensors residual energy into DGA, we further develop a Sensing Neighborhood Cooperative Sleeping (SNCS) protocol which performs adaptive scheduling on a larger time scale. Finally, we evaluate the properties of the proposed solutions and protocols in terms of providing coverage and maximizing network lifetime through extensive simulations. Moreover, for the case of circular coverage, we compare against the best known existing coverage algorithm.",
"This paper addresses the coverage breach problem in wireless sensor networks with limited bandwidths. In wireless sensor networks, sensor nodes are powered by batteries. To make efficient use of battery energy is critical to sensor network lifetimes. When targets are redundantly covered by multiple sensors, especially in stochastically deployed sensor networks, it is possible to save battery energy by organizing sensors into mutually exclusive subsets and alternatively activating only one subset at any time. Active nodes are responsible for sensing, computing and communicating. While the coverage of each subset is an important metric for sensor organization, the size of each subset also plays an important role in sensor network performance because when active sensors periodically send data to base stations, contention for channel access must be considered. The number of available channels imposes a limit on the cardinality of each subset. Coverage breach happens when a subset of sensors cannot completely cover all the targets. To make efficient use of both energy and bandwidth with a minimum coverage breach is the goal of sensor network design. This paper presents the minimum breach problem using a mathematical model, studies the computational complexity of the problem, and provides two approximate heuristics. Effects of increasing the number of channels and increasing the number of sensors on sensor network coverage are studied through numerical simulations. Overall, the simulation results reveal that when the number of sensors increases, network lifetimes can be improved without loss of network coverage if there is no bandwidth constraint; with bandwidth constraints, network lifetimes may be improved further at the cost of coverage breach.",
"This paper addresses the maximal lifetime scheduling for sensor surveillance systems with K sensors to 1 target. Given a set of sensors and targets in an Euclidean plane, a sensor can watch only one target at a time and a target should be watched by k, k 1, sensors at any time. Our task is to schedule sensors to watch targets and pass data to the base station, such that the lifetime of the surveillance system is maximized, where the lifetime is the duration up to the time when there exists one target that cannot be watched by k sensors or data cannot be forwarded to the base station due to the depletion of energy of the sensor nodes. We propose an optimal solution to find the target watching schedule for sensors that achieves the maximal lifetime. Our solution consists of three steps: 1) computing the maximal lifetime of the surveillance system and a workload matrix by using linear programming techniques, 2) decomposing the workload matrix into a sequence of schedule matrices that can achieve the maximal lifetime, and 3) determining the sensor surveillance trees based on the above obtained schedule matrices, which specify the active sensors and the routes to pass sensed data to the base station. This is the first time in the literature that this scheduling problem of sensor surveillance systems has been formulated and the optimal solution has been found. We illustrate our optimal method by a numeric example and experiments in the end."
]
} |
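The greedy style of solution (the CGA in the MCMS abstract above) for covering the most targets with a limited sensor budget can be sketched as follows. The instance, identifiers, and tie-breaking rule are invented for illustration and are not the authors' algorithm verbatim.

```python
def greedy_max_coverage(directions, targets, budget):
    """Greedy heuristic in the spirit of the CGA described above:
    repeatedly activate the (sensor, orientation) pair that covers the
    most still-uncovered targets, until the sensor budget is spent.
    `directions` maps a sensor id to a list of covered-target sets,
    one per orientation."""
    covered, active = set(), {}
    for _ in range(budget):
        if covered >= targets:
            break  # everything already covered
        best = None
        for sensor, sensor_dirs in directions.items():
            if sensor in active:
                continue  # each sensor points in one direction at a time
            for d, cov in enumerate(sensor_dirs):
                gain = len((cov & targets) - covered)
                if best is None or gain > best[0]:
                    best = (gain, sensor, d, cov & targets)
        if best is None or best[0] == 0:
            break
        _, sensor, d, cov = best
        active[sensor] = d
        covered |= cov
    return active, covered

# Invented toy instance: 3 directional sensors, 2 orientations each.
dirs = {
    "s1": [{0, 1}, {1, 2}],
    "s2": [{2, 3}, {3}],
    "s3": [{4}, {0, 4}],
}
active, covered = greedy_max_coverage(dirs, targets={0, 1, 2, 3, 4}, budget=2)
```

With a budget of 2 the sketch activates two sensors and covers four of the five targets; raising the budget to 3 covers all of them, mirroring the coverage-versus-active-sensors trade-off the cited works optimize.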
1702.02516 | 2950190142 | There is a growing interest in the use of video sensor networks in surveillance applications in order to detect intruders at low cost. The essential concern of such networks is whether or not a specified target can pass through or intrude into the monitored region without being detected. This concern forms a serious challenge to wireless video sensor networks of weak computation and battery power. In this paper, our aim is to prolong the whole network lifetime while fulfilling the surveillance application needs. We present a novel scheduling algorithm where only a subset of video nodes contribute significantly to detect intruders and prevent a malicious attacker from predicting the behavior of the network prior to intrusion. Our approach is chaos-based, where every node, based on its last detection, a hash value and some pseudo-random numbers, easily computes a decision function to go to sleep or active mode. We validate the efficiency of our approach through theoretical analysis and demonstrate the benefits of our scheduling algorithm by simulations. Results show that in addition to being able to increase the whole network lifetime and to present comparable results against random attacks (low stealth time), our scheme is also able to withstand malicious attacks due to its fully unpredictable behavior. | Regarding the Region-Coverage Problem in which this study takes place, existing works focus on finding an efficient deployment pattern so that the average overlapping area of each sensor is bounded. The authors in @cite_3 analyze new deployment strategies for satisfying some given coverage probability requirements with directional sensing models. A model of directed communications is introduced to ensure and repair the network connectivity. Based on a rotatable directional sensing model, the authors in @cite_14 present a method to deterministically estimate the number of directional nodes for a given coverage rate.
A sensing connected sub-graph accompanied by a convex hull method is introduced to divide a directional sensor network into several parts in a distributed manner. With adjustable sensing directions, the coverage algorithm tries to minimize the overlapping sensing area of directional sensors using only local topology information. Lastly, in @cite_13 , the authors present a distributed algorithm that ensures both coverage of the deployment area and network connectivity, by providing multiple cover sets to manage field-of-view redundancies and support object disambiguation. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_3"
],
"mid": [
"1548317701",
"2078411028",
""
],
"abstract": [
"Adequate coverage is very important for sensor networks to fulfill the issued sensing tasks. In traditional sensor networks, the sensors are based on omni-sensing model. However, directional sensing sensors are with great application chances, typically in video sensor networks. Toward this end, this paper addresses the problem of enhancing coverage in a directional sensor network. First, based on a rotatable directional sensing model, we present a method to deterministically estimate the amount of directional nodes for a given coverage rate. We also employ Sensing Connected Sub-graph (SCSG) to divide a directional sensor network into several parts in a distributed manner, in order to decrease time complexity. Moreover, the concept of convex hull is introduced to model each sensing connected sub-graph. According to the characteristic of adjustable sensing directions of directional nodes, we study a coverage-enhancing algorithm to minimize the overlapping sensing area of directional sensors only with local topology information. Extensive simulation is conducted to verify the effectiveness of our solution and we give detailed discussions on the effects of different system parameters.",
"In randomly deployed visual wireless sensor networks for surveillance applications, the scheduling of sensor nodes can be seen from the risk perspective: different parts of the area of interest may have different risk levels according to the pattern of observed events such as the number of detected intrusions. In this paper, we propose a multiple-level activity model that uses behavior functions to define application classes and allows for adaptive scheduling based on the application criticality and on the availability of multiple cover sets per sensor node. The paper then describes how an adaptive scheduling model can be defined in order to dynamically schedule nodes by varying the capture speed according to nodes' environment. Simulation results are presented to validate the performance of the proposed approach in terms of percentage of active nodes, percentage of coverage and stealth time under intrusion scenarios.",
""
]
} |
1702.01848 | 2952639003 | Robots such as autonomous underwater vehicles (AUVs) and autonomous surface vehicles (ASVs) have been used for sensing and monitoring aquatic environments such as oceans and lakes. Environmental sampling is a challenging task because the environmental attributes to be observed can vary both spatially and temporally, and the target environment is usually a large and continuous domain whereas the sampling data is typically sparse and limited. These challenges require the sampling method to be informative and efficient enough to catch up with the environmental dynamics. In this paper we present a planning and learning method that enables a sampling robot to perform persistent monitoring tasks by learning and refining a spatiotemporal environmental model. Our environmental sampling framework consists of two components: to maximize the information collected, we propose an informative planning component that efficiently generates sampling waypoints that contain the maximal information; to alleviate the computational bottleneck caused by the accumulation of large-scale data, we develop a component based on a sparse Gaussian Process whose hyperparameters are learned online by taking advantage of only a subset of the data that provides the greatest contribution. We validate our method with both simulations running on real ocean data and field trials with an ASV in a lake environment. Our experiments show that the proposed framework is both accurate and efficient in learning the spatiotemporal environmental model. | Environmental sensing and monitoring allow scientists to assess the processes of a particular environment, and have been used in a broad range of applications.
For example, an array of sensor networks can be deployed to detect forest fires @cite_14 and volcano activities @cite_5 ; unmanned aerial vehicles (UAVs) have been used to estimate yields of crops or fruits @cite_16 @cite_54 and to study spatial ecology and its spatiotemporal dynamics @cite_34 ; with a capacity of performing long-range and long-term tasks, marine robots can collect large-area ocean data @cite_4 and trace chemical plumes @cite_51 @cite_41 ; autonomous boats have been used to monitor fish schools @cite_47 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_41",
"@cite_54",
"@cite_16",
"@cite_5",
"@cite_47",
"@cite_34",
"@cite_51"
],
"mid": [
"",
"2124294571",
"2333574198",
"",
"2109934232",
"2003801825",
"2001123768",
"2089212648",
"2163130723"
],
"abstract": [
"",
"Ocean processes are dynamic and complex and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximize the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show improvements in both data resolution and path reliability compared to previously executed sampling paths used in the respective regions. © 2011 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc.",
"We present information theoretic search strategies for single and multi-robot teams to find and localize the source of a biochemical or radiological materials in turbulent flows. In our work, robots rely on sporadic and intermittent sensor readings to synthesize information maximizing exploration strategies to find and localize the position of the source. By reasoning about the spatial distribution of these sensory cues, the robots are able to construct a belief distribution over the possible positions of the source. The belief distribution is then employed to synthesize motion strategies that drives the robots to regions in the workspace that results in the largest decrease in the entropy of the belief distribution for the source position. We validate the proposed strategies in 2D and 3D environments and consider the performance of the strategies when robots have limited access to global pose information. In particular, the proposed strategies are validated using a three dimensional (3D) time-varying computational fluid model of the 2010 Deep Water Horizon oil spill.",
"",
"The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8 of actual crop weight.",
"We present a science-centric evaluation of a 19-day sensor network deployment at Reventador, an active volcano in Ecuador. Each of the 16 sensors continuously sampled seismic and acoustic data at 100 Hz. Nodes used an event-detection algorithm to trigger on interesting volcanic activity and initiate reliable data transfer to the base station. During the deployment, the network recorded 229 earthquakes, eruptions, and other seismoacoustic events. The science requirements of reliable data collection, accurate event detection, and high timing precision drive sensor networks in new directions for geophysical monitoring. The main contribution of this paper is an evaluation of the sensor network as a scientific instrument, holding it to the standards of existing instrumentation in terms of data fidelity (the quality and accuracy of the recorded signals) and yield (the quantity of the captured data). We describe an approach to time rectification of the acquired signals that can recover accurate timing despite failures of the underlying time synchronization protocol. In addition, we perform a detailed study of the sensor network's data using a direct comparison to a standalone data logger, as well as an investigation of seismic and acoustic wave arrival times across the network.",
"Robotic sensor networks (RSNs) find increasing use in environmental monitoring as they can collect data from obscure, hard-to-reach places over long periods of time. This work reports progress in building a network of small, lightweight robotic rafts that will be used to monitor common carp tagged with radio transmitters across Minnesota lakes. We describe the design and architecture of the robotic raft and demonstrate the robustness of our waypoint navigation algorithm through field tests conducted in various lakes. We also present results from experiments aimed at localizing tagged fish. © 2010 Wiley Periodicals, Inc.",
"*Ecologists require spatially explicit data to relate structure to function. To date, heavy reliance has been placed on obtaining such data from remote-sensing instruments mounted on spacecraft or manned aircraft, although the spatial and temporal resolutions of the data are often not suited to local-scale ecological investigations. Recent technological innovations have led to an upsurge in the availability of unmanned aerial vehicles (UAVs) – aircraft remotely operated from the ground – and there are now many lightweight UAVs on offer at reasonable costs. Flying low and slow, UAVs offer ecologists new opportunities for scale-appropriate measurements of ecological phenomena. Equipped with capable sensors, UAVs can deliver fine spatial resolution data at temporal resolutions defined by the end user. Recent innovations in UAV platform design have been accompanied by improvements in navigation and the miniaturization of measurement technologies, allowing the study of individual organisms and their spatiotemporal dynamics at close range.",
"Olfactory-based mechanisms have been hypothesized for biological behaviors including foraging, mate-seeking, homing, and host-seeking. Autonomous underwater vehicles (AUVs) capable of such chemical plume tracing feats would have applicability in searching for environmentally interesting phenomena, unexploded ordinance, undersea wreckage, and sources of hazardous chemicals or pollutants. This article presents an approach and experimental results using a REMUS AUV to find a chemical plume, trace the chemical plume to its source, and maneuver to reliably declare the source location. The experimental results are performed using a plume of Rhodamine dye developed in a turbulent, near-shore, oceanic fluid flow."
]
} |
1702.01848 | 2952639003 | Robots such as autonomous underwater vehicles (AUVs) and autonomous surface vehicles (ASVs) have been used for sensing and monitoring aquatic environments such as oceans and lakes. Environmental sampling is a challenging task because the environmental attributes to be observed can vary both spatially and temporally, and the target environment is usually a large and continuous domain whereas the sampling data is typically sparse and limited. The challenges require that the sampling method must be informative and efficient enough to catch up with the environmental dynamics. In this paper we present a planning and learning method that enables a sampling robot to perform persistent monitoring tasks by learning and refining a spatiotemporal environmental model. Our environmental sampling framework consists of two components: to maximize the information collected, we propose an informative planning component that efficiently generates sampling waypoints that contain the maximal information; To alleviate the computational bottleneck caused by large-scale data accumulated, we develop a component based on a sparse Gaussian Process whose hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. We validate our method with both simulations running on real ocean data and field trials with an ASV in a lake environment. Our experiments show that the proposed framework is both accurate and efficient in learning the spatiotemporal environmental model | Methodologies for environmental sensing and monitoring have also been developed to address challenging problems in different scenarios. For instance, stochastic search has been considered for switching fields @cite_17 , and Markov Chain Monte Carlo (MCMC) techniques have been used to study convergence @cite_19 . Convergence guarantees were also analyzed for control using gradient-aware straight line motions for point-mass vehicles @cite_60 . 
Distributed control methods based on gradient climbing @cite_28 , consensus @cite_10 , and source seeking under limited communication @cite_53 , have been developed using different optimization techniques. Further work used multiple vehicles @cite_18 , accounted for simplified turbulent flows @cite_29 , and employed bio-inspired methods with stability guarantees @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_60",
"@cite_28",
"@cite_53",
"@cite_29",
"@cite_19",
"@cite_10",
"@cite_17"
],
"mid": [
"2089422736",
"1990010197",
"2152521798",
"2159826786",
"",
"2088245340",
"1979460656",
"",
"1976061562"
],
"abstract": [
"We develop a robust plume tracking strategy using mobile sensor networks in three dimensional (3D) fields. Inspired by the plume tracking behavior of blue crabs, we propose a stochastic model of plume spikes detected by sensing agents based on the Poisson counting process, which enables us to transform the turbulent plume field detected by sensing agents to a continuously-differential field, the minimum of which is considered as a source in the field, and shares the same location with the plume source. The transformation allows us to design and analyze source-seeking algorithms in smooth fields instead of in turbulent fields with higher fluctuation spatially and temporally. Inspired by fish schools that seek darker (shaded) regions in environments with complex lighting variations, we develop a distributed source-seeking algorithm using mobile sensor networks without explicit gradient estimation. The velocity of each agent is designed using only the measurements taken by the agent and the relative positions to its neighboring agents. We prove that, using this design, the moving direction of a three-agent group will converge to the opposite gradient direction of the field, thus, the group moves towards a source in the field. We also prove that the tracking system is input-to-state stable (ISS), indicating that the system is robust to disturbances. We then generalize the design to N-agent groups, and demonstrate the strategy in both smooth fields and turbulent fields in simulations.",
"We developed a novel, multiple vehicle extension to a typical, single vehicle chemical plume source localization algorithm. The approach is implemented and tested in MATLAB®. We discuss simulations results to assess the potential of using the multiple vehicle approach. It occurs that there can be an advantage from using multiple vehicles.",
"We consider the problem of steering an autonomous vehicle to locate a radiation source utilizing measurements of the radiation intensity only. We propose a control algorithm that locates the source through a sequence of line minimizations of the radiation intensity. We implement in a hybrid controller, with sample-and-hold and logic variables, a discretized version of the algorithm suitable for steering a point-mass vehicle. The algorithm confers global convergence and practical stability properties to the closed-loop hybrid system. We discuss these properties and characterize the region of convergence for the vehicle. Convergence and stability results are supplemented with simulations.",
"We present a stable control strategy for groups of vehicles to move and reconfigure cooperatively in response to a sensed, distributed environment. Each vehicle in the group serves as a mobile sensor and the vehicle network as a mobile and reconfigurable sensor array. Our control strategy decouples, in part, the cooperative management of the network formation from the network maneuvers. The underlying coordination framework uses virtual bodies and artificial potentials. We focus on gradient climbing missions in which the mobile sensor network seeks out local maxima or minima in the environmental field. The network can adapt its configuration in response to the sensed environment in order to optimize its gradient climb.",
"",
"We develop a plume tracking algorithm for a swarm of mobile sensing agents in turbulent flow. Inspired by blue crabs, we propose a stochastic model for plume spikes based on the Poisson counting process, which captures the turbulent characteristic of plumes. We then propose an approach to estimate the parameters of the spike model, and transform the turbulent plume field detected by sensing agents into a smoother scalar field that shares the same source with the plume field. This transformation allows us to design path planning algorithms for mobile sensing agents in the smoother field instead of in the turbulent plume field. Inspired by the source seeking behaviors of fish schools, we design a velocity controller for each mobile agent by decomposing the velocities into two perpendicular parts: the forward velocity incorporates feedback from the estimated spike parameters, and the side velocity keeps the swarm together. The combined velocity is then used to plan the path for each agent in the swarm. Theoretical justifications are provided for convergence of the agent group to the plume source. The algorithms are also demonstrated through simulations.",
"We propose a new method for locating the source(s) of a fluid that is diffusing into sea water. Our method utilizes multiple Autonomous Underwater Vehicles (AUVs) whose motion is controlled via a discrete-time Markov Chain Monte Carlo (MCMC) algorithm. The MCMC algorithm relies only on local measurements of the concentration of the fluid to construct and estimate of the concentration field over the search domain, and hence localize the source(s). We prove the existence of an invariant measure for the Markov chain that is generated by the closed-loop motion of the vehicles. The convergence rate of the Markov chain is investigated through extensive numerical simulations.",
"",
"We consider the problem of designing controllers to steer mobile robots to the source (the minimizer) of a signal field. In addition to the mobility constraints, e.g., posed by the nonholonomic dynamics, we assume that the field is completely unknown to the robot and the robot has no knowledge of its own position. Furthermore, the unknown field is randomly switching. In the case where the information of the field (e.g., the gradient) is completely known, standard motion planning techniques for mobile robots would converge to the known source. In the absence of mobility constraints, convergence to the minimum of unknown fields can be pursued using the framework of numerical optimization. By considering these facts, this paper exploits an idea of the stochastic approximation for solving the problem mentioned in the beginning and proposes a source seeking controller which sequentially generates the next waypoints such that the resulting discrete trajectory converges to the unknown source and which steers the robot along the waypoints, under the assumption that the robot can move to any point in the body fixed coordinate frame. To this end, we develop a rotation-invariant and forward-sided version of the simultaneous-perturbation stochastic approximation algorithm as a method to generate the next waypoints. Based on this algorithm, we design source seeking controllers. Furthermore, it is proven that the robot converges to a small set including the source in a probabilistic sense if the signal field switches periodically and sufficiently fast. The proposed controllers are demonstrated by numerical simulations."
]
} |
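The gradient-climbing and source-seeking strategies surveyed in the row above can be illustrated with a minimal sketch (not the controllers from the cited works): a small triangular formation of agents measures an unknown scalar field, estimates the local gradient by a least-squares plane fit over the agents' readings, and steps its centroid up that gradient. The field, formation geometry, and step size below are illustrative assumptions.

```python
import numpy as np

def field(p):
    # Scalar field, unknown to the robots, with a single source at (2, 3).
    return -((p[0] - 2.0) ** 2 + (p[1] - 3.0) ** 2)

def formation_gradient(centroid, offsets, f):
    """Estimate the field gradient from agent measurements by fitting
    a plane z ~ a*x + b*y + c in the least-squares sense (a toy
    stand-in for cooperative gradient climbing)."""
    P = centroid + offsets                 # absolute agent positions
    z = np.array([f(p) for p in P])        # point measurements
    A = np.c_[P, np.ones(len(P))]          # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[:2]                        # (a, b) approximates the gradient

centroid = np.zeros(2)
# Symmetric triangular formation around the (virtual) centroid.
offsets = np.array([[0.2, 0.0], [-0.1, 0.17], [-0.1, -0.17]])
for _ in range(200):
    centroid = centroid + 0.05 * formation_gradient(centroid, offsets, field)
# The centroid climbs toward the source near (2, 3).
```

The plane-fit gradient is exact for linear fields and only slightly biased for the quadratic field used here, so plain gradient ascent on the estimate converges to the source.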
1702.01848 | 2952639003 | Robots such as autonomous underwater vehicles (AUVs) and autonomous surface vehicles (ASVs) have been used for sensing and monitoring aquatic environments such as oceans and lakes. Environmental sampling is a challenging task because the environmental attributes to be observed can vary both spatially and temporally, and the target environment is usually a large and continuous domain whereas the sampling data is typically sparse and limited. The challenges require that the sampling method must be informative and efficient enough to catch up with the environmental dynamics. In this paper we present a planning and learning method that enables a sampling robot to perform persistent monitoring tasks by learning and refining a spatiotemporal environmental model. Our environmental sampling framework consists of two components: to maximize the information collected, we propose an informative planning component that efficiently generates sampling waypoints that contain the maximal information; To alleviate the computational bottleneck caused by large-scale data accumulated, we develop a component based on a sparse Gaussian Process whose hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. We validate our method with both simulations running on real ocean data and field trials with an ASV in a lake environment. Our experiments show that the proposed framework is both accurate and efficient in learning the spatiotemporal environmental model | In the artificial intelligence and robotics research communities, planning and environment monitoring are two broad and well-studied topics. Here we focus on reviewing works related to informative planning and GP-based environmental model prediction, as well as sparse variants of GPs.
Formally, a planning method that navigates robots to collect samples so as to maximize information gain is called informative planning @cite_12 @cite_1 . A variety of methodologies have been proposed to tackle the informative planning problem, among which the most investigated approaches belong to the nonmyopic framework. The term myopic means that the path waypoints are computed independently and greedily, without considering the cost and consequences of making observations over a long future horizon. In contrast, a nonmyopic strategy optimizes a series of waypoints jointly by considering the effect of later time-steps @cite_27 . | {
"cite_N": [
"@cite_27",
"@cite_1",
"@cite_12"
],
"mid": [
"2125943099",
"2032239956",
""
],
"abstract": [
"In many sensing applications we must continuously gather information to provide a good estimate of the state of the environment at every point in time. A robot may tour an environment, gathering information every hour. In a wireless sensor network, these tours correspond to packets being transmitted. In these settings, we are often faced with resource restrictions, like energy constraints. The users issue queries with certain expectations on the answer quality. Thus, we must optimize the tours to ensure the satisfaction of the user constraints, while at the same time minimize the cost of the query plan. For a single timestep, this optimization problem is NP-hard, but recent approximation algorithms with theoretical guarantees provide good solutions. In this paper, we present a new efficient algorithm, exploiting dynamic programming and submodularity of the information collected, that efficiently plans data collection tours for an entire (finite) horizon. Our algorithm can use any single step procedure as a black box, and, based on its properties, provides strong theoretical guarantees for the solution. We also provide an extensive empirical analysis demonstrating the benefits of nonmyopic planning in two real world sensing applications.",
"We introduce a graph-based informative path planning algorithm for a mobile robot which explicitly handles time. The objective function must be submodular in the samples taken by the robot, and the samples obtained are allowed to depend on the time at which the robot visits each location. Using a submodular objective function allows our algorithm to handle problems with diminishing returns, e.g. the case when taking a sample provides less utility when other nearby points have already been sampled. We give a formal description of this framework wherein an objective function that maps the path of the robot to the set of samples taken is defined. We also show how this framework can handle the case in which the robot takes samples along the edges of the graph. A proof of the approximation guarantee for the algorithm is given. Finally, quantitative results are shown for three problems: one simple example with a known Gaussian process model, one simulated example for an underwater robot planning problem using data from a well-known ocean modeling system, and one field experiment using an autonomous surface vehicle (ASV) measuring wireless signal strength on a lake.",
""
]
} |
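The myopic (greedy) baseline that nonmyopic planners improve upon can be sketched as follows: waypoints are chosen one at a time, each maximizing the current GP posterior variance (a common information proxy), with no lookahead over future observations. The RBF kernel, unit prior variance, and candidate grid are illustrative assumptions, not the formulations of the cited works.

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel matrix between row-vector sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def greedy_waypoints(candidates, k, noise=1e-2):
    """Myopically pick k waypoints, each maximizing the GP posterior
    variance given the waypoints already chosen (no lookahead)."""
    chosen = []
    for _ in range(k):
        if chosen:
            A = np.array(chosen)
            K_AA = rbf(A, A) + noise * np.eye(len(A))
            K_cA = rbf(candidates, A)
            # Posterior variance of each candidate given the chosen set.
            var = 1.0 - np.einsum('ij,jk,ik->i', K_cA, np.linalg.inv(K_AA), K_cA)
        else:
            var = np.ones(len(candidates))  # prior variance before any pick
        chosen.append(candidates[np.argmax(var)])
    return np.array(chosen)

grid = np.array([[x, y] for x in np.linspace(0, 4, 5) for y in np.linspace(0, 4, 5)])
wps = greedy_waypoints(grid, k=3)
```

Because each pick ignores its effect on later picks, the greedy plan can be suboptimal once travel cost or time-varying fields matter, which is exactly the gap nonmyopic methods target.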
1702.01848 | 2952639003 | Robots such as autonomous underwater vehicles (AUVs) and autonomous surface vehicles (ASVs) have been used for sensing and monitoring aquatic environments such as oceans and lakes. Environmental sampling is a challenging task because the environmental attributes to be observed can vary both spatially and temporally, and the target environment is usually a large and continuous domain whereas the sampling data is typically sparse and limited. The challenges require that the sampling method must be informative and efficient enough to catch up with the environmental dynamics. In this paper we present a planning and learning method that enables a sampling robot to perform persistent monitoring tasks by learning and refining a spatiotemporal environmental model. Our environmental sampling framework consists of two components: to maximize the information collected, we propose an informative planning component that efficiently generates sampling waypoints that contain the maximal information; To alleviate the computational bottleneck caused by large-scale data accumulated, we develop a component based on a sparse Gaussian Process whose hyperparameters are learned online by taking advantage of only a subset of data that provides the greatest contribution. We validate our method with both simulations running on real ocean data and field trials with an ASV in a lake environment. Our experiments show that the proposed framework is both accurate and efficient in learning the spatiotemporal environmental model | A critical problem that one must consider for persistent (long-term, even life-long) tasks is the large scale of the accumulated data. Although abundant data might yield the most accurate model, in practice large amounts of data can exceed the robot's onboard computational capacity. Methods for reducing the computational burden of GPs have been investigated previously.
For example, GP regression can be performed in real time by estimating the model locally with local data @cite_23 . Another representative framework is a sparse representation of the GP model @cite_26 @cite_52 , which is based on a Bayesian online algorithm combined with a sequential construction of the most relevant subset of the data. This method allows the model to be refined recursively as the data streams in. The framework has been further extended to many application domains such as visual tracking @cite_6 and spatial modeling @cite_59 . | {
"cite_N": [
"@cite_26",
"@cite_52",
"@cite_6",
"@cite_59",
"@cite_23"
],
"mid": [
"2129564505",
"2123687908",
"2143013621",
"2007236156",
"2134122536"
],
"abstract": [
"We develop an approach for sparse representations of gaussian process (GP) models (which are Bayesian types of kernel machines) in order to overcome their limitations for large data sets. The method is based on a Combination of a Bayesian on-line algorithm, together with a sequential construction of a relevant subsample of the data that fully specifies the prediction of the GP model. By using an appealing parameterization and projection techniques in a reproducing kernel Hilbert space, recursions for the effective parameters and a sparse gaussian approximation of the posterior process are obtained. This allows for both a propagation of predictions and Bayesian error measures. The significance and robustness of our approach are demonstrated on a variety of experiments.",
"We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n2m), storage is O(nm), the cost for prediction is O(n) and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.",
"We present a new Gaussian process (GP) inference algorithm, called online sparse matrix Gaussian processes (OSMGP), and demonstrate its merits by applying it to the problems of head pose estimation and visual tracking. The OSMGP is based upon the observation that for kernels with local support, the Gram matrix is typically sparse. Maintaining and updating the sparse Cholesky factor of the Gram matrix can be done efficiently using Givens rotations. This leads to an exact, online algorithm whose update time scales linearly with the size of the Gram matrix. Further, we provide a method for constant time operation of the OSMGP using matrix downdates. The downdates maintain the Cholesky factor at a constant size by removing certain rows and columns corresponding to discarded training examples. We demonstrate that, using these matrix downdates, online hyperparameter estimation can be included at cost linear in the number of total training examples. We describe a robust appearance-based head pose estimation system based upon the OSMGP. Numerous experiments and comparisons with existing methods using a large dataset system demonstrate the efficiency and accuracy of our system. Further, to showcase the applicability of OSMGP to a wide variety of problems, we also describe a regression-based visual tracking method. Experiments show that our OSMGP algorithm generalizes well using online learning.",
"In this paper, we consider the problem of learning a two dimensional spatial model of a gas distribution with a mobile robot. Building maps that can be used to accurately predict the gas concentration at query locations is a challenging task due to the chaotic nature of gas dispersal. We present an approach that formulates this task as a regression problem. To deal with the specific properties of typical gas distributions, we propose a sparse Gaussian process mixture model. This allows us to accurately represent the smooth background signal as well as areas of high concentration. We integrate the sparsification of the training data into an EM procedure used for learning the mixture components and the gating function. Our approach has been implemented and tested using datasets recorded with a real mobile robot equipped with an electronic nose. We demonstrate that our models are well suited for predicting gas concentrations at new query locations and that they outperform alternative methods used in robotics to carry out in this task.",
"Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by local learning, we propose a method to speed up standard Gaussian process regression (GPR) with local GP models (LGP). The training data is partitioned in local regions, for each an individual GP model is trained. The prediction for a query point is performed by weighted estimation using nearby local models. Unlike other GP approximations, such as mixtures of experts, we use a distance based measure for partitioning of the data and weighted prediction. The proposed method achieves online learning and prediction in real-time. Comparisons with other non-parametric regression methods show that LGP has higher accuracy than LWPR and close to the performance of standard GPR and ν-SVR."
]
} |
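The subset-selection idea behind sparse online GPs can be caricatured with a minimal sketch: a new observation is kept only if its posterior variance under the points retained so far exceeds a threshold, so near-duplicate data is discarded as it streams in. The class name, threshold rule, and kernel are illustrative simplifications of the cited framework, not its actual algorithm.

```python
import numpy as np

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel matrix between row-vector sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

class SubsetGP:
    """Keeps only points whose posterior variance under the current
    subset exceeds `tol` -- a toy stand-in for sparse online GP
    subset construction."""
    def __init__(self, tol=0.1, noise=1e-2):
        self.X, self.y = [], []
        self.tol, self.noise = tol, noise

    def _variance(self, x):
        # GP posterior variance of x given the retained subset.
        if not self.X:
            return 1.0
        A = np.array(self.X)
        K = rbf(A, A) + self.noise * np.eye(len(A))
        k = rbf(x[None, :], A)[0]
        return float(1.0 - k @ np.linalg.solve(K, k))

    def observe(self, x, y):
        # Retain the point only if it is sufficiently novel.
        if self._variance(x) > self.tol:
            self.X.append(x)
            self.y.append(y)

    def predict(self, x):
        # GP posterior mean at x using only the retained subset.
        A = np.array(self.X)
        K = rbf(A, A) + self.noise * np.eye(len(A))
        k = rbf(x[None, :], A)[0]
        return float(k @ np.linalg.solve(K, np.array(self.y)))

stream = np.linspace(0.0, 1.0, 100)[:, None]  # 100 nearly redundant inputs
gp = SubsetGP()
for x in stream:
    gp.observe(x, float(np.sin(2 * np.pi * x[0])))
# Only a handful of well-spaced points survive the novelty filter.
```

With the kernel lengthscale above, retained points end up spaced roughly a third of a lengthscale apart, so per-prediction cost stays bounded no matter how long the stream runs.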
1702.02030 | 2951396696 | Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, current theoretical understandings of ERM for a related problem---stochastic convex optimization (SCO), are limited. In this work, we strengthen the realm of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an @math risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where @math is the dimensionality of the problem, @math is the number of samples, and @math is the minimal risk. Thus, when @math is small we obtain an @math risk bound, which is analogous to the @math optimistic rate of ERM for supervised learning. Second, if the objective function is also @math -strongly convex, we prove an @math risk bound where @math is the condition number, and improve it to @math when @math . As a result, we obtain an @math risk bound under the condition that @math is large and @math is small, which to the best of our knowledge, is the first @math -type of risk bound of ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function and Lipschitz continuity of the expected function. Finally, we demonstrate that to achieve an @math risk bound for supervised learning, the @math requirement on @math can be replaced with @math , which is dimensionality-independent. | As we mentioned earlier, there are few works devoted to ERM for stochastic optimization. When @math is bounded and @math is Lipschitz continuous, @cite_5 demonstrate that @math converges to @math uniformly over @math with an @math error bound that holds with high probability, implying an @math risk bound of ERM. 
They further establish an @math risk bound of ERM that holds in expectation when @math is @math -strongly convex and Lipschitz continuous. Stochastic optimization with exp-concave functions has been studied recently (the excess risk bound there is for a regularized empirical risk minimizer), and @cite_6 proves an @math bound of ERM that holds with high probability when @math is @math -exp-concave, Lipschitz continuous, and bounded. Lower bounds of ERM for stochastic optimization are investigated by @cite_8 , who exhibits (i) a lower bound of @math sample complexity for uniform convergence that nearly matches the upper bound of @cite_5 ; and (ii) a lower bound of @math sample complexity of ERM, which is matched by our @math bound when @math is small. | {
"cite_N": [
"@cite_5",
"@cite_6",
"@cite_8"
],
"mid": [
"1537193711",
"2547508527",
"2517669009"
],
"abstract": [
"Introduction 1. Notation and preliminary background 2. Gaussian variables. K-convexity 3. Ellipsoids 4. Dvoretzky's theorem 5. Entropy, approximation numbers, and Gaussian processes 6. Volume ratio 7. Milman's ellipsoids 8. Another proof of the QS theorem 9. Volume numbers 10. Weak cotype 2 11. Weak type 2 12. Weak Hilbert spaces 13. Some examples: the Tsirelson spaces 14. Reflexivity of weak Hilbert spaces 15. Fredholm determinants Final remarks Bibliography Index.",
"We present an algorithm for the statistical learning setting with a bounded exp-concave loss in @math dimensions that obtains excess risk @math with probability at least @math . The core technique is to boost the confidence of recent in-expectation @math excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability @math the standard ERM method obtains excess risk @math . We further show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound is a purely exponential weights type algorithm, obtains a nearly optimal rate, and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate.",
"In stochastic convex optimization the goal is to minimize a convex function @math over a convex set @math where @math is some unknown distribution and each @math in the support of @math is convex over @math . The optimization is commonly based on i.i.d. samples @math from @math . A standard approach to such problems is empirical risk minimization (ERM) that optimizes @math . Here we consider the question of how many samples are necessary for ERM to succeed and the closely related question of uniform convergence of @math to @math over @math . We demonstrate that in the standard @math setting of Lipschitz-bounded functions over a @math of bounded radius, ERM requires sample size that scales linearly with the dimension @math . This nearly matches standard upper bounds and improves on @math dependence proved for @math setting by Shalev- (2009). In stark contrast, these problems can be solved using dimension-independent number of samples for @math setting and @math dependence for @math setting using other approaches. We further show that our lower bound applies even if the functions in the support of @math are smooth and efficiently computable and even if an @math regularization term is added. Finally, we demonstrate that for a more general class of bounded-range (but not Lipschitz-bounded) stochastic convex programs an infinite gap appears already in dimension 2."
]
} |
1702.02030 | 2951396696 | Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, current theoretical understandings of ERM for a related problem---stochastic convex optimization (SCO)---are limited. In this work, we strengthen the realm of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an @math risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where @math is the dimensionality of the problem, @math is the number of samples, and @math is the minimal risk. Thus, when @math is small we obtain an @math risk bound, which is analogous to the @math optimistic rate of ERM for supervised learning. Second, if the objective function is also @math -strongly convex, we prove an @math risk bound where @math is the condition number, and improve it to @math when @math . As a result, we obtain an @math risk bound under the condition that @math is large and @math is small, which, to the best of our knowledge, is the first @math -type of risk bound of ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function and Lipschitz continuity of the expected function. Finally, we demonstrate that to achieve an @math risk bound for supervised learning, the @math requirement on @math can be replaced with @math , which is dimensionality-independent. | It is worth mentioning the differences among the proof techniques in these works. The uniform convergence result of @cite_5 leverages the covering number to bound @math for any @math . The analyses for strongly convex functions by @cite_5 and exp-concave functions by @cite_1 utilize the tool of stability, which only produces risk bounds that hold in expectation.
A simple way to achieve a high probability bound is to use ERM combined with a generic or specific boosting-the-confidence method, but the guarantee is not directly on the empirical minimizer as noted by @cite_5 . The convergence of ERM given by @cite_6 relies on a central condition or "stochastic mixability" of the exp-concave function. In this paper, we present a general approach for analyzing ERM for SCO of smooth functions. In particular, our analysis is based on a uniform convergence of @math to @math for any @math , and a concentration inequality of @math , where @math is the optimal solution to the stochastic optimization problem. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_6"
],
"mid": [
"1537193711",
"2189962665",
"2547508527"
],
"abstract": [
"Introduction 1. Notation and preliminary background 2. Gaussian variables. K-convexity 3. Ellipsoids 4. Dvoretzky's theorem 5. Entropy, approximation numbers, and Gaussian processes 6. Volume ratio 7. Milman's ellipsoids 8. Another proof of the QS theorem 9. Volume numbers 10. Weak cotype 2 11. Weak type 2 12. Weak Hilbert spaces 13. Some examples: the Tsirelson spaces 14. Reflexivity of weak Hilbert spaces 15. Fredholm determinants Final remarks Bibliography Index.",
"We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses—a general optimization framework that captures several important learning problems including linear and logistic regression, learning SVMs with the squared hinge-loss, portfolio selection and more. In this setting, we establish the first evidence that ERM is able to attain fast generalization rates, and show that the expected loss of the ERM solution in d dimensions converges to the optimal expected loss in a rate of d/n. This rate matches existing lower bounds up to constants and improves by a log n factor upon the state-of-the-art, which is only known to be attained by an online-to-batch conversion of computationally expensive online algorithms.",
"We present an algorithm for the statistical learning setting with a bounded exp-concave loss in @math dimensions that obtains excess risk @math with probability at least @math . The core technique is to boost the confidence of recent in-expectation @math excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-concavity. We also show that with probability @math the standard ERM method obtains excess risk @math . We further show that a regret bound for any online learner in this setting translates to a high probability excess risk bound for the corresponding online-to-batch conversion of the online learner. Lastly, we present two high probability bounds for the exp-concave model selection aggregation problem that are quantile-adaptive in a certain sense. The first bound is a purely exponential weights type algorithm, obtains a nearly optimal rate, and has no explicit dependence on the Lipschitz continuity of the loss. The second bound requires Lipschitz continuity but obtains the optimal rate."
]
} |
1702.02030 | 2951396696 | Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, current theoretical understandings of ERM for a related problem---stochastic convex optimization (SCO)---are limited. In this work, we strengthen the realm of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an @math risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where @math is the dimensionality of the problem, @math is the number of samples, and @math is the minimal risk. Thus, when @math is small we obtain an @math risk bound, which is analogous to the @math optimistic rate of ERM for supervised learning. Second, if the objective function is also @math -strongly convex, we prove an @math risk bound where @math is the condition number, and improve it to @math when @math . As a result, we obtain an @math risk bound under the condition that @math is large and @math is small, which, to the best of our knowledge, is the first @math -type of risk bound of ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function and Lipschitz continuity of the expected function. Finally, we demonstrate that to achieve an @math risk bound for supervised learning, the @math requirement on @math can be replaced with @math , which is dimensionality-independent. | Generally speaking, when @math has finite VC-dimension, the excess risk can be upper bounded by @math , where @math is the VC-dimension of @math . If the loss @math is Lipschitz continuous with respect to its first argument, we have a risk bound of @math , where @math is the Rademacher complexity of @math .
The Rademacher complexity typically scales as @math , e.g., when @math contains linear functions with low norm, implying an @math risk bound. There have been intensive efforts to derive rates faster than @math under various conditions, such as low-noise, smoothness, and strong convexity, to name a few. Specifically, when the random function @math is nonnegative and smooth, @cite_4 have established a risk bound of @math , reducing to an @math bound if @math and @math . A generalized linear form of the problem is studied by @cite_7 , and a risk bound of @math is proved if the expected function @math is @math -strongly convex. | {
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"6657555",
"2114609025"
],
"abstract": [
"We establish an excess risk bound of O(H R_n^2 + R_n sqrt(H L*)) for empirical risk minimization with an H-smooth loss function and a hypothesis class with Rademacher complexity R_n, where L* is the best risk achievable by the hypothesis class. For typical hypothesis classes where R_n = sqrt(R/n), this translates to a learning rate of O(RH/n) in the separable (L*=0) case and O(RH/n + sqrt(L^* RH/n)) more generally. We also provide similar guarantees for online and stochastic convex optimization with a smooth non-negative objective.",
"We study convergence properties of empirical minimization of a stochastic strongly convex objective, where the stochastic component is linear. We show that the value attained by the empirical minimizer converges to the optimal value with rate 1/n. The result applies, in particular, to the SVM objective. Thus, we obtain a rate of 1/n on the convergence of the SVM objective (with fixed regularization parameter) to its infinite data limit. We demonstrate how this is essential for obtaining certain types of oracle inequalities for SVMs. The results extend also to approximate minimization as well as to strong convexity with respect to an arbitrary norm, and so also to objectives regularized using other lp norms."
]
} |
1702.01824 | 2908553673 | Matrix factorization is at the heart of many machine learning algorithms, for example, dimensionality reduction (e.g. kernel PCA) or recommender systems relying on collaborative filtering. Understanding a singular value decomposition (SVD) of a matrix as a neural network optimization problem enables us to decompose large matrices efficiently while dealing naturally with missing values in the given matrix. But most importantly, it allows us to learn the connection between data points' feature vectors and the matrix containing information about their pairwise relations. In this paper we introduce a novel neural network architecture termed Similarity Encoder (SimEc), which is designed to simultaneously factorize a given target matrix while also learning the mapping to project the data points' feature vectors into a similarity preserving embedding space. This makes it possible to, for example, easily compute out-of-sample solutions for new data points. Additionally, we demonstrate that SimEc can preserve non-metric similarities and even predict multiple pairwise relations between data points at once. | The optimal (in a least squares sense) low-dimensional embeddings to factorize a matrix @math or @math can be found by computing a singular value decomposition (SVD) or eigendecomposition of the matrix and using the @math largest eigenvalues and corresponding eigenvectors to compute a low-rank approximation of the matrix. However, performing an SVD is computationally very expensive for large matrices, which in these cases requires the use of approximate iterative methods @cite_35 . Furthermore, an exact decomposition cannot be computed for matrices that contain missing values, in which case weighted error functions need to be employed @cite_0 .
Back in 1982, a simple neural network (NN) was conceived to compute a PCA @cite_44 , and in 1992, NNs were proposed as a method to efficiently compute the SVD @cite_3 or eigendecomposition @cite_37 of a matrix while naturally dealing with missing values in the target matrix, which we will discuss in more detail below. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_3",
"@cite_0",
"@cite_44"
],
"mid": [
"2054141820",
"2090389282",
"2099018497",
"2101409192",
"2432567885"
],
"abstract": [
"As the Netflix Prize competition has demonstrated, matrix factorization models are superior to classic nearest neighbor techniques for producing product recommendations, allowing the incorporation of additional information such as implicit feedback, temporal effects, and confidence levels.",
"Two new algorithms and associated neuron-like network architectures are proposed for solving the eigenvalue problem in real-time. The first approach is based on the solution of a set of nonlinear algebraic equations by employing optimization techniques. The second approach employs a multilayer neural network with linear artificial neurons and it exploits the continuous-time error back-propagation learning algorithm. The second approach enables us to find all the eigenvalues and the associated eigenvectors simultaneously by training the network to match some desired patterns, while the first approach is suitable to find during one run only one particular eigenvalue (e.g. an extreme eigenvalue) and the corresponding eigenvector in realtime. In order to find all eigenpairs the optimization process must be repeated in this case many times for different initial conditions. The performance and convergence behaviour of the proposed neural network architectures are investigated by extensive computer simulations.",
"A new massively parallel algorithm for singular value decomposition (SVD) has been proposed. To implement this algorithm an analogue neuron-like multilayer architecture with continuous-time learning rules has been developed. Extensive computer simulation experiments have confirmed the validity and high performance of the proposed algorithm. The proposed neural network associated with learning rules may be viewed as a nonlinear control feedback-loop system. This conceptual viewpoint enables many powerful techniques and methods developed in control and system theory to be employed to improve the convergence of the learning algorithm.",
"A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumers dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model.",
"A simple linear neuron model with constrained Hebbian-type synaptic modification is analyzed and a new class of unconstrained learning rules is derived. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence."
]
} |
1702.02052 | 2586087033 | Domain adaptation is crucial in many real-world applications where the distribution of the training data differs from the distribution of the test data. Previous Deep Learning-based approaches to domain adaptation need to be trained jointly on source and target domain data and are therefore unappealing in scenarios where models need to be adapted to a large number of domains or where a domain is evolving, e.g. spam detection where attackers continuously change their tactics. @PARASPLIT To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge Distillation (, 2006; , 2015) to the domain adaptation scenario. We show how a student model achieves state-of-the-art results on unsupervised domain adaptation from multiple sources on a standard sentiment analysis benchmark by taking into account the domain-specific expertise of multiple teachers and the similarities between their domains. @PARASPLIT When learning from a single teacher, using domain similarity to gauge trustworthiness is inadequate. To this end, we propose a simple metric that correlates well with the teacher's accuracy in the target domain. We demonstrate that incorporating high-confidence examples selected by this metric enables the student model to achieve state-of-the-art performance in the single-source scenario. | Deep learning-based approaches to domain adaptation are more recent and have focused mainly on learning domain-invariant representations: Glorot2011a first employed stacked Denoising Auto-encoders (SDA) to extract meaningful representations. Chen2012 in turn extended SDA to marginalized SDA by addressing SDA's high computational cost and lack of scalability to high-dimensional features, while Zhuang2015 proposed to use deep auto-encoders for transfer learning. @cite_17 added a Gradient Reversal Layer that hinders the model's ability to discriminate between domains. 
Finally, Zhou2016 transfer the source examples to the target domain and vice versa using Bi-Transferring Deep Neural Networks, while Bousmalis2016 propose Domain Separation Networks. All of these approaches, however, require jointly training the model on source and target data for every new target domain. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1731081199"
],
"abstract": [
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application."
]
} |
1702.02052 | 2586087033 | Domain adaptation is crucial in many real-world applications where the distribution of the training data differs from the distribution of the test data. Previous Deep Learning-based approaches to domain adaptation need to be trained jointly on source and target domain data and are therefore unappealing in scenarios where models need to be adapted to a large number of domains or where a domain is evolving, e.g. spam detection where attackers continuously change their tactics. @PARASPLIT To fill this gap, we propose Knowledge Adaptation, an extension of Knowledge Distillation (, 2006; , 2015) to the domain adaptation scenario. We show how a student model achieves state-of-the-art results on unsupervised domain adaptation from multiple sources on a standard sentiment analysis benchmark by taking into account the domain-specific expertise of multiple teachers and the similarities between their domains. @PARASPLIT When learning from a single teacher, using domain similarity to gauge trustworthiness is inadequate. To this end, we propose a simple metric that correlates well with the teacher's accuracy in the target domain. We demonstrate that incorporating high-confidence examples selected by this metric enables the student model to achieve state-of-the-art performance in the single-source scenario. | For domain adaptation from multiple sources, Mansour2009a proposed a distribution-weighted hypothesis with theoretical guarantees. Duan2009 proposed a method to learn a least-squares SVM classifier by leveraging source classifiers, while @cite_12 assign pseudo-labels to the target data. Finally, Wu2016a exploit general sentiment knowledge and word-level sentiment polarity relations for multi-source domain adaptation.
"cite_N": [
"@cite_12"
],
"mid": [
"1979511819"
],
"abstract": [
"We consider the characterization of muscle fatigue through noninvasive sensing mechanism such as surface electromyography (SEMG). While changes in the properties of SEMG signals with respect to muscle fatigue have been reported in the literature, the large variation in these signals across different individuals makes the task of modeling and classification of SEMG signals challenging. Indeed, the variation in SEMG parameters from subject to subject creates differences in the data distribution. In this paper, we propose a transfer learning framework based on the multi-source domain adaptation methodology for detecting different stages of fatigue using SEMG signals, that addresses the distribution differences. In the proposed framework, the SEMG data of a subject represent a domain; data from multiple subjects in the training set form the multiple source domains and the test subject data form the target domain. SEMG signals are predominantly different in conditional probability distribution across subjects. The key feature of the proposed framework is a novel weighting scheme that addresses the conditional probability distribution differences across multiple domains (subjects). We have validated the proposed framework on Surface Electromyogram signals collected from 8 people during a fatigue-causing repetitive gripping activity. Comprehensive experiments on the SEMG data set demonstrate that the proposed method improves the classification accuracy by 20% to 30% over the cases without any domain adaptation method and by 13% to 30% over the existing state-of-the-art domain adaptation methods."
]
} |
1702.01932 | 2950902819 | Neural network models are capable of generating extremely natural sounding conversational interactions. Nevertheless, these models have yet to demonstrate that they can incorporate content in the form of factual information or entity-grounded opinion that would enable them to serve in more task-oriented conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses without slot filling. We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive Seq2Seq baseline. Human judges found that our outputs are significantly more informative. | The present work situates itself within the data-driven paradigm of conversation generation, in which statistical and neural machine translation models are derived from conversational data @cite_8 @cite_10 @cite_31 @cite_2 @cite_19 @cite_37 . The introduction of contextual models by @cite_10 was an important advance within this framework, and we extend their basic approach by injecting side information from textual data. The introduction of side information has been shown to be beneficial to machine translation @cite_3 , as has the incorporation of images into multi-modal translation @cite_5 @cite_26 . Similarly, @cite_1 employ a knowledge graph to embed side information into dialog systems. Multi-task learning can be helpful in tasks ranging from query classification to machine translation @cite_20 @cite_27 @cite_12 @cite_9 . We adopt this approach in order to implicitly encode relevant external knowledge from textual data.
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"2963206148",
"2730360481",
"10957333",
"2963842982",
"2963217826",
"2472014052",
"1591706642",
"2251743902",
"2963963856",
"2513263213",
"889023230",
"2951580200",
"2951008357",
""
],
"abstract": [
"Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., I don’t know) regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.",
"In Multimodal Neural Machine Translation (MNMT), a neural model generates a translated sentence that describes an image, given the image itself and one source description in English. This is considered as the multimodal image caption translation task. The images are processed with Convolutional Neural Network (CNN) to extract visual features exploitable by the translation model. So far, the CNNs used are pre-trained on object detection and localization task. We hypothesize that richer architecture, such as dense captioning models, may be more suitable for MNMT and could lead to improved translations. We extend this intuition to the word-embeddings, where we compute both linguistic and visual representation for our corpus vocabulary. We combine and compare different confi",
"We present a data-driven approach to generating responses to Twitter status posts, based on phrase-based Statistical Machine Translation. We find that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words/phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed. After addressing these challenges, we compare approaches based on SMT and Information Retrieval in a human evaluation. We show that SMT outperforms IR on this task, and its output is preferred over actual human responses in 15% of cases. As far as we are aware, this is the first work to investigate the use of phrase-based SMT to directly translate a linguistic stimulus into an appropriate response.",
"Abstract: Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications focused on only one task and not much work explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the one-to-many setting - where the encoder is shared between several tasks such as machine translation and syntactic parsing, (b) the many-to-one setting - useful when only the decoder can be shared, as in the case of translation and image caption generation, and (c) the many-to-many setting - where multiple encoders and decoders are shared, which is the case with unsupervised objectives and translation. Our results show that training on a small amount of parsing and image caption data can improve the translation quality between English and German by up to 1.5 BLEU points over strong single-task baselines on the WMT benchmarks. Furthermore, we have established a new state-of-the-art result in constituent parsing with 93.0 F1. Lastly, we reveal interesting properties of the two unsupervised learning objectives, autoencoder and skip-thought, in the MTL context: autoencoder helps less in terms of perplexities but more on BLEU scores compared to skip-thought.",
"",
"",
"Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge from both a domain specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.",
"In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.",
"We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoderdecoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75 of the input text, outperforming stateof-the-arts in the same setting, including retrieval-based and SMT-based models.",
"We present a novel neural machine translation (NMT) architecture associating visual and textual features for translation tasks with multiple modalities. Transformed global and regional visual features are concatenated with text to form attendable sequences which are dissipated over parallel long short-term memory (LSTM) threads to assist the encoder generating a representation for attention-based decoding. Experiments show that the proposed NMT outperform the text-only baseline.",
"We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.",
"We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.",
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
""
]
} |
1702.01932 | 2950902819 | Neural network models are capable of generating extremely natural sounding conversational interactions. Nevertheless, these models have yet to demonstrate that they can incorporate content in the form of factual information or entity-grounded opinion that would enable them to serve in more task-oriented conversational applications. This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses without slot filling. We generalize the widely-used Seq2Seq approach by conditioning responses on both conversation history and external "facts", allowing the model to be versatile and applicable in an open-domain setting. Our approach yields significant improvements over a competitive Seq2Seq baseline. Human judges found that our outputs are significantly more informative. | This work should be seen as distinct from more goal-directed neural dialog modeling in which question-answer slots are explicitly learned from small amounts of crowd-sourced data, customer support logs, or user data @cite_17 @cite_0 @cite_29 @cite_35 . In many respects, that paradigm can be characterized as the neural extension of conventional dialog models with or without statistical modeling, e.g., @cite_13 @cite_23 @cite_21 @cite_28 . Our purpose is to explore the space of less clearly goal-directed, but nonetheless informative (i.e., informational) conversation that does not demand explicit slot-filling. | {
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_0",
"@cite_23",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"194577561",
"",
"2160458012",
"2291723583",
"1970207841",
"2004637830",
"1948566616"
],
"abstract": [
"",
"Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"",
"This system demonstration paper presents IRIS (Informal Response Interactive System), a chat-oriented dialogue system based on the vector space model framework. The system belongs to the class of example-based dialogue systems and builds its chat capabilities on a dual search strategy over a large collection of dialogue samples. Additional strategies allowing for system adaptation and learning implemented over the same vector model space framework are also described and discussed.",
"©2016 Association for Computational Linguistics. Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain.",
"Abstract We study how decisions for word ordering and word choice in surface natural language generation can be automatically learned from annotated data. We examine four trainable systems for surface natural language generation in the air travel domain, called NLG[1–4]. NLG1 is a lookup table which stores the most frequent phrase to express a concept, and is intended as a baseline system for comparison purposes. NLG2 and NLG3 attempt to find the highest probability word sequence with respect to a maximum entropy probability model. They differ in that NLG2 predicts words left-to-right, while NLG3 predicts words in dependency tree order. NLG4 requires a dependency-style grammar of phrase fragments and conditions on their use, and attempts to find the highest probability word sequence that is consistent with the rules and conditions of the grammar. NLG4 has been implemented in a dialog strategy for a prototype air travel conversational system, in which word order is dynamically modified to emphasize certain aspects of the run-time dialog state.",
"The two current approaches to language generation, template-based and rule-based (linguistic) NLG, have limitations when applied to spoken dialogue systems, in part because they were developed for text generation. In this paper, we propose a new corpus-based approach to natural language generation, specifically designed for spoken dialogue systems.",
"© 2015 Association for Computational Linguistics. Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems."
]
} |
1702.02175 | 2586572282 | Complementing images with inertial measurements has become one of the most popular approaches to achieve highly accurate and robust real-time camera pose tracking. In this paper, we present a keyframe-based approach to visual-inertial simultaneous localization and mapping (SLAM) for monocular and stereo cameras. Our visual-inertial SLAM system is based on a real-time capable visual-inertial odometry method that provides locally consistent trajectory and map estimates. We achieve global consistency in the estimate through online loop-closing and non-linear optimization. Furthermore, our system supports relocalization in a map that has been previously obtained and allows for continued SLAM operation. We evaluate our approach in terms of accuracy, relocalization capability and run-time efficiency on public indoor benchmark datasets and on newly recorded outdoor sequences. We demonstrate state-of-the-art performance of our system compared to a visual-inertial odometry method and baseline visual SLAM approaches in recovering the trajectory of the camera. | Over the last decades, tremendous progress has been achieved in the development of visual localization and SLAM methods. Many of the current state-of-the-art SLAM systems for monocular or stereo cameras use keyframes in order to locally track the camera motion towards a reference frame. They optimize the camera trajectory for global consistency based on relative spatial constraints in a subsequent SLAM graph optimization layer. Two prominent recent examples of such methods are ORB-SLAM @cite_22 and LSD-SLAM @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_22"
],
"mid": [
"612478963",
"1612997784"
],
"abstract": [
"We propose a direct (feature-less) monocular SLAM algorithm which, in contrast to current state-of-the-art regarding direct methods, allows to build large-scale, consistent maps of the environment. Along with highly accurate pose estimation based on direct image alignment, the 3D environment is reconstructed in real-time as pose-graph of keyframes with associated semi-dense depth maps. These are obtained by filtering over a large number of pixelwise small-baseline stereo comparisons. The explicitly scale-drift aware formulation allows the approach to operate on challenging sequences including large variations in scene scale. Major enablers are two key novelties: (1) a novel direct tracking method which operates on ( sim (3) ), thereby explicitly detecting scale-drift, and (2) an elegant probabilistic solution to include the effect of noisy depth values into tracking. The resulting direct monocular SLAM system runs in real-time on a CPU.",
"This paper presents ORB-SLAM, a feature-based monocular simultaneous localization and mapping (SLAM) system that operates in real time, in small and large indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public."
]
} |
1702.02175 | 2586572282 | Complementing images with inertial measurements has become one of the most popular approaches to achieve highly accurate and robust real-time camera pose tracking. In this paper, we present a keyframe-based approach to visual-inertial simultaneous localization and mapping (SLAM) for monocular and stereo cameras. Our visual-inertial SLAM system is based on a real-time capable visual-inertial odometry method that provides locally consistent trajectory and map estimates. We achieve global consistency in the estimate through online loop-closing and non-linear optimization. Furthermore, our system supports relocalization in a map that has been previously obtained and allows for continued SLAM operation. We evaluate our approach in terms of accuracy, relocalization capability and run-time efficiency on public indoor benchmark datasets and on newly recorded outdoor sequences. We demonstrate state-of-the-art performance of our system compared to a visual-inertial odometry method and baseline visual SLAM approaches in recovering the trajectory of the camera. | Visual localization, e.g. on mobile phones, has typically been approached by mapping the environment in an offline process and localizing within the prebuilt map afterwards (e.g., @cite_21 @cite_1 @cite_19 @cite_6 ). A recent visual-inertial localization approach that runs efficiently on mobile devices has recently been proposed by @cite_6 . This approach uses sophisticated map compression and image matching techniques to localize the camera in a prebuilt interest-point based map. However, in contrast to our visual-inertial SLAM approach, these methods do not allow for continued global mapping of the environment using SLAM. | {
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_1",
"@cite_6"
],
"mid": [
"1552672384",
"2099361355",
"2059991351",
"2278591674"
],
"abstract": [
"Recent improvements in image-based localization have produced powerful methods that scale up to the massive 3D models emerging from modern Structure-from-Motion techniques. However, these approaches are too resource intensive to run in real-time, let alone to be implemented on mobile devices. In this paper, we propose to combine the scalability of such a global localization system running on a server with the speed and precision of a local pose tracker on a mobile device. Our approach is both scalable and drift-free by design and eliminates the need for loop closure. We propose two strategies to combine the information provided by local tracking and global localization. We evaluate our system on a large-scale dataset of the historic inner city of Aachen where it achieves interactive framerates at a localization error of less than 50cm while using less than 5MB of memory on the mobile device.",
"We present a real-time approach for image-based localization within large scenes that have been reconstructed offline using structure from motion (Sfm). From monocular video, our method continuously computes a precise 6-DOF camera pose, by efficiently tracking natural features and matching them to 3D points in the Sfm point cloud. Our main contribution lies in efficiently interleaving a fast keypoint tracker that uses inexpensive binary feature descriptors with a new approach for direct 2D-to-3D matching. The 2D-to-3D matching avoids the need for online extraction of scale-invariant features. Instead, offline we construct an indexed database containing multiple DAISY descriptors per 3D point extracted at multiple scales. The key to the efficiency of our method lies in invoking DAISY descriptor extraction and matching sparingly during localization, and in distributing this computation over a window of successive frames. This enables the algorithm to run in real-time, without fluctuations in the latency over long durations. We evaluate the method in large indoor and outdoor scenes. Our algorithm runs at over 30 Hz on a laptop and at 12 Hz on a low-power, mobile computer suitable for onboard computation on a quadrotor micro aerial vehicle.",
"We propose a system for easily preparing arbitrary wide-area environments for subsequent real-time tracking with a handheld device. Our system evaluation shows that minimal user effort is required to initialize a camera tracking session in an unprepared environment. We combine panoramas captured using a handheld omnidirectional camera from several viewpoints to create a point cloud model. After the offline modeling step, live camera pose tracking is initialized by feature point matching, and continuously updated by aligning the point cloud model to the camera image. Given a reconstruction made with less than five minutes of video, we achieve below 25 cm translational error and 0.5 degrees rotational error for over 80 of images tested. In contrast to camera-based simultaneous localization and mapping (SLAM) systems, our methods are suitable for handheld use in large outdoor spaces.",
"Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale."
]
} |
1702.01446 | 2594880695 | A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets. | The work on regret minimization was inspired by preference top- @math and skyline queries. Both of these research topics try to help a user find the "best objects" from a database. Top- @math queries assign scores to objects by some method, and return the objects with the topmost @math scores while the skyline query finds the objects such that no other object can be strictly better. Efficiently answering top- @math queries has seen a long line of work, see e.g. @cite_38 @cite_7 @cite_11 @cite_1 @cite_32 @cite_42 @cite_8 @cite_15 @cite_46 @cite_36 @cite_26 and the survey @cite_23 . 
In earlier work, the ranking of points was done by weight, i.e., the ranking criterion was fixed. Recent work has considered the specification of the ranking as part of the query. Typically, this is specified as a preference vector @math and the ranking of the points is by linear projection on @math ; see e.g. @cite_38 @cite_19 @cite_2 . Another ranking criterion is based on the distance from a given query point in a metric space, i.e., the top- @math query is a @math -nearest neighbor query @cite_41 . | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_46",
"@cite_36",
"@cite_41",
"@cite_42",
"@cite_1",
"@cite_32",
"@cite_19",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_11"
],
"mid": [
"2099328971",
"",
"",
"",
"",
"",
"2317824144",
"2099797738",
"",
"",
"2112618157",
"2009688537",
"2156887552",
"",
""
],
"abstract": [
"A top-k query retrieves the k highest scoring tuples from a data set with respect to a scoring function defined on the attributes of a tuple. The efficient evaluation of top-k queries has been an active research topic and many different instantiations of the problem, in a variety of settings, have been studied. However, techniques developed for conventional, centralized or distributed databases are not directly applicable to highly dynamic environments and on-line applications, like data streams. Recently, techniques supporting top-k queries on data streams have been introduced. Such techniques are restrictive however, as they can only efficiently report top-k answers with respect to a pre-specified (as opposed to ad-hoc) set of queries. In this paper we introduce a novel geometric representation for the top-k query problem that allows us to raise this restriction. Utilizing notions of geometric arrangements, we design and analyze algorithms for incrementally maintaining a data set organized in an arrangement representation under streaming updates. We introduce query evaluation strategies that operate on top of an arrangement data structure that are able to guarantee efficient evaluation for ad-hoc queries. The performance of our core technique is augmented by incorporating tuple pruning strategies, minimizing the number of tuples that need to be stored and manipulated. This results in a main memory indexing technique supporting both efficient incremental updates and the evaluation of ad-hoc top-k queries. A thorough experimental study evaluates the efficiency of the proposed technique.",
"",
"",
"",
"",
"",
"Top-k dominating queries combine the natural idea of selecting the k best items with a comprehensive “goodness” criterion based on dominance. A point p1 dominates p2 if p1 is as good as p2 in all attributes and is strictly better in at least one. Existing works address the problem in settings where data objects are multidimensional points. However, there are domains where we only have access to the distance between two objects. In cases like these, attributes reflect distances from a set of input objects and are dynamically generated as the input objects change. Consequently, prior works from the literature cannot be applied, despite the fact that the dominance relation is still meaningful and valid. For this reason, in this work, we present the first study for processing top-k dominating queries over distance-based dynamic attribute vectors, defined over a metric space. We propose four progressive algorithms that utilize the properties of the underlying metric space to efficiently solve the problem and present an extensive, comparative evaluation on both synthetic and real-world datasets.",
"A query to a web search engine usually consists of a list of keywords, to which the search engine responds with the best or \"top\" k pages for the query. This top-k query model is prevalent over multimedia collections in general, but also over plain relational data for certain applications. For example, consider a relation with information on available restaurants, including their location, price range for one diner, and overall food rating. A user who queries such a relation might simply specify the user's location and target price range, and expect in return the best 10 restaurants in terms of some combination of proximity to the user, closeness of match to the target price range, and overall food rating. Processing top-k queries efficiently is challenging for a number of reasons. One critical such reason is that, in many web applications, the relation attributes might not be available other than through external web-accessible form interfaces, which we will have to query repeatedly for a potentially large set of candidate objects. In this article, we study how to process top-k queries efficiently in this setting, where the attributes for which users specify target values might be handled by external, autonomous sources with a variety of access interfaces. We present a sequential algorithm for processing such queries, but observe that any sequential top-k query processing strategy is bound to require unnecessarily long query processing times, since web accesses exhibit high and variable latency. Fortunately, web sources can be probed in parallel, and each source can typically process concurrent requests, although sources may impose some restrictions on the type and number of probes that they are willing to accept. We adapt our sequential query processing technique and introduce an efficient algorithm that maximizes source-access parallelism to minimize query response time, while satisfying source-access constraints. 
We evaluate our techniques experimentally using both synthetic and real web-accessible data and show that parallel algorithms can be significantly more efficient than their sequential counterparts.",
"",
"",
"Top-k query asks for k tuples ordered according to a specific ranking function that combines the values from multiple participating attributes. The combined score function is usually linear. To efficiently answer top-k queries, preprocessing and indexing the data have been used to speed up the run time performance. Many indexing methods allow the online query algorithms progressively retrieve the data and stop at a certain point. However, in many cases, the number of data accesses is sensitive to the query parameters (i.e., linear weights in the score functions).In this paper, we study the sequentially layered indexing problem where tuples are put into multiple consecutive layers and any top-k query can be answered by at most k layers of tuples. We propose a new criterion for building the layered index. A layered index is robust if for any k, the number of tuples in the top k layers is minimal in comparison with all the other alternatives. The robust index guarantees the worst case performance for arbitrary query parameters. We derive a necessary and sufficient condition for robust index. The problem is shown solvable within O(ndlog n) (where d is the number of dimensions, and n is the number of tuples). To reduce the high complexity of the exact solution, we develop an approximate approach, which has time complexity O(2d n(log n)r(d)-1), where r(d) = ⌈d 2⌉ + ⌊d 2⌋ ⌈d 2⌉. Our experimental results show that our proposed method outperforms the best known previous methods.",
"Efficient processing of top-k queries is a crucial requirement in many interactive environments that involve massive amounts of data. In particular, efficient top-k processing in domains such as the Web, multimedia search, and distributed systems has shown a great impact on performance. In this survey, we describe and classify top-k processing techniques in relational databases. We discuss different design dimensions in the current techniques including query models, data access methods, implementation levels, data and query certainty, and supported scoring functions. We show the implications of each dimension on the design of the underlying techniques. We also discuss top-k queries in XML domain, and show their connections to relational approaches.",
"Given a set of objects, each with multiple numeric attributes, a (preference) top-k query retrieves the k objects with the highest scores according to a user preference, defined as a linear combination of attribute values. We consider the problem of processing a large number of continuous top-k queries, each with its own preference. When objects or user preferences change, the query results must be updated. We present a dynamic index that supports the reverse top k query, which is of independent interest. Combining this index with another one for top-k queries, we develop a scalable solution for processing many continuous top-k queries that exploits the clusteredness in user preferences. We also define an approximate version of the problem and present a solution significantly more efficient than the exact one with little loss in accuracy.",
"",
""
]
} |
1702.01446 | 2594880695 | A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets. | In general, preference top- @math queries are hard, and this has led to approximate query answering @cite_4 @cite_2 @cite_31 . Motivated by the need of answering preference top- @math queries, Nanongkai et. al. @cite_13 introduced the notion of a @math -regret minimizing set (RMS) query. Their definition attempted to combine preference top- @math queries and the concept of skylines. They gave upper and lower bounds on the regret ratio if the size of the returned set is fixed to @math . Moreover, they proposed an algorithm to compute a @math -regret set of size @math with regret ratio @math , as well as a greedy heuristic that works well in practice. 
| {
"cite_N": [
"@cite_13",
"@cite_31",
"@cite_4",
"@cite_2"
],
"mid": [
"2092181172",
"2244863692",
"39411095",
"2156887552"
],
"abstract": [
"We propose the k-representative regret minimization query (k-regret) as an operation to support multi-criteria decision making. Like top-k, the k-regret query assumes that users have some utility or scoring functions; however, it never asks the users to provide such functions. Like skyline, it filters out a set of interesting points from a potentially large database based on the users' criteria; however, it never overwhelms the users by outputting too many tuples. In particular, for any number k and any class of utility functions, the k-regret query outputs k tuples from the database and tries to minimize the maximum regret ratio. This captures how disappointed a user could be had she seen k representative tuples instead of the whole database. We focus on the class of linear utility functions, which is widely applicable. The first challenge of this approach is that it is not clear if the maximum regret ratio would be small, or even bounded. We answer this question affirmatively. Theoretically, we prove that the maximum regret ratio can be bounded and this bound is independent of the database size. Moreover, our extensive experiments on real and synthetic datasets suggest that in practice the maximum regret ratio is reasonably small. Additionally, algorithms developed in this paper are practical as they run in linear time in the size of the database and the experiments show that their running time is small when they run on top of the skyline operation which means that these algorithms could be integrated into current database systems.",
"Given a set of objects @math , each with @math numeric attributes, a top- @math preference scores these objects using a linear combination of their attribute values, where the weight on each attribute reflects the interest in this attribute. Given a query preference @math , a top- @math query finds the @math objects in @math with highest scores with respect to @math . Given a query object @math and a set of preferences @math , a reverse top- @math query finds all preferences @math for which @math becomes one of the top @math objects with respect to @math . Previous solutions to these problems are effective only in low dimensions. In this paper, we develop a solution for much higher dimensions (up to high tens), if many preferences exhibit sparsity —i.e., each specifies non-zero weights for only a handful (say @math - @math ) of attributes (though the subsets of such attributes and their weights can vary greatly). Our idea is to select carefully a set of low-dimensional core subspaces to “cover” the sparse preferences in a workload. These subspaces allow us to index them more effectively than the full-dimensional space. Being multi-dimensional, each subspace covers many possible preferences; furthermore, multiple subspaces can jointly cover a preference, thereby expanding the coverage beyond each subspace’s dimensionality. Experimental evaluation validates our solution’s effectiveness and advantages over previous solutions.",
"Exact top-k query processing has attracted much attention recently because of its wide use in many research areas. Since missing the truly best answers is inherent and unavoidable due to the user's subjective judgment, and the cost of processing exact top-k queries is highly expensive for datasets with huge volume, it is intriguing to answer approximate top-k query instead. In this paper, we first define a novel kind of approximate top-k query, called µ-approximate top-k query. Then we introduce an efficient index structure, i.e. cube index, based on which, we propose our novel Cube Index Algorithm (CIA). We analyze the complexity of both constructing cube index and CIA algorithm. Moreover, extensive experiments show that CIA performs much better than the well-known approximate TAθ algorithm [3].",
"Given a set of objects, each with multiple numeric attributes, a (preference) top-k query retrieves the k objects with the highest scores according to a user preference, defined as a linear combination of attribute values. We consider the problem of processing a large number of continuous top-k queries, each with its own preference. When objects or user preferences change, the query results must be updated. We present a dynamic index that supports the reverse top k query, which is of independent interest. Combining this index with another one for top-k queries, we develop a scalable solution for processing many continuous top-k queries that exploits the clusteredness in user preferences. We also define an approximate version of the problem and present a solution significantly more efficient than the exact one with little loss in accuracy."
]
} |
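The regret-ratio definition that runs through the abstracts and the related-work note above — the gap between the best score in the representative set Q and the best score in the full dataset P under a linear preference — can be sketched in a few lines. This is an illustrative sketch only: the function names are made up here, and the maximum over all user weight vectors is merely approximated by random sampling rather than computed exactly as in the cited papers.

```python
import random

def score(item, w):
    # Linear utility: inner product of attribute values and weights.
    return sum(a * wi for a, wi in zip(item, w))

def regret_ratio(P, Q, w):
    # Regret of answering from Q instead of P under one preference w.
    best_p = max(score(p, w) for p in P)
    best_q = max(score(q, w) for q in Q)
    if best_p <= 0:
        return 0.0
    return (best_p - best_q) / best_p

def max_regret_ratio(P, Q, n_samples=1000, d=2, seed=0):
    # The true maximum regret ratio is over all weight vectors;
    # here it is only estimated by sampling, for illustration.
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(n_samples):
        w = [rng.random() for _ in range(d)]
        worst = max(worst, regret_ratio(P, Q, w))
    return worst

P = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.6)]
print(max_regret_ratio(P, [(0.6, 0.6)]))  # some regret remains
print(max_regret_ratio(P, P))             # Q = P incurs no regret
```

Choosing Q to minimize this quantity over all weight vectors is exactly the RMS problem; the sampling shortcut above sidesteps the hardness that the abstract establishes for d >= 3.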
1702.01446 | 2594880695 | A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets. | Chester et al. @cite_24 generalized the definition of @math -RMS to the @math -RMS for any @math . They showed that the @math -RMS problem is @math -hard when the dimension @math is also an input to the problem, and they provided an exact polynomial algorithm for @math . There has been more work on the @math -RMS problem; see @cite_9 @cite_16 @cite_12 , including a generalization by Faulkner et al. @cite_3 that considers non-linear utility functions. | {
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_16",
"@cite_12"
],
"mid": [
"2113952707",
"2290570034",
"79208629",
"2104211550",
"1998952728"
],
"abstract": [
"Top-k diversity queries over objects embedded in a low-dimensional vector space aim to retrieve the best k objects that are both relevant to given user's criteria and well distributed over a designated region. An interesting case is provided by spatial Web objects, which are produced in great quantity by location-based services that let users attach content to places and are found also in domains like trip planning, news analysis, and real estate. In this article we present a technique for addressing such queries that, unlike existing methods for diversified top-k queries, does not require accessing and scanning all relevant objects in order to find the best k results. Our Space Partitioning and Probing (SPP) algorithm works by progressively exploring the vector space, while keeping track of the already seen objects and of their relevance and position. The goal is to provide a good quality result set in terms of both relevance and diversity. We assess quality by using as a baseline the result set computed by MMR, one of the most popular diversification algorithms, while minimizing the number of accessed objects. In order to do so, SPP exploits score-based and distance-based access methods, which are available, for instance, in most geo-referenced Web data sources. Experiments with both synthetic and real data show that SPP produces results that are relevant and spatially well distributed, while significantly reducing the number of accessed objects and incurring a very low computational overhead.",
"In exploring representative databases, a primary issue has been finding accurate models of user preferences. Given this, our work generalizes the method of regret minimization as proposed by to include nonlinear utility functions. Regret minimization is an approach for selecting k representative points from a database such that every user's ideal point in the entire database is similar to one of the k points. This approach combines benefits of the methods top-k and skyline; it controls the size of the output but does not require knowledge of users' preferences. Prior work with k-regret queries assumes users' preferences to be modeled by linear utility functions. In this paper, we derive upper and lower bounds for nonlinear utility functions, as these functions can better fit occurrences such as diminishing marginal returns, propensity for risk, and substitutability of preferences. To model these phenomena, we analyze a broad subset of convex, concave, and constant elasticity of substitution functions. We also run simulations on real and synthetic data to prove the efficacy of our bounds in practice.",
"Regret minimizing sets are a recent approach to representing a dataset D by a small subset R of size r of representative data points. The set R is chosen such that executing any top-1 query on R rather than D is minimally perceptible to any user. However, such a subset R may not exist, even for modest sizes, r. In this paper, we introduce the relaxation to k-regret minimizing sets, whereby a top-1 query on R returns a result imperceptibly close to the top-k on D. We show that, in general, with or without the relaxation, this problem is NP-hard. For the specific case of two dimensions, we give an efficient dynamic programming, plane sweep algorithm based on geometric duality to find an optimal solution. For arbitrary dimension, we give an empirically effective, greedy, randomized algorithm based on linear programming. With these algorithms, we can find subsets R of much smaller size that better summarize D, using small values of k larger than 1.",
"We study the notion of regret ratio proposed in [19] [VLDB10] to deal with multi-criteria decision making in database systems. The regret minimization query proposed in [19] was shown to have features of both skyline and top-k: it does not need information from the user but still controls the output size. While this approach is suitable for obtaining a reasonably small regret ratio, it is still open whether one can make the regret ratio arbitrarily small. Moreover, it remains open whether reasonable questions can be asked to the users in order to improve efficiency of the process. In this paper, we study the problem of minimizing regret ratio when the system is enhanced with interaction. We assume that when presented with a set of tuples the user can tell which tuple is most preferred. Under this assumption, we develop the problem of interactive regret minimization where we fix the number of questions and tuples per question that we can display, and aim at minimizing the regret ratio. We try to answer two questions in this paper: (1) How much does interaction help? That is, how much can we improve the regret ratio when there are interactions? (2) How efficient can interaction be? In particular, we measure how many questions we have to ask the user in order to make her regret ratio small enough. We answer both questions from both theoretical and practical standpoints. For the first question, we show that interaction can reduce the regret ratio almost exponentially. To do this, we prove a lower bound for the previous approach (thereby resolving an open problem from [19] ), and develop an almost-optimal upper bound that makes the regret ratio exponentially smaller. Our experiments also confirm that, in practice, interactions help in improving the regret ratio by many orders of magnitude. For the second question, we prove that when our algorithm shows a reasonable number of points per question, it only needs a few questions to make the regret ratio small. 
Thus, interactive regret minimization seems to be a necessary and sufficient way to deal with multi-criteria decision making in database systems.",
"Returning tuples that users may be interested in is one of the most important goals for multi-criteria decision making. Top-k queries and skyline queries are two representative queries. A top-k query has its merit of returning a limited number of tuples to users but requires users to give their exact utility functions. A skyline query has its merit that users do not need to give their exact utility functions but has no control over the number of tuples to be returned. In this paper, we study a k-regret query, a recently proposed query, which integrates the merits of the two representative queries. We first identify some interesting geometry properties for the k-regret query. Based on these properties, we define a set of candidate points called happy points for the k-regret query, which has not been studied in the literature. This result is very fundamental and beneficial to not only all existing algorithms but also all new algorithms to be developed for the k-regret query. Since it is found that the number of happy points is very small, the efficiency of all existing algorithms can be improved significantly. Furthermore, based on other geometry properties, we propose two efficient algorithms each of which performs more efficiently than the best-known fastest algorithm. Our experimental results show that our proposed algorithms run faster than the best-known method on both synthetic and real datasets. In particular, in our experiments on real datasets, the best-known method took more than 3 hours to answer a k-regret query but one of our proposed methods took about a few minutes and the other took within a second."
]
} |
1702.01446 | 2594880695 | A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets. | The @math -RMS problem is also closely related to the problem of approximating the Pareto curve (or skyline) of a set of points. Papadimitriou and Yannakakis @cite_21 @cite_18 considered this problem and defined an approximate Pareto curve as a set of points whose @math scaling dominates every point on the skyline. They showed that there exists such a set of polynomial size @cite_21 @cite_18 . However, computing such a set of the smallest size is @math -Complete @cite_44 . See also @cite_30 . | {
"cite_N": [
"@cite_44",
"@cite_18",
"@cite_21",
"@cite_30"
],
"mid": [
"2129143099",
"1984385885",
"1928381443",
"2026427926"
],
"abstract": [
"We propose and investigate from the algorithmic standpoint a novel form of fuzzy query called approximately dominating representatives or ADRs. The ADRs of a multidimensional point set consist of a few points guaranteed to contain an approximate optimum of any monotone Lipschitz continuous combining function of the dimensions. ADRs can be computed by appropriately post-processing Pareto, or ''skyline'', queries [Kian-Lee Tan, Pin-Kwang Eng, Beng Chin Ooi, Efficient progressive skyline computation, in: VLDB, 2001, pp. 301-310; Wolf-Tilo Balke, Ulrich Guntzer, Jason Xin Zheng, Efficient distributed skylining for web information systems, in: EDBT, 2004. [14]]. We show that the problem of minimizing the number of points returned, for a user-specified desired approximation, can be solved in polynomial time in two dimensions; for three and more it is NP-hard but has a polynomial-time logarithmic approximation. Finally, we present a polynomial-time, constant factor approximation algorithm for three dimensions.",
"The optimization of queries in distributed database systems is known to be subject to delicate trade-offs. For example, the Mariposa database system allows users to specify a desired delay-cost tradeoff (that is, to supply a decreasing function u(d), specifying how much the user is willing to pay in order to receive the query results within time d); Mariposa divides a query graph into horizontal “strides,” analyzes each stride, and uses a greedy heuristic to find the “best” plan for all strides. We show that Mariposa's greedy heuristic can be arbitrarily far from the desired optimum. Applying a recent approach in multiobjective optimization algorithms to this problem, we show that the optimum cost-delay trade-off (Pareto) curve in Mariposa's framework can be approximated fast within any desired accuracy. We also present a polynomial algorithm for the general multiobjective query optimization problem, which approximates arbitrarily well the optimum cost-delay tradeoff (without the restriction of Mariposa's heuristic stride subdivision).",
"We study problems in multiobjective optimization, in which solutions to a combinatorial optimization problem are evaluated with respect to several cost criteria, and we are interested in the trade-off between these objectives (the so-called Pareto curve). We point out that, under very general conditions, there is a polynomially succinct curve that ε-approximates the Pareto curve, for any ε > 0. We give a necessary and sufficient condition under which this approximate Pareto curve can be constructed in time polynomial in the size of the instance and 1/ε. In the case of multiple linear objectives, we distinguish between two cases: when the underlying feasible region is convex, then we show that approximating the multi-objective problem is equivalent to approximating the single-objective problem. If however the feasible region is discrete, then we point out that the question reduces to an old and recurrent one: how does the complexity of a combinatorial optimization problem change when its feasible region is intersected with a hyperplane with small coefficients; we report some interesting new findings in this domain. Finally, we apply these concepts and techniques to formulate and solve approximately a cost-time-quality trade-off for optimizing access to the World-Wide Web, in a model first studied by (1996) (which was actually the original motivation for this work).",
"Trade-off (aka Pareto) curves are typically used to represent the trade-off among different objectives in multiobjective optimization problems. Although trade-off curves are exponentially large for typical combinatorial optimization problems (and infinite for continuous problems), it was observed in Papadimitriou and Yannakakis [On the approximability of trade-offs and optimal access of web sources, in: Proc. 41st IEEE Symp. on Foundations of Computer Science, 2000] that there exist polynomial-size e-approximations for any e > 0, and that under certain general conditions, such approximate e-Pareto curves can be constructed in polynomial time. In this paper we seek general-purpose algorithms for the efficient approximation of trade-off curves using as few points as possible. In the case of two objectives, we present a general algorithm that efficiently computes an e-Pareto curve that uses at most 3 times the number of points of the smallest such curve; we show that no algorithm can be better than 3-competitive in this setting. If we relax e to any e' > e, then we can efficiently construct an e'-curve that uses no more points than the smallest e-curve. With three objectives we show that no algorithm can be c-competitive for any constant c unless it is allowed to use a larger e value. We present an algorithm that is 4-competitive for any e' > (1 + e)^2 - 1. We explore the problem in high dimensions and give hardness proofs showing that (unless P=NP) no constant approximation factor can be achieved efficiently even if we relax e by an arbitrary constant."
]
} |
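The ε-Pareto curve discussed in the record above rests on approximate dominance: a set R ε-covers P if, for every point of P, some point of R scaled by (1 + ε) dominates it coordinate-wise. A minimal sketch of that check (function names and data are illustrative; all attributes are assumed positive and maximized, since multiplicative scaling cannot cover a zero coordinate):

```python
def eps_dominates(r, p, eps):
    # r approximately dominates p if scaling r by (1 + eps)
    # makes it at least as good as p in every coordinate.
    return all((1 + eps) * ri >= pi for ri, pi in zip(r, p))

def is_eps_pareto_set(R, P, eps):
    # R is an eps-Pareto set for P if every point of P is
    # approximately dominated by some point of R.
    return all(any(eps_dominates(r, p, eps) for r in R) for p in P)

P = [(1.0, 0.1), (0.95, 0.2), (0.1, 1.0)]
R = [(1.0, 0.1), (0.1, 1.0)]
print(is_eps_pareto_set(R, P, eps=1.0))   # True
print(is_eps_pareto_set(R, P, eps=0.1))   # False
```

Finding the smallest such R for a given ε is the problem shown NP-Complete in @cite_44; the check itself, as above, is straightforward.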
1702.01586 | 2592120485 | Influence maximization (IM), which selects a set of k users (called seeds) to maximize the influence spread over a social network, is a fundamental problem in a wide range of applications such as viral marketing and network monitoring. Existing IM solutions fail to consider the highly dynamic nature of social influence, which results in either poor seed qualities or long processing time when the network evolves. To address this problem, we define a novel IM query named Stream Influence Maximization (SIM) on social streams. Technically, SIM adopts the sliding window model and maintains a set of k seeds with the largest influence value over the most recent social actions. Next, we propose the Influential Checkpoints (IC) framework to facilitate continuous SIM query processing. The IC framework creates a checkpoint for each window shift and ensures an e-approximate solution. To improve its efficiency, we further devise a Sparse Influential Checkpoints (SIC) framework which selectively keeps O(log N/β) checkpoints for a sliding window of size N and maintains an e(1−β)/2-approximate solution. Experimental results on both real-world and synthetic datasets confirm the effectiveness and efficiency of our proposed frameworks against the state-of-the-art IM approaches. | Several works @cite_32 @cite_21 studied how to continuously estimate a function in the sliding window model. They leverage special properties of target functions to achieve sublinear performance and reasonable quality. Let @math be the target function, and @math , @math , @math be three sequences on streams such that @math is a tail subsequence of @math and @math is contiguous to @math . The exponential histogram @cite_32 is proposed to approximate weakly additive functions, i.e., @math for some small constant @math . The smooth histogram @cite_21 requires that the target functions are @math -smooth . Specifically, we say @math is @math -smooth if @math , then @math for some @math .
Following the analysis in @cite_21 , smooth histograms are applicable only when @math can be computed with an approximation ratio of at least @math in the append-only streaming setting. In this paper, we use monotone submodular influence functions @cite_8 widely adopted in social influence analysis. However, monotone submodular functions are not weakly additive, and existing SSO algorithms @cite_1 @cite_19 can only achieve an approximation of at most @math . This implies that these techniques cannot be directly applied to our scenario. | {
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_19"
],
"mid": [
"2061820396",
"2128846062",
"1997959284",
"2004110412",
"2177862760"
],
"abstract": [
"Models for the processes by which ideas and influence propagate through a social network have been studied in a number of domains, including the diffusion of medical and technological innovations, the sudden and widespread adoption of various strategies in game-theoretic settings, and the effects of \"word of mouth\" in the promotion of new products. Recently, motivated by the design of viral marketing strategies, Domingos and Richardson posed a fundamental algorithmic problem for such social network processes: if we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?We consider this problem in several of the most widely studied models in social network analysis. The optimization problem of selecting the most influential nodes is NP-hard here, and we provide the first provable approximation guarantees for efficient algorithms. Using an analysis framework based on submodular functions, we show that a natural greedy strategy obtains a solution that is provably within 63 of optimal for several classes of models; our framework suggests a general approach for reasoning about the performance guarantees of algorithms for these types of influence problems in social networks.We also provide computational experiments on large collaboration networks, showing that in addition to their provable guarantees, our approximation algorithms significantly out-perform node-selection heuristics based on the well-studied notions of degree centrality and distance centrality from the field of social networks.",
"In the streaming model elements arrive sequentially and can be observed only once. Maintaining statistics and aggregates is an important and non-trivial task in the model. This becomes even more challenging in the sliding windows model, where statistics must be maintained only over the most recent n elements. In their pioneering paper, Datar, Gionis, Indyk and Motwani [15] presented exponential histograms, an effective method for estimating statistics on sliding windows. In this paper we present a new smooth histograms method that improves the approximation error rate obtained via exponential histograms. Furthermore, our smooth histograms method not only captures and improves multiple previous results on sliding windows but also extends the class of functions that can be approximated on sliding windows. In particular, we provide the first approximation algorithms for the following functions: Lp norms for p notin [1,2], frequency moments, length of increasing subsequence and geometric mean.",
"How can one summarize a massive data set \"on the fly\", i.e., without even having seen it in its entirety? In this paper, we address the problem of extracting representative elements from a large stream of data. I.e., we would like to select a subset of say k data points from the stream that are most representative according to some objective function. Many natural notions of \"representativeness\" satisfy submodularity, an intuitive notion of diminishing returns. Thus, such problems can be reduced to maximizing a submodular set function subject to a cardinality constraint. Classical approaches to submodular maximization require full access to the data set. We develop the first efficient streaming algorithm with constant factor 1/2-e approximation guarantee to the optimum solution, requiring only a single pass through the data, and memory independent of data size. In our experiments, we extensively evaluate the effectiveness of our approach on several applications, including training large-scale kernel methods and exemplar-based clustering, on millions of data points. We observe that our streaming method, while achieving practically the same utility value, runs about 100 times faster than previous work.",
"We consider the problem of maintaining aggregates and statistics over data streams, with respect to the last N data elements seen so far. We refer to this model as the sliding window model. We consider the following basic problem: Given a stream of bits, maintain a count of the number of 1's in the last N elements seen from the stream. We show that, using @math bits of memory, we can estimate the number of 1's to within a factor of @math . We also give a matching lower bound of @math memory bits for any deterministic or randomized algorithms. We extend our scheme to maintain the sum of the last N positive integers and provide matching upper and lower bounds for this more general problem as well. We also show how to efficiently compute the Lp norms ( @math ) of vectors in the sliding window model using our techniques. Using our algorithm, one can adapt many other techniques to work for the sliding window model with a multiplicative overhead of @math in memory and a @math factor loss in accuracy. These include maintaining approximate histograms, hash tables, and statistics or aggregates such as sum and averages.",
"Greedy algorithms are practitioners’ best friends—they are intuitive, are simple to implement, and often lead to very good solutions. However, implementing greedy algorithms in a distributed setting is challenging since the greedy choice is inherently sequential, and it is not clear how to take advantage of the extra processing power. Our main result is a powerful sampling technique that aids in parallelization of sequential algorithms. Armed with this primitive, we then adapt a broad class of greedy algorithms to the MapReduce paradigm; this class includes maximum cover and submodular maximization subject to p-system constraint problems. Our method yields efficient algorithms that run in a logarithmic number of rounds while obtaining solutions that are arbitrarily close to those produced by the standard sequential greedy algorithm. We begin with algorithms for modular maximization subject to a matroid constraint and then extend this approach to obtain approximation algorithms for submodular maximization subject to knapsack or p-system constraints."
]
} |
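The related-work passage above hinges on influence functions being monotone submodular, the property under which the classic greedy algorithm of @cite_8 guarantees a (1 − 1/e)-approximation under a cardinality constraint. A minimal sketch, with a coverage-style objective standing in for influence spread (the objective, data, and names are illustrative, not from the cited papers):

```python
def coverage(seeds, reach):
    # Monotone submodular objective: number of users reached
    # by at least one selected seed.
    covered = set()
    for s in seeds:
        covered |= reach[s]
    return len(covered)

def greedy_max(reach, k):
    # Classic greedy for monotone submodular maximization under a
    # cardinality constraint; achieves a (1 - 1/e)-approximation.
    seeds = []
    for _ in range(k):
        best = max((c for c in reach if c not in seeds),
                   key=lambda c: coverage(seeds + [c], reach))
        seeds.append(best)
    return seeds

# Toy influence data: each candidate seed directly reaches a user set.
reach = {
    'a': {1, 2, 3},
    'b': {3, 4},
    'c': {5, 6},
}
print(greedy_max(reach, 2))   # picks 'a' first, then 'c'
```

Because such functions are not weakly additive, this greedy step cannot be maintained with exponential or smooth histograms over a sliding window, which is precisely the gap the SIM paper's checkpoint frameworks address.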
1702.01478 | 2587037412 | We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. When perceiving a scene, humans have the capability of multiple fixation points, each attending to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. When detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidence of the presence of an object and its location is extracted from these glimpses, which is then fused for estimating the object class and bounding box coordinates. Due to the lack of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. Experimental results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that do not model the attention mechanism. | The attention mechanism has been proposed for different applications including speech recognition @cite_25 , machine translation @cite_16 and question answering @cite_19 . In particular, @cite_7 propose a recurrent neural network that sequentially selects image regions and combines extracted information from these regions for the handwritten digit classification task. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_25",
"@cite_7"
],
"mid": [
"2951008357",
"2133564696",
"",
"2951527505"
],
"abstract": [
"We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.",
"",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
} |
1702.01478 | 2587037412 | We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. As perceiving a scene, humans have the capability of multiple fixation points, each attended to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. As detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to lacks of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that does not model the attention mechanism. | In this paper, we extend @cite_7 to deal with the visual object detection task, which is considered much more difficult due to the various appearance variations visual objects can exhibit in images. Moreover, unlike the classification task, the visual object detection task also requires an algorithm to localize the objects from various classes present in an image. While the glimpse size and shape are fixed in @cite_7, they are adaptively changed in the proposed method to maximize the object detection performance. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2951527505"
],
"abstract": [
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
} |
1702.01478 | 2587037412 | We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. As perceiving a scene, humans have the capability of multiple fixation points, each attended to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. As detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to lacks of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that does not model the attention mechanism. | A few attention-based methods have been proposed for the object detection task. @cite_15 train a class-specific object localization model using a reinforcement learning algorithm and utilize the model for a detection task by evaluating all the regions generated over the course of localization. @cite_6 propose a class-specific model which iteratively modifies the initial ROI until it declares the existence of an object. Unlike these works, the proposed method is class-agnostic, which scales better when dealing with a large number of object classes: we do not need to train a detector for each object class. | {
"cite_N": [
"@cite_15",
"@cite_6"
],
"mid": [
"2179488730",
"1689909837"
],
"abstract": [
"We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"We present a novel detection method using a deep convolutional neural network (CNN), named AttentionNet. We cast an object detection problem as an iterative classification problem, which is the most suitable form of a CNN. AttentionNet provides quantized weak directions pointing a target object and the ensemble of iterative predictions from AttentionNet converges to an accurate object boundary box. Since AttentionNet is a unified network for object detection, it detects objects without any separated models from the object proposal to the post bounding-box regression. We evaluate AttentionNet by a human detection task and achieve the state-of-the-art performance of 65 (AP) on PASCAL VOC 2007 2012 with an 8-layered architecture only."
]
} |
1702.01478 | 2587037412 | We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. As perceiving a scene, humans have the capability of multiple fixation points, each attended to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. As detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to lacks of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that does not model the attention mechanism. | Most of the recent object detection methods are based on the R-CNN algorithm and explore two directions for further performance improvement. The first direction is to make the underlying CNN deeper @cite_22. The second is to incorporate semantic segmentation @cite_4, which typically requires additional training data for segmentation. Other works focus on speeding up the computation time @cite_1 @cite_17 @cite_21 @cite_27 @cite_3. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_3",
"@cite_27",
"@cite_17"
],
"mid": [
"2949295283",
"2949650786",
"",
"2951548327",
"",
"1484816920",
"2613718673"
],
"abstract": [
"Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"This paper proposes a Fast Region-based Convolutional Network method (Fast R-CNN) for object detection. Fast R-CNN builds on previous work to efficiently classify object proposals using deep convolutional networks. Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Fast R-CNN trains the very deep VGG16 network 9x faster than R-CNN, is 213x faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Compared to SPPnet, Fast R-CNN trains VGG16 3x faster, tests 10x faster, and is more accurate. Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at this https URL",
"",
"Deep convolutional neural networks (CNNs) have had a major impact in most areas of image understanding, including object category detection. In object detection, methods such as R-CNN have obtained excellent results by integrating CNNs with region proposal generation algorithms such as selective search. In this paper, we investigate the role of proposal generation in CNN-based detectors in order to determine whether it is a necessary modelling component, carrying essential geometric information not contained in the CNN, or whether it is merely a way of accelerating detection. We do so by designing and evaluating a detector that uses a trivial region generation scheme, constant for each image. Combined with SPP, this results in an excellent and fast detector that does not require to process an image with algorithms other than the CNN itself. We also streamline and simplify the training of CNN-based detectors by integrating several learning steps in a single algorithm, as well as by proposing a number of improvements that accelerate detection.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
} |
1702.01478 | 2587037412 | We propose augmenting deep neural networks with an attention mechanism for the visual object detection task. As perceiving a scene, humans have the capability of multiple fixation points, each attended to scene content at different locations and scales. However, such a mechanism is missing in the current state-of-the-art visual object detection methods. Inspired by the human vision system, we propose a novel deep network architecture that imitates this attention mechanism. As detecting objects in an image, the network adaptively places a sequence of glimpses of different shapes at different locations in the image. Evidences of the presence of an object and its location are extracted from these glimpses, which are then fused for estimating the object class and bounding box coordinates. Due to lacks of ground truth annotations of the visual attention mechanism, we train our network using a reinforcement learning algorithm with policy gradients. Experiment results on standard object detection benchmarks show that the proposed network consistently outperforms the baseline networks that does not model the attention mechanism. | A few works have attempted to extract features from multiple regions. In @cite_23, in addition to the proposal bounding box, visual features are extracted from a set of hand-chosen regions. In @cite_10, an additional context region is used for detecting small objects in images. The work of @cite_13 extracts features from the entire image in addition to the proposal regions for incorporating the context information. In contrast, the proposed approach does not rely on manual region selection; it learns to adaptively select regions based on the image content. | {
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_23"
],
"mid": [
"2951829713",
"2594258618",
"1932624639"
],
"abstract": [
"It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.",
"Existing object detection literature focuses on detecting a big object covering a large part of an image. The problem of detecting a small object covering a small part of an image is largely ignored. As a result, the state-of-the-art object detection algorithm renders unsatisfactory performance as applied to detect small objects in images. In this paper, we dedicate an effort to bridge the gap. We first compose a benchmark dataset tailored for the small object detection problem to better evaluate the small object detection performance. We then augment the state-of-the-art R-CNN algorithm with a context model and a small region proposal generator to improve the small object detection performance. We conduct extensive experimental validations for studying various design choices. Experiment results show that the augmented R-CNN algorithm improves the mean average precision by 29.8 over the original R-CNN algorithm on detecting small objects.",
"We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2 and 73.9 correspondingly, surpassing any other published work by a significant margin."
]
} |
1702.01711 | 2593778915 | This paper presents a simple, robust and (almost) unsupervised dictionary-based method, qwn-ppv (Q-WordNet as Personalized PageRanking Vector) to automatically generate polarity lexicons. We show that qwn-ppv outperforms other automatically generated lexicons for the four extrinsic evaluations presented here. It also shows very competitive and robust results with respect to manually annotated ones. Results suggest that no single lexicon is best for every task and dataset and that the intrinsic evaluation of polarity lexicons is not a good performance indicator on a Sentiment Analysis task. The qwn-ppv method allows to easily create quality polarity lexicons whenever no domain-based annotated corpora are available for a given language. | The most popular manually-built polarity lexicon is part of the General Inquirer @cite_8 and consists of 1915 words labelled as "positive" and 2291 as "negative". Taboada manually created their lexicons by annotating the polarity of 6232 words on a scale of 5 to -5. Liu, starting with Hu and Liu, have over the years collected a manually corrected polarity lexicon formed by 4818 negative and 2041 positive words. Another manually corrected lexicon @cite_21 is the one used by the Opinion Finder system @cite_13; it contains 4903 negatively and 2718 positively annotated words. | {
"cite_N": [
"@cite_21",
"@cite_13",
"@cite_8"
],
"mid": [
"2088622183",
"2022204871",
"2082291422"
],
"abstract": [
"This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision.",
"This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.",
""
]
} |
1702.01711 | 2593778915 | This paper presents a simple, robust and (almost) unsupervised dictionary-based method, qwn-ppv (Q-WordNet as Personalized PageRanking Vector) to automatically generate polarity lexicons. We show that qwn-ppv outperforms other automatically generated lexicons for the four extrinsic evaluations presented here. It also shows very competitive and robust results with respect to manually annotated ones. Results suggest that no single lexicon is best for every task and dataset and that the intrinsic evaluation of polarity lexicons is not a good performance indicator on a Sentiment Analysis task. The qwn-ppv method allows to easily create quality polarity lexicons whenever no domain-based annotated corpora are available for a given language. | Among the automatically built lexicons, Turney and Littman proposed a minimally supervised algorithm to calculate the polarity of a word depending on whether it co-occurred more with a previously collected small set of positive words than with a set of negative ones. Agerri and García Serrano presented a very simple method to extract the polarity information starting from the synset in WordNet @cite_10. Mohammad developed a method in which they first identify (by means of affix rules) a set of positive and negative words which act as seeds, then use a Roget-like thesaurus to mark the synonymous words for each polarity type and to generalize from the seeds. They produce several lexicons, the best of which, MSOL(ASL and GI), contains 51K and 76K entries respectively and uses the full General Inquirer as seeds. They performed both intrinsic and extrinsic evaluations using the MPQA 1.1 corpus. | {
"cite_N": [
"@cite_10"
],
"mid": [
"71705042"
],
"abstract": [
"This paper presents Q-WordNet, a lexical resource consisting of WordNet senses automatically annotated by positive and negative polarity. Polarity classification amounts to decide whether a text (sense, sentence, etc.) may be associated to positive or negative connotations. Polarity classification is becoming important for applications such as Opinion Mining and Sentiment Analysis, which facilitates the extraction and analysis of opinions about commercial products, on companies reputation management, brand monitoring, or to track attitudes by mining online forums, blogs, etc. Inspired by work on classification of word senses by polarity (e.g., SentiWordNet), and taking WordNet as a starting point, we build Q-WordNet. Instead of applying external tools such as supervised classifiers to annotated WordNet synsets by polarity, we try to effectively maximize the linguistic information contained in WordNet, thereby taking advantage of the human effort put by lexicographers and annotators. The resulting resource is a subset of WordNet senses classified as positive or negative. In this approach, neutral polarity is seen as the absence of positive or negative polarity. The evaluation of Q-WordNet shows an improvement with respect to previous approaches. We believe that Q-WordNet can be used as a starting point for data-driven approaches in sentiment analysis."
]
} |
1702.01499 | 2586073474 | Deep Convolutional Neural Networks (DCNN) have been proven to be effective for various computer vision problems. In this work, we demonstrate its effectiveness on a continuous object orientation estimation task, which requires prediction of 0 to 360 degrees orientation of the objects. We do so by proposing and comparing three continuous orientation prediction approaches designed for the DCNNs. The first two approaches work by representing an orientation as a point on a unit circle and minimizing either L2 loss or angular difference loss. The third method works by first converting the continuous orientation estimation task into a set of discrete orientation estimation tasks and then converting the discrete orientation outputs back to the continuous orientation using a mean-shift algorithm. By evaluating on a vehicle orientation estimation task and a pedestrian orientation estimation task, we demonstrate that the discretization-based approach not only works better than the other two approaches but also achieves state-of-the-art performance. We also demonstrate that finding an appropriate feature representation is critical to achieve a good performance when adapting a DCNN trained for an image recognition task. | Several works consider learning a suitable representation for the orientation estimation task. In @cite_5, an embedded representation is proposed that reflects the local features and their spatial arrangement and enforces supervised manifold constraints on the data. A regression model to estimate the orientation is then learned using this representation. Similarly to @cite_5, @cite_43 @cite_28 learn a representation using spectral clustering and then train a regression model for each cluster while enforcing geometric constraints. @cite_29 formulates the task as a MAP inference problem, where the likelihood function is composed of a generative term based on the prediction error generated by the ensemble of Fisher regressors as well as a discriminative term based on SVM classifiers. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_29",
"@cite_43"
],
"mid": [
"2029604012",
"2062516103",
"2208030666",
"2114178445"
],
"abstract": [
"Regression-based models built on local gradient-based feature descriptors have showed to be successful for continuous pose estimation of object categories. Nonetheless, a crucial weakness of these methods is that no geometric information is taken into account. Therefore, geometrically inconsistent poses may be preferred, and this forces to employ a coarse-grained pose estimator as a pre-processing step to avoid potentially large estimation errors. In this paper, we propose a method that combines generative feature models and graph matching techniques in a unified probabilistic formulation of the continuous pose estimation problem. Our approach retains the lightness and generality of generative feature modeling, while favoring geometrically consistent results. Experiments show that pose pre-processing steps are not needed if geometry is embedded in the matching stage. We evaluated our approach on two different car datasets and we experimentally show that our algorithm outperforms state-of-the-art methods by 25 .",
"In this paper we propose a framework for learning a regression function form a set of local features in an image. The regression is learned from an embedded representation that reflects the local features and their spatial arrangement as well as enforces supervised manifold constraints on the data. We applied the approach for viewpoint estimation on a Multiview car dataset, a head pose dataset and arm posture dataset. The experimental results show that this approach has superior results (up to 67 improvement) to the state-of-the-art approaches in very challenging datasets.",
"In this paper, we treat the problem of continuous pose estimation for object categories as a regression problem on the basis of only 2D training information. While regression is a natural framework for continuous problems, regression methods so far achieved inferior results with respect to 3D-based and 2D-based classification-and-refinement approaches. This may be attributed to their weakness to high intra-class variability as well as to noisy matching procedures and lack of geometrical constraints. We propose to apply regression to Fisher-encoded vectors computed from large cells by learning an array of Fisher regressors. Fisher encoding makes our algorithm flexible to variations in class appearance, while the array structure permits to indirectly introduce spatial context information in the approach. We formulate our problem as a MAP inference problem, where the likelihood function is composed of a generative term based on the prediction error generated by the ensemble of Fisher regressors as well as a discriminative term based on SVM classifiers. We test our algorithm on three publicly available datasets that envisage several difficulties, such as high intra-class variability, truncations, occlusions, and motion blur, obtaining state-of-the-art results.",
"In this paper, we propose a method for learning a class representation that can return a continuous value for the pose of an unknown class instance using only 2D data and weak 3D labeling information. Our method is based on generative feature models, i.e., regression functions learned from local descriptors of the same patch collected under different viewpoints. The individual generative models are then clustered in order to create class generative models which form the class representation. At run-time, the pose of the query image is estimated in a maximum a posteriori fashion by combining the regression functions belonging to the matching clusters. We evaluate our approach on the EPFL car dataset and the Pointing'04 face dataset. Experimental results show that our method outperforms by 10 the state-of-the-art in the first dataset and by 9 in the second."
]
} |
1702.01499 | 2586073474 | Deep Convolutional Neural Networks (DCNN) have been proven to be effective for various computer vision problems. In this work, we demonstrate its effectiveness on a continuous object orientation estimation task, which requires prediction of 0 to 360 degrees orientation of the objects. We do so by proposing and comparing three continuous orientation prediction approaches designed for the DCNNs. The first two approaches work by representing an orientation as a point on a unit circle and minimizing either L2 loss or angular difference loss. The third method works by first converting the continuous orientation estimation task into a set of discrete orientation estimation tasks and then converting the discrete orientation outputs back to the continuous orientation using a mean-shift algorithm. By evaluating on a vehicle orientation estimation task and a pedestrian orientation estimation task, we demonstrate that the discretization-based approach not only works better than the other two approaches but also achieves state-of-the-art performance. We also demonstrate that finding an appropriate feature representation is critical to achieve a good performance when adapting a DCNN trained for an image recognition task. | @cite_35 introduces the PASCAL3D+ dataset, designed for joint object detection and pose estimation. Continuous annotations of azimuth and elevation for 12 object categories are provided. The average number of instances per category is approximately 3,000. The performance is evaluated based on Average Viewpoint Precision (AVP), which takes into account both the detection accuracy and the viewpoint estimation accuracy. Since the focus of this work is orientation estimation, we employ the EPFL Multi-view Car Dataset @cite_20 and the TUD Multiview Pedestrian Dataset @cite_31, which are specifically designed to evaluate orientation prediction. | {
"cite_N": [
"@cite_35",
"@cite_31",
"@cite_20"
],
"mid": [
"1991264156",
"1997500560",
""
],
"abstract": [
"3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http: cvgl.stanford.edu projects pascal3d",
"Automatic recovery of 3D human pose from monocular image sequences is a challenging and important research topic with numerous applications. Although current methods are able to recover 3D pose for a single person in controlled environments, they are severely challenged by real-world scenarios, such as crowded street scenes. To address this problem, we propose a three-stage process building on a number of recent advances. The first stage obtains an initial estimate of the 2D articulation and viewpoint of the person from single frames. The second stage allows early data association across frames based on tracking-by-detection. These two stages successfully accumulate the available 2D image evidence into robust estimates of 2D limb positions over short image sequences (= tracklets). The third and final stage uses those tracklet-based estimates as robust image observations to reliably recover 3D pose. We demonstrate state-of-the-art performance on the HumanEva II benchmark, and also show the applicability of our approach to articulated 3D tracking in realistic street conditions.",
""
]
} |
1702.01499 | 2586073474 | Deep Convolutional Neural Networks (DCNN) have been proven to be effective for various computer vision problems. In this work, we demonstrate its effectiveness on a continuous object orientation estimation task, which requires prediction of 0 to 360 degrees orientation of the objects. We do so by proposing and comparing three continuous orientation prediction approaches designed for the DCNNs. The first two approaches work by representing an orientation as a point on a unit circle and minimizing either L2 loss or angular difference loss. The third method works by first converting the continuous orientation estimation task into a set of discrete orientation estimation tasks and then converting the discrete orientation outputs back to the continuous orientation using a mean-shift algorithm. By evaluating on a vehicle orientation estimation task and a pedestrian orientation estimation task, we demonstrate that the discretization-based approach not only works better than the other two approaches but also achieves state-of-the-art performance. We also demonstrate that finding an appropriate feature representation is critical to achieve a good performance when adapting a DCNN trained for an image recognition task. | Despite the availability of continuous ground-truth view point information, the majority of works @cite_35 @cite_42 @cite_6 @cite_34 @cite_22 @cite_38 using the PASCAL3D+ dataset predict discrete poses and evaluate the performance based on the discretized poses. @cite_42 proposes a method for joint view-point estimation and key point prediction based on a CNN. It works by converting the continuous pose estimation task into a discrete view point classification task. @cite_38 proposes to augment the training data for their CNN model with synthetic images. The view point prediction is cast as a fine-grained (360 classes, one for each angle) discretized view point classification problem. | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_22",
"@cite_42",
"@cite_6",
"@cite_34"
],
"mid": [
"1991264156",
"1591870335",
"",
"2951900634",
"",
"1964201035"
],
"abstract": [
"3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http: cvgl.stanford.edu projects pascal3d",
"Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.",
"",
"We characterize the problem of pose estimation for rigid objects in terms of determining viewpoint to explain coarse pose and keypoint prediction to capture the finer details. We address both these tasks in two different settings - the constrained setting with known bounding boxes and the more challenging detection setting where the aim is to simultaneously detect and correctly estimate pose of objects. We present Convolutional Neural Network based architectures for these and demonstrate that leveraging viewpoint estimates can substantially improve local appearance based keypoint predictions. In addition to achieving significant improvements over state-of-the-art in the above tasks, we analyze the error modes and effect of object characteristics on performance to guide future efforts towards this goal.",
"",
"Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching."
]
} |
1702.01786 | 2586427025 | In this paper we propose a novel approach to manage the throughput vs latency tradeoff that emerges when managing updates in geo-replicated systems. Our approach consists in allowing full concurrency when processing local updates and using a deferred local serialisation procedure before shipping updates to remote datacenters. This strategy allows to implement inexpensive mechanisms to ensure system consistency requirements while avoiding intrusive effects on update operations, a major performance limitation of previous systems. We have implemented our approach as a variant of Riak KV. Our extensive evaluation shows that we outperform sequencer-based approaches by almost an order of magnitude in the maximum achievable throughput. Furthermore, unlike previous sequencer-free solutions, our approach reaches nearly optimal remote update visibility latencies without limiting throughput. | The support for causal consistency can already be found in early pioneer works in distributed systems, such as Bayou , @cite_5 @cite_45 , Lazy Replication , @cite_14 , and the ISIS , @cite_37 toolkit. Bayou offers session guarantees and enforces causal order of write operations to an eventually consistent data store composed of single-machine replicas in full replication settings. Lazy Replication also ensures that all operations are applied to replicas in causal order by having clients maintain a vector clock that captures their causal past. ISIS offers a causal multicast primitive, namely , which provides causally ordered message delivery for group communication. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_45",
"@cite_37"
],
"mid": [
"2001803004",
"2156027558",
"",
"1985349143"
],
"abstract": [
"Bayou's anti-entropy protocol for update propagation between weakly consistent storage replicas is based on pair-wise communication, the propagation of write operations, and a set of ordering and closure constraints on the propagation of the writes. The simplicity of the design makes the protocol very flexible, thereby providing support for diverse networking environments and usage scenarios. It accommodates a variety of policies for when and where to propagate updates. It operates over diverse network topologies, including low-bandwidth links. It is incremental. It enables replica convergence, and updates can be propagated using floppy disks and similar transportable media. Moreover, the protocol handles replica creation and retirement in a light-weight manner. Each of these features is enabled by only one or two of the protocol's design choices, and can be independently incorporated in other systems. This paper presents the anti-entropy protocol in detail, describing the design decisions and resulting features.",
"To provide high availability for services such as mail or bulletin boards, data must be replicated. One way to guarantee consistency of replicated data is to force service operations to occur in the same order at all sites, but this approach is expensive. For some applications a weaker causal operation order can preserve consistency while providing better performance. This paper describes a new way of implementing causal operations. Our technique also supports two other kinds of operations: operations that are totally ordered with respect to one another and operations that are totally ordered with respect to all other operations. The method performs well in terms of response time, operation-processing capacity, amount of stored state, and number and size of messages; it does better than replication methods based on reliable multicast techniques.",
"",
"Reference LSR-ARTICLE-1991-001View record in Web of Science Record created on 2005-05-20, modified on 2016-08-08"
]
} |
1702.01786 | 2586427025 | In this paper we propose a novel approach to manage the throughput vs latency tradeoff that emerges when managing updates in geo-replicated systems. Our approach consists in allowing full concurrency when processing local updates and using a deferred local serialisation procedure before shipping updates to remote datacenters. This strategy allows to implement inexpensive mechanisms to ensure system consistency requirements while avoiding intrusive effects on update operations, a major performance limitation of previous systems. We have implemented our approach as a variant of Riak KV. Our extensive evaluation shows that we outperform sequencer-based approaches by almost an order of magnitude in the maximum achievable throughput. Furthermore, unlike previous sequencer-free solutions, our approach reaches nearly optimal remote update visibility latencies without limiting throughput. | Sequencer-based. These solutions rely on a single sequencer per datacenter to enforce causal consistency. The sequencer is in charge of totally ordering local updates, in a causally consistent manner, and propagating them to remote locations. This design centralizes, thus significantly simplifying, the arduous labor of checking remote update dependencies. Nevertheless, the use of synchronous sequencers significantly limits intra-datacenter concurrency, as demonstrated by our experiments. SwiftCloud , @cite_22 and ChainReaction , @cite_38 rely on a vector clock with an entry per datacenter to track causal dependencies, similarly to . Practi , @cite_20 , on the contrary, uses a single scalar and a sophisticated mechanism of invalidations. Similar to , Practi separates the propagation of data and metadata. This and the concept of invalidations optimize Practi for partial replication, a setting that has not yet been explored in this work. We have shown that sequencers may get easily saturated for medium-size clusters, while is able to handle much heavier loads (up to 7.7 times more). | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_20"
],
"mid": [
"1981851173",
"2195205682",
""
],
"abstract": [
"This paper proposes a Geo-distributed key-value datastore, named ChainReaction, that offers causal+ consistency, with high performance, fault-tolerance, and scalability. ChainReaction enforces causal+ consistency which is stronger than eventual consistency by leveraging on a new variant of chain replication. We have experimentally evaluated the benefits of our approach by running the Yahoo! Cloud Serving Benchmark. Experimental results show that ChainReaction has better performance in read intensive workloads while offering competitive performance for other workloads. Also we show that our solution requires less metadata when compared with previous work.",
"Client-side apps (e.g., mobile or in-browser) need cloud data to be available in a local cache, for both reads and updates. For optimal user experience and developer support, the cache should be consistent and fault-tolerant. In order to scale to high numbers of unreliable and resource-poor clients, and large database, the system needs to use resources sparingly. The SwiftCloud distributed object database is the first to provide fast reads and writes via a causally-consistent client-side local cache backed by the cloud. It is thrifty in resources and scales well, thanks to consistent versioning provided by the cloud, using small and bounded metadata. It remains available during faults, switching to a different data centre when the current one is not responsive, while maintaining its consistency guarantees. This paper presents the SwiftCloud algorithms, design, and experimental evaluation. It shows that client-side apps enjoy the high performance and availability, under the same guarantees as a remote cloud data store, at a small cost.",
""
]
} |
1702.01786 | 2586427025 | In this paper we propose a novel approach to manage the throughput vs latency tradeoff that emerges when managing updates in geo-replicated systems. Our approach consists in allowing full concurrency when processing local updates and using a deferred local serialisation procedure before shipping updates to remote datacenters. This strategy allows to implement inexpensive mechanisms to ensure system consistency requirements while avoiding intrusive effects on update operations, a major performance limitation of previous systems. We have implemented our approach as a variant of Riak KV. Our extensive evaluation shows that we outperform sequencer-based approaches by almost an order of magnitude in the maximum achievable throughput. Furthermore, unlike previous sequencer-free solutions, our approach reaches nearly optimal remote update visibility latencies without limiting throughput. | COPS , @cite_7 and Eiger , @cite_21 finely track dependencies for each individual data item, allowing full concurrency within a datacenter. Remote updates are tagged with a list of dependencies. When a datacenter receives a remote update, it needs to explicitly check each dependency. This process was found to be expensive and to limit the system’s performance , @cite_40 due to the large amount of metadata generated. Orbe , @cite_24 only partially solves this problem by aggregating dependencies belonging to the same logical partition into one scalar. | {
"cite_N": [
"@cite_24",
"@cite_40",
"@cite_21",
"@cite_7"
],
"mid": [
"2112612200",
"2111113897",
"12688243",
"2161730338"
],
"abstract": [
"We propose two protocols that provide scalable causal consistency for both partitioned and replicated data stores using dependency matrices (DM) and physical clocks. The DM protocol supports basic read and update operations and uses two-dimensional dependency matrices to track dependencies in a client session. It utilizes the transitivity of causality and sparse matrix encoding to keep dependency metadata small and bounded. The DM-Clock protocol extends the DM protocol to support read-only transactions using loosely synchronized physical clocks. We implement the two protocols in Orbe, a distributed key-value store, and evaluate them experimentally. Orbe scales out well, incurs relatively small overhead over an eventually consistent key-value store, and outperforms an existing system that uses explicit dependency tracking to provide scalable causal consistency.",
"GentleRain is a new causally consistent geo-replicated data store that provides throughput comparable to eventual consistency and superior to current implementations of causal consistency. GentleRain uses a periodic aggregation protocol to determine whether updates can be made visible in accordance with causal consistency. Unlike current implementations, it does not use explicit dependency check messages, resulting in a major throughput improvement at the expense of a modest increase in update visibility. Furthermore, GentleRain tracks causal consistency by attaching to updates scalar timestamps derived from loosely synchronized physical clocks. Clock skew does not cause violations of causal consistency, but may delay the visibility of updates. By encoding causality in a single scalar timestamp, GentleRain reduces storage and communication overhead for tracking causality. We evaluate GentleRain using Amazon EC2, and demonstrate that it achieves throughput equal to about 99 of eventual consistency, and 120 better than previous implementations of causal consistency.",
"We present the first scalable, geo-replicated storage system that guarantees low latency, offers a rich data model, and provides \"stronger\" semantics. Namely, all client requests are satisfied in the local datacenter in which they arise; the system efficiently supports useful data model abstractions such as column families and counter columns; and clients can access data in a causally-consistent fashion with read-only and write-only transactional support, even for keys spread across many servers. The primary contributions of this work are enabling scalable causal consistency for the complex columnfamily data model, as well as novel, non-blocking algorithms for both read-only and write-only transactions. Our evaluation shows that our system, Eiger, achieves low latency (single-ms), has throughput competitive with eventually-consistent and non-transactional Cassandra (less than 7 overhead for one of Facebook's real-world workloads), and scales out to large clusters almost linearly (averaging 96 increases up to 128 server clusters).",
"Geo-replicated, distributed data stores that support complex online applications, such as social networks, must provide an \"always-on\" experience where operations always complete with low latency. Today's systems often sacrifice strong consistency to achieve these goals, exposing inconsistencies to their clients and necessitating complex application logic. In this paper, we identify and define a consistency model---causal consistency with convergent conflict handling, or causal+---that is the strongest achieved under these constraints. We present the design and implementation of COPS, a key-value store that delivers this consistency model across the wide-area. A key contribution of COPS is its scalability, which can enforce causal dependencies between keys stored across an entire cluster, rather than a single server like previous systems. The central approach in COPS is tracking and explicitly checking whether causal dependencies between keys are satisfied in the local cluster before exposing writes. Further, in COPS-GT, we introduce get transactions in order to obtain a consistent view of multiple keys without locking or blocking. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads."
]
} |
1702.01381 | 2949948647 | This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance. | @cite_11 proposed a CNN architecture for predicting change in velocity and local change in orientation using short stereo video clips. They used a rather shallow CNN architecture together with unsupervised pre-training of the filters in early layers. Partly because of the shortage of training data in their path prediction application, they were forced to discretize the space of velocities and local changes for a softmax-based classification instead of continuous estimates with regression. @cite_5 tried to solve the same problem as in @cite_11 using a monocular approach. In detail, they used an architecture based on two AlexNet-like CNN branches acting as inputs to a stack of fully connected layers coupled with a regression layer. | {
"cite_N": [
"@cite_5",
"@cite_11"
],
"mid": [
"2555820268",
"2400202024"
],
"abstract": [
"Deep Learning based techniques have been adopted with precision to solve a lot of standard computer vision problems, some of which are image classification, object detection and segmentation. Despite the widespread success of these approaches, they have not yet been exploited largely for solving the standard perception related problems encountered in autonomous navigation such as Visual Odometry (VO), Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM). This paper analyzes the problem of Monocular Visual Odometry using a Deep Learning-based framework, instead of the regular 'feature detection and tracking' pipeline approaches. Several experiments were performed to understand the influence of a known unknown environment, a conventional trackable feature and pre-trained activations tuned for object classification on the network's ability to accurately estimate the motion trajectory of the camera (or the vehicle). Based on these observations, we propose a Convolutional Neural Network architecture, best suited for estimating the object's pose under known environment conditions, and displays promising results when it comes to inferring the actual scale using just a single camera in real-time.",
"We present an approach to predicting velocity and direction changes from visual information (”visual odometry”) using an end-to-end, deep learning-based architecture. The architecture uses a single type of computational module and learning rule to extract visual motion, depth, and finally odometry information from the raw data. Representations of depth and motion are extracted by detecting synchrony across time and stereo channels using network layers with multiplicative interactions. The extracted representations are turned into information about changes in velocity and direction using a convolutional neural network. Preliminary results show that the architecture is capable of learning the resulting mapping from video to egomotion."
]
} |
1702.01381 | 2949948647 | This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance. | @cite_17 proposed a CNN architecture for depth and relative camera motion estimation. They utilized multiple tasks in the learning phase to provide additional supervision in order to get more accurate depth maps and camera motion estimates. @cite_12 proposed a CNN architecture for estimating the relative homography between two images by regressing a 4-point homography parameterization with a Euclidean loss. Finally, instead of relative camera pose, @cite_14 proposed a CNN-based method for absolute 6-DoF camera pose estimation. | {
"cite_N": [
"@cite_14",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2439114332",
"2561074213"
],
"abstract": [
"",
"We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.",
"In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training."
]
} |
1702.01381 | 2949948647 | This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance. | Our proposal is related to all previously discussed works, but it is the first one investigating the suitability of Siamese network architectures in the relative camera pose estimation problem. Compared with @cite_11 @cite_5 , our study aims at a more general treatment of the camera pose estimation problem. That is, our approach is applicable for general unrestricted camera motion and for wide baseline view pairs, unlike @cite_11 @cite_5 . Compared with @cite_12 , we are trying to solve relative camera pose, which can be regarded as a more general problem than solving the relative homography between two views. Regarding @cite_14 , we adopt the same learning objective but concentrate on solving a different problem. In particular, unlike prediction of absolute pose @cite_14 , relative pose estimation provides means for relation and representation learning for scenes and objects. Finally, compared with @cite_17 , our study focuses on analyzing the differences in traditional and CNN-based approaches for relative camera pose estimation and does not consider the role of additional supervisory signals. That is, our approach does not require depth maps for training, which is beneficial in practice. Further details of our approach will be given in the following sections. | {
"cite_N": [
"@cite_14",
"@cite_11",
"@cite_5",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2400202024",
"2555820268",
"2439114332",
"2561074213"
],
"abstract": [
"",
"We present an approach to predicting velocity and direction changes from visual information (”visual odometry”) using an end-to-end, deep learning-based architecture. The architecture uses a single type of computational module and learning rule to extract visual motion, depth, and finally odometry information from the raw data. Representations of depth and motion are extracted by detecting synchrony across time and stereo channels using network layers with multiplicative interactions. The extracted representations are turned into information about changes in velocity and direction using a convolutional neural network. Preliminary results show that the architecture is capable of learning the resulting mapping from video to egomotion.",
"Deep Learning based techniques have been adopted with precision to solve a lot of standard computer vision problems, some of which are image classification, object detection and segmentation. Despite the widespread success of these approaches, they have not yet been exploited largely for solving the standard perception related problems encountered in autonomous navigation such as Visual Odometry (VO), Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM). This paper analyzes the problem of Monocular Visual Odometry using a Deep Learning-based framework, instead of the regular 'feature detection and tracking' pipeline approaches. Several experiments were performed to understand the influence of a known unknown environment, a conventional trackable feature and pre-trained activations tuned for object classification on the network's ability to accurately estimate the motion trajectory of the camera (or the vehicle). Based on these observations, we propose a Convolutional Neural Network architecture, best suited for estimating the object's pose under known environment conditions, and displays promising results when it comes to inferring the actual scale using just a single camera in real-time.",
"We present a deep convolutional neural network for estimating the relative homography between a pair of images. Our feed-forward network has 10 layers, takes two stacked grayscale images as input, and produces an 8 degree of freedom homography which can be used to map the pixels from the first image to the second. We present two convolutional neural network architectures for HomographyNet: a regression network which directly estimates the real-valued homography parameters, and a classification network which produces a distribution over quantized homographies. We use a 4-point homography parameterization which maps the four corners from one image into the second image. Our networks are trained in an end-to-end fashion using warped MS-COCO images. Our approach works without the need for separate local feature detection and transformation estimation stages. Our deep models are compared to a traditional homography estimator based on ORB features and we highlight the scenarios where HomographyNet outperforms the traditional technique. We also describe a variety of applications powered by deep homography estimation, thus showcasing the flexibility of a deep learning approach.",
"In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training."
]
} |
1702.01229 | 2950712739 | The heterogeneity-gap between different modalities brings a significant challenge to multimedia information retrieval. Some studies formalize the cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal embedding space to measure the cross-modality similarity. However, previous methods often establish the shared embedding space based on linear mapping functions which might not be sophisticated enough to reveal more complicated inter-modal correspondences. Additionally, current studies assume that the rankings are of equal importance, and thus all rankings are used simultaneously, or a small number of rankings are selected randomly to train the embedding space at each iteration. Such strategies, however, always suffer from outliers as well as reduced generalization capability due to their lack of insightful understanding of procedure of human cognition. In this paper, we involve the self-paced learning theory with diversity into the cross-modal learning to rank and learn an optimal multi-modal embedding space based on non-linear mapping functions. This strategy enhances the model's robustness to outliers and achieves better generalization via training the model gradually from easy rankings by diverse queries to more complex ones. An efficient alternative algorithm is exploited to solve the proposed challenging problem with fast convergence in practice. Extensive experimental results on several benchmark datasets indicate that the proposed method achieves significant improvements over the state-of-the-arts in this literature. | Grangier et al. pioneered formalizing the cross-modal retrieval tasks as a pair-wise ranking problem, maximizing the final retrieval performance with a Passive-Aggressive algorithm, namely the Passive-Aggressive Model for Image Retrieval (PAMIR). 
However, since this method verifies the pairwise ranking criterion with a mapping from the image query space to the document space, its performance may deteriorate on skewed multi-modal data. Consequently, some efforts have been devoted to formalizing cross-media retrieval as a list-wise ranking loss optimization problem. For example, Xu et al. propose to optimize the list-wise ranking loss with a low-rank embedding; Wu et al. @cite_9 learn the latent joint representation of multi-modal data through a conditional random field. Inspired by dictionary learning together with sparse coding techniques, multi-modal dictionary learning has also been studied by associating each modality with a dictionary @cite_52 @cite_18 . Additionally, hashing techniques are employed to solve the problem of CMLR due to their efficiency on large-scale datasets @cite_1 @cite_43 @cite_56 @cite_66 . | {
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_1",
"@cite_52",
"@cite_56",
"@cite_43",
"@cite_66"
],
"mid": [
"2156872152",
"2090575037",
"",
"2018832332",
"",
"1965385957",
"2388114291"
],
"abstract": [
"Recent approaches to multi-view learning have shown that factorizing the information into parts that are shared across all views and parts that are private to each view could effectively account for the dependencies and independencies between the different input modalities. Unfortunately, these approaches involve minimizing non-convex objective functions. In this paper, we propose an approach to learning such factorized representations inspired by sparse coding techniques. In particular, we show that structured sparsity allows us to address the multi-view learning problem by alternately solving two convex optimization problems. Furthermore, the resulting factorized latent spaces generalize over existing approaches in that they allow having latent dimensions shared between any subset of the views instead of between all the views only. We show that our approach outperforms state-of-the-art methods on the task of human pose estimation.",
"Cross-modal ranking is a research topic that is imperative to many applications involving multimodal data. Discovering a joint representation for multimodal data and learning a ranking function are essential in order to boost the cross-media retrieval (i.e., image-query-text or text-query-image). In this paper, we propose an approach to discover the latent joint representation of pairs of multimodal data (e.g., pairs of an image query and a text document) via a conditional random field and structural learning in a listwise ranking manner. We call this approach cross-modal learning to rank via latent joint representation (CML @math ). In CML @math , the correlations between multimodal data are captured in terms of their sharing hidden variables (e.g., topics), and a hidden-topic-driven discriminative ranking function is learned in a listwise ranking manner. The experiments show that the proposed approach achieves a good performance in cross-media retrieval and meanwhile has the capability to learn the discriminative representation of multimodal data.",
"",
"Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrate at each instant perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can be also extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows to effectively localize the sound source on the video in presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.",
"",
"Hashing techniques have been extensively investigated to boost similarity search for large-scale high-dimensional data. Most of the existing approaches formulate the their objective as a pair-wise similarity-preserving problem. In this paper, we consider the hashing problem from the perspective of optimizing a list-wise learning to rank problem and propose an approach called List-Wise supervised Hashing (LWH). In LWH, the hash functions are optimized by employing structural SVM in order to explicitly minimize the ranking loss of the whole list-wise permutations instead of merely the point-wise or pair-wise supervision. We evaluate the performance of LWH on two real-world data sets. Experimental results demonstrate that our method obtains a significant improvement over the state-of-the-art hashing approaches due to both structural large margin and list-wise ranking pursuing in a supervised manner.",
"Due to the storage and retrieval efficiency, hashing has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which enables efficient retrieval of images in response to text queries or vice versa, has received increasing attention recently. Most existing work on cross-modal hashing does not capture the spatial dependency of images and temporal dynamics of text sentences for learning powerful feature representations and cross-modal embeddings that mitigate the heterogeneity of different modalities. This paper presents a new Deep Visual-Semantic Hashing (DVSH) model that generates compact hash codes of images and sentences in an end-to-end deep learning architecture, which capture the intrinsic cross-modal correspondences between visual data and natural language. DVSH is a hybrid deep architecture that constitutes a visual-semantic fusion network for learning joint embedding space of images and text sentences, and two modality-specific hashing networks for learning hash functions to generate compact binary codes. Our architecture effectively unifies joint multimodal embedding and cross-modal hashing, which is based on a novel combination of Convolutional Neural Networks over images, Recurrent Neural Networks over sentences, and a structured max-margin objective that integrates all things together to enable learning of similarity-preserving and high-quality hash codes. Extensive empirical evidence shows that our DVSH approach yields state of the art results in cross-modal retrieval experiments on image-sentences datasets, i.e. standard IAPR TC-12 and large-scale Microsoft COCO."
]
} |
1702.01229 | 2950712739 | The heterogeneity-gap between different modalities brings a significant challenge to multimedia information retrieval. Some studies formalize the cross-modal retrieval tasks as a ranking problem and learn a shared multi-modal embedding space to measure the cross-modality similarity. However, previous methods often establish the shared embedding space based on linear mapping functions which might not be sophisticated enough to reveal more complicated inter-modal correspondences. Additionally, current studies assume that the rankings are of equal importance, and thus all rankings are used simultaneously, or a small number of rankings are selected randomly to train the embedding space at each iteration. Such strategies, however, always suffer from outliers as well as reduced generalization capability due to their lack of insightful understanding of procedure of human cognition. In this paper, we involve the self-paced learning theory with diversity into the cross-modal learning to rank and learn an optimal multi-modal embedding space based on non-linear mapping functions. This strategy enhances the model's robustness to outliers and achieves better generalization via training the model gradually from easy rankings by diverse queries to more complex ones. An efficient alternative algorithm is exploited to solve the proposed challenging problem with fast convergence in practice. Extensive experimental results on several benchmark datasets indicate that the proposed method achieves significant improvements over the state-of-the-arts in this literature. | Note that the approaches mentioned above commonly use linear mapping functions to translate multi-modal data into the shared space for simplicity. However, a linear mapping function might not be sophisticated enough to reveal the explicit correspondences between different modalities. 
For this reason, Feng et al. @cite_63 leverage a correspondence autoencoder with deep architectures to learn mid-level representations of multi-modal data; Jiang et al. @cite_58 assume that a deep compositional cross-modal semantic representation is more attractive for CMLR and optimize the pairwise ranking using non-linear mappings. These techniques have shown their effectiveness in learning a more sophisticated embedding space from large-scale training collections. However, these methods usually incur an expensive computational cost due to their large number of parameters. Additionally, the ranking performance is limited when there is not enough training data available, as in some real-world applications. | {
"cite_N": [
"@cite_58",
"@cite_63"
],
"mid": [
"2053946370",
"1964073652"
],
"abstract": [
"Cross-modal retrieval is a very hot research topic that is imperative to many applications involving multi-modal data. Discovering an appropriate representation for multi-modal data and learning a ranking function are essential to boost the cross-media retrieval. Motivated by the assumption that a compositional cross-modal semantic representation (pairs of images and text) is more attractive for cross-modal ranking, this paper exploits the existing image-text databases to optimize a ranking function for cross-modal retrieval, called deep compositional cross-modal learning to rank (C2MLR). In this paper, C2MLR considers learning a multi-modal embedding from the perspective of optimizing a pairwise ranking problem while enhancing both local alignment and global alignment. In particular, the local alignment (i.e., the alignment of visual objects and textual words) and the global alignment (i.e., the image-level and sentence-level alignment) are collaboratively utilized to learn the multi-modal embedding common space in a max-margin learning to rank manner. The experiments demonstrate the superiority of our proposed C2MLR due to its nature of multi-modal compositional embedding.",
"The problem of cross-modal retrieval, e.g., using a text query to search for images and vice-versa, is considered in this paper. A novel model involving correspondence autoencoder (Corr-AE) is proposed here for solving this problem. The model is constructed by correlating hidden representations of two uni-modal autoencoders. A novel optimal objective, which minimizes a linear combination of representation learning errors for each modality and correlation learning error between hidden representations of two modalities, is used to train the model as a whole. Minimization of correlation learning error forces the model to learn hidden representations with only common information in different modalities, while minimization of representation learning error makes hidden representations are good enough to reconstruct input of each modality. A parameter @math is used to balance the representation learning error and the correlation learning error. Based on two different multi-modal autoencoders, Corr-AE is extended to other two correspondence models, here we called Corr-Cross-AE and Corr-Full-AE. The proposed models are evaluated on three publicly available data sets from real scenes. We demonstrate that the three correspondence autoencoders perform significantly better than three canonical correlation analysis based models and two popular multi-modal deep models on cross-modal retrieval tasks."
]
} |
1702.01135 | 2950147618 | Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods. | The authors of @cite_2 propose a technique for finding local adversarial examples in DNNs with ReLUs. Given an input point @math , they encode the problem as a linear program and invoke a black-box LP solver. The activation function issue is circumvented by considering a sufficiently small neighborhood of @math , in which all ReLUs are fixed at the active or inactive state, making the problem convex. Thus, it is unclear how to address an @math for which one or more ReLUs are on the boundary between active and inactive states. In contrast, Reluplex can be used on input domains for which ReLUs can have more than one possible state. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2395317528"
],
"abstract": [
"Despite having high accuracy, neural nets have been shown to be susceptible to adversarial examples, where a small perturbation to an input can cause it to become mislabeled. We propose metrics for measuring the robustness of a neural net and devise a novel algorithm for approximating these metrics based on an encoding of robustness as a linear program. We show how our metrics can be used to evaluate the robustness of deep neural nets with experiments on the MNIST and CIFAR-10 datasets. Our algorithm generates more informative estimates of robustness metrics compared to estimates based on existing algorithms. Furthermore, we show how existing approaches to improving robustness \"overfit\" to adversarial examples generated using a specific algorithm. Finally, we show that our techniques can be used to additionally improve neural net robustness both according to the metrics that we propose, but also according to previously proposed metrics."
]
} |
1702.00926 | 2949997773 | We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks. | Conventional gradient-based and intensity comparison-based descriptors, such as SIFT @cite_13 , HOG @cite_36 , DAISY @cite_8 , and BRIEF @cite_26 , have shown limited performance in dense correspondence estimation across semantically similar but different object instances. Over the past few years, besides these handcrafted features, several attempts have been made using deep CNNs to learn discriminative descriptors for local patches from large-scale datasets. Some of these techniques have extracted intermediate activations as the descriptor @cite_6 @cite_46 @cite_24 @cite_25 , which have been shown to be effective for patch-level matching. Other methods have directly learned a similarity measure for comparing patches using a convolutional similarity network @cite_49 @cite_9 @cite_34 @cite_38 . 
Even though CNN-based descriptors encode a discriminative structure with a deep architecture, they have inherent limitations in handling large intra-class variations @cite_35 @cite_39 . Furthermore, they are mostly tailored to estimate sparse correspondences, and cannot in practice provide dense descriptors due to their high computational complexity. Most importantly, current research on semantic correspondence lacks an appropriate benchmark with dense ground-truth correspondences, making supervised learning of CNNs less feasible for this task. | {
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_26",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_6",
"@cite_39",
"@cite_24",
"@cite_49",
"@cite_46",
"@cite_34",
"@cite_13",
"@cite_25"
],
"mid": [
"",
"1869500417",
"2019085623",
"2104853049",
"2161969291",
"",
"",
"2951625587",
"2953360861",
"1929856797",
"2128237624",
"",
"2151103935",
"2950124505"
],
"abstract": [
"",
"Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.",
"Binary descriptors are becoming increasingly popular as a means to compare feature points very fast while requiring comparatively small amounts of memory. The typical approach to creating them is to first compute floating-point ones, using an algorithm such as SIFT, and then to binarize them. In this paper, we show that we can directly compute a binary descriptor, which we call BRIEF, on the basis of simple intensity difference tests. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and SIFT on standard benchmarks and show that it yields comparable recognition accuracy, while running in an almost vanishing fraction of the time required by either.",
"In this paper, we introduce a local image descriptor, DAISY, which is very efficient to compute densely. We also present an EM-based algorithm to compute dense depth and occlusion maps from wide-baseline image pairs using this descriptor. This yields much better results in wide-baseline situations than the pixel and correlation-based algorithms that are commonly used in narrow-baseline stereo. Also, using a descriptor makes our algorithm robust against many photometric and geometric transformations. Our descriptor is inspired from earlier ones such as SIFT and GLOH but can be computed much faster for our purposes. Unlike SURF, which can also be computed efficiently at every pixel, it does not introduce artifacts that degrade the matching performance when used densely. It is important to note that our approach is the first algorithm that attempts to estimate dense depth maps from wide-baseline image pairs, and we show that it is a good one at that with many experiments for depth estimation accuracy, occlusion detection, and comparing it against other descriptors on laser-scanned ground truth scenes. We also tested our approach on a variety of indoor and outdoor scenes with different photometric and geometric transformations and our experiments support our claim to being robust against these.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"",
"",
"We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training.",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Motivated by recent successes on learning feature representations and on learning feature comparison functions, we propose a unified approach to combining both for training a patch matching system. Our system, dubbed Match-Net, consists of a deep convolutional network that extracts features from patches and a network of three fully connected layers that computes a similarity between the extracted features. To ensure experimental repeatability, we train MatchNet on standard datasets and employ an input sampler to augment the training set with synthetic exemplar pairs that reduce overfitting. Once trained, we achieve better computational efficiency during matching by disassembling MatchNet and separately applying the feature computation and similarity networks in two sequential stages. We perform a comprehensive set of experiments on standard datasets to carefully study the contributions of each aspect of MatchNet, with direct comparisons to established methods. Our results confirm that our unified approach improves accuracy over previous state-of-the-art results on patch matching datasets, while reducing the storage requirement for descriptors. We make pre-trained MatchNet publicly available.",
"Latest results indicate that features learned via convolutional neural networks outperform previous descriptors on classification tasks by a large margin. It has been shown that these networks still work well when they are applied to datasets or recognition tasks different from those they were trained on. However, descriptors like SIFT are not only used in recognition but also for many correspondence problems that rely on descriptor matching. In this paper we compare features from various layers of convolutional neural nets to standard SIFT descriptors. We consider a network that was trained on ImageNet and another one that was trained without supervision. Surprisingly, convolutional neural networks clearly outperform SIFT on descriptor matching. This paper has been merged with arXiv:1406.6909",
"",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"Convolutional neural nets (convnets) trained from massive labeled datasets have substantially improved the state-of-the-art in image classification and object detection. However, visual understanding requires establishing correspondence on a finer level than object category. Given their large pooling regions and training from whole-image labels, it is not clear that convnets derive their success from an accurate correspondence model which could be used for precise localization. In this paper, we study the effectiveness of convnet activation features for tasks requiring correspondence. We present evidence that convnet features localize at a much finer scale than their receptive field sizes, that they can be used to perform intraclass alignment as well as conventional hand-engineered features, and that they outperform conventional features in keypoint prediction on objects from PASCAL VOC 2011."
]
} |
1702.00926 | 2949997773 | We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks. | Many techniques for dense semantic correspondence employ handcrafted features such as SIFT @cite_13 or HOG @cite_36 . To improve the matching quality, they focus on optimization. Among these methods are some based on SIFT Flow @cite_31 @cite_22 , which uses hierarchical dual-layer belief propagation (BP). Other instances include the methods with an exemplar-LDA approach @cite_44 , through joint image set alignment @cite_19 , or together with cosegmentation @cite_53 . | {
"cite_N": [
"@cite_22",
"@cite_36",
"@cite_53",
"@cite_44",
"@cite_19",
"@cite_31",
"@cite_13"
],
"mid": [
"2124861766",
"2161969291",
"2464606141",
"209424029",
"1926639317",
"",
"2151103935"
],
"abstract": [
"We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents-ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the \"deformable\" aspect of our model overcomes the strict rigidity of traditional spatial pyramids. Results on Label Me and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and Patch-Match [2]), both in terms of accuracy and run time.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"We propose a new technique to jointly recover cosegmentation and dense per-pixel correspondence in two images. Our method parameterizes the correspondence field using piecewise similarity transformations and recovers a mapping between the estimated common \"foreground\" regions in the two images allowing them to be precisely aligned. Our formulation is based on a hierarchical Markov random field model with segmentation and transformation labels. The hierarchical structure uses nested image regions to constrain inference across multiple scales. Unlike prior hierarchical methods which assume that the structure is given, our proposed iterative technique dynamically recovers the structure along with the labeling. This joint inference is performed in an energy minimization framework using iterated graph cuts. We evaluate our method on a new dataset of 400 image pairs with manually obtained ground truth, where it outperforms state-of-the-art methods designed specifically for either cosegmentation or correspondence estimation.",
"Determining dense semantic correspondences across objects and scenes is a difficult problem that underpins many higher-level computer vision algorithms. Unlike canonical dense correspondence problems which consider images that are spatially or temporally adjacent, semantic correspondence is characterized by images that share similar high-level structures whose exact appearance and geometry may differ. Motivated by object recognition literature and recent work on rapidly estimating linear classifiers, we treat semantic correspondence as a constrained detection problem, where an exemplar LDA classifier is learned for each pixel. LDA classifiers have two distinct benefits: (i) they exhibit higher average precision than similarity metrics typically used in correspondence problems, and (ii) unlike exemplar SVM, can output globally interpretable posterior probabilities without calibration, whilst also being significantly faster to train. We pose the correspondence problem as a graphical model, where the unary potentials are computed via convolution with the set of exemplar classifiers, and the joint potentials enforce smoothly varying correspondence assignment.",
"Given a set of poorly aligned images of the same visual concept without any annotations, we propose an algorithm to jointly bring them into pixel-wise correspondence by estimating a FlowWeb representation of the image set. FlowWeb is a fully-connected correspondence flow graph with each node representing an image, and each edge representing the correspondence flow field between a pair of images, i.e. a vector field indicating how each pixel in one image can find a corresponding pixel in the other image. Correspondence flow is related to optical flow but allows for correspondences between visually dissimilar regions if there is evidence they correspond transitively on the graph. Our algorithm starts by initializing all edges of this complete graph with an off-the-shelf, pairwise flow method. We then iteratively update the graph to force it to be more self-consistent. Once the algorithm converges, dense, globally-consistent correspondences can be read off the graph. Our results suggest that FlowWeb improves alignment accuracy over previous pairwise as well as joint alignment methods.",
"",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
} |
1702.00926 | 2949997773 | We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks. | For the purpose of object recognition, Dosovitskiy @cite_16 trained the network to discriminate between a set of surrogate classes formed by applying various transformations. For object matching, Lin @cite_7 proposed an unsupervised learning approach to learn a compact binary descriptor by leveraging an iterative training scheme. More closely related to our work is the method of Zhou @cite_42 , which exploits cycle-consistency with a 3D CAD model @cite_43 as a supervisory signal to train a deep network for semantic correspondence. However, the need to have a suitable 3D CAD model for each object class limits its applicability. | {
"cite_N": [
"@cite_43",
"@cite_16",
"@cite_42",
"@cite_7"
],
"mid": [
"",
"2148349024",
"2474531669",
"2461086877"
],
"abstract": [
"",
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).",
"Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and realto-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms stateof-the-art pairwise matching methods in correspondencerelated tasks.",
"In this paper, we propose a new unsupervised deep learning approach called DeepBit to learn compact binary descriptor for efficient visual object matching. Unlike most existing binary descriptors which were designed with random projections or linear hash functions, we develop a deep neural network to learn binary descriptors in an unsupervised manner. We enforce three criterions on binary codes which are learned at the top layer of our network: 1) minimal loss quantization, 2) evenly distributed codes and 3) uncorrelated bits. Then, we learn the parameters of the networks with a back-propagation technique. Experimental results on three different visual analysis tasks including image matching, image retrieval, and object recognition clearly demonstrate the effectiveness of the proposed approach."
]
} |
1702.00855 | 2592801061 | In the area of ad-targeting, predicting user responses is essential for many applications such as Real-Time Bidding (RTB). Many of the features available in this domain are sparse categorical features. This presents a challenge especially when the user responses to be predicted are rare, because each feature will only have very few positive examples. Recently, neural embedding techniques such as word2vec which learn distributed representations of words using occurrence statistics in the corpus have been shown to be effective in many Natural Language Processing tasks. In this paper, we use real-world data set to show that a similar technique can be used to learn distributed representations of features from users' web history, and that such representations can be used to improve the accuracy of commonly used models for predicting rare user responses. | Various methods have been employed to address the feature sparsity problem. For example, higher order category information derived from human annotation, or from the data via unsupervised methods such as topic modelling, clustering etc. @cite_2 @cite_10 has been used to improve generalization. Other techniques such as counting features can also help by allowing rare features to contribute jointly @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_2"
],
"mid": [
"2076618162",
"1483176838",
"2288597091"
],
"abstract": [
"Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.",
"Being an emerging paradigm for display advertising, Real-Time Bidding (RTB) drives the focus of the bidding strategy from context to users' interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in computational advertising area have been suffering from lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users' responses from advertisers' perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, they are valuable for reproducible research and understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.",
"Predicting user responses, such as click-through rate and conversion rate, are critical in many web applications including web search, personalised recommendation, and online advertising. Different from continuous raw features that we usually found in the image and audio domains, the input features in web space are always of multi-field and are mostly discrete and categorical while their dependencies are little known. Major user response prediction models have to either limit themselves to linear models or require manually building up high-order combination features. The former loses the ability of exploring feature interactions, while the latter results in a heavy computation in the large feature space. To tackle the issue, we propose two novel models using deep neural networks (DNNs) to automatically learn effective patterns from categorical feature interactions and make predictions of users' ad clicks. To get our DNNs efficiently work, we propose to leverage three feature transformation methods, i.e., factorisation machines (FMs), restricted Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper presents the structure of our models and their efficient training algorithms. The large-scale experiments with real-world data demonstrate that our methods work better than major state-of-the-art models."
]
} |
1702.00855 | 2592801061 | In the area of ad-targeting, predicting user responses is essential for many applications such as Real-Time Bidding (RTB). Many of the features available in this domain are sparse categorical features. This presents a challenge especially when the user responses to be predicted are rare, because each feature will only have very few positive examples. Recently, neural embedding techniques such as word2vec which learn distributed representations of words using occurrence statistics in the corpus have been shown to be effective in many Natural Language Processing tasks. In this paper, we use real-world data set to show that a similar technique can be used to learn distributed representations of features from users' web history, and that such representations can be used to improve the accuracy of commonly used models for predicting rare user responses. | Another category of solutions involves embedding sparse categorical features into a low-dimensional vector space. Various feature transformation methods that yield dense features have been investigated in conjunction with deep neural networks, resulting in improvement over major state-of-the-art models @cite_2 . The authors of @cite_11 also investigated a framework they refer to as implicit look-alike modelling, in which entities like users, web-pages, ads etc. are mapped into a latent vector space using both general web browsing behavior and ad response behaviour data. | {
"cite_N": [
"@cite_11",
"@cite_2"
],
"mid": [
"2235277763",
"2288597091"
],
"abstract": [
"User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"Predicting user responses, such as click-through rate and conversion rate, are critical in many web applications including web search, personalised recommendation, and online advertising. Different from continuous raw features that we usually found in the image and audio domains, the input features in web space are always of multi-field and are mostly discrete and categorical while their dependencies are little known. Major user response prediction models have to either limit themselves to linear models or require manually building up high-order combination features. The former loses the ability of exploring feature interactions, while the latter results in a heavy computation in the large feature space. To tackle the issue, we propose two novel models using deep neural networks (DNNs) to automatically learn effective patterns from categorical feature interactions and make predictions of users' ad clicks. To get our DNNs efficiently work, we propose to leverage three feature transformation methods, i.e., factorisation machines (FMs), restricted Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper presents the structure of our models and their efficient training algorithms. The large-scale experiments with real-world data demonstrate that our methods work better than major state-of-the-art models."
]
} |
1702.00921 | 2594606335 | The digital revolution has brought most of the world on the world wide web. The data available on WWW has increased many folds in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two phase method-1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of random forest for entity resolution increases the performance of the system as this has resulted in more records getting linked to the correct entity. Also, we used trustworthiness of a source as a feature to the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as for the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC outperforms the state-of-the-art. | The approaches to entity resolution can be classified into two classes: learning-based and rule-based approaches. A learning-based model for entity resolution using Markov Logic is proposed in @cite_3 . FEBRL @cite_36 (Freely Extensible Biomedical Record Linkage) uses SVM to learn the appropriate matching combinations.
The similarity measures used for this approach are the same as those in the rule-based approach proposed in @cite_34 . MARLIN @cite_1 (Multiply Adaptive Record Linkage with INduction) uses edit distance and cosine similarity measures along with different classifiers for measuring string similarity. A conceptual semantic framework for entity resolution is proposed in @cite_32 . Zhao, G. et al @cite_10 propose a novel model for linking entities mentioned in texts to semi-structured knowledge bases, by considering relationships between entities which occur together more frequently in text. This technique has been shown to disambiguate entities effectively. Cheng, J. et al @cite_8 have used entity resolution for linking social networking profiles to the organisations the particular users belong to. An analysis of information extraction techniques for linking tweets to entities has been reported in @cite_35 . Detailed descriptions and analyses of various entity resolution and data linkage approaches are discussed in @cite_11 @cite_2 . | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_36",
"@cite_1",
"@cite_32",
"@cite_3",
"@cite_2",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2461105834",
"1582775886",
"2164456230",
"",
"2171472464",
"",
"2073471108",
"2466190630",
""
],
"abstract": [
"",
"Many organizations possess social media accounts on different social networks, but these profiles are not always linked. End applications, users, as well as the organization themselves, can benefit when the profiles are appropriately identified and linked. Most existing works on social network entity linking focus on linking individuals, and do not model features specific for organizational linking. We address this gap not only to link official social media accounts but also to discover and solve the identification and linking of associated affiliate accounts -- such as geographical divisions and brands -- which are important to distinguish. We instantiate our method for classifying profiles on social network services for Twitter and Facebook, which major organizations use. We classify profiles as to whether they belong to an organization or its affiliates. Our best classifier achieves an accuracy of 0.976 on average in both datasets, significantly improving baselines that exploit the features used in state-of-the-art comparable user linkage strategies.",
"Record or data linkage is an important enabling technology in the health sector, as linked data is a cost-effective resource that can help to improve research into health policies, detect adverse drug reactions, reduce costs, and uncover fraud within the health system. Significant advances, mostly originating from data mining and machine learning, have been made in recent years in many areas of record linkage techniques. Most of these new methods are not yet implemented in current record linkage systems, or are hidden within 'black box' commercial software. This makes it difficult for users to learn about new record linkage techniques, as well as to compare existing linkage techniques with new ones. What is required are flexible tools that enable users to experiment with new record linkage techniques at low costs. This paper describes the Febrl (Freely Extensible Biomedical Record Linkage) system, which is available under an open source software licence. It contains many recently developed advanced techniques for data cleaning and standardisation, indexing (blocking), field comparison, and record pair classification, and encapsulates them into a graphical user interface. Febrl can be seen as a training tool suitable for users to learn and experiment with both traditional and new record linkage techniques, as well as for practitioners to conduct linkages with data sets containing up to several hundred thousand records.",
"The problem of identifying approximately duplicate records in databases is an essential step for data cleaning and data integration processes. Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates. In this paper, we present a framework for improving duplicate detection using trainable measures of textual similarity. We propose to employ learnable text distance functions for each database field, and show that such measures are capable of adapting to the specific notion of similarity that is appropriate for the field's domain. We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine (SVM) for training. Experimental results on a range of datasets show that our framework can improve duplicate detection accuracy over traditional techniques.",
"",
"Entity resolution is the problem of determining which records in a database refer to the same entities, and is a crucial and expensive step in the data mining process. Interest in it has grown rapidly in recent years, and many approaches have been proposed. However, they tend to address only isolated aspects of the problem, and are often ad hoc. This paper proposes a well-founded, integrated solution to the entity resolution problem based on Markov logic. Markov logic combines first-order logic and probabilistic graphical models by attaching weights to first-order formulas, and viewing them as templates for features of Markov networks. We show how a number of previous approaches can be formulated and seamlessly combined in Markov logic, and how the resulting learning and inference problems can be solved efficiently. Experiments on two citation databases show the utility of this approach, and evaluate the contribution of the different components.",
"",
"Abstract A mathematical model is developed to provide a theoretical framework for a computer-oriented solution to the problem of recognizing those records in two files which represent identical persons, objects or events (said to be matched). A comparison is to be made between the recorded characteristics and values in two records (one from each file) and a decision made as to whether or not the members of the comparison-pair represent the same person or event, or whether there is insufficient evidence to justify either of these decisions at stipulated levels of error. These three decisions are referred to as link (A 1), a non-link (A 3), and a possible link (A 2). The first two decisions are called positive dispositions. The two types of error are defined as the error of the decision A 1 when the members of the comparison pair are in fact unmatched, and the error of the decision A 3 when the members of the comparison pair are, in fact matched. The probabilities of these errors are defined as and respecti...",
"We propose Feedback-query-expansion and Re-ranking methods which model the semantic relatedness of entities in one document.We demonstrate the effectiveness of our methods by comparing with the baseline systems on three data sets.Our team has scored in the top 3 teams across multiple metrics for the English EDL task in TAC2014. Entity disambiguation is a fundamental task of semantic Web annotation. Entity Linking (EL) is an essential procedure in entity disambiguation, which aims to link a mention appearing in a plain text to a structured or semi-structured knowledge base, such as Wikipedia. Existing research on EL usually annotates the mentions in a text one by one and treats entities independent to each other. However this might not be true in many application scenarios. For example, if two mentions appear in one text, they are likely to have certain intrinsic relationships. In this paper, we first propose a novel query expansion method for candidate generation utilizing the information of co-occurrences of mentions. We further propose a re-ranking model which can be iteratively adjusted based on the prediction in the previous round. Experiments on real-world data demonstrate the effectiveness of our proposed methods for entity disambiguation.",
""
]
} |
1702.00921 | 2594606335 | The digital revolution has brought most of the world on the world wide web. The data available on WWW has increased many folds in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two phase method-1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of random forest for entity resolution increases the performance of the system as this has resulted in more records getting linked to the correct entity. Also, we used trustworthiness of a source as a feature to the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as for the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC outperforms the state-of-the-art. | Xiao, C. et al @cite_4 proposed a model to join multiple records based on the similarity between the corresponding values of a given attribute. The two primary components of this approach are: 1) applying a similarity function over attribute values, and 2) declaring a threshold above which a pair is considered a match.
A rule-based approach proposed in @cite_34 computes similarity using three measures -- Winkler, Tokenset and Trigram. Two similarity thresholds are used, an upper and a lower: a pair scoring above the upper threshold is a match, one below the lower threshold is a non-match, and anything in between is a possible match. Quantitative similarity between attribute values is one of the fundamental techniques for entity resolution, and we have incorporated it, modified to suit our model and purpose, into our proposed method for entity profiling. A novel method for retrieving blog posts relevant to a query about an entity using a facet-based information retrieval model is proposed in @cite_25 . | {
"cite_N": [
"@cite_34",
"@cite_4",
"@cite_25"
],
"mid": [
"2073471108",
"2097184821",
"1999247825"
],
"abstract": [
"Abstract A mathematical model is developed to provide a theoretical framework for a computer-oriented solution to the problem of recognizing those records in two files which represent identical persons, objects or events (said to be matched). A comparison is to be made between the recorded characteristics and values in two records (one from each file) and a decision made as to whether or not the members of the comparison-pair represent the same person or event, or whether there is insufficient evidence to justify either of these decisions at stipulated levels of error. These three decisions are referred to as link (A 1), a non-link (A 3), and a possible link (A 2). The first two decisions are called positive dispositions. The two types of error are defined as the error of the decision A 1 when the members of the comparison pair are in fact unmatched, and the error of the decision A 3 when the members of the comparison pair are, in fact matched. The probabilities of these errors are defined as and respecti...",
"With the increasing amount of data and the need to integrate data from multiple data sources, a challenging issue is to find near duplicate records efficiently. In this paper, we focus on efficient algorithms to find pairs of records such that their similarities are above a given threshold. Several existing algorithms rely on the prefix filtering principle to avoid computing similarity values for all possible pairs of records. We propose new filtering techniques by exploiting the ordering information; they are integrated into the existing methods and drastically reduce the candidate sizes and hence improve the efficiency. Experimental results show that our proposed algorithms can achieve up to 2.6x - 5x speed-up over previous algorithms on several real datasets and provide alternative solutions to the near duplicate Web page detection problem.",
"The paper presents methods of retrieving blog posts containing opinions about an entity expressed in the query. The methods use a lexicon of subjective words and phrases compiled from manually and automatically developed resources. One of the methods uses the Kullback-Leibler divergence to weight subjective words occurring near query terms in documents, another uses proximity between the occurrences of query terms and subjective words in documents, and the third combines both factors. Methods of structuring queries into facets, facet expansion using Wikipedia, and a facet-based retrieval are also investigated in this work. The methods were evaluated using the TREC 2007 and 2008 Blog track topics, and proved to be highly effective."
]
} |
1702.00921 | 2594606335 | The digital revolution has brought most of the world on the world wide web. The data available on WWW has increased many folds in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two phase method-1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of random forest for entity resolution increases the performance of the system as this has resulted in more records getting linked to the correct entity. Also, we used trustworthiness of a source as a feature to the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as for the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC outperforms the state-of-the-art. | Benjelloun, O. et al @cite_20 discuss the pairwise resolution of entities. The novelty of this approach is its use of the match and merge properties. The four properties considered are idempotence, associativity, commutativity and representativity. Verifying that these properties hold enables efficient entity resolution. Bilgic, M.
et al @cite_14 proposed a method to solve the entity resolution problem with Markov logic, which combines first-order logic and probabilistic graphical models. Weights are associated with first-order formulas and used as features for Markov networks. This combination of first-order logic and Markov networks enables effective learning and an efficient solution to the problem. A theoretical framework for knowledge-based entity resolution using first-order logic is proposed in @cite_12 ; its focus is on the analysis of knowledge patterns for optimizing a knowledge model, which is then used for entity resolution. A method for linking words or phrases in unstructured texts to entities using part-of-speech patterns is proposed in @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_12",
"@cite_20"
],
"mid": [
"2114341539",
"2149827033",
"2061162567",
""
],
"abstract": [
"Linking words or phrases in unstructured text to entities in knowledge bases is the problem of entity recognition and disambiguation. In this paper, we focus on the task of entity recognition in Web text to address the challenges of entity correctness, completeness and emergence that existing approaches mainly suffer from. Experimental results show that our approach significantly outperforms the state-of-the-art approaches in terms of precision, F-measure, micro-accuracy and macro-accuracy, while still preserving high recall.",
"Visualizing and analyzing social networks is a challenging problem that has been receiving growing attention. An important first step, before analysis can begin, is ensuring that the data is accurate. A common data quality problem is that the data may inadvertently contain several distinct references to the same underlying entity; the process of reconciling these references is called entity-resolution. D-Dupe is an interactive tool that combines data mining algorithms for entity resolution with a task-specific network visualization. Users cope with complexity of cleaning large networks by focusing on a small subnetwork containing a potential duplicate pair. The subnetwork highlights relationships in the social network, making the common relationships easy to visually identify. D-Dupe users resolve ambiguities either by merging nodes or by marking them distinct. The entity resolution process is iterative: as pairs of nodes are resolved, additional duplicates may be revealed; therefore, resolution decisions are often chained together. We give examples of how users can flexibly apply sequences of actions to produce a high quality entity resolution result. We illustrate and evaluate the benefits of D-Dupe on three bibliographic collections. Two of the datasets had already been cleaned, and therefore should not have contained duplicates; despite this fact, many duplicates were rapidly identified using D-Dupe's unique combination of entity resolution algorithms within a task-specific visual interface",
"Abstract Entity resolution is the process of determining whether a collection of entity representations refer to the same entity in the real world. In this paper we introduce a theoretical framework that supports knowledge-based entity resolution. From a logical point of view, the expressive power of the framework is equivalent to a decidable fragment of first-order logic including conjunction, disjunction and a certain form of negation. Although the framework is expressive for representing knowledge about entity resolution in a collective way, the questions that arise are: (1) how efficiently can knowledge patterns be processed; (2) how effectively can redundancy among knowledge patterns be eliminated. In answering these questions, we first study the evaluation problem for knowledge patterns. Our results show that this problem is NP-complete w.r.t. combined complexity but in ptime w.r.t. data complexity. This nice property leads us to investigate the containment problem for knowledge patterns, which turns out to be NP-complete. We further develop a notion of optimality for knowledge patterns and a mechanism of optimizing a knowledge model (i.e. a finite set of knowledge patterns). We prove that the optimality decision problem for knowledge patterns is still NP-complete.",
""
]
} |
1702.00921 | 2594606335 | The digital revolution has brought most of the world on the world wide web. The data available on WWW has increased many folds in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two phase method-1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of random forest for entity resolution increases the performance of the system as this has resulted in more records getting linked to the correct entity. Also, we used trustworthiness of a source as a feature to the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as for the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC outperforms the state-of-the-art. | Benny, S. et al @cite_18 proposed a Hadoop framework for entity resolution using the Map and Reduce algorithm on big data. A technique for carrying out entity resolution on heterogeneous distributed probabilistic data is proposed in @cite_31 . They use the expectation maximization algorithm for integrating the data. 
They report significant performance improvements over existing methods for entity resolution in a distributed environment. Ayat, N. et al @cite_37 proposed a method for entity resolution on probabilistic data, with algorithms using context-free and context-sensitive similarity functions. Hu, W. et al @cite_9 proposed a scalable technique to address the problem of entity linkage on the Semantic Web. They use a bootstrapping method that takes into consideration both semantically co-referent entities and the similarity between the property values of entities. | {
"cite_N": [
"@cite_9",
"@cite_18",
"@cite_31",
"@cite_37"
],
"mid": [
"1666220597",
"2461442606",
"917424677",
""
],
"abstract": [
"In the Big Data era, ever-increasing RDF data have reached a scale in billions of entities and brought challenges to the problem of entity linkage on the Semantic Web. Although millions of entities, typically denoted by URIs, have been explicitly linked with owl:sameAs, potentially coreferent ones are still numerous. Existing automatic approaches address this problem mainly from two perspectives: one is via equivalence reasoning, which infers semantically coreferent entities but probably misses many potentials; the other is by similarity computation between property-values of entities, which is not always accurate and do not scale well. In this paper, we introduce a bootstrapping approach by leveraging these two kinds of methods for entity linkage. Given an entity, our approach first infers a set of semantically coreferent entities. Then, it iteratively expands this entity set using discriminative property-value pairs. The discriminability is learned with a statistical measure, which does not only identify important property-values in the entity set, but also takes matched properties into account. Frequent property combinations are also mined to improve linkage accuracy. We develop an online entity linkage search engine, and show its superior precision and recall by comparing with representative approaches on a large-scale and two benchmark datasets.",
"Abstract Large amount of data is being generated from sensors, satellites, social media etc. This big data (velocity, variety, veracity, value and veracity) can be processed so as to make timely decisions by the decision makers. This paper presents results of the proposed Hadoop framework that performs entity resolution in Map and reduce phase. MapReduce phase matches two real world objects and generates rules. The similarity score of these rules are used for matching stream data during testing phase. Similarity is calculated using 13 different semantic measures such as token-based similarity, edit-based similarity, hybrid similarity, phonetic similarity as well as domain dependent Natural language processing measures. Semantic measures are implemented using Hive programming. The proposed system is tested using e-catalogues of Amazon and Google.",
"First, we expose the instance of ER with EM algorithm to integrate different models.We apply ER for HDPD to achieve major performance in terms of response time.Matching instance is compared with BFA, Koosh-TND and Packages-TND algorithms.After the integration, we work on real data and compare with FC, SCC and FD algorithms.Our method improves in terms of response and communication time with existing methods. Distributed computing is linked and equated to the industrial revolution. Its transformational nature is, however, associated with significant instances in the form of internet of thing operations. Entity resolution (ER) is a problem of matching and resolving records that represent the same real world entity. This is a long-standing challenge in distributed databases and information retrieval as a statistic. In a centralized approach, the problem of ER has not been scaled well as large amount of data need to be sent to a central node. In this paper, we present an algorithm which deals with heterogeneous distributed probabilistic data (HDPD) and also reduces processing time in a distributed environment. We propose two different approaches. First, we explore this instance with a matching (identification) problem to integrate different data models with expectation-maximization (EM) algorithm. Second, we apply ER methodology for HDPD to achieve major performance in terms of response time to produce the outcome. We validate HDPD through experiments over a 100-node cluster that records significant performance improvements over naive approaches. This paper is expected to provide insights in to database organizations and new technological development for the growth of distributed environment.",
""
]
} |
1702.00921 | 2594606335 | The digital revolution has brought most of the world on the world wide web. The data available on WWW has increased many folds in the past decade. Social networks, online clubs and organisations have come into existence. Information is extracted from these venues about a real world entity like a person, organisation, event, etc. However, this information may change over time, and there is a need for the sources to be up-to-date. Therefore, it is desirable to have a model to extract relevant data items from different sources and merge them to build a complete profile of an entity (entity profiling). Further, this model should be able to handle incorrect or obsolete data items. In this paper, we propose a novel method for completing a profile. We have developed a two phase method-1) The first phase (resolution phase) links records to the queries. We have proposed and observed that the use of random forest for entity resolution increases the performance of the system as this has resulted in more records getting linked to the correct entity. Also, we used trustworthiness of a source as a feature to the random forest. 2) The second phase selects the appropriate values from records to complete a profile based on our proposed selection criteria. We have used various metrics for measuring the performance of the resolution phase as well as for the overall ReLiC framework. It is established through our results that the use of biased sources has significantly improved the performance of the ReLiC framework. Experimental results show that our proposed system, ReLiC outperforms the state-of-the-art. | Wang, J. et al @cite_0 have proposed a hybrid technique for entity resolution using both a human and a system. A system does the initial processing of data, and the users have to choose the correct pair from the most likely pairs identified by the system. Cheng, G. et al @cite_15 have worked on semi-automatic data integration. 
They proposed a technique for selecting the features to be shown to human users during interactive entity resolution. The features are selected so that they convey the largest amount of diverse and characteristic information about an entity. However, involving humans may slow down the entity profiling process, and the performance of the system depends on the knowledge of the people involved. The FEVER framework @cite_16 is used to analyze different entity resolution approaches. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_16"
],
"mid": [
"",
"1669362197",
"2139929608"
],
"abstract": [
"",
"Abstract Entity resolution is a fundamental task in data integration. Recent studies of this problem, including active learning, crowdsourcing, and pay-as-you-go approaches, have started to involve human users in the loop to carry out interactive entity resolution tasks, namely to invite human users to judge whether two entity descriptions refer to the same real-world entity. This process of judgment requires tool support, particularly when entity descriptions contain a large number of features (i.e. property–value pairs). To facilitate judgment, in this article, we propose to select, from entity descriptions, a subset of critical features as a summary to be shown and judged by human users. Features preferred to be selected are those that reflect the most commonalities shared by and the most conflicts between the two entities, and that carry the largest amount of characteristic and diverse information about them. Selected features are then grouped and ordered to improve readability and further speed up judgment. Experimental results demonstrate that summaries generated by our method help users judge more efficiently (3.57–3.78 times faster) than entire entity descriptions, without significantly hurting the accuracy of judgment. The accuracy achieved by our method is also higher than those achieved by existing summarization methods.",
"We present FEVER, a new evaluation platform for entity resolution approaches. The modular structure of the FEVER framework supports the incorporation or reconstruction of many previously proposed approaches for entity resolution. A distinctive feature of FEVER is that it not only evaluates traditional measures such as precision and recall but also the effort for configuring (e.g., parameter tuning, training) a good entity resolution approach. FEVER thus strives for a fair comparative evaluation of different approaches by considering both the effectiveness and configuration effort."
]
} |
1702.01198 | 2594256432 | The recent increase in number of wireless devices has been driven by the growing markets of smart homes and the Internet of Things (IoT). As a result, expanding and or efficient utilization of the radio frequency (RF) spectrum is critical to accommodate such an increase in wireless bandwidth. Alternatively, recent free-space optical (FSO) communication technologies have demonstrated the feasibility of building WiFO, a high capacity indoor wireless network using the femtocell architecture. Since FSO transmission does not interfere with the RF signals, such a system can be integrated with the current WiFi systems to provide orders of magnitude improvement in bandwidth. A novel component of WiFO is its ability to jointly encode bits from different flows for optimal transmissions. In this paper, we introduce the WiFO architecture and a novel cooperative transmission framework using location assisted coding (LAC) technique to increase the overall wireless capacity. Specifically, achievable rate regions for WiFO using LAC will be characterized. Both numerical and theoretical analyses are given to validate the proposed coding schemes. | From the coding's perspective, the proposed LAC technique in WiFO is similar to MIMO systems that have been used widely in communication systems to improve the capacity @cite_11 @cite_2 @cite_19 . Both LAC and MIMO techniques use several transmitters to transmit signals to achieve higher capacity. However, using multiple transmitters at the same time can also cause interference among transmissions to different receivers if they are in the same transmission range. As such, a MIMO receiver typically receives signals from multiple transmit antennas and these signals are intended for that particular MIMO receiver at any time slot. On the other hand, in WiFO, multiple transmitters transmit the joint messages simultaneously to multiple WiFO receivers, rather than a single receiver. 
By taking advantage of the interference patterns known from the receiver location information, the LAC technique helps the WiFO receivers decode each message independently in the presence of interference. In a certain sense, this work is similar to that of @cite_25 . We note that a special case of the LAC technique was first introduced in @cite_8 . In this paper, we extend and improve the LAC technique to obtain higher rates. | {
"cite_N": [
"@cite_8",
"@cite_19",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2088527549",
"2166579323",
"2151795416",
"2163728264",
""
],
"abstract": [
"As the number of wireless devices grows, the increasing demand for the shared radio frequency (RF) spectrum becomes a critical problem. Unlike wired communications in which, theoretically, more fibers can be used to accommodate the increasing bandwidth demand, wireless spectrum cannot be arbitrarily increased due to the fundamental limitations imposed by the physical laws. On the other hand, recent advances in free space optical (FSO) technologies promise a complementary approach to increase wireless capacity. However, high-speed FSO technologies are currently confined to short distance transmissions, resulting in limited mobility. In this paper, we briefly describe WiFO, a hybrid WiFi-FSO network for Gbps wireless local area network (WLAN) femtocells that can provide up to one Gbps per user while maintaining seamless mobility. While typical RF femtocells are non-overlapped to minimize inter-cell interference, there are advantages of using overlapped femtocells to increase mobility and throughput when the number of users is small. That said, the primary contribution of this paper will be a novel location assisted coding (LAC) technique used in the WiFO network that aims to increase throughput and reduce interference for multiple users in a dense array of femtocells. Both theoretical analysis and numerical experiments show orders of magnitude increase in throughput using LAC over basic codes.",
"The sum capacity of a Gaussian broadcast MIMO channel can be achieved with dirty paper coding (DPC). However, algorithms that approach the DPC sum capacity do not appear viable in the forseeable future, which motivates lower complexity interference suppression techniques. Block diagonalization (BD) is a linear preceding technique for downlink multiuser MIMO systems. With perfect channel knowledge at the transmitter, BD can eliminate other users' interference at each receiver. In this paper, we study the sum capacity of BD with and without receive antenna selection. We analytically compare BD without receive antenna selection to DPC for a set of given channels. It is shown that (1) if the user channels are orthogonal to each other, then BD achieves the same sum capacity as DPC; (2) if the user channels lie in the same subspace, then the gain of DPC over BD can be upper bounded by the minimum of the number of transmit and receive antennas. These observations also hold for BD with receive antenna selection. Further, we study the ergodic sum capacity of BD with and without receive antenna selection in a Rayleigh fading channel. Simulations show that BD can achieve a significant part of the total throughput of DPC. An upper bound on the ergodic sum capacity gain of DPC over BD is proposed for easy estimation of the gap between the sum capacity of DPC and BD without receive antenna selection.",
"A Gaussian broadcast channel (GBC) with r single-antenna receivers and t antennas at the transmitter is considered. Both transmitter and receivers have perfect knowledge of the channel. Despite its apparent simplicity, this model is, in general, a nondegraded broadcast channel (BC), for which the capacity region is not fully known. For the two-user case, we find a special case of Marton's (1979) region that achieves optimal sum-rate (throughput). In brief, the transmitter decomposes the channel into two interference channels, where interference is caused by the other user signal. Users are successively encoded, such that encoding of the second user is based on the noncausal knowledge of the interference caused by the first user. The crosstalk parameters are optimized such that the overall throughput is maximum and, surprisingly, this is shown to be optimal over all possible strategies (not only with respect to Marton's achievable region). For the case of r>2 users, we find a somewhat simpler choice of Marton's region based on ordering and successively encoding the users. For each user i in the given ordering, the interference caused by users j>i is eliminated by zero forcing at the transmitter, while interference caused by users j<i is taken into account by coding for noncausally known interference. Under certain mild conditions, this scheme is found to be throughput-wise asymptotically optimal for both high and low signal-to-noise ratio (SNR). We conclude by providing some numerical results for the ergodic throughput of the simplified zero-forcing scheme in independent Rayleigh fading.",
"This paper proposes COPE, a new architecture for wireless mesh networks. In addition to forwarding packets, routers mix (i.e., code) packets from different sources to increase the information content of each transmission. We show that intelligently mixing packets increases network throughput. Our design is rooted in the theory of network coding. Prior work on network coding is mainly theoretical and focuses on multicast traffic. This paper aims to bridge theory with practice; it addresses the common case of unicast traffic, dynamic and potentially bursty flows, and practical issues facing the integration of network coding in the current network stack. We evaluate our design on a 20-node wireless network, and discuss the results of the first testbed deployment of wireless network coding. The results show that COPE largely increases network throughput. The gains vary from a few percent to several folds depending on the traffic pattern, congestion level, and transport protocol.",
""
]
} |
1702.01172 | 2256012482 | Working with Web archives raises a number of issues caused by their temporal characteristics. Depending on the age of the content, additional knowledge might be needed to find and understand older texts. Especially facts about entities are subject to change. Most severe in terms of information retrieval are name changes. In order to find entities that have changed their name over time, search engines need to be aware of this evolution. We tackle this problem by analyzing Wikipedia in terms of entity evolutions mentioned in articles regardless the structural elements. We gathered statistics and automatically extracted minimum excerpts covering name changes by incorporating lists dedicated to that subject. In future work, these excerpts are going to be used to discover patterns and detect changes in other sources. In this work we investigate whether or not Wikipedia is a suitable source for extracting the required knowledge. | Most related to the long-term aim of this work, a knowledge base dedicated to entity evolution, is YAGO2 @cite_8 . It is an endeavor to extend the original YAGO knowledge base with temporal as well as spatial information. Most relevant to us is the temporal data, which YAGO2 incorporates to enhance entities as well as facts. In contrast to our aim, they do not gather this data by extracting new knowledge. Instead, they use temporal information which has already been extracted for YAGO and connect it to the corresponding entity or fact. For instance, date of birth and date of death are considered as a person's time of existence. Therefore, dates of name changes are still missing as they are not present in YAGO either. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2122865749"
],
"abstract": [
"We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 447 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95 of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model to time and space."
]
} |
1702.01172 | 2256012482 | Working with Web archives raises a number of issues caused by their temporal characteristics. Depending on the age of the content, additional knowledge might be needed to find and understand older texts. Especially facts about entities are subject to change. Most severe in terms of information retrieval are name changes. In order to find entities that have changed their name over time, search engines need to be aware of this evolution. We tackle this problem by analyzing Wikipedia in terms of entity evolutions mentioned in articles regardless the structural elements. We gathered statistics and automatically extracted minimum excerpts covering name changes by incorporating lists dedicated to that subject. In future work, these excerpts are going to be used to discover patterns and detect changes in other sources. In this work we investigate whether or not Wikipedia is a suitable source for extracting the required knowledge. | A prominent research topic in the context of Wikipedia is prediction of quality flaws. It denotes the task of automatically detecting flaws according to Wikipedia's guidelines, something not to neglect when working with Wikipedia. have done an impressive work in this field and give a nice overview of the first challenge dedicated to this topic @cite_5 . Another related topic is the research on Wikipedia's revision history and talk pages. This could also serve as an additional resource for name evolutions in the future. are working on automatically annotating discussions on talk pages and eventually link these to the corresponding content on Wikipedia articles. Additionally, they provide a toolkit for accessing Wikipedia's history @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_5"
],
"mid": [
"2143097423",
"2293853475"
],
"abstract": [
"We present an open-source toolkit which allows (i) to reconstruct past states of Wikipedia, and (ii) to efficiently access the edit history of Wikipedia articles. Reconstructing past states of Wikipedia is a prerequisite for reproducing previous experimental work based on Wikipedia. Beyond that, the edit history of Wikipedia articles has been shown to be a valuable knowledge source for NLP, but access is severely impeded by the lack of efficient tools for managing the huge amount of provided data. By using a dedicated storage format, our toolkit massively decreases the data volume to less than 2 of the original size, and at the same time provides an easy-to-use interface to access the revision data. The language-independent design allows to process any language represented in Wikipedia. We expect this work to consolidate NLP research using Wikipedia in general, and to foster research making use of the knowledge encoded in Wikipedia's edit history.",
"The paper overviews the task \"Quality Flaw Prediction in Wikipedia\" of the PAN'12 competition. An evaluation corpus is introduced which comprises 1 592 226 English Wikipedia articles, of which 208 228 have been tagged to con- tain one of ten important quality flaws. Moreover, the performance of three qual- ity flaw classifiers is evaluated."
]
} |
1702.01287 | 2953358523 | We introduce a Multi-modal Neural Machine Translation model in which a doubly-attentive decoder naturally incorporates spatial visual features obtained using pre-trained convolutional neural networks, bridging the gap between image description and translation. Our decoder learns to attend to source-language words and parts of an image independently by means of two separate attention mechanisms as it generates words in the target language. We find that our model can efficiently exploit not just back-translated in-domain multi-modal data but also large general-domain text-only MT corpora. We also report state-of-the-art results on the Multi30k data set. | Our work differs from previous work in that, first, we propose attention-based MNMT models. This is an important difference since the use of attention in NMT has become standard and is the current state-of-the-art @cite_13 @cite_35 @cite_11 @cite_6 . Second, we propose a where we effectively fuse two mono-modal attention mechanisms into one multi-modal decoder, training the entire model jointly and end-to-end. In addition, we are interested in how to merge textual and visual representations into multi-modal representations when generating words in the target language, which differs substantially from text-only translation tasks even when these translate from many source languages into many target languages @cite_16 @cite_11 . To the best of our knowledge, we are the first to integrate multi-modal inputs in NMT via independent attention mechanisms. | {
"cite_N": [
"@cite_35",
"@cite_6",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2949335953",
"1816313093",
"2251743902",
"",
"2229833550"
],
"abstract": [
"An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches over the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result in the WMT'15 English to German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker.",
"Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations). We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.1 and 1.3 BLEU, respectively.",
"In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available.",
"",
"We propose multi-way, multilingual neural machine translation. The proposed approach enables a single neural translation model to translate between multiple languages, with a number of parameters that grows only linearly with the number of languages. This is made possible by having a single attention mechanism that is shared across all language pairs. We train the proposed multi-way, multilingual model on ten language pairs from WMT'15 simultaneously and observe clear performance improvements over models trained on only one language pair. In particular, we observe that the proposed model significantly improves the translation quality of low-resource language pairs."
]
} |