Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict)
1609.05365
2951219499
Historically, true context-sensitive parsing has seldom been applied to programming languages, due to its inherent complexity. However, many mainstream programming and markup languages (C, Haskell, Python, XML, and more) possess context-sensitive features. These features are traditionally handled with ad-hoc code (e.g., custom lexers), outside of the scope of parsing theory. Current grammar formalisms struggle to express context-sensitive features. Most solutions lack context transparency: they make grammars hard to write, maintain and compose by hardwiring context through the entire grammar. Instead, we approach context-sensitive parsing through the idea that parsers may recall previously matched input (or data derived therefrom) in order to make parsing decisions. We make use of mutable parse state to enable this form of recall. We introduce principled stateful parsing as a new transactional discipline that makes state changes transparent to parsing mechanisms such as backtracking and memoization. To enforce this discipline, users specify parsers using formally specified primitive state manipulation operations. Our solution is available as a parsing library named Autumn. We illustrate our solution by implementing some practical context-sensitive grammar features such as significant whitespace handling and namespace classification.
@cite_15 proposed data-dependent grammars, a formalism that permits context sensitivity by allowing rules to be parameterized by semantic values. A parameterized nonterminal appearing on the right-hand side of a rule acts as a form of function call that also returns a semantic value. These semantic values are computed by expressions written in a general-purpose programming language. There are also constraints, which can make grammar branches succeed or fail depending on a semantic value.
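The idea can be sketched in plain Python. The names below (`number`, `payload`, `packet`) are hypothetical illustrations, not Yakker's API: a nonterminal returns a semantic value, another nonterminal is parameterized by that value, and a constraint can fail the branch. Length-prefixed data of this kind is one of the motivating examples for dependent grammars.

```python
# Sketch of a data-dependent rule (hypothetical API, not Yakker's):
# a nonterminal parameterized by a semantic value, plus a constraint.

def number(text, pos):
    """Nonterminal returning a semantic value: the parsed integer."""
    start = pos
    while pos < len(text) and text[pos].isdigit():
        pos += 1
    return (int(text[start:pos]), pos) if pos > start else None

def payload(n):
    """Parameterized nonterminal: matches exactly n following characters."""
    def rule(text, pos):
        return (text[pos:pos + n], pos + n) if pos + n <= len(text) else None
    return rule

def packet(text, pos=0):
    """packet <- n:number [n <= 8] payload(n)  -- constraint in brackets."""
    m = number(text, pos)
    if m is None:
        return None
    n, pos = m
    if not n <= 8:        # constraint: fail this branch on the semantic value
        return None
    return payload(n)(text, pos)

print(packet("5hello"))       # → ('hello', 6)
print(packet("9waytoolong"))  # constraint fails → None
```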
{ "cite_N": [ "@cite_15" ], "mid": [ "1964494435" ], "abstract": [ "We present the design and theory of a new parsing engine, YAKKER, capable of satisfying the many needs of modern programmers and modern data processing applications. In particular, our new parsing engine handles (1) full scannerless context-free grammars with (2) regular expressions as right-hand sides for defining nonterminals. YAKKER also includes (3) facilities for binding variables to intermediate parse results and (4) using such bindings within arbitrary constraints to control parsing. These facilities allow the kind of data-dependent parsing commonly needed in systems applications, particularly those that operate over binary data. In addition, (5) nonterminals may be parameterized by arbitrary values, which gives the system good modularity and abstraction properties in the presence of data-dependent parsing. Finally, (6) legacy parsing libraries,such as sophisticated libraries for dates and times, may be directly incorporated into parser specifications. We illustrate the importance and utility of this rich collection of features by presenting its use on examples ranging from difficult programming language grammars to web server logs to binary data specification. We also show that our grammars have important compositionality properties and explain why such properties areimportant in modern applications such as automatic grammar induction. In terms of technical contributions, we provide a traditional high-level semantics for our new grammar formalization and show how to compile grammars into non deterministic automata. These automata are stack-based, somewhat like conventional push-down automata,but are also equipped with environments to track data-dependent parsing state. We prove the correctness of our translation of data-dependent grammars into these new automata and then show how to implement the automata efficiently using a variation of Earley's parsing algorithm." ] }
Data-dependent grammars can be compiled to a format accepted by a target parsing tool, which must support fairly general semantic actions. In subsequent work @cite_18 , the authors introduced a new kind of automaton that can be used to implement parsers recognizing data-dependent grammars. These techniques are put to work in a tool called Yakker.
{ "cite_N": [ "@cite_18" ], "mid": [ "1560736395" ], "abstract": [ "Dependent grammars extend context-free grammars by allowing semantic values to be bound to variables and used to constrain parsing. Dependent grammars can cleanly specify common features that cannot be handled by context-free grammars, such as length fields in data formats and significant indentation in programming languages. Few parser generators support dependent parsing, however. To address this shortcoming, we have developed a new method for implementing dependent parsers by extending existing parsing algorithms. Our method proposes a point-free language of dependent grammars, which we believe closely corresponds to existing context-free parsing algorithms, and gives a novel transformation from conventional dependent grammars to point-free ones. To validate our technique, we have specified the semantics of both source and target dependent grammar languages, and proven our transformation sound and complete with respect to those semantics. Furthermore, we have empirically validated the suitability of our point-free language by adapting four parsing engines to support it: an Earley parsing engine; a GLR parsing engine; memoizing, arrow-style parser combinators; and PEG parser combinators." ] }
Afroozeh and Izmaylova @cite_24 show how advanced parser features such as lexical disambiguation filters, operator precedence, significant indentation and conditional preprocessor directives can be translated to data-dependent grammars. Quite clearly, the task is non-trivial and one comes away with the feeling that dependent grammars are better suited as an elegant calculus to be targeted by parsing tool writers rather than as a paradigm that fits the needs of tool users. The machinery implementing the formalism is also distinctively non-trivial, involving a multi-stage transformation into a continuation routine or into a new kind of automaton. In contrast, our approach consists of a lightweight library that can be layered on top of a general-purpose programming language.
{ "cite_N": [ "@cite_24" ], "mid": [ "2001404404" ], "abstract": [ "Despite the long history of research in parsing, constructing parsers for real programming languages remains a difficult and painful task. In the last decades, different parser generators emerged to allow the construction of parsers from a BNF-like specification. However, still today, many parsers are handwritten, or are only partly generated, and include various hacks to deal with different peculiarities in programming languages. The main problem is that current declarative syntax definition techniques are based on pure context-free grammars, while many constructs found in programming languages require context information. In this paper we propose a parsing framework that embraces context information in its core. Our framework is based on data-dependent grammars, which extend context-free grammars with arbitrary computation, variable binding and constraints. We present an implementation of our framework on top of the Generalized LL (GLL) parsing algorithm, and show how common idioms in syntax of programming languages such as (1) lexical disambiguation filters, (2) operator precedence, (3) indentation-sensitive rules, and (4) conditional preprocessor directives can be mapped to data-dependent grammars. We demonstrate the initial experience with our framework, by parsing more than 20000 Java, C#, Haskell, and OCaml source files." ] }
Monadic parsing @cite_12 is a well-known way to build functional-style parser-combinator libraries, made popular by Haskell libraries such as Parsec @cite_11 . In this paradigm, a parser is a function parameterized by a result type, i.e. with signature String -> [(a, String)], where the parameter string is the input text and the string in each result pair is the input remaining after parsing. The parser type is also a monad instance, meaning there is a bind function whose signature, in Haskell notation, is (>>=) :: Parser a -> (a -> Parser b) -> Parser b.
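The list-of-successes parser type and its bind can be transcribed into Python as a minimal sketch (the combinator names here are illustrative, not from any particular library). Bind is what gives monadic parsers their context sensitivity: the parser produced by `f` can depend on a value matched earlier, as in the digit-then-n-characters example below.

```python
# Sketch of a monadic parser: a parser is a function
# str -> list of (result, remaining_input) pairs,
# mirroring the Haskell type String -> [(a, String)].

def pure(value):
    """Succeed without consuming input (the monad's 'return')."""
    return lambda text: [(value, text)]

def bind(parser, f):
    """Run `parser`, feed each result to `f` to obtain the next parser."""
    return lambda text: [pair
                         for (value, rest) in parser(text)
                         for pair in f(value)(rest)]

def item(text):
    """Consume any single character."""
    return [(text[0], text[1:])] if text else []

def char(c):
    """Match one expected character."""
    return lambda text: [(c, text[1:])] if text.startswith(c) else []

def count(n, p):
    """Apply parser p exactly n times, collecting the results."""
    if n == 0:
        return pure([])
    return bind(p, lambda x: bind(count(n - 1, p), lambda xs: pure([x] + xs)))

# Context sensitivity via bind: parse a digit n, then exactly n 'a's.
digit_then_as = bind(item, lambda d: count(int(d), char('a')))

print(digit_then_as("3aaa"))  # → [(['a', 'a', 'a'], '')]
print(digit_then_as("3aa"))   # → [] (not enough 'a's)
```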
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "1997644097", "2123835026" ], "abstract": [ "This paper is a tutorial on defining recursive descent parsers in Haskell. In the spirit of one-stop shopping, the paper combines material from three areas into a single source. The three areas are functional parsers, the use of monads to structure functional programs, and the use of special syntax for monadic programs in Haskell. More specifically, the paper shows how to define monadic parsers using do notation in Haskell. The paper is targeted at the level of a good undergraduate student who is familiar with Haskell, and has completed a grammars and parsing course. Some knowledge of functional parsers would be useful, but no experience with monads is assumed.", "Despite the long list of publications on parser combinators, there does not yet exist a monadic parser combinator library that is applicable in real world situations. In particular naive implementations of parser combinators are likely to suffer from space leaks and are often unable to report precise error messages in case of parse errors. The Parsec parser combinator library described in this paper, utilizes a novel implementation technique for space and time e±cient parser combinators that in case of a parse error, report both the position of the error as well as all grammar productions that would have been legal at that point in the input." ] }
An in-depth analysis of this aspect was done by Atkey @cite_22 . In particular, he formalizes monadic parsers by introducing active right-hand sides: right-hand sides of rules that can contain monadic combinators. These combinators generate grammar fragments at parse-time (much like a monadic parser generates a new parser), hence the term active. While monadic parsing seems at first sight very similar to data-dependent grammars, Atkey @cite_22 carefully contrasts the two approaches.
{ "cite_N": [ "@cite_22" ], "mid": [ "2155033057" ], "abstract": [ "The recovery of structure from flat sequences of input data is a problem that almost all programs need to solve. Computer Science has developed a wide array of declarative languages for describing the structure of languages, usually based on the context-free grammar formalism, and there exist parser generators that produce efficient parsers for these descriptions. However, when faced with a problem involving parsing, most programmers opt for ad-hoc hand-coded solutions, or use parser combinator libraries to construct parsing functions. This paper develops a hybrid approach, treating grammars as collections of active right-hand sides, indexed by a set of non-terminals. Active right-hand sides are built using the standard monadic parser combinators and allow the consumed input to affect the language being parsed, thus allowing for the precise description of the realistic languages that arise in programming. We carefully investigate the semantics of grammars with active right-hand sides, not just from the point of view of language acceptance but also in terms of the generation of parse results. Ambiguous grammars may generate exponentially, or even infinitely, many parse results and these must be efficiently represented using Shared Packed Parse Forests (SPPFs). A particular feature of our approach is the use of Reynolds-style parametricity to ensure that the language that grammars describe cannot be affected by the representation of parse results." ] }
Attribute grammars @cite_10 associate attributes with AST nodes (assuming one AST node per matched grammar rule). Attributes can be synthesized (their value is derived from the attributes of child nodes) or inherited (their value is computed by a parent node). The formalism supports context-sensitive parsing through production guards predicated over attributes.
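The two attribute flavors can be illustrated with a toy evaluator. The node classes and attribute names below are hypothetical: `value` is synthesized (computed bottom-up from children), while `depth` is inherited (passed top-down from the parent).

```python
# Sketch of attribute evaluation over a tiny AST (hypothetical classes).
# 'value' is a synthesized attribute; 'depth' is an inherited attribute.

class Num:
    def __init__(self, n):
        self.n = n
    def annotate(self, depth):
        self.depth = depth        # inherited: flows down from the parent
        self.value = self.n       # synthesized: trivially known at a leaf
        return self.value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def annotate(self, depth):
        self.depth = depth
        # synthesized: derived from the children's synthesized values
        self.value = self.left.annotate(depth + 1) + self.right.annotate(depth + 1)
        return self.value

tree = Add(Num(1), Add(Num(2), Num(3)))
print(tree.annotate(0))   # → 6
print(tree.right.depth)   # → 1
```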
{ "cite_N": [ "@cite_10" ], "mid": [ "1964071625" ], "abstract": [ "“Meaning” may be assigned to a string in a context-free language by defining “attributes” of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of this process when some of the attributes are “synthesized”, i.e., defined solely in terms of attributes of thedescendants of the corresponding nonterminal symbol, while other attributes are “inherited”, i.e., defined in terms of attributes of theancestors of the nonterminal symbol. An algorithm is given which detects when such semantic rules could possibly lead to circular definition of some attributes. An example is given of a simple programming language defined with both inherited and synthesized attributes, and the method of definition is compared to other techniques for formal specification of semantics which have appeared in the literature." ] }
Rats! @cite_17 is a fully-memoizing (packrat) PEG parser generator. Rats! is, to the best of our knowledge, the only stateful parsing tool that provides some guarantees for state usage, by ensuring that state changes are discarded if certain conditions are met.
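The essence of packrat parsing is memoizing each (rule, position) result so that, even with backtracking, every position is parsed at most once, giving linear time. A minimal sketch (toy grammar, hypothetical rule name `As`) using Python's built-in memoization:

```python
# Sketch of packrat (memoizing) PEG parsing on a toy grammar.
from functools import lru_cache

text = "aaab"

@lru_cache(maxsize=None)
def As(pos):
    """A <- 'a' A / 'a'   -- returns the end position of the match, or None."""
    if pos < len(text) and text[pos] == 'a':
        rest = As(pos + 1)           # memoized: each position parsed once
        return rest if rest is not None else pos + 1
    return None

print(As(0))  # → 3: the run of 'a's ends at index 3
```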
{ "cite_N": [ "@cite_17" ], "mid": [ "2098396599" ], "abstract": [ "We explore how to make the benefits of modularity available for syntactic specifications and present Rats!, a parser generator for Java that supports easily extensible syntax. Our parser generator builds on recent research on parsing expression grammars (PEGs), which, by being closed under composition, prioritizing choices, supporting unlimited lookahead, and integrating lexing and parsing, offer an attractive alternative to context-free grammars. PEGs are implemented by so-called packrat parsers, which are recursive descent parsers that memoize all intermediate results (hence their name). Memoization ensures linear-time performance in the presence of unlimited lookahead, but also results in an essentially lazy, functional parsing technique. In this paper, we explore how to leverage PEGs and packrat parsers as the foundation for extensible syntax. In particular, we show how make packrat parsing more widely applicable by implementing this lazy, functional technique in a strict, imperative language, while also generating better performing parsers through aggressive optimizations. Next, we develop a module system for organizing, modifying, and composing large-scale syntactic specifications. Finally, we describe a new technique for managing (global) parsing state in functional parsers. Our experimental evaluation demonstrates that the resulting parser generator succeeds at providing extensible syntax. In particular, Rats! enables other grammar writers to realize real-world language extensions in little time and code, and it generates parsers that consistently out-perform parsers created by two GLR parser generators." ] }
1609.05473
2523469089
As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
@cite_14 proposed an alternative training methodology for generative models, i.e. GANs, where the training procedure is a game between a generative model and a discriminative model. This framework bypasses the difficulty of maximum likelihood learning and has achieved striking successes in natural image generation @cite_3 . However, little progress has been made in applying GANs to discrete sequence generation problems, e.g. natural language generation @cite_4 . This is because the generator network in a GAN is designed to adjust its output continuously, which does not work for discrete data generation @cite_34 .
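The standard workaround, which SeqGAN adopts, is to treat the generator as a stochastic policy and use a policy gradient (REINFORCE): instead of backpropagating through the non-differentiable sampling step, the gradient of the log-probability of the sampled token is weighted by a reward. The toy below is a pure-Python sketch under simplifying assumptions (a two-token vocabulary, a fixed reward standing in for the discriminator), not SeqGAN itself.

```python
# Sketch of the policy-gradient workaround for discrete outputs (REINFORCE).
import math, random

random.seed(0)
logits = [0.0, 0.0]   # generator "parameters": preferences for tokens 0 and 1
lr = 0.5

def probs():
    e = [math.exp(l) for l in logits]
    s = sum(e)
    return [x / s for x in e]

def reward(token):
    # stand-in for a discriminator score: token 1 is "natural-looking"
    return 1.0 if token == 1 else 0.0

for _ in range(500):
    p = probs()
    token = random.choices([0, 1], weights=p)[0]   # non-differentiable step
    r = reward(token)
    # grad of log p(token) w.r.t. the logits, scaled by the reward
    for k in range(2):
        indicator = 1.0 if k == token else 0.0
        logits[k] += lr * r * (indicator - p[k])

print(probs()[1])   # close to 1: the generator learned to emit the rewarded token
```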
{ "cite_N": [ "@cite_34", "@cite_14", "@cite_4", "@cite_3" ], "mid": [ "", "2099471712", "2174424190", "2951523806" ], "abstract": [ "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. 
We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset." ] }
On the other hand, much effort has been devoted to generating structured sequences. Recurrent neural networks can be trained to produce sequences of tokens in many applications such as machine translation @cite_13 @cite_5 . The most common way of training RNNs is to maximize the likelihood of each token in the training data. However, @cite_15 pointed out that the discrepancy between training and generation makes maximum likelihood estimation suboptimal, and proposed the scheduled sampling (SS) strategy. Later, @cite_4 showed that the objective function underlying SS is improper, and explained theoretically why GANs tend to produce natural-looking samples. Consequently, GANs have great potential but are currently not practically applicable to discrete probabilistic models.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_13", "@cite_4" ], "mid": [ "2133564696", "2950304420", "2949888546", "2174424190" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. 
We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. 
Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality." ] }
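The scheduled sampling (SS) strategy discussed above can be sketched in a few lines: during training, each decoder input is drawn from the ground truth with some probability, and from the model's own previous prediction otherwise, with that probability annealed over training. This is a minimal illustration, not the paper's implementation; `step_fn` is a hypothetical stand-in for a trained decoder step.

```python
import random

def scheduled_sampling_inputs(target, step_fn, use_truth_prob):
    """Build decoder inputs for one training sequence: with probability
    `use_truth_prob` feed the ground-truth previous token (teacher forcing),
    otherwise feed the token the model itself produced on the previous step.
    `step_fn(prev_token) -> next_token` stands in for a trained decoder."""
    inputs = [target[0]]  # the first input is always the true start token
    for t in range(1, len(target)):
        if random.random() < use_truth_prob:
            inputs.append(target[t - 1])        # guided by the ground truth
        else:
            inputs.append(step_fn(inputs[-1]))  # model's own prediction
    return inputs

def linear_decay(step, total_steps):
    """Curriculum: move from fully guided (prob 1) to free-running (prob 0)."""
    return max(0.0, 1.0 - step / total_steps)
```

With `use_truth_prob=1.0` this reduces to standard maximum-likelihood (teacher-forced) training; as the probability decays, the training-time inputs increasingly match what the model will see at inference time.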
1609.05473
2523469089
As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.
As pointed out by @cite_27 , sequence data generation can be formulated as a sequential decision making process, which can potentially be solved by reinforcement learning techniques. Modeling the sequence generator as a policy of picking the next token, policy gradient methods @cite_20 can be adopted to optimize the generator once there is an (implicit) reward function to guide the policy. For most practical sequence generation tasks, e.g. machine translation @cite_13 , the reward signal is meaningful only for the entire sequence; similarly, in the game of Go @cite_10 , the reward signal is only given at the end of the game. In such cases, state-action evaluation methods such as Monte Carlo (tree) search have been adopted @cite_35 . By contrast, our proposed SeqGAN extends GANs with an RL-based generator to solve the sequence generation problem, where a reward signal is provided by the discriminator at the end of each episode via a Monte Carlo approach, and the generator picks actions and learns the policy using the estimated overall rewards.
{ "cite_N": [ "@cite_13", "@cite_35", "@cite_27", "@cite_10", "@cite_20" ], "mid": [ "2949888546", "2126316555", "590442793", "2257979135", "2155027007" ], "abstract": [ "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. 
It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.", "We connect a broad class of generative models through their shared reliance on sequential decision making. Motivated by this view, we develop extensions to an existing model, and then explore the idea further in the context of data imputation - perhaps the simplest setting in which to investigate the relation between unconditional and conditional generative modelling. We formulate data imputation as an MDP and develop models capable of representing effective policies for it. We construct the models using neural networks and train them using a form of guided policy search [9]. Our models generate predictions through an iterative process of feedback and refinement. We show that this approach can learn effective policies for imputation problems of varying difficulty and across multiple datasets.", "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. 
Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy." ] }
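The SeqGAN training scheme described above — policy-gradient updates where intermediate states are scored by Monte Carlo rollouts of a reward defined only on complete sequences — can be illustrated with a toy tabular sketch. This is not the paper's implementation: the vocabulary is binary, the policy is a per-step Bernoulli table rather than an RNN, and a hand-coded reward stands in for a learned discriminator.

```python
import random

def rollout(prefix, policy, length):
    """Complete a partial sequence by sampling the remaining tokens."""
    seq = list(prefix)
    while len(seq) < length:
        p1 = policy.get(len(seq), 0.5)       # P(token == 1) at this step
        seq.append(1 if random.random() < p1 else 0)
    return seq

def mc_action_value(prefix, policy, reward_fn, length, n_rollouts=16):
    """Score a partial sequence by averaging the terminal reward over
    Monte Carlo completions (the reward exists only for full sequences)."""
    return sum(reward_fn(rollout(prefix, policy, length))
               for _ in range(n_rollouts)) / n_rollouts

def policy_gradient_step(policy, reward_fn, length, lr=0.1):
    """One REINFORCE-style update: sample a sequence, estimate each prefix's
    value by rollout, and nudge each step's probability toward actions with
    high estimated reward (log-prob gradient wrt a logit, applied to p)."""
    seq = rollout([], policy, length)
    for t in range(length):
        q = mc_action_value(seq[:t + 1], policy, reward_fn, length)
        p1 = policy.get(t, 0.5)
        grad = q * (1 - p1) if seq[t] == 1 else -q * p1
        policy[t] = min(0.99, max(0.01, p1 + lr * grad))
    return policy

# Toy "discriminator": rewards sequences containing many ones.
reward = lambda s: sum(s) / len(s)
random.seed(0)
policy = {}
for _ in range(200):
    policy_gradient_step(policy, reward, length=4)
```

After a few hundred updates the per-step probabilities drift upward, since the rollout-estimated value of emitting a one exceeds that of a zero; swapping in a trained discriminator for `reward` gives the adversarial version.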
1609.05522
2523426658
The objective of this work is to estimate 3D human pose from a single RGB image. Extracting image representations which incorporate both spatial relation of body parts and their relative depth plays an essential role in accurate 3D pose reconstruction. In this paper, for the first time, we show that camera viewpoint in combination to 2D joint locations significantly improves 3D pose accuracy without the explicit use of perspective geometry mathematical models. To this end, we train a deep Convolutional Neural Network (CNN) to learn categorical camera viewpoint. To make the network robust against clothing and body shape of the subject in the image, we utilized 3D computer rendering to synthesize additional training images. We test our framework on the largest 3D pose estimation benchmark, Human3.6m, and achieve up to 20% error reduction compared to the state-of-the-art approaches that do not use body part segmentation.
There is a large literature belonging to the first group. For example, in @cite_11 body parts are first segmented and then described by second-order label-sensitive pooling @cite_7 ; the approach in @cite_0 represents the image with HOG features; and LAB and HOG features are used in @cite_15 . Convolutional neural networks have also been exploited to learn image features and the regression model simultaneously; for example, two neural networks are trained in @cite_6 to learn image features and a 3D pose embedding, which are later used to learn a score network that assigns high scores to correct image-pose pairs and low scores to other pairs. @cite_13 proposed a multi-task CNN framework that jointly learns pose regression and body part detectors.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_0", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "78159342", "2949812103", "2169738563", "2054820429", "2293220651", "2052747804" ], "abstract": [ "Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.", "This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. 
We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration.", "We describe twin Gaussian processes (TGP), a generic structured prediction method that uses Gaussian process (GP) priors on both covariates and responses, both multivariate, and estimates outputs by minimizing the Kullback-Leibler divergence between two GP modeled as normal distributions over finite index sets of training and testing examples, emphasizing the goal that similar inputs should produce similar percepts and this should hold, on average, between their marginal distributions. TGP captures not only the interdependencies between covariates, as in a typical GP, but also those between responses, so correlations among both inputs and outputs are accounted for. TGP is exemplified, with promising results, for the reconstruction of 3d human poses from monocular and multicamera video sequences in the recently introduced HumanEva benchmark, where we achieve 5 cm error on average per 3d marker for models trained jointly, using data from multiple people and multiple activities. The method is fast and automatic: it requires no hand-crafting of the initial pose, camera calibration parameters, or the availability of a 3d body model associated with human subjects used for training or testing.", "In this work we address the problem of estimating the 3D human pose from a single RGB image, which is a challenging problem since different 3D poses may have similar 2D projections. Following the success of regression forests for 3D pose estimation from depth data or 2D pose estimation from RGB images, we extend regression forests to infer missing depth data of image features and 3D pose simultaneously. 
Since we do not observe depth for inference or training directly, we hypothesize the depth of the features by sweeping with a plane through the 3D volume of potential joint locations. The regression forests are then combined with a pictorial structure framework, which is extended to 3D. The approach is evaluated on two challenging benchmarks where stateof-the-art performance is achieved.", "In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.", "Recently, the emergence of Kinect systems has demonstrated the benefits of predicting an intermediate body part labeling for 3D human pose estimation, in conjunction with RGB-D imagery. The availability of depth information plays a critical role, so an important question is whether a similar representation can be developed with sufficient robustness in order to estimate 3D pose from RGB images. This paper provides evidence for a positive answer, by leveraging (a) 2D human body part labeling in images, (b) second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and (c) iterative structured-output modeling to contextualize the process based on 3D pose estimates. 
For robustness and generalization, we take advantage of a recent large-scale 3D human motion capture dataset, Human3.6M[18] that also has human body part labeling annotations available with images. We provide extensive experimental studies where alternative intermediate representations are compared and report a substantial 33 error reduction over competitive discriminative baselines that regress 3D human pose against global HOG features." ] }
1609.05522
2523426658
The objective of this work is to estimate 3D human pose from a single RGB image. Extracting image representations which incorporate both spatial relation of body parts and their relative depth plays an essential role in accurate 3D pose reconstruction. In this paper, for the first time, we show that camera viewpoint in combination to 2D joint locations significantly improves 3D pose accuracy without the explicit use of perspective geometry mathematical models. To this end, we train a deep Convolutional Neural Network (CNN) to learn categorical camera viewpoint. To make the network robust against clothing and body shape of the subject in the image, we utilized 3D computer rendering to synthesize additional training images. We test our framework on the largest 3D pose estimation benchmark, Human3.6m, and achieve up to 20% error reduction compared to the state-of-the-art approaches that do not use body part segmentation.
In @cite_2 , the 3D human pose is represented as a sparse embedding in an overcomplete dictionary. The authors proposed a matching pursuit algorithm that sequentially selects the basis poses minimizing the reprojection error and refines the projective camera parameters. @cite_29 extended this work by hierarchically clustering the 3D dictionary into subspaces with similar poses. To reconstruct the 3D pose from a 2D projection, the selected pose bases are drawn from a small number of subspaces that are close to each other.
{ "cite_N": [ "@cite_29", "@cite_2" ], "mid": [ "2178077220", "2155196764" ], "abstract": [ "Reconstructing 3D human poses from a single 2D image is an ill-posed problem without considering the human body model. Explicitly enforcing physiological constraints is known to be non-convex and usually leads to difficulty in finding an optimal solution. An attractive alternative is to learn a prior model of the human body from a set of human pose data. In this paper, we develop a new approach, namely pose locality constrained representation (PLCR), to model the 3D human body and use it to improve 3D human pose reconstruction. In this approach, the human pose space is first hierarchically divided into lower-dimensional pose subspaces by subspace clustering. After that, a block-structural pose dictionary is constructed by concatenating the basis poses from all the pose subspaces. Finally, PLCR utilizes the block-structural pose dictionary to explicitly encourage pose locality in human-body modeling – nonzero coefficients are only assigned to the basis poses from a small number of pose subspaces that are close to each other in the pose-subspace hierarchy. We combine PLCR into the matching-pursuit based 3D human-pose reconstruction algorithm and show that the proposed PLCR-based algorithm outperforms the state-of-the-art algorithm that uses the standard sparse representation and physiological regularity in reconstructing a variety of human poses from both synthetic data and real images.", "Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. 
We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data." ] }
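The matching-pursuit idea behind the sparse-dictionary methods above can be sketched concretely: greedily add the basis pose that most reduces the 2D reprojection error, then solve for the coefficients by least squares. The sketch below is a simplification of @cite_2 under an assumed fixed orthographic camera (the original also refines projective camera parameters); all names are illustrative.

```python
import numpy as np

def project(X):
    """Assumed fixed orthographic camera: keep the x, y of each 3D joint."""
    return X[:, :2]

def matching_pursuit_pose(y2d, mean_pose, bases, n_select=2):
    """Greedily select the basis poses that most reduce the 2D reprojection
    error, then solve least squares for their coefficients."""
    chosen = []
    target = (y2d - project(mean_pose)).ravel()
    for _ in range(n_select):
        best_j, best_err = None, np.inf
        for j in range(len(bases)):
            if j in chosen:
                continue
            # Trial dictionary: already-chosen bases plus candidate j.
            A = np.stack([project(bases[k]).ravel() for k in chosen + [j]], axis=1)
            w, *_ = np.linalg.lstsq(A, target, rcond=None)
            err = np.linalg.norm(A @ w - target)
            if err < best_err:
                best_j, best_err = j, err
        chosen.append(best_j)
    # Final least-squares fit over the selected bases, lifted back to 3D.
    A = np.stack([project(bases[k]).ravel() for k in chosen], axis=1)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    pose3d = mean_pose + sum(wi * bases[k] for wi, k in zip(w, chosen))
    return pose3d, chosen
```

Clustering the dictionary into pose subspaces and restricting the candidate set in each iteration, as in @cite_29 , only changes the inner loop over `j`.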
1609.05522
2523426658
The objective of this work is to estimate 3D human pose from a single RGB image. Extracting image representations which incorporate both spatial relation of body parts and their relative depth plays an essential role in accurate 3D pose reconstruction. In this paper, for the first time, we show that camera viewpoint in combination to 2D joint locations significantly improves 3D pose accuracy without the explicit use of perspective geometry mathematical models. To this end, we train a deep Convolutional Neural Network (CNN) to learn categorical camera viewpoint. To make the network robust against clothing and body shape of the subject in the image, we utilized 3D computer rendering to synthesize additional training images. We test our framework on the largest 3D pose estimation benchmark, Human3.6m, and achieve up to 20% error reduction compared to the state-of-the-art approaches that do not use body part segmentation.
The approach in @cite_22 combined two different datasets to generate many 3D-2D pairs as training examples. During inference, the estimated 2D pose is used to retrieve the nearest normalized 3D poses, and the final 3D pose is estimated by minimizing the projection error under the constraint that the estimated 3D pose should be close to the retrieved poses. @cite_4 proposed a new framework to estimate 3D pose from ground-truth 2D pose. To resolve the ambiguity, they first learn pose-dependent joint angle limits by collecting a new mocap dataset that includes an extensive variety of stretching poses. The method in @cite_16 imposed kinematic constraints by projecting a 3D model onto the input image and pruning the parts that are incompatible with anthropomorphism. To reduce depth ambiguity, several 3D poses were generated by regressing the initial view to multiple oriented views; the orientation estimated by a 2D body part detector is then used to choose the final 3D pose. @cite_1 proposed a Bayesian framework to jointly estimate both the 3D and 2D poses, in which a set of 3D pose hypotheses generated by a 3D generative kinematic model is weighted by a discriminative part model.
{ "cite_N": [ "@cite_1", "@cite_16", "@cite_4", "@cite_22" ], "mid": [ "2111446867", "2134704262", "1943191679", "" ], "abstract": [ "We introduce a novel approach to automatically recover 3D human pose from a single image. Most previous work follows a pipelined approach: initially, a set of 2D features such as edges, joints or silhouettes are detected in the image, and then these observations are used to infer the 3D pose. Solving these two problems separately may lead to erroneous 3D poses when the feature detector has performed poorly. In this paper, we address this issue by jointly solving both the 2D detection and the 3D inference problems. For this purpose, we propose a Bayesian framework that integrates a generative model based on latent variables and discriminative 2D part detectors based on HOGs, and perform inference using evolutionary algorithms. Real experimentation demonstrates competitive results, and the ability of our methodology to provide accurate 2D and 3D pose estimations even when the 2D detectors are inaccurate.", "In this paper, an automatic approach for 3D pose reconstruction from a single image is proposed. The presence of human body articulation, hallucinated parts and cluttered background leads to ambiguity during the pose inference, which makes the problem non-trivial. Researchers have explored various methods based on motion and shading in order to reduce the ambiguity and reconstruct the 3D pose. The key idea of our algorithm is to impose both kinematic and orientation constraints. The former is imposed by projecting a 3D model onto the input image and pruning the parts, which are incompatible with the anthropomorphism. The latter is applied by creating synthetic views via regressing the input view to multiple oriented views. After applying the constraints, the 3D model is projected onto the initial and synthetic views, which further reduces the ambiguity. 
Finally, we borrow the direction of the unambiguous parts from the synthetic views to the initial one, which results in the 3D pose. Quantitative experiments are performed on the Human Eva-I dataset and qualitatively on unconstrained images from the Image Parse dataset. The results show the robustness of the proposed approach to accurately reconstruct the 3D pose form a single image.", "Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.", "" ] }
1609.04849
2598382557
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain specific knowledge. Although intuitive, recent work in deep learning has shown this approach is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach where we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multi-channel image which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories we "fade" the player trajectories. We find that this approach is superior to a traditional FFN model. By using gradient ascent to create images using an already trained CNN, we discover what features the CNN filters learn. Last, we find that a combined CNN+FFN is the best performing network with an error rate of 39%.
With the rise of deep neural networks, sports prediction experts have new tools for analyzing players, match-ups, and team strategy in these adversarial multiagent systems. Trajectory data was not available at the time, so much of the previous work on basketball data using neural networks has relied on statistical features such as the number of games won and the number of points scored. For example, one line of work uses statistics from 620 NBA games and a neural network to predict the winner of a game. Another work interested in predicting game outcomes is @cite_5 . Other work, in a blog post, discusses predicting basketball shots based upon the type of shot (layups versus free throws and three-point shots) and where the ball was shot from. In other sports-related papers, neural networks have been used to predict the winners of soccer games in the 2006 World Cup and to predict goal events in video footage of soccer games.
{ "cite_N": [ "@cite_5" ], "mid": [ "2951912364" ], "abstract": [ "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art." ] }
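The image representation described above — multi-channel to encode the adversarial offense/defense structure, with trajectories "faded" over time — can be sketched as a simple rasterizer. This is an illustrative reconstruction, not the paper's code; court coordinates are assumed normalized to [0, 1), and the fade factor is a made-up parameter.

```python
import numpy as np

def trajectories_to_image(offense, defense, ball, size=32, fade=0.9):
    """Rasterize one play into a 3-channel image (offense, defense, ball).
    Each trajectory is a list of (x, y) positions in [0, 1); older points
    are dimmed by `fade` per step, so the image also encodes direction of
    motion rather than just a static snapshot."""
    img = np.zeros((3, size, size), dtype=np.float32)
    for ch, trajs in enumerate((offense, defense, [ball])):
        for traj in trajs:
            n = len(traj)
            for t, (x, y) in enumerate(traj):
                i, j = int(y * size), int(x * size)
                intensity = fade ** (n - 1 - t)   # most recent point brightest
                img[ch, i, j] = max(img[ch, i, j], intensity)
    return img
```

Stacking offense, defense, and ball into separate channels lets the CNN's first-layer filters pick up relative positioning between the teams, while the per-step fading preserves direction and speed of motion that a single binary snapshot would lose.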
1609.04849
2598382557
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain specific knowledge. Although intuitive, recent work in deep learning has shown this approach is prone to missing important predictive features. To circumvent this issue, we present a convolutional neural network (CNN) approach where we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multi-channel image which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories we "fade" the player trajectories. We find that this approach is superior to a traditional FFN model. By using gradient ascent to create images using an already trained CNN, we discover what features the CNN filters learn. Last, we find that a combined CNN+FFN is the best performing network with an error rate of 39%.
Although the aforementioned basketball work did not have access to raw trajectory data, , use the same dataset provided by STATS for some of their work involving basketball. They explore how to get an open shot in basketball using trajectory data, finding that the number of times defensive players swapped roles/positions was predictive of scoring. However, they explore open versus pressured shots (rather than shot-making prediction), do not represent the data as an image, and do not implement neural networks for their findings. Other trajectory work includes using Conditional Random Fields to predict ball ownership from only player positions ( @cite_6 ), as well as predicting the next action of the ball owner via pass, shot, or dribble @cite_0 . use non-negative matrix factorization to identify different types of shooters using trajectory data. Because of a lack of defensive statistics in the sport, A. Franks et al. create counterpoints (defensive points) to better quantify defensive plays ( @cite_1 @cite_4 ). make use of trajectory data by segmenting a game of basketball into phases (offense, defense, and time-outs) to then analyze team behavior during these phases.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_1", "@cite_6" ], "mid": [ "2149557440", "2157731580", "2089787974", "2243825728" ], "abstract": [ "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.", "ABSTRACTBasketball games evolve continuously in space and time as players constantly interact with their teammates, the opposing team, and the ball. However, current analyses of basketball outcomes rely on discretized summaries of the game that reduce such interactions to tallies of points, assists, and similar events. In this article, we propose a framework for using optical player tracking data to estimate, in real time, the expected number of points obtained by the end of a possession. This quantity, called expected possession value (EPV), derives from a stochastic process model for the evolution of a basketball possession. We model this process at multiple levels of resolution, differentiating between continuous, infinitesimal movements of players, and discrete events such as shot attempts and turnovers. 
Transition kernels are estimated using hierarchical spatiotemporal models that share information across players while remaining computationally tractable on very large data sets. In addition to estima...", "Although basketball is a dualistic sport, with all players competing on both offense and defense, almost all of the sport's conventional metrics are designed to summarize offensive play. As a result, player valuations are largely based on offensive performances and to a much lesser degree on defensive ones. Steals, blocks and defensive rebounds provide only a limited summary of defensive effectiveness, yet they persist because they summarize salient events that are easy to observe. Due to the inefficacy of traditional defensive statistics, the state of the art in defensive analytics remains qualitative, based on expert intuition and analysis that can be prone to human biases and imprecision. Fortunately, emerging optical player tracking systems have the potential to enable a richer quantitative characterization of basketball performance, particularly defensive performance. Unfortunately, due to computational and methodological complexities, that potential remains unmet. This paper attempts to fill this void, combining spatial and spatio-temporal processes, matrix factorization techniques and hierarchical regression models with player tracking data to advance the state of defensive analytics in the NBA. Our approach detects, characterizes and quantifies multiple aspects of defensive play in basketball, supporting some common understandings of defensive effectiveness, challenging others and opening up many new insights into the defensive elements of basketball.", "Tracking objects like a basketball from a monocular view is challenging due to its small size, potential to move at high velocities as well as the high frequency of occlusion. 
However, humans with a deep knowledge of a game like basketball can predict with high accuracy the location of the ball even without seeing it due to the location and motion of nearby objects, as well as information of where it was last seen. Learning from tracking data is problematic however, due to the high variance in player locations. In this paper, we show that by simply \"permuting\" the multi-agent data we obtain a compact role-ordered feature which accurately predict the ball owner. We also show that our formulation can incorporate other information sources such as a vision-based ball detector to improve prediction accuracy." ] }
1609.05158
2949128343
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
The goal of SISR methods is to recover a HR image from a single LR input image @cite_31 . Recent popular SISR methods can be classified into edge-based @cite_29 , image statistics-based @cite_30 @cite_53 @cite_8 @cite_9 and patch-based @cite_25 @cite_7 @cite_0 @cite_33 @cite_48 @cite_3 @cite_40 methods. A detailed review of more generic SISR methods can be found in @cite_39 . One family of approaches that has recently thrived in tackling the SISR problem is sparsity-based techniques. Sparse coding is an effective mechanism that assumes any natural image can be sparsely represented in a transform domain. This transform domain is usually a dictionary of image atoms @cite_4 @cite_22 , which can be learnt through a training process that tries to discover the correspondence between LR and HR patches. This dictionary is able to embed the prior knowledge necessary to constrain the ill-posed problem of super-resolving unseen data. This approach is proposed in the methods of @cite_54 @cite_2 . A drawback of sparsity-based techniques is that introducing the sparsity constraint through a nonlinear reconstruction is generally computationally expensive.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_7", "@cite_8", "@cite_22", "@cite_48", "@cite_29", "@cite_53", "@cite_9", "@cite_54", "@cite_3", "@cite_39", "@cite_0", "@cite_40", "@cite_2", "@cite_31", "@cite_25" ], "mid": [ "2146782367", "18046889", "2103844245", "2120824855", "2164551808", "", "2087436818", "2111454493", "1977581467", "", "2121058967", "935139217", "7682646", "2160558632", "", "", "2534320940", "2118963448" ], "abstract": [ "Over the past decade, single image Super-Resolution (SR) research has focused on developing sophisticated image priors, leading to significant advances. Estimating and incorporating the blur model, that relates the high-res and low-res images, has received much less attention, however. In particular, the reconstruction constraint, namely that the blurred and down sampled high-res output should approximately equal the low-res input image, has been either ignored or applied with default fixed blur models. In this work, we examine the relative importance of the image prior and the reconstruction constraint. First, we show that an accurate reconstruction constraint combined with a simple gradient regularization achieves SR results almost as good as those of state-of-the-art algorithms with sophisticated image priors. Second, we study both empirically and theoretically the sensitivity of SR algorithms to the blur model assumed in the reconstruction constraint. We find that an accurate blur model is more important than a sophisticated image prior. Finally, using real camera data, we demonstrate that the default blur models of various SR algorithms may differ from the camera blur, typically leading to over-smoothed results. Our findings highlight the importance of accurately estimating camera blur in reconstructing raw low-res images acquired by an actual camera.", "Mallat's book is the undisputed reference in this field - it is the only one that covers the essential material in such breadth and depth. 
- Laurent Demanet, Stanford University. The new edition of this classic book gives all the major concepts, techniques and applications of sparse representation, reflecting the key role the subject plays in today's signal processing. The book clearly presents the standard representations with Fourier, wavelet and time-frequency transforms, and the construction of orthogonal bases with fast algorithms. The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications. Features: * Balances presentation of the mathematics with applications to signal processing * Algorithms and numerical examples are implemented in WaveLab, a MATLAB toolbox * Companion website for instructors and selected solutions and code available for students. New in this edition: * Sparse signal representations in dictionaries * Compressive sensing, super-resolution and source separation * Geometric image processing with curvelets and bandlets * Wavelets for computer graphics with lifting on surfaces * Time-frequency audio processing and denoising * Image compression with JPEG-2000 * New and updated exercises. A Wavelet Tour of Signal Processing: The Sparse Way, third edition, is an invaluable resource for researchers and R&D engineers wishing to apply the theory in fields such as image processing, video processing and compression, bio-sensing, medical imaging, machine vision and communications engineering. Stephane Mallat is Professor in Applied Mathematics at École Polytechnique, Paris, France. 
From 1986 to 1996 he was a Professor at the Courant Institute of Mathematical Sciences at New York University, and between 2001 and 2007, he co-founded and became CEO of an image processing semiconductor company. Companion website: A Numerical Tour of Signal Processing. Includes all the latest developments since the book was published in 1999, including its application to JPEG 2000 and MPEG-4. Algorithms and numerical examples are implemented in Wavelab, a MATLAB toolbox. Balances presentation of the mathematics with applications to signal processing", "Until now, neighbor-embedding-based (NE) algorithms for super-resolution (SR) have carried out two independent processes to synthesize high-resolution (HR) image patches. In the first process, neighbor search is performed using the Euclidean distance metric, and in the second process, the optimal weights are determined by solving a constrained least squares problem. However, the separate processes are not optimal. In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. Recognizing that the k-nearest neighbor (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering. By conveying local structural information of HoG in the synthesis stage, the k-NN of each LR input patch is adaptively chosen from their associated subset, which significantly improves the speed of synthesizing the HR image while preserving the quality of reconstruction. 
Experimental results suggest that the proposed method can achieve competitive SR quality compared with other state-of-the-art baselines.", "In various computer vision applications, often we need to convert an image in one style into another style for better visualization, interpretation and recognition; for examples, up-convert a low resolution image to a high resolution one, and convert a face sketch into a photo for matching, etc. A semi-coupled dictionary learning (SCDL) model is proposed in this paper to solve such cross-style image synthesis problems. Under SCDL, a pair of dictionaries and a mapping function will be simultaneously learned. The dictionary pair can well characterize the structural domains of the two styles of images, while the mapping function can reveal the intrinsic relationship between the two styles' domains. In SCDL, the two dictionaries will not be fully coupled, and hence much flexibility can be given to the mapping function for an accurate conversion across styles. Moreover, clustering and image nonlocal redundancy are introduced to enhance the robustness of SCDL. The proposed SCDL model is applied to image super-resolution and photo-sketch synthesis, and the experimental results validated its generality and effectiveness in cross-style image synthesis.", "We propose a fast regression model for practical single image super-resolution based on in-place examples, by leveraging two fundamental super-resolution approaches- learning from an external database and learning from self-examples. Our in-place self-similarity refines the recently proposed local self-similarity by proving that a patch in the upper scale image have good matches around its origin location in the lower scale image. Based on the in-place examples, a first-order approximation of the nonlinear mapping function from low-to high-resolution image patches is learned. 
Extensive experiments on benchmark and real-world images demonstrate that our algorithm can produce natural-looking results with sharp edges and preserved fine details, while the current state-of-the-art algorithms are prone to visual artifacts. Furthermore, our model can easily extend to deal with noise by combining the regression results on multiple in-place examples for robust estimation. The algorithm runs fast and is particularly useful for practical applications, where the input images typically contain diverse textures and they are potentially contaminated by noise or compression artifacts.", "", "We proposed a deformable patches based method for single image super-resolution. By the concept of deformation, a patch is not regarded as a fixed vector but a flexible deformation flow. Via deformable patches, the dictionary can cover more patterns that do not appear, thus becoming more expressive. We present the energy function with slow, smooth and flexible prior for deformation model. During example-based super-resolution, we develop the deformation similarity based on the minimized energy function for basic patch matching. For robustness, we utilize multiple deformed patches combination for the final reconstruction. Experiments evaluate the deformation effectiveness and super-resolution performance, showing that the deformable patches help improve the representation accuracy and perform better than the state-of-art methods.", "In this paper, we propose a novel generic image prior-gradient profile prior, which implies the prior knowledge of natural image gradients. In this prior, the image gradients are represented by gradient profiles, which are 1-D profiles of gradient magnitudes perpendicular to image structures. We model the gradient profiles by a parametric gradient profile model. Using this model, the prior knowledge of the gradient profiles are learned from a large collection of natural images, which are called gradient profile prior. 
Based on this prior, we propose a gradient field transformation to constrain the gradient fields of the high resolution image and the enhanced image when performing single image super-resolution and sharpness enhancement. With this simple but very effective approach, we are able to produce state-of-the-art results. The reconstructed high resolution images or the enhanced images are sharp while have rare ringing or jaggy artifacts.", "In this paper we address the problem of producing a high-resolution image from a single low-resolution image without any external training set. We propose a framework for both magnification and deblurring using only the original low-resolution image and its blurred version. In our method, each pixel is predicted by its neighbors through the Gaussian process regression. We show that when using a proper covariance function, the Gaussian process regression can perform soft clustering of pixels based on their local structures. We further demonstrate that our algorithm can extract adequate information contained in a single low-resolution image to generate a high-resolution image with sharp edges, which is comparable to or even superior in quality to the performance of other edge-directed and example-based super-resolution algorithms. Experimental results also show that our approach maintains high-quality performance at large magnifications.", "", "This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. 
Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.", "We address the problem of image upscaling in the form of single image super-resolution based on a dictionary of low- and high-resolution exemplars. Two recently proposed methods, Anchored Neighborhood Regression (ANR) and Simple Functions (SF), provide state-of-the-art quality performance. Moreover, ANR is among the fastest known super-resolution methods. ANR learns sparse dictionaries and regressors anchored to the dictionary atoms. SF relies on clusters and corresponding learned functions. We propose A+, an improved variant of ANR, which combines the best qualities of ANR and SF. 
A+ builds on the features and anchored regressors from ANR but instead of learning the regressors on the dictionary it uses the full training material, similar to SF. We validate our method on standard images and compare with state-of-the-art methods. We obtain improved quality (i.e. 0.2–0.7 dB PSNR better than ANR) and excellent time complexity, rendering A+ the most efficient dictionary-based super-resolution method to date.", "Single-image super-resolution is of great importance for vision applications, and numerous algorithms have been proposed in recent years. Despite the demonstrated success, these results are often generated based on different assumptions using different datasets and metrics. In this paper, we present a systematic benchmark evaluation for state-of-the-art single-image super-resolution algorithms. In addition to quantitative evaluations based on conventional full-reference metrics, human subject studies are carried out to evaluate image quality based on visual perception. The benchmark evaluations demonstrate the performance and limitations of state-of-the-art algorithms which sheds light on future research in single-image super-resolution.", "Reconstruction- and example-based super-resolution (SR) methods are promising for restoring a high-resolution (HR) image from low-resolution (LR) image(s). Under large magnification, reconstruction-based methods usually fail to hallucinate visual details while example-based methods sometimes introduce unexpected details. Given a generic LR image, to reconstruct a photo-realistic SR image and to suppress artifacts in the reconstructed SR image, we introduce a multi-scale dictionary to a novel SR method that simultaneously integrates local and non-local priors. The local prior suppresses artifacts by using steering kernel regression to predict the target pixel from a small local area. 
The non-local prior enriches visual details by taking a weighted average of a large neighborhood as an estimate of the target pixel. Essentially, these two priors are complementary to each other. Experimental results demonstrate that the proposed method can produce high quality SR recovery both quantitatively and perceptually.", "", "", "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.", "In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold learning methods, particularly locally linear embedding (LLE). 
Specifically, small image patches in the low- and high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results." ] }
1609.05158
2949128343
Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.
Image representations derived via neural networks @cite_1 @cite_15 @cite_16 have recently also shown promise for SISR. These methods employ the back-propagation algorithm @cite_5 to train on large image databases such as ImageNet @cite_18 in order to learn nonlinear mappings of LR and HR image patches. Stacked collaborative local auto-encoders are used in @cite_27 to super-resolve the LR image layer by layer. @cite_51 suggested a method for SISR based on an extension of the predictive convolutional sparse coding framework @cite_44 . A multiple-layer CNN inspired by sparse-coding methods is proposed in @cite_6 . Chen et al. @cite_14 proposed to use multi-stage TNRD as an alternative to a CNN, where both the weights and the nonlinearity are trainable. Wang et al. @cite_38 trained a cascaded sparse coding network end to end, inspired by LISTA (learned iterative shrinkage and thresholding algorithm) @cite_20 , to fully exploit the natural sparsity of images. The network structure is not limited to neural networks; for example, a random forest @cite_45 has also been successfully used for SISR.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_14", "@cite_1", "@cite_6", "@cite_44", "@cite_27", "@cite_45", "@cite_5", "@cite_15", "@cite_16", "@cite_51", "@cite_20" ], "mid": [ "", "2117539524", "1906770428", "", "1885185971", "2172174689", "135113724", "1950594372", "", "2952186574", "1686810756", "2165939075", "2118103795" ], "abstract": [ "", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters ( i.e. , linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss based approach. We call this approach TNRD— Trainable Nonlinear Reaction Diffusion . 
The TNRD approach is applicable for a variety of image restoration tasks by incorporating appropriate reaction force. We demonstrate its capabilities with three representative applications, Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.", "", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.", "We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. 
Given an input, the optimal code minimizes the distance between the output of the decoder and the input patch while being as similar as possible to the encoder output. Learning proceeds in a two-phase EM-like fashion: (1) compute the minimum-energy code vector, (2) adjust the parameters of the encoder and decoder so as to decrease the energy. The model produces \"stroke detectors\" when trained on handwritten numerals, and Gabor-like filters when trained on natural image patches. Inference and learning are very fast, requiring no preprocessing, and no expensive sampling. Using the proposed unsupervised method to initialize the first layer of a convolutional network, we achieved an error rate slightly lower than the best reported result on the MNIST dataset. Finally, an extension of the method is described to learn topographical filter maps.", "In this paper, we propose a new model called deep network cascade (DNC) to gradually upscale low-resolution images layer by layer, each layer with a small scale factor. DNC is a cascade of multiple stacked collaborative local auto-encoders. In each layer of the cascade, non-local self-similarity search is first performed to enhance high-frequency texture details of the partitioned patches in the input image. The enhanced image patches are then input into a collaborative local auto-encoder (CLA) to suppress the noises as well as collaborate the compatibility of the overlapping patches. By closing the loop on non-local self-similarity search and CLA in a cascade layer, we can refine the super-resolution result, which is further fed into next layer until the required image scale. 
Experiments on image super-resolution demonstrate that the proposed DNC can gradually upscale a low-resolution image with the increase of network layers and achieve more promising results in visual quality as well as quantitative performance.", "The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.", "", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. 
We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. 
We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.", "In Sparse Coding (SC), input vectors are reconstructed using a sparse linear combination of basis vectors. SC has become a popular method for extracting features from data. For a given input, SC minimizes a quadratic reconstruction error with an L1 penalty term on the code. The process is often too slow for applications such as real-time pattern recognition. We proposed two versions of a very fast algorithm that produces approximate estimates of the sparse code that can be used to compute good visual features, or to initialize exact iterative algorithms. The main idea is to train a non-linear, feed-forward predictor with a specific architecture and a fixed depth to produce the best possible approximation of the sparse code. A version of the method, which can be seen as a trainable version of Li and Osher's coordinate descent method, is shown to produce approximate solutions with 10 times less computation than Li and Osher's for the same approximation error. Unlike previous proposals for sparse code predictors, the system allows a kind of approximate \"explaining away\" to take place during inference. The resulting predictor is differentiable and can be included into globally-trained recognition systems." ] }
1609.05162
2952783788
This paper explores the problem of path planning under uncertainty. Specifically, we consider online receding-horizon based planners that need to operate in a latent environment where the latent information can be modeled via Gaussian Processes. Online path planning in latent environments is challenging since the robot needs to explore the environment to get a more accurate model of the latent information for better planning later, while also achieving the task as quickly as possible. We propose UCB-style algorithms that are popular in bandit settings and show how those analyses can be adapted to online robotic path planning problems. The proposed algorithm trades off exploration and exploitation in a near-optimal manner and has appealing no-regret properties. We demonstrate the efficacy of the framework on the application of aircraft flight path planning when the winds are partially observed.
: In receding-horizon control, a library of pre-computed control command sequences is simulated forward from the current state of the robot using the dynamic motion model to come up with a set of dynamically feasible trajectories up to the planning horizon. This set of trajectories is then evaluated on the map of the world in the vicinity of the robot, and amongst all the currently collision-free trajectories the one that makes the most progress towards the goal is chosen for traversal @cite_5 . The selected trajectory is traversed for a portion of the time, and the process of trajectory evaluation and selection is repeated. Receding-horizon based planning has been widely used in aerial and ground robot navigation in cluttered environments @cite_4 @cite_18 @cite_21 due to many attractive properties such as finite runtime, adaptability to the available computational budget, and dynamic feasibility by construction. We use receding-horizon planning with pre-computed trajectory libraries as the framework in this paper.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_21", "@cite_4" ], "mid": [ "1531355326", "", "1495977361", "2121806728" ], "abstract": [ "While spatial sampling of points has already received much attention, the motion planning problem can also be viewed as a process which samples the function space of paths. We define a search space to be a set of candidate paths and consider the problem of designing a search space which is most likely to produce a solution given a probabilistic representation of all possible environments. We introduce the concept of relative completeness which is the prior probability, before the environment is specified, of producing a solution path in a bounded amount of computation. We show how this probability is related to the mutual separation of the set of paths searched. The problem of producing an optimal set can be related to the maximum k-facility dispersion problem which is known to be NP-hard. We propose a greedy algorithm for producing a good set of paths and demonstrate that it produces results with both low dispersion and high prior probability of success.", "", "Autonomous mobile robots are required to operate in partially known and unstructured environments. It is imperative to guarantee safety of such systems for their successful deployment. Current state of the art does not fully exploit the sensor and dynamic capabilities of a robot. Also, given the non-holonomic systems with non-linear dynamic constraints, it becomes computationally infeasible to find an optimal solution if the full dynamics are to be exploited online. In this paper we present an online algorithm to guarantee the safety of the robot through an emergency maneuver library. The maneuvers in the emergency maneuver library are optimized such that the probability of finding an emergency maneuver that lies in the known obstacle free space is maximized. 
We prove that the related trajectory set diversity problem is monotonic and sub-modular which enables one to develop an efficient trajectory set generation algorithm with bounded sub-optimality. We generate an off-line computed trajectory set that exploits the full dynamics of the robot and the known obstacle-free region. We test and validate the algorithm on a full-size autonomous helicopter flying up to speeds of 56 m/s in partially-known environments. We present results from 4 months of flight testing where the helicopter has been avoiding trees, performing autonomous landing, avoiding mountains while being guaranteed safe.", "Boss is an autonomous vehicle that uses on-board sensors (global positioning system, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes and precedence at intersections and performs error recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge. © 2008 Wiley Periodicals, Inc." ] }
1609.05162
2952783788
This paper explores the problem of path planning under uncertainty. Specifically, we consider online receding-horizon based planners that need to operate in a latent environment where the latent information can be modeled via Gaussian Processes. Online path planning in latent environments is challenging since the robot needs to explore the environment to get a more accurate model of the latent information for better planning later, while also achieving the task as quickly as possible. We propose UCB-style algorithms that are popular in bandit settings and show how those analyses can be adapted to online robotic path planning problems. The proposed algorithm trades off exploration and exploitation in a near-optimal manner and has appealing no-regret properties. We demonstrate the efficacy of the framework on the application of aircraft flight path planning when the winds are partially observed.
: POMDPs are used to model Markov Decision Processes (MDPs) where only part of the state of the world can be observed. Finding optimal policies of POMDPs is NP-hard @cite_17 . Approximate solutions like Point-Based Value Iteration @cite_1 @cite_15 , Heuristic Search-Based Value Iteration @cite_20 and Monte-Carlo planning @cite_23 are popular goal-free, reward-oriented solvers. While goal-oriented methods like @cite_2 are more relevant to our problem scenario, they are hard to adapt to continuous observation spaces and to the computation and time budgets imposed by mobile robots. Belief Space Planning approaches (e.g., @cite_8 @cite_3 @cite_14 @cite_12 @cite_13 @cite_0 ) are also related to our work, but most of them assume that the uncertainty is known, e.g., that the form of the stochastic dynamics is fully known. We do not even assume that the form of the uncertainty is known; instead, we use a Gaussian Process to keep track of the uncertainty in an online manner while the robot is moving.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_23", "@cite_2", "@cite_15", "@cite_13", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2164429173", "", "1532688806", "1529817735", "2024382047", "2171084228", "", "", "", "", "", "2032100464" ], "abstract": [ "We present a new approach to motion planning under sensing and motion uncertainty by computing a locally optimal solution to a continuous partially observable Markov decision process (POMDP). Our approach represents beliefs (the distributions of the robot's state estimate) by Gaussian distributions and is applicable to robot systems with non-linear dynamics and observation models. The method follows the general POMDP solution framework in which we approximate the belief dynamics using an extended Kalman filter and represent the value function by a quadratic function that is valid in the vicinity of a nominal trajectory through belief space. Using a belief space variant of iterative LQG (iLQG), our approach iterates with second-order convergence towards a linear control policy over the belief space that is locally optimal with respect to a user-defined cost function. Unlike previous work, our approach does not assume maximum-likelihood observations, does not assume fixed estimator or control gains, takes into account obstacles in the environment, and does not require discretization of the state and action spaces. The running time of the algorithm is polynomial (O[n6]) in the dimension n of the state space. We demonstrate the potential of our approach in simulation for holonomic and non-holonomic robots maneuvering through environments with obstacles with noisy and partial sensing and with non-linear dynamics and observation models.", "", "This paper introduces the Point-Based Value Iteration (PBVI) algorithm for POMDP planning. 
PBVI approximates an exact value iteration solution by selecting a small set of representative belief points and then tracking the value and its derivative for those points only. By using stochastic trajectories to choose belief points, and by maintaining only one value hyper-plane per point, PBVI successfully solves large problems: we present results on a robotic laser tag problem as well as three test domains from the literature.", "We cast the partially observable control problem as a fully observable underactuated stochastic control problem in belief space and apply standard planning and control techniques. One of the difficulties of belief space planning is modeling the stochastic dynamics resulting from unknown future observations. The core of our proposal is to define deterministic beliefsystem dynamics based on an assumption that the maximum likelihood observation (calculated just prior to the observation) is always obtained. The stochastic effects of future observations are modeled as Gaussian noise. Given this model of the dynamics, two planning and control methods are applied. In the first, linear quadratic regulation (LQR) is applied to generate policies in the belief space. This approach is shown to be optimal for linearGaussian systems. In the second, a planner is used to find locally optimal plans in the belief space. We propose a replanning approach that is shown to converge to the belief space goal in a finite number of replanning steps. These approaches are characterized in the context of a simple nonlinear manipulation problem where a planar robot simultaneously locates and grasps an object.", "As sampling-based motion planners become faster, they can be reexecuted more frequently by a robot during task execution to react to uncertainty in robot motion, obstacle motion, sensing noise, and uncertainty in the robot's kinematic model. 
We investigate and analyze high-frequency replanning (HFR) where, during each period, fast sampling-based motion planners are executed in parallel as the robot simultaneously executes the first action of the best motion plan from the previous period. We consider discrete-time systems with stochastic nonlinear (but linearizable) dynamics and observation models with noise drawn from zero mean Gaussian distributions. The objective is to maximize the probability of success (i.e., avoid collision with obstacles and reach the goal) or to minimize path length subject to a lower bound on the probability of success. We show that, as parallel computation power increases, HFR offers asymptotic optimality for these objectives during each period for goal-oriented problems. We then demonstrate the effectiveness of HFR for holonomic and nonholonomic robots including car-like vehicles and steerable medical needles.", "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 x 10 battleship and partially observable PacMan, with approximately 1018 and 1056 states respectively. 
Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.", "", "", "", "", "", "We investigate the complexity of the classical problem of optimal policy computation in Markov decision processes. All three variants of the problem finite horizon, infinite horizon discounted, and infinite horizon average cost were known to be solvable in polynomial time by dynamic programming finite horizon problems, linear programming, or successive approximation techniques infinite horizon. We show that they are complete for P, and therefore most likely cannot be solved by highly parallel algorithms. We also show that, in contrast, the deterministic cases of all three problems can be solved very fast in parallel. The version with partially observed states is shown to be PSPACE-complete, and thus even less likely to be solved in polynomial time than the NP-complete problems; in fact, we show that, most likely, it is not possible to have an efficient on-line implementation involving polynomial time on-line computations and memory of an optimal policy, even if an arbitrary amount of precomputation is allowed. Finally, the variant of the problem in which there are no observations is shown to be NP-complete." ] }
1609.04718
2523597572
Android is the most widely used smartphone OS, with an 82.8% market share in 2015. It is therefore the system most widely targeted by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators leads to new issues: malware may detect emulation and, as a result, refrain from executing its payload to evade the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost. To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset. We show that it executes on average 13.52% more basic blocks than the Monkey program.
Such dynamic analysis systems have been designed by academic researchers since 2010 @cite_9 to circumvent the limitations of static analysis - namely, code morphism and obfuscation. Since that time, many systems have been released. For this study we have built a classification of a subset of these systems, presented in Figure and Figure . The classification takes into account three categories: the dynamic features collected by the analysis, the strategies used to automate application testing, and finally the use of real devices over the history of dynamic analysis systems. We discuss the results in the following sub-sections.
{ "cite_N": [ "@cite_9" ], "mid": [ "1997201541" ], "abstract": [ "Smartphones are steadily gaining popularity, creating new application areas as their capabilities increase in terms of computational power, sensors and communication. Emerging new features of mobile devices give opportunity to new threats. Android is one of the newer operating systems targeting smartphones. While being based on a Linux kernel, Android has unique properties and specific limitations due to its mobile nature. This makes it harder to detect and react upon malware attacks if using conventional techniques. In this paper, we propose an Android Application Sandbox (AASandbox) which is able to perform both static and dynamic analysis on Android programs to automatically detect suspicious applications. Static analysis scans the software for malicious patterns without installing it. Dynamic analysis executes the application in a fully isolated environment, i.e. sandbox, which intervenes and logs low-level interactions with the system for further analysis. Both the sandbox and the detection algorithms can be deployed in the cloud, providing a fast and distributed detection of suspicious software in a mobile software store akin to Google's Android Market. Additionally, AASandbox might be used to improve the efficiency of classical anti-virus applications available for the Android operating system." ] }
1609.04718
2523597572
Android is the most widely used smartphone OS, with an 82.8% market share in 2015. It is therefore the system most widely targeted by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators leads to new issues: malware may detect emulation and, as a result, refrain from executing its payload to evade the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost. To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset. We show that it executes on average 13.52% more basic blocks than the Monkey program.
The use of real devices for dynamic analysis started with @cite_27 , a crowdsourcing-based analysis. Whereas this approach gives good results, one cannot ask users to execute real malware on their personal devices. This system can therefore only be an option when a trained machine learning algorithm is already available, in order to find malware in the wild.
{ "cite_N": [ "@cite_27" ], "mid": [ "1990649188" ], "abstract": [ "The sharp increase in the number of smartphones on the market, with the Android platform posed to becoming a market leader makes the need for malware analysis on this platform an urgent issue. In this paper we capitalize on earlier approaches for dynamic analysis of application behavior as a means for detecting malware in the Android platform. The detector is embedded in a overall framework for collection of traces from an unlimited number of real users based on crowdsourcing. Our framework has been demonstrated by analyzing the data collected in the central server using two types of data sets: those from artificial malware created for test purposes, and those from real malware found in the wild. The method is shown to be an effective means of isolating the malware and alerting the users of a downloaded malware. This shows the potential for avoiding the spreading of a detected malware to a larger community." ] }
1609.04723
2523427831
We run experiments showing that algorithm clarans (, 2005) finds better K-medoids solutions than the Voronoi iteration algorithm. This finding, along with the similarity between the Voronoi iteration algorithm and Lloyd's K-means algorithm, suggests that clarans may be an effective K-means initializer. We show that this is the case, with clarans outperforming other seeding algorithms on 23/23 datasets with a mean decrease over K-means++ of 30% for initialization mse and 3% for final mse. We describe how the complexity and runtime of clarans can be improved, making it a viable initialization scheme for large datasets.
Alternatives to have been considered which resemble the swapping approach of . One is by @cite_5 , where points are randomly selected and reassigned. @cite_4 show how this heuristic can result in better clustering when there are few points per cluster.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2142827986", "1916116059" ], "abstract": [ "This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification,", "Hartigan’s method for k-means clustering is the following greedy heuristic: select a point, and optimally reassign it. This paper develops two other formulations of the heuristic, one leading to a number of consistency properties, the other showing that the data partition is always quite separated from the induced Voronoi partition. A characterization of the volume of this separation is provided. Empirical tests verify not only good optimization performance relative to Lloyd’s method, but also good running time." ] }
1609.04723
2523427831
We run experiments showing that algorithm clarans (, 2005) finds better K-medoids solutions than the Voronoi iteration algorithm. This finding, along with the similarity between the Voronoi iteration algorithm and Lloyd's K-means algorithm, suggests that clarans may be an effective K-means initializer. We show that this is the case, with clarans outperforming other seeding algorithms on 23/23 datasets with a mean decrease over K-means++ of 30% for initialization mse and 3% for final mse. We describe how the complexity and runtime of clarans can be improved, making it a viable initialization scheme for large datasets.
The work most similar to in the @math -means setting is that of @cite_3 , where it is indirectly shown that finds a solution within a factor of 25 of the optimal @math -medoids clustering. The local search approximation algorithm they propose is a hybrid of and , alternating between the two, with sampling from a kd-tree during the -like step. Their source code includes an implementation of an algorithm they call 'Swap', which is exactly the algorithm of @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_3" ], "mid": [ "1575476631", "2199495299" ], "abstract": [ "Spatial data mining is the discovery of interesting relationships and characteristics that may exist implicitly in spatial databases. In this paper, we explore whether clustering methods have a role to play in spatial data mining. To this end, we develop a new clustering method called CLARANS which is based on randomized search. We also develop two spatial data mining algorithms that use CLARANS. Our analysis and experiments show that with the assistance of CLARANS, these two algorithms are very effective and can lead to discoveries that are difficult to find with current spatial data mining algorithms. Furthermore, experiments conducted to compare the performance of CLARANS with that of existing clustering methods show that CLARANS is the most efficient.", "In k-means clustering we are given a set of n data points in d-dimensional space Rd and an integer k, and the problem is to determine a set of k points in Rd, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the extremely high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance. We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+ε)-approximation algorithm. We show that the approximation factor is almost tight, by giving an example for which the algorithm achieves an approximation factor of (9-ε). 
To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice." ] }
1609.04730
2521943001
This paper describes the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-robot research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which is what the Robotarium is remedying by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and discusses the considerations one must take when making complex hardware remotely accessible. In particular, safety must be built into the system already at the design phase without overly constraining what coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.
In this section, we briefly survey the field of remote access testbeds that have been successful in their respective domains and broadly categorize them along the following dimensions: multi-robot testbeds, sensor network testbeds, and remotely accessible educational tools. A comprehensive overview can be found in @cite_9 @cite_1 @cite_14 and the references therein.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_1" ], "mid": [ "1981216120", "2014819156", "2065500547" ], "abstract": [ "The use of computers and multimedia, as well as the World Wide Web and new communication technologies, allows new forms of teaching and learning such as distance learning, blended learning, use of virtual libraries and many more. The herewith discussed remotely controlled laboratory (RCL) project shall offer an additional contribution. The basic idea is for a user to connect via the Internet with a computer from place A to a real experiment carried out in place B. An overview of our technical and didactical developments as well as an outlook on future plans is presented. Currently, about ten RCLs have been implemented. The essential characteristics of an RCL are the intuitive use and interactivity (operating the technical parameters), the possibility of different points of view of the ongoing experiment thanks to web cams and the quickest possible transfer of the data measured by the user. A reasonable use of sensibly chosen real experiments as remote labs allows a new form of homework and exercises, as well as project work and the execution of experiments, which usually would be a teacher's prerogative only.", "With the development of new technologies, these last years have witnessed the emergence of a new paradigm: the Internet of Things (IoT) and of the physical world. We are now able to communicate and interact with our surrounding environment through the use of multiple tiny sensors, RFID technologies or small wireless robots. This allows a set of new applications and usages to be envisioned ranging from logistic and traceability purposes to emergency and rescue operations going through the monitoring of volcanos or forest fires. 
However, all this comes with several technical and scientific issues like how to ensure the reliability of wireless communications in disturbed environments, how to manage efficiently the low resources (energy, memory, etc.) or how to set a safe and sustainable (both hardware and software) platform maintenance. All these issues are addressed by researchers all around the world but solutions designed for IoT need to face real experimentations to be validated. To ease such experimentations for IoT, several experimental testbeds have been deployed offering diverse and heterogeneous services and tools. In this article, we study the different requirements and features such facilities should offer. We survey the different experimental facilities currently available for the community, describe their characteristics. In particular, we detail the different hardware used for sensor networks and robot platforms and the scope of services the different facilities offer with a specific focus on testbeds which enable experimentations with mobility. We expect this survey assist a potential user to easily choose the one to use regarding his own needs. Finally, we identify existing gaps and difficulties and investigate new directions for such facilities.", "The growing interest in ubiquitous robotics has originated in the last years the development of a high variety of testbeds. This paper presents a survey on existing ubiquitous robotics testbeds comprising networked mobile robots and networks of distributed sensors, cameras and smartphones, among others. The survey provides an insight into the testbed design, internal behavior and use, identifying trends and existing gaps and proposing guidelines for testbed developers. The level of interoperability among different ubiquitous robotics technologies is used as the main conducting criterion of the survey. Other features analyzed include testbed architectures, target experiments and usability tools." ] }
1609.04730
2521943001
This paper describes the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-robot research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which is what the Robotarium is remedying by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and discusses the considerations one must take when making complex hardware remotely accessible. In particular, safety must be built into the system already at the design phase without overly constraining what coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.
Numerous remotely accessible multi-robot testbeds with a focus on robot mobility have been proposed over the years - for example the Mobile Emulab @cite_0 , or the HoTDeC testbed @cite_20 . A comprehensive list of multi-robot testbeds can be found in @cite_1 . Generally speaking, testbeds in this domain contain robots that occupy a significantly larger footprint and are more expensive than the Robotarium robots, which is an inherent obstruction to using large numbers of robots. A GRITSBot can be built for approximately @math 100 - see the bill of materials at www.robotarium.org. The main difference between these testbeds and the Robotarium, however, is that the Robotarium explicitly addresses the robot safety aspect of remote accessibility such that provable damage avoidance of the Robotarium's physical assets is guaranteed even with untrustworthy or malicious users. The testbeds mentioned above in principle allow remote access for cases where a user can be trusted not to damage the hardware but do not explicitly address the safety issues involved once users have been approved for use. Unlike these testbeds, the Robotarium is inherently safe to operate because built-in (online and offline) safety measures prevent users from causing accidental or purposeful damage to the robots.
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_20" ], "mid": [ "2096252761", "2065500547", "2184128959" ], "abstract": [ "Simulation has been the dominant research method- ology in wireless and sensor networking. When mobility is added, real-world experimentation is especially rare. However, it is becoming clear that simulation models do not sufficiently capture radio and sensor irregularity in a complex, real-world environment, especially indoors. Unfortunately, the high labor and equipment costs of truly mobile experimental infrastructure present high barriers to such experimentation. We describe our experience in creating a testbed to lower those barriers. We have extended the Emulab network testbed software to provide the first remotely-accessible mobile wireless and sensor testbed. Robots carry motes and single board computers through a fixed indoor field of sensor-equipped motes, all running the user's selected software. In real-time, interactively or driven by a script, remote users can position the robots, control all the computers and network interfaces, run arbitrary programs, and log data. Our mobile testbed provides simple path planning, a vision-based tracking system accurate to 1 cm, live maps, and webcams. Precise positioning and automation allow quick and painless evaluation of location and mobility effects on wireless protocols, location algorithms, and sensor-driven applications. The system is robust enough that it is deployed for public use. We present the design and implementation of our mobile testbed, evaluate key aspects of its performance, and describe a few experiments demonstrating its generality and power.", "The growing interest in ubiquitous robotics has originated in the last years the development of a high variety of testbeds. This paper presents a survey on existing ubiquitous robotics testbeds comprising networked mobile robots and networks of distributed sensors, cameras and smartphones, among others. 
The survey provides an insight into the testbed design, internal behavior and use, identifying trends and existing gaps and proposing guidelines for testbed developers. The level of interoperability among different ubiquitous robotics technologies is used as the main conducting criterion of the survey. Other features analyzed include testbed architectures, target experiments and usability tools.", "" ] }
1609.04730
2521943001
This paper describes the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-robot research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which is what the Robotarium is remedying by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and discusses the considerations one must take when making complex hardware remotely accessible. In particular, safety must be built into the system already at the design phase without overly constraining what coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.
A number of testbeds have originated in the educational domain. For example, the Robotic Programming Network (RPN) @cite_3 makes a single humanoid robot remotely accessible, while Robotnacka @cite_21 provides access to three mobile robots. A comprehensive overview of other educational testbeds can be found in @cite_9 . While these platforms provide the infrastructure required for remote access, most educational testbeds, in contrast to the Robotarium, contain small numbers of robots and are not explicitly designed to be research platforms for multi-robot or swarm robotics experiments.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_3" ], "mid": [ "1981216120", "2088009805", "1528911767" ], "abstract": [ "The use of computers and multimedia, as well as the World Wide Web and new communication technologies, allows new forms of teaching and learning such as distance learning, blended learning, use of virtual libraries and many more. The herewith discussed remotely controlled laboratory (RCL) project shall offer an additional contribution. The basic idea is for a user to connect via the Internet with a computer from place A to a real experiment carried out in place B. An overview of our technical and didactical developments as well as an outlook on future plans is presented. Currently, about ten RCLs have been implemented. The essential characteristics of an RCL are the intuitive use and interactivity (operating the technical parameters), the possibility of different points of view of the ongoing experiment thanks to web cams and the quickest possible transfer of the data measured by the user. A reasonable use of sensibly chosen real experiments as remote labs allows a new form of homework and exercises, as well as project work and the execution of experiments, which usually would be a teacher's prerogative only.", "Robotnacka is an autonomous drawing mobile robot, designed for eaching beginners in the Logo programming language. It can also be used as an experimental platform, in our case in a remotely accessible robotic laboratory with the possibility to control the robots via the Internet. In addition to a basic version of the robot a version equipped with a gripper is available too, one with a wireless camera, and one with additional ultrasonic distance sensors. The laboratory is available on-line permanently and provides a simple way to incorporate robotics in teaching mathematics, programming and other subjects. The laboratory has been in use several years. 
We provide description of its functionality and summarize our experience.", "RPN (Robotic Programming Network) is an initiative to bring existing remote robot laboratories to a new dimension, by adding the flexibility and power of writing ROS code in an Internet browser and running it in the remote robot with a single click. The code is executed in the robot server at full speed, i.e. without any communication delay, and the output of the process is returned back. Built upon Robot Web Tools, RPN works out-of-the-box in any ROS-based robot or simulator. This paper presents the core functionality of RPN in the context of a web-enabled ROS system, its possibilities for remote education and training, and some experimentation with simulators and real robots in which we have integrated the tool in a Moodle environment, creating some programming courses and make it open to researchers and students (http: robotprogramming.uji.es)." ] }
1609.04301
2519781924
TristouNet is a neural network architecture based on Long Short-Term Memory recurrent networks, meant to project speech sequences into a fixed-dimensional euclidean space. Thanks to the triplet loss paradigm used for training, the resulting sequence embeddings can be compared directly with the euclidean distance, for speaker comparison purposes. Experiments on short (between 500ms and 5s) speech turn comparison and speaker change detection show that TristouNet brings significant improvements over the current state-of-the-art techniques for both tasks.
Recently, LSTMs have been particularly successful for automatic speech recognition @cite_1 . They have also been applied to speaker adaptation for acoustic modelling @cite_6 @cite_11 . However, to the best of our knowledge, this is the first time they are used for an actual speaker comparison task, and for speaker turn embedding.
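The triplet loss paradigm mentioned in the abstract can be sketched in a few lines of numpy. The margin value 0.2 and the function names are illustrative choices, and the LSTM that actually produces the embeddings in TristouNet is omitted here; the sketch only shows how the hinge-form loss compares euclidean distances between embeddings.

```python
import numpy as np

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return np.linalg.norm(a - b)

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge-form triplet loss: push the anchor-positive distance to be
    # smaller than the anchor-negative distance by at least `margin`.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)
```

A triplet where the positive is already much closer than the negative incurs zero loss; swapping the roles of positive and negative yields a positive loss, which is what drives same-speaker embeddings together during training.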
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_11" ], "mid": [ "2950689855", "2403524927", "2406264770" ], "abstract": [ "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture specializing in modeling long-range temporal dynamics. On acoustic modeling tasks, LSTM-RNNs have shown better performance than DNNs and conventional RNNs. In this paper, we conduct an extensive study on speaker adaptation of LSTM-RNNs. Speaker adaptation helps to reduce the mismatch between acoustic models and testing speakers. We have two main goals for this study. First, on a benchmark dataset, the existing DNN adaptation techniques are evaluated on the adaptation of LSTM-RNNs. We observe that LSTMRNNs can be effectively adapted by using speaker-adaptive (SA) front-end, or by inserting speaker-dependent (SD) layers. Second, we propose two adaptation approaches that implement the SD-layer-insertion idea specifically for LSTM-RNNs. 
Using these approaches, speaker adaptation improves word error rates by 3-4 relative over a strong LSTM-RNN baseline. This improvement is enlarged to 6-7 if we exploit SA features for further adaptation.", "Long Short-Term Memory (LSTM) is a particular type of recurrent neural network (RNN) that can model long term temporal dynamics. Recently it has been shown that LSTM-RNNs can achieve higher recognition accuracy than deep feed-forword neural networks (DNNs) in acoustic modelling. However, speaker adaption for LSTM-RNN based acoustic models has not been well investigated. In this paper, we study the LSTM-RNN speaker-aware training that incorporates the speaker information during model training to normalise the speaker variability. We first present several speaker-aware training architectures, and then empirically evaluate three types of speaker representation: I-vectors, bottleneck speaker vectors and speaking rate. Furthermore, to factorize the variability in the acoustic signals caused by speakers and phonemes respectively, we investigate the speaker-aware and phone-aware joint training under the framework of multi-task learning. In AMI meeting speech transcription task, speaker-aware training of LSTM-RNNs reduces word error rates by 6.5 relative to a very strong LSTM-RNN baseline, which uses FMLLR features." ] }
1609.04235
2626364800
The authors and Fischer recently proved that any hereditary property of two-dimensional matrices (where the row and column order is not ignored) over a finite alphabet is testable with a constant number of queries, by establishing the following (ordered) matrix removal lemma: For any finite alphabet @math , any hereditary property @math of matrices over @math , and any @math , there exists @math such that for any matrix @math over @math that is @math -far from satisfying @math , most of the @math submatrices of @math do not satisfy @math . Here being @math -far from @math means that one needs to modify at least an @math -fraction of the entries of @math to make it satisfy @math . However, in the above general removal lemma, @math grows very fast as a function of @math , even when @math is characterized by a single forbidden submatrix. In this work we establish much more efficient removal lemmas for several special cases of the above problem. In particular, we show the following: For any fixed @math binary matrix @math and any @math there exists @math polynomial in @math , such that for any binary matrix @math in which less than a @math -fraction of the @math submatrices are equal to @math , there exists a set of less than an @math -fraction of the entries of @math that intersects every @math -copy in @math . We generalize the work of Alon, Fischer and Newman [SICOMP'07] and make progress towards proving one of their conjectures. The proofs combine their efficient conditional regularity lemma for matrices with additional combinatorial and probabilistic ideas.
The induced graph removal lemma was later extended to infinite families @cite_5 , stating the following. For any finite or infinite family @math of graphs and any @math there exists @math with the following property. If an @math -vertex graph @math is @math -far from @math -freeness, then with probability at least @math , a random induced subgraph of @math on @math vertices contains a graph from @math . Note that when @math is finite, the statement of the infinite induced removal lemma is indeed equivalent to that of the finite version of the induced removal lemma.
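For concreteness, the statement above can be restated in symbols; the symbol names and the success probability $2/3$ are conventional illustrative choices, not quoted verbatim from @cite_5.

```latex
% Illustrative restatement; symbol names and the constant 2/3 are
% conventional choices, not quoted from the cited work.
\textbf{Infinite induced removal lemma (informal).}
For every (finite or infinite) family $\mathcal{F}$ of graphs and every
$\varepsilon > 0$ there exists $q = q_{\mathcal{F}}(\varepsilon)$ such that:
if an $n$-vertex graph $G$ is $\varepsilon$-far from induced
$\mathcal{F}$-freeness, then a uniformly random induced subgraph of $G$
on $q$ vertices contains an induced copy of some $F \in \mathcal{F}$
with probability at least $2/3$.
```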
{ "cite_N": [ "@cite_5" ], "mid": [ "1985538939" ], "abstract": [ "The problem of characterizing all the testable graph properties is considered by many to be the most important open problem in the area of property testing. Our main result in this paper is a solution of an important special case of this general problem: Call a property tester oblivious if its decisions are independent of the size of the input graph. We show that a graph property @math has an oblivious one-sided error tester if and only if @math is semihereditary. We stress that any “natural” property that can be tested (either with one-sided or with two-sided error) can be tested by an oblivious tester. In particular, all the testers studied thus far in the literature were oblivious. Our main result can thus be considered as a precise characterization of the natural graph properties, which are testable with one-sided error. One of the main technical contributions of this paper is in showing that any hereditary graph property can be tested with one-sided error. This general result contains as a special case all the previous results about testing graph properties with one-sided error. More importantly, as a special case of our main result, we infer that some of the most well-studied graph properties, both in graph theory and computer science, are testable with one-sided error. Some of these properties are the well-known graph properties of being perfect, chordal, interval, comparability, permutation, and more. None of these properties was previously known to be testable." ] }
1609.04235
2626364800
The authors and Fischer recently proved that any hereditary property of two-dimensional matrices (where the row and column order is not ignored) over a finite alphabet is testable with a constant number of queries, by establishing the following (ordered) matrix removal lemma: For any finite alphabet @math , any hereditary property @math of matrices over @math , and any @math , there exists @math such that for any matrix @math over @math that is @math -far from satisfying @math , most of the @math submatrices of @math do not satisfy @math . Here being @math -far from @math means that one needs to modify at least an @math -fraction of the entries of @math to make it satisfy @math . However, in the above general removal lemma, @math grows very fast as a function of @math , even when @math is characterized by a single forbidden submatrix. In this work we establish much more efficient removal lemmas for several special cases of the above problem. In particular, we show the following: For any fixed @math binary matrix @math and any @math there exists @math polynomial in @math , such that for any binary matrix @math in which less than a @math -fraction of the @math submatrices are equal to @math , there exists a set of less than an @math -fraction of the entries of @math that intersects every @math -copy in @math . We generalize the work of Alon, Fischer and Newman [SICOMP'07] and make progress towards proving one of their conjectures. The proofs combine their efficient conditional regularity lemma for matrices with additional combinatorial and probabilistic ideas.
Very recently, the authors and Fischer @cite_26 generalized the (finite and infinite) induced graph removal lemma by obtaining an ordered version of it, and also showed that the same type of proof can be used to obtain a removal lemma for two-dimensional matrices (with row and column order) over a finite alphabet; this is the theorem stated above.
{ "cite_N": [ "@cite_26" ], "mid": [ "2611059716" ], "abstract": [ "We consider properties of edge-colored vertex-ordered graphs, i.e., graphs with a totally ordered vertex set and a finite set of possible edge colors. We show that any hereditary property of such graphs is strongly testable, i.e., testable with a constant number of queries. We also explain how the proof can be adapted to show that any hereditary property of @math -dimensional matrices over a finite alphabet (where row and column order is not ignored) is strongly testable. The first result generalizes the result of Alon and Shapira [FOCS'05, SICOMP'08], who showed that any hereditary graph property (without vertex order) is strongly testable. The second result answers and generalizes a conjecture of Alon, Fischer and Newman [SICOMP'07] concerning testing of matrix properties. The testability is proved by establishing a removal lemma for vertex-ordered graphs. It states that for any finite or infinite family @math of forbidden vertex-ordered graphs, and any @math , there exist @math and @math so that any vertex-ordered graph which is @math -far from being @math -free contains at least @math copies of some @math (with the correct vertex order) where @math . The proof bridges the gap between techniques related to the regularity lemma, used in the long chain of papers investigating graph testing, and string testing techniques. Along the way we develop a Ramsey-type lemma for @math -partite graphs with \"undesirable\" edges, stating that one can find a Ramsey-type structure in such a graph, in which the density of the undesirable edges is not much higher than the density of those edges in the graph." ] }
1609.04235
2626364800
The authors and Fischer recently proved that any hereditary property of two-dimensional matrices (where the row and column order is not ignored) over a finite alphabet is testable with a constant number of queries, by establishing the following (ordered) matrix removal lemma: For any finite alphabet @math , any hereditary property @math of matrices over @math , and any @math , there exists @math such that for any matrix @math over @math that is @math -far from satisfying @math , most of the @math submatrices of @math do not satisfy @math . Here being @math -far from @math means that one needs to modify at least an @math -fraction of the entries of @math to make it satisfy @math . However, in the above general removal lemma, @math grows very fast as a function of @math , even when @math is characterized by a single forbidden submatrix. In this work we establish much more efficient removal lemmas for several special cases of the above problem. In particular, we show the following: For any fixed @math binary matrix @math and any @math there exists @math polynomial in @math , such that for any binary matrix @math in which less than a @math -fraction of the @math submatrices are equal to @math , there exists a set of less than an @math -fraction of the entries of @math that intersects every @math -copy in @math . We generalize the work of Alon, Fischer and Newman [SICOMP'07] and make progress towards proving one of their conjectures. The proofs combine their efficient conditional regularity lemma for matrices with additional combinatorial and probabilistic ideas.
We finish by mentioning several other relevant removal lemma type results. Removal lemmas for vectors (i.e., one-dimensional matrices where the order is important) are generally easier to obtain; in particular, a removal lemma for vectors over a fixed finite alphabet can be derived from a removal lemma for regular languages proved in @cite_13 . A removal lemma for partially ordered sets with a grid-like structure, which can be seen as a generalization of the removal lemma for vectors, can be deduced from a result of Fischer and Newman in @cite_23 , where they mention that this problem for submatrices is more complicated and not understood. Recently, Ben-Eliezer, Korman and Reichman @cite_19 obtained a removal lemma for patterns in multi-dimensional matrices. A pattern must be taken from consecutive locations, whereas in our case the rows and columns of a submatrix need not be consecutive. The case of patterns behaves very differently than that of submatrices, and in particular, in the removal lemma for patterns the parameters are linearly related (for any alphabet size), unlike the case of submatrices (in which, for alphabets of @math letters or more, the relation cannot be polynomial).
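The distinction drawn above between pattern copies (consecutive locations) and submatrix copies (arbitrary rows and columns, keeping their order) can be made concrete with a brute-force sketch; the function names are ours, and the enumeration is exponential, so this is purely illustrative.

```python
from itertools import combinations

def count_pattern_copies(M, P):
    # Copies of P as a *pattern*: rows and columns must be consecutive.
    n, m = len(M), len(M[0])
    k, l = len(P), len(P[0])
    return sum(
        all(M[i + a][j + b] == P[a][b] for a in range(k) for b in range(l))
        for i in range(n - k + 1) for j in range(m - l + 1)
    )

def count_submatrix_copies(M, P):
    # Copies of P as a *submatrix*: rows and columns need not be
    # consecutive, but must keep their relative order.
    n, m = len(M), len(M[0])
    k, l = len(P), len(P[0])
    return sum(
        all(M[rows[a]][cols[b]] == P[a][b] for a in range(k) for b in range(l))
        for rows in combinations(range(n), k)
        for cols in combinations(range(m), l)
    )
```

A matrix can easily contain many submatrix copies of a forbidden 2x2 matrix while containing no consecutive pattern copy at all, which is one reason the two settings behave so differently.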
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_23" ], "mid": [ "2609757535", "2036501955", "2091126969" ], "abstract": [ "Understanding the local behaviour of structured multi-dimensional data is a fundamental problem in various areas of computer science. As the amount of data is often huge, it is desirable to obtain sublinear time algorithms, and specifically property testers, to understand local properties of the data. We focus on the natural local problem of testing pattern freeness: given a large @math -dimensional array @math and a fixed @math -dimensional pattern @math over a finite alphabet, we say that @math is @math -free if it does not contain a copy of the forbidden pattern @math as a consecutive subarray. The distance of @math to @math -freeness is the fraction of entries of @math that need to be modified to make it @math -free. For any @math and any large enough pattern @math over any alphabet, other than a very small set of exceptional patterns, we design a tolerant tester that distinguishes between the case that the distance is at least @math and the case that it is at most @math , with query complexity and running time @math , where @math and @math depend only on @math . To analyze the testers we establish several combinatorial results, including the following @math -dimensional modification lemma, which might be of independent interest: for any large enough pattern @math over any alphabet (excluding a small set of exceptional patterns for the binary case), and any array @math containing a copy of @math , one can delete this copy by modifying one of its locations without creating new @math -copies in @math . Our results address an open question of Fischer and Newman, who asked whether there exist efficient testers for properties related to tight substructures in multi-dimensional structured data. 
They serve as a first step towards a general understanding of local properties of multi-dimensional arrays, as any such property can be characterized by a fixed family of forbidden patterns.", "We continue the study of combinatorial property testing, initiated by Goldreich, Goldwasser, and Ron in [J. ACM, 45 (1998), pp. 653--750]. The subject of this paper is testing regular languages. Our main result is as follows. For a regular language @math and an integer n there exists a randomized algorithm which always accepts a word w of length n if @math and rejects it with high probability if @math has to be modified in at least @math positions to create a word in L. The algorithm queries @math bits of w. This query complexity is shown to be optimal up to a factor polylogarithmic in @math . We also discuss the testability of more complex languages and show, in particular, that the query complexity required for testing context-free languages cannot be bounded by any function of @math . The problem of testing regular languages can be viewed as a part of a very general approach, seeking to probe testability of properties defined by logical means.", "Combinatorial property testing, initiated by Rubinfeld and Sudan [23] and formally defined by Goldreich, Goldwasser and Ron in [18], deals with the following relaxation of decision problems: Given a fixed property P and an input f, distinguish between the case that f satisfies P, and the case that no input that differs from f in less than some fixed fraction of the places satisfies P. An (e, q)-test for P is a randomized algorithm that queries at most q places of an input f and distinguishes with probability 2 3 between the case that f has the property and the case that at least an e-fraction of the places of f need to be changed in order for it to have the property. Here we concentrate on labeled, d-dimensional grids, where the grid is viewed as a partially ordered set (poset) in the standard way (i.e. 
as a product order of total orders). The main result here presents an (e, poly(1 e))-test for every property of 0 1 labeled, d-dimensional grids that is characterized by a finite collection of forbidden induced posets. Such properties include the “monotonicity” property studied in [9,8,13], other more complicated forbidden chain patterns, and general forbidden poset patterns. We also present a (less efficient) test for such properties of labeled grids with larger fixed size alphabets. All the above tests have in addition a 1-sided error probability. This class of properties is related to properties that are defined by certain first order formulae with no quantifier alternation over the syntax containing the grid order relations. We also show that with one quantifier alternation, a certain property can be defined, for which no test with query complexity of O(n 1 4) (for a small enough fixed e) exists. The above results identify new classes of properties that are defined by means of restricted logics, and that are efficiently testable. They also lay out a platform that bridges some previous results." ] }
1609.04293
2522084552
We propose the concept of a system algebra with a parallel composition operation and an interface connection operation, and formalize composition-order invariance, which postulates that the order of composing and connecting systems is irrelevant, a generalized form of associativity. Composition-order invariance explicitly captures a common property that is implicit in any context where one can draw a figure (hiding the drawing order) of several connected systems, which appears in many scientific contexts. This abstract algebra captures settings where one is interested in the behavior of a composed system in an environment and wants to abstract away anything internal not relevant for the behavior. This may include physical systems, electronic circuits, or interacting distributed systems. One specific such setting, of special interest in computer science, are functional system algebras, which capture, in the most general sense, any type of system that takes inputs and produces outputs depending on the inputs, and where the output of a system can be the input to another system. The behavior of such a system is uniquely determined by the function mapping inputs to outputs. We consider several instantiations of this very general concept. In particular, we show that Kahn networks form a functional system algebra and prove their composition-order invariance. Moreover, we define a functional system algebra of causal systems, characterized by the property that inputs can only influence future outputs, where an abstract partial order relation captures the notion of “later”. This system algebra is also shown to be composition-order invariant and appropriate instantiations thereof allow to model and analyze systems that depend on time.
Hardy has developed an abstract theory in which composition-order invariance (there called order independence) plays an important role @cite_9 . That work, however, focuses on physical systems. Lee and Sangiovanni-Vincentelli @cite_23 also introduce an abstract system model, but it is specific to systems that involve some form of time and does not follow an algebraic approach.
{ "cite_N": [ "@cite_9", "@cite_23" ], "mid": [ "1689232663", "2119355229" ], "abstract": [ "We develop a theory for describing composite objects in physics. These can be static objects, such as tables, or things that happen in spacetime (such as a region of spacetime with fields on it regarded as being composed of smaller such regions joined together). We propose certain fundamental axioms which, it seems, should be satisfied in any theory of composition. A key axiom is the order independence axiom which says we can describe the composition of a composite object in any order. Then we provide a notation for describing composite objects that naturally leads to these axioms being satisfied. In any given physical context we are interested in the value of certain properties for the objects (such as whether the object is possible, what probability it has, how wide it is, and so on). We associate a generalized state with an object. This can be used to calculate the value of those properties we are interested in for for this object. We then propose a certain principle, the composition principle, which says that we can determine the generalized state of a composite object from the generalized states for the components by means of a calculation having the same structure as the description of the generalized state. The composition principle provides a link between description and prediction.", "We give a denotational framework (a \"meta model\") within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. 
Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems." ] }
1609.04293
2522084552
We propose the concept of a system algebra with a parallel composition operation and an interface connection operation, and formalize composition-order invariance, which postulates that the order of composing and connecting systems is irrelevant, a generalized form of associativity. Composition-order invariance explicitly captures a common property that is implicit in any context where one can draw a figure (hiding the drawing order) of several connected systems, which appears in many scientific contexts. This abstract algebra captures settings where one is interested in the behavior of a composed system in an environment and wants to abstract away anything internal not relevant for the behavior. This may include physical systems, electronic circuits, or interacting distributed systems. One specific such setting, of special interest in computer science, are functional system algebras, which capture, in the most general sense, any type of system that takes inputs and produces outputs depending on the inputs, and where the output of a system can be the input to another system. The behavior of such a system is uniquely determined by the function mapping inputs to outputs. We consider several instantiations of this very general concept. In particular, we show that Kahn networks form a functional system algebra and prove their composition-order invariance. Moreover, we define a functional system algebra of causal systems, characterized by the property that inputs can only influence future outputs, where an abstract partial order relation captures the notion of “later”. This system algebra is also shown to be composition-order invariant and appropriate instantiations thereof allow to model and analyze systems that depend on time.
Closely related to our abstract system algebras are block algebras as introduced by de Alfaro and Henzinger in the context of interface theories @cite_17 . Our systems and interfaces are there called blocks and ports, respectively, and they also define parallel composition and port connection operations. A major difference compared to our system algebras is that port connections do not hide the connected ports. Moreover, while de Alfaro and Henzinger require parallel composition to be commutative and associative, they do not define a notion corresponding to our composition-order invariance, i.e., their port connections do not necessarily commute with other port connections or with parallel composition.
{ "cite_N": [ "@cite_17" ], "mid": [ "1756169151" ], "abstract": [ "We classify component-based models of computation into component models and interface models. A component model specifies for each component howthe component behaves in an arbitrary environment; an interface model specifies for each component what the component expects from the environment. Component models support compositional abstraction, and therefore component-based verification. Interface models support compositional refinement, and therefore componentbased design. Many aspects of interface models, such as compatibility and refinement checking between interfaces, are properly viewed in a gametheoretic setting, where the input and output values of an interface are chosen by different players." ] }
1609.04293
2522084552
We propose the concept of a system algebra with a parallel composition operation and an interface connection operation, and formalize composition-order invariance, which postulates that the order of composing and connecting systems is irrelevant, a generalized form of associativity. Composition-order invariance explicitly captures a common property that is implicit in any context where one can draw a figure (hiding the drawing order) of several connected systems, which appears in many scientific contexts. This abstract algebra captures settings where one is interested in the behavior of a composed system in an environment and wants to abstract away anything internal not relevant for the behavior. This may include physical systems, electronic circuits, or interacting distributed systems. One specific such setting, of special interest in computer science, are functional system algebras, which capture, in the most general sense, any type of system that takes inputs and produces outputs depending on the inputs, and where the output of a system can be the input to another system. The behavior of such a system is uniquely determined by the function mapping inputs to outputs. We consider several instantiations of this very general concept. In particular, we show that Kahn networks form a functional system algebra and prove their composition-order invariance. Moreover, we define a functional system algebra of causal systems, characterized by the property that inputs can only influence future outputs, where an abstract partial order relation captures the notion of “later”. This system algebra is also shown to be composition-order invariant and appropriate instantiations thereof allow to model and analyze systems that depend on time.
Several works have defined system models. Lee and Sangiovanni-Vincentelli @cite_23 define delta causality, which intuitively requires that each output be provoked by an input that occurred at least a @math -difference earlier. They show that fixed points exist, based on Banach's theorem. The authors of @cite_1 generalize this to a notion of "superdense" time where multiple events may occur simultaneously. The authors of @cite_5 , in the quantum setting, describe a type of strict causality that can be seen as a generalization of delta causality. Naundorf @cite_10 considers causality without any minimal time distance and proves that fixed points still exist. Matsikoudis and Lee @cite_2 then show a fixed point theorem for the same notion, which they refer to as strictly contracting. They show that it is implied by a more natural notion of (strict) causality, where outputs can be influenced only by inputs that occur strictly earlier, under the assumption that the ordering of inputs is well-founded (a partial order on a set @math is well-founded if every nonempty subset of @math has one or more minimal elements). We show in app:equivalence-causal that the strict causality notion of @cite_2 is essentially equivalent to the definition we introduce in this work.
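As a minimal illustration of why delta causality yields unique fixed points (a toy sketch assuming discrete time and a delta of one step, not code from any of the cited works): because the output at time t depends only on the feedback signal strictly before t, iterating the system map from an arbitrary starting signal converges to the unique fixed point within a finite horizon.

```python
# Toy fixed point of a delta-causal system (delta = 1): the output at time t
# depends only on the feedback signal at time t - 1, so iterating the system
# map from any starting signal converges exactly after T steps -- a discrete
# analogue of the Banach-style contraction argument.

T = 8  # finite time horizon

def system(s):
    """Delta-causal map: out[t] = s[t-1] + 1, with out[0] fixed to 0."""
    return [0] + [s[t - 1] + 1 for t in range(1, T)]

def fixed_point(f, start, iters=T):
    s = start
    for _ in range(iters):
        s = f(s)
    return s

s_star = fixed_point(system, [0] * T)
assert s_star == system(s_star)  # s_star is indeed a fixed point
print(s_star)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The starting signal is irrelevant: each iteration pins down one more time step, so any start converges to the same fixed point.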
{ "cite_N": [ "@cite_1", "@cite_23", "@cite_2", "@cite_5", "@cite_10" ], "mid": [ "2101808924", "2119355229", "2146303385", "2190871789", "1978514016" ], "abstract": [ "Deterministic timed systems can be modeled as fixed point problems (B. Roscoe and G. Reed, 1988), (R. K. Yates, 1993), (E. A. Lee, 1999). In particular, any connected network of timed systems can be modeled as a single system with feedback, and the system behavior is the fixed point of the corresponding system equation, when it exists. For delta-causal systems, we can use the Cantor metric to measure the distance between signals and the Banach fixed-point theorem to prove the existence and uniqueness of a system behavior. Moreover, the Banach fixed-point theorem is constructive: it provides a method to construct the unique fixed point through iteration. In this paper, we extend this result to systems modeled with the superdense model of time (O. , 1992), (Z. Manna and A. Pnueli, 1993) used in hybrid systems. We call the systems we consider eventually delta-causal, a strict generalization of delta-causal in which multiple events may be generated on a signal in zero time. With this model of time, we can use a generalized ultrametric (Generalized ultrametric spaces, I, 1996) instead of a metric to model the distance between signals. The existence and uniqueness of behaviors for such systems comes from the fixed-point theorem of (S. Priess-Crampe and P. Ribenboim, 1993), but this theorem gives no constructive method to compute the fixed point This leads us to define petrics, a generalization of metrics, which we use to generalize the Banach fixed-point theorem to provide a constructive fixed-point theorem. This new fixed-point theorem allows us to construct the unique behavior of eventually delta-causal systems.", "We give a denotational framework (a \"meta model\") within which certain properties of models of computation can be compared. 
It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.", "We ask whether strictly causal components form well defined systems when arranged in feedback configurations. The standard interpretation for such configurations induces a fixed-point constraint on the function modeling the component involved. We define strictly causal functions formally, and show that the corresponding fixed-point problem does not always have a well defined solution. We examine the relationship between these functions and the functions that are strictly contracting with respect to a generalized distance function on signals, and argue that these strictly contracting functions are actually the functions that one ought to be interested in. 
We prove a constructive fixed-point theorem for these functions, introduce a corresponding induction principle, and study the related convergence process.", "Complex information-processing systems, for example, quantum circuits, cryptographic protocols, or multi-player games, are naturally described as networks composed of more basic information-processing systems. A modular analysis of such systems requires a mathematical model of systems that is closed under composition, i.e., a network of these objects is again an object of the same type. We propose such a model and call the corresponding systems causal boxes . Causal boxes capture superpositions of causal structures, e.g., messages sent by a causal box @math can be in a superposition of different orders or in a superposition of being sent to box @math and box @math . Furthermore, causal boxes can model systems whose behavior depends on time. By instantiating the abstract cryptography framework with causal boxes, we obtain the first composable security framework that can handle arbitrary quantum protocols and relativistic protocols.", "Abstract The denotational semantics of a deterministic timed system can be described by a function F : (T → V) → (T → V) with T partially ordered. The semantics of a feedback loop then is usually defined by a special (unique) fixed point of F but it is not always obvious that such a fixed point exists. This paper proves that every function F in the very general class of strictly causal functions has a unique fixed point." ] }
1609.04079
2952655681
We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup---with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.
Formalized initially by Horn @cite_7 , the SFS problem has been the focus of considerable research over the last few decades @cite_10 @cite_1 . A remarkably successful solution was recently proposed in @cite_13 , which introduced a versatile method to recover object shape from a single image of a diffuse object with spatially-varying albedo. However, since it was designed for general un-calibrated natural lighting, its inference algorithm is computationally expensive and relies heavily on strong geometric smoothness priors. In contrast, our method is designed for a known, optimized lighting setup, and is able to efficiently recover shape with a higher degree of surface detail.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_13", "@cite_7" ], "mid": [ "2033664843", "2118304946", "", "1576579612" ], "abstract": [ "Many algorithms have been suggested for the shape-from-shading problem, and some years have passed since the publication of the survey paper by [R. Zhang, P.-S. Tsai, J.E. Cryer, M. Shah, Shape from shading: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 21 (8) (1999) 690-706]. In this new survey paper, we try to update their presentation including some recent methods which seem to be particularly representative of three classes of methods: methods based on partial differential equations, methods using optimization and methods approximating the image irradiance equation. One of the goals of this paper is to set the comparison of these methods on a firm basis. To this end, we provide a brief description of each method, highlighting its basic assumptions and mathematical properties. Moreover, we propose some numerical benchmarks in order to compare the methods in terms of their efficiency and accuracy in the reconstruction of surfaces corresponding to synthetic, as well as to real images.", "Since the first shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many different approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error, and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster.", "", "A method will be described for finding the shape of a smooth opaque object from a monocular image, given a knowledge of the surface photometry, the position of the light-source and certain auxiliary information to resolve ambiguities. 
This method is complementary to the use of stereoscopy which relies on matching up sharp detail and will fail on smooth objects. Until now the image processing of single views has been restricted to objects which can meaningfully be considered two-dimensional or bounded by plane surfaces. It is possible to derive a first-order non-linear partial differential equation in two unknowns relating the intensity at the image points to the shape of the object. This equation can be solved by means of an equivalent set of five ordinary differential equations. A curve traced out by solving this set of equations for one set of starting values is called a characteristic strip. Starting one of these strips from each point on some initial curve will produce the whole solution surface. The initial curves can usually be constructed around so-called singular points. A number of applications of this method will be discussed including one to lunar topography and one to the scanning electron microscope. In both of these cases great simplifications occur in the equations. A note on polyhedra follows and a quantitative theory of facial make-up is touched upon. An implementation of some of these ideas on the PDP-6 computer with its attached image-dissector camera at the Artificial Intelligence Laboratory will be described, and also a nose-recognition program." ] }
1609.04079
2952655681
We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup---with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.
RGB-PS was introduced as a means to overcome the requirement in classical PS of capturing multiple images, which makes the latter unusable on moving or deforming objects (although some methods attempt to handle such cases using multi-view setups @cite_0 ). However, the degree of ambiguity (5 unknowns for 3 observations) in RGB-PS reconstruction @cite_14 @cite_5 is the same as that in single image SFS (3 unknowns for 1 observation). Previous work addressed this by disallowing albedo variations @cite_12 @cite_9 , or by exploiting the temporal constancy of surface reflectance @cite_2 . Anderson et al. @cite_11 use a stereo rig with multiplexed color lights. They reconstruct coarse shape and align shading intensities using stereo. This is used to segment the scene into constant albedo regions, followed by albedo estimation and refinement of surface depth and orientation estimates.
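The constant-albedo special case mentioned above reduces to a small linear solve per pixel. The following sketch (with hypothetical calibration values; not code from any of the cited systems) treats the three color channels of one RGB pixel as three Lambertian photometric-stereo observations I_c = albedo * dot(l_c, n) and inverts the known light matrix:

```python
import numpy as np

# Three known, linearly independent light directions, one per color channel
# (hypothetical calibration values).
L = np.array([
    [0.0, 0.0, 1.0],   # red light
    [0.8, 0.0, 0.6],   # green light
    [0.0, 0.8, 0.6],   # blue light
])

def normal_from_rgb(rgb, L):
    """Recover the surface normal at one pixel under the constant-albedo
    Lambertian model I_c = albedo * dot(l_c, n): solve L @ (albedo * n) = I,
    then split the magnitude (albedo) from the direction (n)."""
    b = np.linalg.solve(L, rgb)   # b = albedo * n
    albedo = np.linalg.norm(b)
    return b / albedo, albedo

# Synthesize an observation from a ground-truth normal, then invert it.
n_true = np.array([0.0, 0.6, 0.8])
rgb = 0.5 * (L @ n_true)          # albedo = 0.5, no shadows or noise
n_est, albedo_est = normal_from_rgb(rgb, L)
```

With spatially-varying albedo the per-channel albedos become three extra unknowns at each pixel, which is exactly the 5-unknowns-for-3-observations ambiguity noted above.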
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_0", "@cite_2", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1973635906", "2169364846", "2040436296", "1579055455", "2045286529", "2144534260", "2147191585" ], "abstract": [ "We propose a method for shape reconstruction from color shades produced by multiple chromatic light sources. The linear relation between surface-normal vectors and three-dimensional response vectors for a uniformly colored and illuminated region of a surface can be reconstructed in two steps. In the first step a quadratic form of metric in response space induced from a natural metric in normal space is reconstructed. At this stage proper image segmentation can be obtained. In the second step an exact mapping from response space into the space of surface normals is reconstructed. The matrix for this mapping is one of the square roots of the quadratic-form matrix that satisfies the integrability constraint. The method is in all respects much simpler than existing methods for solving the depth-from-shading task for monochromatic images.", "We describe a novel device that can be used as a 2.5D “scanner” for acquiring surface texture and shape. The device consists of a slab of clear elastomer covered with a reflective skin. When an object presses on the skin, the skin distorts to take on the shape of the object's surface. When viewed from behind (through the elastomer slab), the skin appears as a relief replica of the surface. A camera records an image of this relief, using illumination from red, green, and blue light sources at three different positions. A photometric stereo algorithm that is tailored to the device is then used to reconstruct the surface. There is no problem dealing with transparent or specular materials because the skin supplies its own BRDF. Complete information is recorded in a single frame; therefore we can record video of the changing deformation of the skin, and then generate an animation of the changing surface. 
Our sensor has no moving parts (other than the elastomer slab), uses inexpensive materials, and can be made into a portable device that can be used “in the field” to record surface shape and texture.", "We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading (photometric stereo), with a large lighting dome providing a series of novel spherical lighting configurations. To compensate for low-frequency deformation, we perform multi-view matching and thin-plate spline deformation on the initial surfaces obtained by integrating the normal maps. Next, the corrected meshes are merged into a single mesh using a volumetric method. The final output is a set of meshes, which were impossible to produce with previous methods. The meshes exhibit details on the order of a few millimeters, and represent the performance over human-size working volumes at a temporal resolution of 60Hz.", "In this paper we present a novel method to apply photometric stereo on textured dynamic surfaces. We aim at exploiting the high accuracy of photometric stereo and reconstruct local surface orientation from illumination changes. The main difficulty derives from the fact that photometric stereo requires varying illumination while the object remains still, which makes it quite impractical to use for dynamic surfaces. Using coloured lights gives a clear solution to this problem; however, the system of equations is still ill-posed and it is ambiguous whether the change of an observed surface colour is due to the change of the surface gradient or of the surface reflectance. In order to separate surface orientation from reflectance, our method tracks texture changes over time and exploits surface reflectance's temporal constancy. This additional constraint allows us to reformulate the problem as an energy functional minimisation, solved by a standard quasi-Newton method. 
Our method is tested both on real and synthetic data, quantitatively evaluated and compared to a state-of-the-art method.", "The photometric-stereo method is one technique for three-dimensional shape determination that has been implemented in a variety of experimental settings and that has produced consistently good results. The idea is to use intensity values recorded from multiple images obtained from the same viewpoint but under different conditions of illumination. The resulting radiometric constraint makes it possible to obtain local estimates of both surface orientation and surface curvature without requiring either global smoothness assumptions or prior image segmentation. Photometric stereo is moved one step closer to practical possibility by a description of an experimental setting in which surface gradient estimation is achieved on full-frame video data at near-video-frame rates (i.e., 15 Hz). The implementation uses commercially available hardware. Reflectance is modeled empirically with measurements obtained from a calibration sphere. Estimation of the gradient (p, q) requires only simple table lookup. Curvature estimation additionally uses the reflectance map R(p, q). The required lookup table and reflectance maps are derived during calibration. Because reflectance is modeled empirically, no prior physical model of the reflectance characteristics of the objects to be analyzed is assumed. At the same time, if a good physical model is available, it can be retrofitted to the method for implementation purposes. Photometric stereo is subject to error in the presence of cast shadows and interreflection. No purely local technique can succeed because these phenomena are inherently nonlocal. Nevertheless, it is demonstrated that one can exploit the redundancy in three-light-source photometric stereo to detect locally, in most cases, the presence of cast shadows and interreflection. 
Detection is facilitated by the explicit inclusion of a local confidence estimate in the lookup table used for gradient estimation.", "We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain by methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup is able to produce smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can register both the surfaces and relax the homogenous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and limitations of this simple multispectral capture system.", "We present a multispectral photometric stereo method for capturing geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows estimating the number of chromaticities present in a scene to be framed as a model estimation problem. 
The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data." ] }
1609.04079
2952655681
We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup---with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.
An exception is the work of Fyffe et al. @cite_8 , who, like us, rely on the statistics of natural albedos. They assume that surface albedo, as a function of spectral wavelength, is low-dimensional. Since this assumption doesn't provide an informative constraint for albedos in just three color channels, their setup involves multi-spectral capture under six spectrally distinct color sources. However, this requires a more complex imaging system and also suffers from lower light efficiency---since the visible spectrum is now split into six, instead of three, non-overlapping bands for both illumination and sensing. In contrast, we rely on the spatial, instead of spectral, statistics of albedos, and are able to employ regular three-channel RGB cameras.
{ "cite_N": [ "@cite_8" ], "mid": [ "2175612008" ], "abstract": [ "Spectral multiplexing allows multiple channels of information to be captured simultaneously, using readily available color cameras. Information may be multiplexed across the color channels of a camera by use of colored lights (e.g. [Woodham 1980; Hernandez and Vogiatzis 2010]) or colored filters (e.g. [ 2008]). We propose a novel method for single-shot photometric stereo by spectral multiplexing. The output of our method is a simultaneous per-pixel estimate of the surface normal and full-color reflectance. Our method is well suited to materials with varying color and texture, requires no time-varying illumination, and no high-speed cameras. Being a single-shot method, it may be applied to dynamic scenes without any need for optical flow. Our key contributions are a generalization of three-color photometric stereo to multiple (more than three) color channels, and the design of a practical six-color-channel system using off-the-shelf parts only." ] }
1609.04079
2952655681
We present a single-shot system to recover surface geometry of objects with spatially-varying albedos, from images captured under a calibrated RGB photometric stereo setup---with three light directions multiplexed across different color channels in the observed RGB image. Since the problem is ill-posed point-wise, we assume that the albedo map can be modeled as piece-wise constant with a restricted number of distinct albedo values. We show that under ideal conditions, the shape of a non-degenerate local constant albedo surface patch can theoretically be recovered exactly. Moreover, we present a practical and efficient algorithm that uses this model to robustly recover shape from real images. Our method first reasons about shape locally in a dense set of patches in the observed image, producing shape distributions for every patch. These local distributions are then combined to produce a single consistent surface normal map. We demonstrate the efficacy of the approach through experiments on both synthetic renderings as well as real captured images.
Our estimation algorithm employs a computational framework similar to that of Xiong et al. @cite_3 , who used a combination of dense local estimation and globalization for traditional SFS, assuming known albedo and a single known directional light. Our goal is different---we seek to recover high resolution geometric detail in the presence of spatially-varying albedo, from images captured under the RGB-PS setup. To this end, we employ a piece-wise constant assumption on albedo which we show to be informative in our setup, while @cite_3 assumed piece-wise smooth shape.
{ "cite_N": [ "@cite_3" ], "mid": [ "2038294257" ], "abstract": [ "We develop a framework for extracting a concise representation of the shape information available from diffuse shading in a small image patch. This produces a mid-level scene descriptor, comprised of local shape distributions that are inferred separately at every image patch across multiple scales. The framework is based on a quadratic representation of local shape that, in the absence of noise, has guarantees on recovering accurate local shape and lighting. And when noise is present, the inferred local shape distributions provide useful shape information without over-committing to any particular image explanation. These local shape distributions naturally encode the fact that some smooth diffuse regions are more informative than others, and they enable efficient and robust reconstruction of object-scale shape. Experimental results show that this approach to surface reconstruction compares well against the state-of-art on both synthetic images and captured photographs." ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
The idea that our brain encodes visual stimuli in two separate regions was first proposed by Schneider @cite_7 . Ungerleider and Mishkin further discovered the ventral and dorsal streams and proposed the hypothesis often known as the distinction of "what" and "where" between the two visual pathways @cite_6 . However, in 1992 Goodale and Milner proposed an alternative perspective on the functionality of these two visual pathways based on many observations made with patient DF @cite_20 . Patient DF developed a profound visual form agnosia due to damage to her ventral stream. Despite DF's inability to recognize the shape, size and orientation of visual objects, she is capable of grasping objects with accurate hand and finger movements. Based on a series of experiments @cite_31 , Goodale and Milner proposed the perception-action model, which suggests that the dorsal pathway provides action-relevant information about the structural characteristics of objects in addition to their position. Our work is inspired by these observations and associates grasp configurations with visual features instead of object identities.
{ "cite_N": [ "@cite_31", "@cite_20", "@cite_6", "@cite_7" ], "mid": [ "1526787185", "2082627290", "1996348120", "" ], "abstract": [ "Prologue 1. A tragic accident 2. Doing without seeing 3. When vision for action fails 4. The origins of vision: from modules to models 5. Streams within streams 6. Why do we need two systems? 7. Getting it all together 8. Postscript: Dee's life 15 years on Epilogue", "Abstract Accumulating neuropsychological, electrophysiological and behavioural evidence suggests that the neural substrates of visual perception may be quite distinct from those underlying the visual control of actions. In other words, the set of object descriptions that permit identification and recognition may be computed independently of the set of descriptions that allow an observer to shape the hand appropriately to pick up an object. We propose that the ventral stream of projections from the striate cortex to the inferotemporal cortex plays the major role in the perceptual identification of objects, while the dorsal stream projecting from the striate cortex to the posterior parietal region mediates the required sensorimotor transformations for visually guided actions directed at such objects.", "Abstract Evidence is reviewed indicating that striate cortex in the monkey is the source of two multisynaptic corticocortical pathways. One courses ventrally, interconnecting the striate, prestriate, and inferior temporal areas, and enables the visual identification of objects. The other runs dorsally, interconnecting the striate, prestriate, and inferior parietal areas, and allows instead the visual location of objects. How the information carried in these two separate pathways is reintegrated has become an important question for future research.", "" ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
Our approach associates CNN features trained on the ImageNet dataset with a demonstrated grasp. Hierarchical features consider both the local pattern and higher level structures that it belongs to. This visually-guided grasp can be successful even when there is insufficient 3D point cloud data. For example, a side grasp on a cylinder can be inferred even when only the top face is observed. In @cite_32 , a deep network trained on 1035 examples is used to determine a successful grasp based on RGB-D data. Grasp positions are exhaustively generated and evaluated. Our approach localizes features in a pre-trained CNN and generates grasp points based on a small set of grasping examples.
{ "cite_N": [ "@cite_32" ], "mid": [ "1999156278" ], "abstract": [ "We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms." ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
Several authors have applied CNNs to robotics. In @cite_33 , visuomotor policies are learned using an end-to-end neural network that takes images and outputs joint torques. A three layer CNN is used without any max pooling layer to maintain spatial information. In our work, we also use filters in the third convolution layer; but unlike the previous work, we consider their relationship with higher layer filters. In @cite_23 , an autoencoder is used to learn spatial information of features in a neural network. Our approach finds the multi-level receptive field of a hierarchical feature in a particular image by repeatedly back-tracing along a single path. In @cite_15 , a CNN is used to learn which features are graspable through 50 thousand trials collected using a Baxter robot. The final layer is used to select 1 out of 18 grasp orientations. In contrast, our approach considers multi-objective configurations capable of controlling more sophisticated and higher degree-of-freedom hand arm systems like Robonaut-2.
{ "cite_N": [ "@cite_15", "@cite_33", "@cite_23" ], "mid": [ "2201912979", "2155007355", "2210483910" ], "abstract": [ "Current model free learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? 
To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a partially observed guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm." ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
A great deal of research has been done on understanding the relationship between CNN filter activations and the input image. In @cite_14 , deconvolution is used to find which pixels activate each filter. In other work, the gradient of the network response with respect to the image is calculated to obtain a saliency map used for object localization @cite_18 . In @cite_8 , an approach that adds a guidance signal to backpropagation for better visualization of higher level filters is also introduced. In our work, backpropagation is performed on a single filter per layer to consider the hierarchical relationship between filters. Recent work by Zhang et al. introduces excitation backprop, which uses a probabilistic winner-take-all process to generate attention maps for different categories @cite_2 . Our work localizes features based on similar concepts.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_2", "@cite_8" ], "mid": [ "2962851944", "1849277567", "2503388974", "2123045220" ], "abstract": [ "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. 
Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.", "Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. 
Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the \"deconvolution approach\" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches." ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
Some authors have explored using intermediate filter activations in addition to the response of the output layer of a CNN. Hypercolumns, which are defined as the activations of all CNN units above a pixel, are used on tasks such as simultaneous detection and segmentation, keypoint localization, and part labelling @cite_1 . Our approach groups filters in different layers based on their hierarchical activation instead of just the spatial relationship. In @cite_21 , the last two layers of two CNNs, one that takes an image as input and one that takes depth as input, are used to identify object category, instance, and pose. In @cite_3 , the last layer is used to identify the object instance while the fifth convolution layer is used to determine the aspect of an object. In our work we consider a feature as the activation of a lower layer filter that causes a specific higher layer filter to activate, and plan grasp poses based on these features.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_3" ], "mid": [ "1593727536", "1948751323", "2200395211" ], "abstract": [ "Object recognition and pose estimation from RGB-D images are important tasks for manipulation robots which can be learned from examples. Creating and annotating datasets for learning is expensive, however. We address this problem with transfer learning from deep convolutional neural networks (CNN) that are pre-trained for image categorization and provide a rich, semantically meaningful feature set. We incorporate depth information, which the CNN was not trained with, by rendering objects from a canonical perspective and colorizing the depth channel according to distance from the object center. We evaluate our approach on the Washington RGB-D Objects dataset, where we find that the generated feature set naturally separates classes and instances well and retains pose manifolds. We outperform state-of-the-art on a number of subtasks and show that our approach can yield superior results when only little training data is available.", "Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. 
Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.", "We study the problem of object recognition on robotic platforms where large image collections of target objects are unavailable and where new models of previously unseen objects must be added dynamically. This situation is common in robotics, where task related objects can require recognition over multiple viewpoints and training examples are sparse. The proposed framework uses pre-trained convolutional neural network layers to support aspect object models while emphasizing a minimal computational footprint. In this paper, we maintain an object model database consisting of aspect and class descriptors computed from images of target objects at varying view points. By querying the model database we show how to recognize objects with respect to previously seen exemplars. We investigate the effectiveness of different dimensionality reduction techniques for key generation on query efficiency and accuracy. We also demonstrate a working system with a small collection of objects including classes that do not appear in the network's pre-training data set." ] }
1609.03947
2950547378
In this work, we provide a solution for posturing the anthropomorphic Robonaut-2 hand and arm for grasping based on visual information. A mapping between visual features extracted from a convolutional neural network (CNN) to grasp points is learned. We demonstrate that a pre-trained CNN for image classification can be applied to a grasping task based on a small set of grasping examples. Our approach takes advantage of the hierarchical nature of the CNN and identifies the 3D positions of features that capture the hierarchical support relations between filters in different CNN layers by tracing the activation of higher level features in the CNN backward. When this backward trace terminates in the RGB-D image, important manipulable structures comprising the objects are, thus, localized. These features located in different layers of the CNN are then associated to controllers belonging to different hierarchies of the robot morphology for grasping. A Grasping Dataset is collected using demonstrated hand object relationships for Robonaut-2 to evaluate the proposed approach in terms of the precision of the resulting preshape postures. We demonstrate that this approach outperforms base-line approaches in cluttered scenarios on the Grasping Dataset and a point cloud based approach on a grasping task using Robonaut-2.
Our work is also inspired by @cite_11 , where a class model is composed of smaller models of parts; e.g., wheels are parts of a bicycle. We view CNNs similarly: if a higher-layer filter response represents a high-level structure, the lower-layer filter responses that contribute to this higher-layer response can be seen as representing local parts of this structure and may provide useful information for manipulating an object.
{ "cite_N": [ "@cite_11" ], "mid": [ "2168356304" ], "abstract": [ "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
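The part-based reading of CNN layers in the paragraph above can be made concrete with a toy backward trace: given the strongest higher-layer response, rank the lower-layer activations in its receptive field by their contribution. This is an invented illustration with random numbers (the maps `low`, the weights `w`, and the 3x3 receptive field are all assumptions), not the paper's actual implementation.

```python
import numpy as np

# Toy backward trace: find which lower-layer positions contribute most
# to the strongest higher-layer filter response.
rng = np.random.default_rng(0)
low = rng.random((8, 8))          # lower-layer activation map (one filter)
w = rng.random((3, 3))            # 3x3 weights connecting low -> high

# Valid cross-correlation to get the higher-layer response map.
high = np.array([[np.sum(low[i:i+3, j:j+3] * w)
                  for j in range(6)] for i in range(6)])

# Pick the strongest high-level response and rank the lower-layer
# positions in its receptive field by their contribution (activation * weight).
i, j = np.unravel_index(np.argmax(high), high.shape)
contrib = low[i:i+3, j:j+3] * w
di, dj = np.unravel_index(np.argmax(contrib), contrib.shape)
part_location = (i + di, j + dj)  # lower-layer position of the top "part"
```

The contributions in `contrib` sum exactly to the higher-layer response, so the trace partitions a high-level activation into localized lower-level parts.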
1609.03659
2518902831
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. In addition, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: foreground object segmentation and object proposal detection.
Object skeleton extraction has been studied extensively in recent decades. However, most early works @cite_5 @cite_4 focus only on skeleton extraction from pre-segmented images. As these works make the strict assumption that object silhouettes are provided, i.e., that the object has already been segmented, they cannot be applied to our task.
{ "cite_N": [ "@cite_5", "@cite_4" ], "mid": [ "2106990937", "2104093257" ], "abstract": [ "A comprehensive survey of skeletonization algorithms and their applications.Different Skeletonization approaches are summarized.Topology preservation and parallel skeletonization are discussed.A review of multi-scale skeletonization is presented.Applications and performance evaluation of skeletonization are discussed. Skeletonization provides an effective and compact representation of objects, which is useful for object description, retrieval, manipulation, matching, registration, tracking, recognition, and compression. It also facilitates efficient assessment of local object properties, e.g., scale, orientation, topology, etc. Several computational approaches are available in literature toward extracting the skeleton of an object, some of which are widely different in terms of their principles. In this paper, we present a comprehensive and concise survey of different skeletonization algorithms and discuss their principles, challenges, and benefits. Topology preservation, parallelization, and multi-scale skeletonization approaches are discussed. Finally, various applications of skeletonization are reviewed and the fundamental challenges of assessing the performance of different skeletonization algorithms are discussed.", "In this paper, we introduce a new skeleton pruning method based on contour partitioning. Any contour partition can be used, but the partitions obtained by discrete curve evolution (DCE) yield excellent results. The theoretical properties and the experiments presented demonstrate that obtained skeletons are in accord with human visual perception and stable, even in the presence of significant noise and shape variations, and have the same topology as the original skeletons. In particular, we have proven that the proposed approach never produces spurious branches, which are common when using the known skeleton pruning methods. 
Moreover, the proposed pruning method does not displace the skeleton points. Consequently, all skeleton points are centers of maximal disks. Again, many existing methods displace skeleton points in order to produce pruned skeletons" ] }
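The pre-segmented setting discussed above can be sketched directly: given a binary silhouette, skeleton points are approximately the centers of maximal inscribed disks, i.e. ridge points of the distance transform. Below is a minimal brute-force illustration (the mask, its size, and the crude ridge test are invented for the example, not taken from the cited methods).

```python
import numpy as np

# A binary silhouette (pre-segmented object), as assumed by early methods.
mask = np.zeros((7, 11), dtype=bool)
mask[1:6, 1:10] = True            # a 5x9 rectangular "object"

# Brute-force distance transform: distance to the nearest background pixel.
ys, xs = np.nonzero(~mask)
bg = np.stack([ys, xs], axis=1)
dist = np.zeros(mask.shape)
for y, x in zip(*np.nonzero(mask)):
    dist[y, x] = np.min(np.hypot(bg[:, 0] - y, bg[:, 1] - x))

# A pixel is a (crude) skeleton point if no 8-neighbour is strictly deeper,
# i.e. it is the center of a locally maximal inscribed disk.
pad = np.pad(dist, 1)
neigh = np.max([pad[1 + dy:8 + dy, 1 + dx:12 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)], axis=0)
skeleton = mask & (dist >= neigh) & (dist > 1)
```

For the rectangle above this recovers the central medial-axis segment, and the value of `dist` at each skeleton pixel gives the local object scale (disk radius) that the related_work paragraph refers to.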
1609.03659
2518902831
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. In addition, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: foreground object segmentation and object proposal detection.
Recent learning-based skeleton extraction methods are better at dealing with complex scenes. One type of method formulates skeleton extraction as a per-pixel classification problem. Tsogkas and Kokkinos @cite_43 computed hand-designed multi-scale and multi-orientation features at each pixel, and employed multiple instance learning to determine whether each pixel is symmetric or not. (Although symmetry detection is not the same problem as skeleton extraction, we also compare methods for it with ours, as skeletons can be considered a subset of symmetry.) Shen @cite_9 then improved this method by training MIL models on automatically learned scale- and orientation-related subspaces. Sironi @cite_22 transformed the per-pixel classification problem into a regression one to achieve skeleton localization, learning the distance to the closest skeleton segment in scale-space. Another type of learning-based method aims to learn the similarity between local skeleton segments (represented by superpixels @cite_38 @cite_24 or a spine model @cite_10 ), and links them by hierarchical clustering @cite_38 , dynamic programming @cite_24 or particle filtering @cite_10 . Due to the limited power of hand-designed features, these methods are not effective at detecting skeleton pixels with large scales, for which large context information is needed.
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_9", "@cite_24", "@cite_43", "@cite_10" ], "mid": [ "2537310184", "2052516389", "2160306297", "2114379931", "174734558", "1989066343" ], "abstract": [ "Skeletonization algorithms typically decompose an object's silhouette into a set of symmetric parts, offering a powerful representation for shape categorization. However, having access to an object's silhouette assumes correct figure-ground segmentation, leading to a disconnect with the mainstream categorization community, which attempts to recognize objects from cluttered images. In this paper, we present a novel approach to recovering and grouping the symmetric parts of an object from a cluttered scene. We begin by using a multiresolution superpixel segmentation to generate medial point hypotheses, and use a learned affinity function to perceptually group nearby medial points likely to belong to the same medial branch. In the next stage, we learn higher granularity affinity functions to group the resulting medial branches likely to belong to the same object. The resulting framework yields a skelet al approximation that's free of many of the instabilities plaguing traditional skeletons. More importantly, it doesn't require a closed contour, enabling the application of skeleton-based categorization systems to more realistic imagery", "We propose a robust and accurate method to extract the centerlines and scale of tubular structures in 2D images and 3D volumes. Existing techniques rely either on filters designed to respond to ideal cylindrical structures, which lose accuracy when the linear structures become very irregular, or on classification, which is inaccurate because locations on centerlines and locations immediately next to them are extremely difficult to distinguish. We solve this problem by reformulating centerline detection in terms of a regression problem. 
We first train regressors to return the distances to the closest centerline in scale-space, and we apply them to the input images or volumes. The centerlines and the corresponding scale then correspond to the regressors local maxima, which can be easily identified. We show that our method outperforms state-of-the-art techniques for various 2D and 3D datasets.", "Local reflection symmetry detection in nature images is a quite important but challenging task in computer vision. The main obstacle is both the scales and the orientations of symmetric structure are unknown. The multiple instance learning (MIL) framework sheds lights onto this task owing to its capability to well accommodate the unknown scales and orientations of the symmetric structures. However, to differentiate symmetry vs non-symmetry remains to face extreme confusions caused by clutters scenes and ambiguous object structures. In this paper, we propose a novel multiple instance learning framework for local reflection symmetry detection, named multiple instance subspace learning (MISL), which instead learns a group of models respectively on well partitioned subspaces. To obtain such subspaces, we propose an efficient dividing strategy under MIL setting, named partial random projection tree (PRPT), by taking advantage of the fact that each sample (bag) is represented by the proposed symmetry features computed at specific scale and orientation combinations (instances). Encouraging experimental results on two datasets demonstrate that the proposed local reflection symmetry detection method outperforms current state-of-the-arts. 
Highlights: We perform clustering on samples represented by multiple instances. We learn a group of MIL classifiers on subspaces. We report state-of-the-art results on the symmetry detection benchmark.", "Symmetry is a powerful shape regularity that's been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale super pixel segmentation, a framework proposed by LEV09. However, we learn affinities between adjacent super pixels in a space that's invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pair wise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline LEV09.", "In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures.
We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives.", "" ] }
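The regression formulation mentioned in the related_work paragraph above (learning a score tied to the distance to the closest skeleton point, rather than a hard binary label) can be illustrated by constructing the regression target itself. The skeleton, the image size, and the linear decay are all invented for this sketch; they stand in for the scale-space targets of the cited approach.

```python
import numpy as np

# Ground-truth skeleton pixels (a horizontal segment, chosen arbitrarily).
skel = [(3, x) for x in range(2, 9)]

# Regression target: a score that decays with distance to the skeleton.
h, w = 7, 11
target = np.zeros((h, w))
for y in range(h):
    for x in range(w):
        d = min(np.hypot(y - sy, x - sx) for sy, sx in skel)
        target[y, x] = max(0.0, 1.0 - d / 3.0)   # 1 on the skeleton, 0 far away

# With a perfect regressor, thresholding the score map at its peak value
# recovers exactly the skeleton pixels.
detected = {(y, x) for y in range(h) for x in range(w)
            if target[y, x] >= 1.0}
```

Localizing skeletons as maxima of a smooth regressed score avoids the knife-edge decision boundary of per-pixel classification, which is the motivation the paragraph attributes to the regression approach.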
1609.03619
2949658964
We consider the problem of object recognition in 3D using an ensemble of attribute-based classifiers. We propose two new concepts to improve classification in practical situations, and show their implementation in an approach implemented for recognition from point-cloud data. First, the viewing conditions can have a strong influence on classification performance. We study the impact of the distance between the camera and the object and propose an approach to fuse multiple attribute classifiers, which incorporates distance into the decision making. Second, lack of representative training samples often makes it difficult to learn the optimal threshold value for best positive and negative detection rate. We address this issue, by setting in our attribute classifiers instead of just one threshold value, two threshold values to distinguish a positive, a negative and an uncertainty class, and we prove the theoretical correctness of this approach. Empirical studies demonstrate the effectiveness and feasibility of the proposed concepts.
Creating practical object recognition systems that work reliably under different viewing conditions, including varying distance, viewing angle, illumination and occlusions, is still a challenging problem in Computer Vision. Current single-source recognition methods are robust to some extent: features like SIFT @cite_9 or the multifractal spectrum vector (MFS) @cite_1 are in practice invariant, to a certain degree, to deformations of the scene and viewpoint changes; geometry-based matching algorithms like BOR3D @cite_2 and LINEMOD @cite_14 can recognize objects under large changes in illumination, where color-based algorithms tend to fail. But in complicated working environments, these systems have difficulty achieving robust performance.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_1", "@cite_2" ], "mid": [ "2151103935", "1526868886", "2118773079", "2167907250" ], "abstract": [ "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13 with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. 
Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods.", "Image texture analysis has received a lot of attention in the past years. Researchers have developed many texture signatures based on texture measurements, for the purpose of uniquely characterizing the texture. Existing texture signatures, in general, are not invariant to 3D transforms such as view-point changes and non-rigid deformations of the texture surface, which is a serious limitation for many applications. In this paper, we introduce a new texture signature, called the multifractal spectrum (MFS). It provides an efficient framework combining global spatial invariance and local robust measurements. The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. Experiments demonstrate that the MFS captures the essential structure of textures with quite low dimension.", "3-D object recognition has become a major research topic. New low-cost sensors hit the market making 3-D vision affordable for anyone. On the software side, promising open-source tools and libraries have prospered recently. The most important one, regarding 3-D data processing, undoubtedly is the Point Cloud Library (PCL). From an integrator's point of view, the main benchmarks applied to such a library are the amount of use cases that can be implemented and the effort that is entailed. We have noticed that the scientific community pays little attention to these needs. We hope to change this situation by proposing a framework that is primarily inspired by the PCL. Clearly separating the roles of its users, it leaves integration to the integrators and algorithms to the specialists. 
This way we hope to provide a means for development teams to participate in the recent advances even if they do not have a special focus on machine vision." ] }
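The invariance claims above rest on reliable descriptor matching; the standard mechanism used with SIFT-style features is nearest-neighbor search with Lowe's ratio test. A toy numpy sketch follows, where random 128-d vectors stand in for real SIFT descriptors and the 0.75 threshold is a commonly used value, not one taken from the cited work.

```python
import numpy as np

# Toy descriptor database and a query that is a noisy copy of entry 7.
rng = np.random.default_rng(1)
db = rng.random((50, 128))                   # "database" descriptors
query = db[7] + 0.01 * rng.random(128)       # perturbed view of the same feature

# Nearest-neighbor matching with Lowe's ratio test: accept only if the
# best match is much closer than the second best.
d = np.linalg.norm(db - query, axis=1)
best, second = np.argsort(d)[:2]
is_match = d[best] < 0.75 * d[second]
```

The ratio test is what makes such features "highly distinctive" in the sense of the SIFT abstract above: ambiguous matches, where two database entries are nearly equidistant, are rejected rather than guessed.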
1609.03619
2949658964
We consider the problem of object recognition in 3D using an ensemble of attribute-based classifiers. We propose two new concepts to improve classification in practical situations, and show their implementation in an approach implemented for recognition from point-cloud data. First, the viewing conditions can have a strong influence on classification performance. We study the impact of the distance between the camera and the object and propose an approach to fuse multiple attribute classifiers, which incorporates distance into the decision making. Second, lack of representative training samples often makes it difficult to learn the optimal threshold value for best positive and negative detection rate. We address this issue, by setting in our attribute classifiers instead of just one threshold value, two threshold values to distinguish a positive, a negative and an uncertainty class, and we prove the theoretical correctness of this approach. Empirical studies demonstrate the effectiveness and feasibility of the proposed concepts.
One way to deal with variations in viewing conditions is to incorporate different sources of information (or cues) into the recognition process @cite_0 . However, how to fuse the information from multiple sources is still an open problem.
{ "cite_N": [ "@cite_0" ], "mid": [ "2291511137" ], "abstract": [ "The rise of social network and crowdsourcing platforms makes it convenient to take advantage of the collective intelligence to estimate true labels of questions of interest. However, input from workers is often noisy and even malicious. Trust is used to model workers in order to better estimate true labels of questions. We observe that questions are often not independent in real life applications. Instead, there are logical relations between them. Similarly, workers that provide answers are not independent of each other either. Answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge which allows users to express logical relations using first-order logic rules and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers. Finally, we demonstrate that our model is superior to state-of-the-art by testing it on multiple real-world datasets." ] }
1609.03619
2949658964
We consider the problem of object recognition in 3D using an ensemble of attribute-based classifiers. We propose two new concepts to improve classification in practical situations, and show their implementation in an approach implemented for recognition from point-cloud data. First, the viewing conditions can have a strong influence on classification performance. We study the impact of the distance between the camera and the object and propose an approach to fuse multiple attribute classifiers, which incorporates distance into the decision making. Second, lack of representative training samples often makes it difficult to learn the optimal threshold value for best positive and negative detection rate. We address this issue, by setting in our attribute classifiers instead of just one threshold value, two threshold values to distinguish a positive, a negative and an uncertainty class, and we prove the theoretical correctness of this approach. Empirical studies demonstrate the effectiveness and feasibility of the proposed concepts.
Besides early fusion, late fusion has also gained much attention and achieves good results. Lutz et al. @cite_17 propose a probabilistic fusion approach, called MOPED @cite_6 , to combine a 3D model matcher, color histograms and a feature-based detection algorithm, where a quality factor, representing each method's discriminative capability, is integrated into the final classification score. Meta information @cite_11 can also be added to create a new feature. @cite_15 blends classification scores from SIFT, shape, and color models with meta features providing information about each model's fitness for the input scene, which results in high precision and recall on the Challenge and Willow datasets. Considering influences due to viewing conditions, Ahmed @cite_13 applies an AND/OR graph representation of different features and updates a Bayes conditional probability table based on measurements of the environment, such as intensity, distance and occlusions. However, these methods may suffer from inaccurate estimation of the conditional probabilities involved because of insufficient training data.
{ "cite_N": [ "@cite_17", "@cite_6", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2030017813", "2058761328", "2072890037", "2031987504", "2057114134" ], "abstract": [ "Reliable object recognition is a mandatory prerequisite for Service Robots in everyday environments. Typical approaches for object recognition use single algorithms or features. However, none is yet able to classify across all types of objects and the field of object recognition is thus still an open challenge. We propose an approach for object recognition and pose estimation that combines existing algorithms. Probabilistic methods are used to fuse the classification and pose estimation results, considering the error introduced by the measurements, actuators (sensor on manipulator) and algorithms. Since integration is one of the real challenges from the laboratory towards the real world, we demonstrate the approach in two fully integrated scenarios. We run the experiments on two platforms and focus on the distinction of few but similar objects.", "We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We address two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We achieve robust performance with Iterative Clustering Estimation (ICE), a novel algorithm that iteratively combines feature clustering with robust pose estimation. Feature clustering quickly partitions the scene and produces object hypotheses. The hypotheses are used to further refine the feature clusters, and the two steps iterate until convergence. ICE is easy to parallelize, and easily integrates single- and multi-camera object recognition and pose estimation. 
We also introduce a novel object hypothesis scoring function based on M-estimator theory, and a novel pose clustering algorithm that robustly handles recognition outliers. We achieve scalability and low latency with an improved feature matching algorithm for large databases, a GPU CPU hybrid architecture that exploits parallelism at all levels, and an optimized resource scheduler. We provide extensive experimental results demonstrating state-of-the-art performance in terms of recognition, scalability, and latency in real-world robotic applications.", "Despite the rich information provided by sensors such as the Microsoft Kinect in the robotic perception setting, the problem of detecting object instances remains unsolved, even in the tabletop setting, where segmentation is greatly simplified. Existing object detection systems often focus on textured objects, for which local feature descriptors can be used to reliably obtain correspondences between different views of the same object. We examine the benefits of dense feature extraction and multimodal features for improving the accuracy and robustness of an instance recognition system. By combining multiple modalities and blending their scores through an ensemble-based method in order to generate our final object hypotheses, we obtain significant improvements over previously published results on two RGB-D datasets. On the Challenge dataset, our method results in only one missed detection (achieving 100 precision and 99.77 recall). On the Willow dataset, we also make significant gains on the prior state of the art (achieving 98.28 precision and 87.78 recall), resulting in an increase in F-score from 0.8092 to 0.9273.", "Ensuring robustness in object recognition pose estimation under a wide variation of environmental parameters, such as illumination, scale, perspective as well as occlusion, is still of a challenge in computer vision. 
One way to meet this challenge is by using multiple features evidences that offer their own strengths against particular environmental variations. To this end, methods of how to choose an optimal combination of features evidences and of how to design an optimal classifier decision-maker with the assignment of proper weights to the chosen individual features evidences, for a given environmental parameter reading, are to be addressed. This paper presents a framework of adaptive Bayesian recognition that puts its particular emphasis on addressing the two methods described above while integrating multiple evidences. The novelty of the proposed method lies in 1) an AND OR graph representation of evidence structure for individual object, representing explicitly a set of combined evidences sufficient for decision, 2) An automatic update of the Bayesian network tables of conditional probabilities based on the current environmental parameters measured, and 3) the incorporation of occlusions into the computation of Bayesian posterior probabilities for decision. The experimental results show that the proposed method is capable of dealing with adverse situations for which conventional methods fail to provide recognition.", "Robust object recognition is a crucial requirement for many robotic applications. We propose a method towards increasing reliability and flexibility of object recognition for robotics. This is achieved by the fusion of diverse recognition frameworks and algorithms on score level which use characteristics like shape, texture and color of the objects. Machine Learning allows for the automatic combination of the respective recognition methods' outputs instead of having to adapt their hypothesis metrics to a common basis. We show the applicability of our approach through several real-world experiments in a service robotics environment. Great importance is attached to robustness, especially in varying environments." ] }
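The score-level late fusion discussed above, where a per-method quality factor weights each recognizer's contribution, can be sketched in a few lines. The classes, per-method scores, and quality values here are invented, and this is not the cited systems' actual scoring function.

```python
import numpy as np

# Per-class scores from three hypothetical recognizers (invented numbers).
classes = ["mug", "bowl", "box"]
scores = {
    "shape":   np.array([0.6, 0.3, 0.1]),
    "color":   np.array([0.2, 0.5, 0.3]),
    "feature": np.array([0.7, 0.2, 0.1]),
}
# Assumed quality factors: each method's estimated discriminative capability.
quality = {"shape": 0.9, "color": 0.4, "feature": 0.8}

# Late fusion: quality-weighted sum of scores, normalized to a distribution.
fused = sum(quality[m] * s for m, s in scores.items())
fused /= fused.sum()
prediction = classes[int(np.argmax(fused))]
```

Down-weighting the color recognizer here mirrors the paragraph's point: under viewing conditions where a cue is unreliable, its quality factor should suppress its influence on the fused decision.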
1609.03544
2951198042
In an era of ubiquitous large-scale streaming data, the availability of data far exceeds the capacity of expert human analysts. In many settings, such data is either discarded or stored unprocessed in datacenters. This paper proposes a method of online data thinning, in which large-scale streaming datasets are winnowed to preserve unique, anomalous, or salient elements for timely expert analysis. At the heart of this proposed approach is an online anomaly detection method based on dynamic, low-rank Gaussian mixture models. Specifically, the high-dimensional covariances matrices associated with the Gaussian components are associated with low-rank models. According to this model, most observations lie near a union of subspaces. The low-rank modeling mitigates the curse of dimensionality associated with anomaly detection for high-dimensional data, and recent advances in subspace clustering and subspace tracking allow the proposed method to adapt to dynamic environments. Furthermore, the proposed method allows subsampling, is robust to missing data, and uses a mini-batch online optimization approach. The resulting algorithms are scalable, efficient, and are capable of operating in real time. Experiments on wide-area motion imagery and e-mail databases illustrate the efficacy of the proposed approach.
The proposed method is also closely related to subspace clustering and tracking algorithms. Subspace clustering is a relatively new but vibrant field of study. These methods cluster observations into low-dimensional subspaces to mitigate the curse of dimensionality, which often makes nearest-neighbors-based methods inaccurate @cite_42 . Early works in the field could only identify subspaces parallel to the axes, which is not useful when the data is not sparse but lies on an arbitrarily oriented hyperplane. Newer methods @cite_79 @cite_41 @cite_25 @cite_20 @cite_88 @cite_22 , also called correlation clustering methods, can identify multiple arbitrarily angled subspaces at the same time, but all share the problem of high computational cost. Even @cite_33 , which has been shown to beat other methods in speed, still has an overall complexity of @math , where @math is the dimension of the problem and @math is the total number of data points. More recent methods based on sparse modeling ( @cite_78 @cite_7 @cite_18 @cite_107 @cite_29 ) require solving convex optimization problems that can be inefficient in high-dimensional settings. Thus, the high complexity of these algorithms makes them less than ideal candidates for an efficient online algorithm.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_33", "@cite_78", "@cite_41", "@cite_7", "@cite_107", "@cite_42", "@cite_29", "@cite_79", "@cite_88", "@cite_25", "@cite_20" ], "mid": [ "2615383372", "", "2108706544", "", "", "1993962865", "", "1672197616", "2517389697", "2112210867", "", "", "" ], "abstract": [ "This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabeled input data points, which are assumed to be in a union of low-dimensional subspaces. We show that a modified version of SSC is provably effective in correctly identifying the underlying subspaces, even with noisy data. This extends theoretical guarantee of this algorithm to more practical settings and provides justification to the success of SSC in a class of real applications.", "", "In high dimensional data, clusters often only exist in arbitrarily oriented subspaces of the feature space. In addition, these so-called correlation clusters may have complex relationships between each other. For example, a correlation cluster in a 1-D subspace (forming a line) may be enclosed within one or even several correlation clusters in 2- D superspaces (forming planes). In general, such relationships can be seen as a complex hierarchy that allows multiple inclusions, i.e. clusters may be embedded in several super-clusters rather than only in one. Obviously, uncovering the hierarchical relationships between the detected correlation clusters is an important information gain. Since existing approaches cannot detect such complex hierarchical relationships among correlation clusters, we propose the algorithm ERiC to tackle this problem and to visualize the result by means of a graph-based representation. 
In our experimental evaluation, we show that ERiC finds more information than state-of-the-art correlation clustering methods and outperforms existing competitors in terms of efficiency.", "", "", "Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering.", "", "We explore the effect of dimensionality on the \"nearest neighbor\" problem. 
We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality!", "This paper explores algorithms for subspace clustering with missing data. In many high-dimensional data analysis settings, data points Lie in or near a union of subspaces. Subspace clustering is the process of estimating these subspaces and assigning each data point to one of them. However, in many modern applications the data are severely corrupted by missing values. This paper describes two novel methods for subspace clustering with missing data: (a) group-sparse sub-space clustering (GSSC), which is based on group-sparsity and alternating minimization, and (b) mixture subspace clustering (MSC), which models each data point as a convex combination of its projections onto all subspaces in the union. 
Both of these algorithms are shown to converge to a local minimum, and experimental results show that they outperform the previous state-of-the-art, with GSSC yielding the highest overall clustering accuracy.", "High dimensional data has always been a challenge for clustering algorithms because of the inherent sparsity of the points. Recent research results indicate that in high dimensional data, even the concept of proximity or clustering may not be meaningful. We discuss very general techniques for projected clustering which are able to construct clusters in arbitrarily aligned subspaces of lower dimensionality. The subspaces are specific to the clusters themselves. This definition is substantially more general and realistic than currently available techniques which limit the method to only projections from the original set of attributes. The generalized projected clustering technique may also be viewed as a way of trying to redefine clustering for high dimensional applications by searching for hidden subspaces with clusters which are created by inter-attribute correlations. We provide a new concept of using extended cluster feature vectors in order to make the algorithm scalable for very large databases. The running time and space requirements of the algorithm are adjustable, and are likely ta tradeoff with better accuracy.", "", "", "" ] }
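The low-rank anomaly score implied by the union-of-subspaces model in the record above can be sketched for a single subspace: fit a rank-r basis by truncated SVD, then score a point by its residual energy after projection. The dimensions, rank, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data lying near a 2-D subspace of R^10 (a single low-rank component).
d, r, n = 10, 2, 500
basis_true = np.linalg.qr(rng.standard_normal((d, r)))[0]
X = rng.standard_normal((n, r)) @ basis_true.T + 0.01 * rng.standard_normal((n, d))

# Estimate the subspace by truncated SVD of the centered data.
mu = X.mean(axis=0)
U = np.linalg.svd(X - mu, full_matrices=False)[2][:r].T   # d x r orthonormal basis

def anomaly_score(x):
    """Residual energy after projecting onto the learned subspace:
    near-subspace points score low, off-subspace points score high."""
    v = x - mu
    return float(np.linalg.norm(v - U @ (U.T @ v)))

inlier  = rng.standard_normal(r) @ basis_true.T    # lies in the true subspace
outlier = 3.0 * rng.standard_normal(d)             # generic point in R^10
print(anomaly_score(inlier), anomaly_score(outlier))
```

A mixture version would compute this residual against each component's basis and take the minimum, thresholding it to decide what to keep for expert analysis.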
1609.03544
2951198042
In an era of ubiquitous large-scale streaming data, the availability of data far exceeds the capacity of expert human analysts. In many settings, such data is either discarded or stored unprocessed in datacenters. This paper proposes a method of online data thinning, in which large-scale streaming datasets are winnowed to preserve unique, anomalous, or salient elements for timely expert analysis. At the heart of this proposed approach is an online anomaly detection method based on dynamic, low-rank Gaussian mixture models. Specifically, the high-dimensional covariance matrices associated with the Gaussian components are associated with low-rank models. According to this model, most observations lie near a union of subspaces. The low-rank modeling mitigates the curse of dimensionality associated with anomaly detection for high-dimensional data, and recent advances in subspace clustering and subspace tracking allow the proposed method to adapt to dynamic environments. Furthermore, the proposed method allows subsampling, is robust to missing data, and uses a mini-batch online optimization approach. The resulting algorithms are scalable, efficient, and are capable of operating in real time. Experiments on wide-area motion imagery and e-mail databases illustrate the efficacy of the proposed approach.
Subspace tracking is a classical problem that has received renewed attention with the development of algorithms that are robust to missing and outlier elements of the data points @math . For example, the Grassmannian Rank-One Update Subspace Estimation (GROUSE) @cite_86 , Parallel Estimation and Tracking by REcursive Least Squares (PETRELS) @cite_13 @cite_64 , and the Robust Online Subspace Estimation and Tracking Algorithm (ROSETA) @cite_27 effectively track a single subspace using incomplete data vectors. These algorithms are capable of tracking and adapting to changing environments. The single-subspace model used in these methods, however, is an inherently strong assumption, whereas a plethora of empirical studies have demonstrated that high-dimensional data often lie near manifolds with non-negligible curvature @cite_75 @cite_51 @cite_68 .
{ "cite_N": [ "@cite_64", "@cite_75", "@cite_27", "@cite_51", "@cite_86", "@cite_68", "@cite_13" ], "mid": [ "2132743051", "2964012263", "1605379937", "2053186076", "", "2097308346", "2075406189" ], "abstract": [ "We consider the problem of reconstructing a data stream from a small subset of its entries, where the data stream is assumed to lie in a low-dimensional linear subspace, possibly corrupted by noise. It is also important to track the change of underlying subspace for many applications. This problem can be viewed as a sequential low-rank matrix completion problem in which the subspace is learned in an online fashion. The proposed algorithm, called Parallel Estimation and Tracking by REcursive Least Squares (PETRELS), identifies the underlying low-dimensional subspace via a recursive procedure for each row of the subspace matrix in parallel, and then reconstructs the missing entries via least-squares estimation if required. PETRELS outperforms previous approaches by discounting observations in order to capture long-term behavior of the data stream and be able to adapt to it. Numerical examples are provided for direction-of-arrival estimation and matrix completion, comparing PETRELS with state of the art batch algorithms.", "Data sets are often modeled as samples from a probability distribution in RD, for D large. It is often assumed that the data has some interesting low-dimensional structure, for example that of a d-dimensional manifold M, with d much smaller than D. When M is simply a linear subspace, one may exploit this assumption for encoding efficiently the data by projecting onto a dictionary of d vectors in RD (for example found by SVD), at a cost (n+D)d for n data points. When M is nonlinear, there are no “explicit” and algorithmically efficient constructions of dictionaries that achieve a similar efficiency: typically one uses either random dictionaries, or dictionaries obtained by black-box global optimization. 
In this paper we construct data-dependent multi-scale dictionaries that aim at efficiently encoding and manipulating the data. Their construction is fast, and so are the algorithms that map data points to dictionary coefficients and vice versa, in contrast with L1-type sparsity-seeking algorithms, but like adaptive nonlinear approximation in classical multi-scale analysis. In addition, data points are guaranteed to have a compressible representation in terms of the dictionary, depending on the assumptions on the geometry of the underlying probability distribution.", "In this paper, we present a robust online subspace estimation and tracking algorithm (ROSETA) that is capable of identifying and tracking a time-varying low dimensional subspace from incomplete measurements and in the presence of sparse outliers. Our algorithm minimizes a robust l 1 norm cost function between the observed measurements and their projection onto the estimated subspace. The projection coefficients and sparse outliers are computed using ADMM solver and the subspace estimate is updated using a proximal point iteration with adaptive parameter selection. We demonstrate using simulated experiments and a video background subtraction example that ROSETA succeeds in identifying and tracking low dimensional subspaces using fewer iterations than other state of art algorithms.", "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. 
Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in", "", "One of the central problems in machine learning and pattern recognition is to develop appropriate representations for complex data. We consider the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space. Drawing on the correspondence between the graph Laplacian, the Laplace Beltrami operator on the manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for representing the high-dimensional data. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality-preserving properties and a natural connection to clustering. Some potential applications and illustrative examples are discussed.", "Many real world datasets exhibit an embedding of low-dimensional structure in a high-dimensional manifold. Examples include images, videos and internet traffic data. It is of great significance to estimate and track the low-dimensional structure with small storage requirements and computational complexity when the data dimension is high. Therefore we consider the problem of reconstructing a data stream from a small subset of its entries, where the data is assumed to lie in a low-dimensional linear subspace, possibly corrupted by noise. We further consider tracking the change of the underlying subspace, which can be applied to applications such as video denoising, network monitoring and anomaly detection. 
Our setting can be viewed as a sequential low-rank matrix completion problem in which the subspace is learned in an online fashion. The proposed algorithm, dubbed Parallel Estimation and Tracking by REcursive Least Squares (PETRELS), first identifies the underlying low-dimensional subspace, and then reconstructs the missing entries via least-squares estimation if required. Subspace identification is performed via a recursive procedure for each row of the subspace matrix in parallel with discounting for previous observations. Numerical examples are provided for direction-of-arrival estimation and matrix completion, comparing PETRELS with state of the art batch algorithms." ] }
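The incremental rank-one update behind trackers like GROUSE can be sketched in a simplified, fully observed form: project each streaming sample onto the current basis, then rotate the basis toward the residual. The dimensions, fixed step size, and QR re-orthonormalization below are illustrative assumptions, not the cited algorithm's exact Grassmannian geodesic step or its handling of missing entries.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 20, 3

# Ground-truth subspace and a random initial estimate.
Utrue = np.linalg.qr(rng.standard_normal((d, r)))[0]
U = np.linalg.qr(rng.standard_normal((d, r)))[0]

def subspace_step(U, x, step=0.5):
    """One incremental update: a simplified, fully observed analogue of
    the GROUSE-style rank-one step, followed by re-orthonormalization."""
    w = U.T @ x                                  # projection coefficients
    res = x - U @ w                              # residual orthogonal to span(U)
    U = U + step * np.outer(res, w) / (np.linalg.norm(x) ** 2 + 1e-12)
    return np.linalg.qr(U)[0]                    # restore orthonormal columns

def err(U):
    # Subspace distance: energy of Utrue outside span(U).
    return float(np.linalg.norm(Utrue - U @ (U.T @ Utrue)))

e0 = err(U)
for _ in range(2000):
    x = Utrue @ rng.standard_normal(r)           # noiseless streaming samples
    U = subspace_step(U, x)
print(e0, err(U))
```

With noiseless samples the estimate converges to the true subspace; with slowly rotating `Utrue` the same loop tracks the change, which is the adaptivity the paragraph above refers to.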
1609.03663
2531738943
Text simplification (TS) aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning. Current automatic TS techniques are limited to either lexical-level applications or manually defining a large number of rules. Since deep neural networks are powerful models that have achieved excellent performance over many difficult tasks, in this paper, we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder model for sentence level TS, which makes minimal assumptions about word sequence. We conduct preliminary experiments to find that the model is able to learn operation rules such as reversing, sorting and replacing from sequence pairs, which shows that the model may potentially discover and apply rules such as modifying sentence structure, substituting words, and removing words for TS.
Automatic TS is a complicated natural language processing (NLP) task that consists of lexical and syntactic simplification levels. Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia (EW) and Simple English Wikipedia (SEW) @cite_18 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task with the automatic summarization task @cite_7 @cite_12 . TS is different from text summarization, as the focus of text summarization is to reduce the length and redundant content.
{ "cite_N": [ "@cite_18", "@cite_12", "@cite_7" ], "mid": [ "", "1843891098", "2258706460" ], "abstract": [ "", "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.", "Evaluation of automatic text summarization is a challenging task due to the difficulty of calculating similarity of two texts. In this paper, we define a new dissimilarity measure --- compression dissimilarity to compute the dissimilarity between documents. Then we propose a new automatic evaluating method based on compression dissimilarity. The proposed method is a completely \"black box\" and does not need preprocessing steps. Experiments show that compression dissimilarity could clearly distinct automatic summaries from human summaries. Compression dissimilarity evaluating measure could evaluate an automatic summary by comparing with high-quality human summaries, or comparing with its original document. The evaluating results are highly correlated with human assessments, and the correlation between compression dissimilarity of summaries and compression dissimilarity of documents can serve as a meaningful measure to evaluate the consistency of an automatic text summarization system." ] }
1609.03663
2531738943
Text simplification (TS) aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning. Current automatic TS techniques are limited to either lexical-level applications or manually defining a large number of rules. Since deep neural networks are powerful models that have achieved excellent performance over many difficult tasks, in this paper, we propose to use the Long Short-Term Memory (LSTM) Encoder-Decoder model for sentence level TS, which makes minimal assumptions about word sequence. We conduct preliminary experiments to find that the model is able to learn operation rules such as reversing, sorting and replacing from sequence pairs, which shows that the model may potentially discover and apply rules such as modifying sentence structure, substituting words, and removing words for TS.
A limitation of the aforementioned methods is that they require syntax parsing or hand-crafted rules to simplify sentences. Compared with traditional machine learning @cite_10 @cite_4 and data mining techniques @cite_9 @cite_11 @cite_22 , deep learning has been shown to produce state-of-the-art results on various difficult tasks, aided by the development of big data platforms @cite_21 @cite_2 . The RNN Encoder-Decoder is a very popular deep neural network model that performs exceptionally well at the machine translation task @cite_0 @cite_8 @cite_3 . @cite_14 proposed preliminary work using the RNN Encoder-Decoder model for the text simplification task, which is similar to the model proposed in this paper.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_8", "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2551704653", "2251734881", "2405480216", "2130942839", "1496549598", "1992928432", "2950635152", "2133564696", "1965075927", "2282974639", "2407858842" ], "abstract": [ "Text simplification (TS) is the technique of reducing the lexical, syntactical complexity of text. Existing automatic TS systems can simplify text only by lexical simplification or by manually defined rules. Neural Machine Translation (NMT) is a recently proposed approach for Machine Translation (MT) that is receiving a lot of research interest. In this paper, we regard original English and simplified English as two languages, and apply a NMT model–Recurrent Neural Network (RNN) encoder-decoder on TS to make the neural network to learn text simplification rules by itself. Then we discuss challenges and strategies about how to apply a NMT model to the task of text simplification.", "Topic Model such as Latent Dirichlet Allocation(LDA) makes assumption that topic assignment of different words are conditionally independent. In this paper, we propose a new model Extended Global Topic Random Field (EGTRF) to model non-linear dependencies between words. Specifically, we parse sentences into dependency trees and represent them as a graph, and assume the topic assignment of a word is influenced by its adjacent words and distance-2 words. Word similarity information learned from large corpus is incorporated to enhance word topic assignment. Parameters are estimated efficiently by variational inference and experimental results on two datasets show EGTRF achieves lower perplexity and higher log predictive probability.", "A type of extreme disastrous floods are associated with a sequence of prior heavy precipitation events occurring frequently from over several days to several weeks. 
Transitional methods for precipitation clusters prediction usually rely on the measurement and analyses of meteorological variables. However while a short-term prediction of certain location depends only on variables in near spatial and temporal neighborhood, predictions with long lead time must consider variables in a long time window and large spatial neighborhoods, this means an enormous amount of potentially influencing variables and only a subset of them strongly relate to prediction. Processing a deluge of variables and discovering strongly relevant features pose a significant challenge for big data analytics. Finding influencing variables calls for automated methods of strongly relevant feature selection, which is what online streaming feature selection provides. In particular, online streaming feature selection, which deals with the stream of features sequentially added while the total data observations are fixed, aims to select a subset of strongly relevant features from the original feature set. In this paper, we apply four state-of-the-art online streaming feature selection methods for building long-lead extreme floods forecasting models, which identify optimal size of strongly relevant meteorological variables and confine learning the prediction model on the relevant feature set instead of the original entire feature set. The prediction models are evaluated and compared systematically on the historical precipitation and associated meteorological data collected in the State of Iowa.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. 
Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Data compression plays an important role in data mining in assessing the minability of data and a modality of evaluating similarities between complex objects. We focus on compressibility of strings of symbols and on using compression in computing similarity in text corpora; also we propose a novel approach for assessing the quality of text summarization.", "Hadoop is an emerging framework for parallel big data processing. While becoming popular, Hadoop is too complex for regular users to fully understand all the system parameters and tune them appropriately. Especially when processing a batch of jobs, default Hadoop setting may cause inefficient resource utilization and unnecessarily prolong the execution time. 
This paper considers an extremely important setting of slot configuration which by default is fixed and static. We proposed an enhanced Hadoop system called FRESH which can derive the best slot setting, dynamically configure slots, and appropriately assign tasks to the available slots. The experimental results show that when serving a batch of MapReduce jobs, FRESH significantly improves the makespan as well as the fairness among jobs.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "This paper studies an evolutionary game theoretic framework for adaptive and stable application deployment in clouds. The framework, called Cielo, aids cloud operators to adapt the resource allocation to applications and their locations to the operational conditions in a cloud (e.g., workload and resource availability) with respect to multiple conflicting objectives (e.g., response time and power consumption). Moreover, Cielo theoretically guarantees that each application performs an evolutionarily stable deployment strategy, which is an equilibrium solution under given operational conditions. Simulation results verify this theoretical analysis, applications seek equilibria to perform adaptive and evolutionarily stable deployment strategies. Cielo outperforms well-known existing heuristics.", "Long-lead prediction of heavy precipitation events has a significant impact since it can provide an early warning of disasters, like a flood. However, the performance of existed prediction models has been constrained by the high dimensional space and non-linear relationship among variables. In this study, we study the prediction problem from the prospective of machine learning. 
In our machine-learning framework for forecasting heavy precipitation events, we use global hydro-meteorological variables with spatial and temporal influences as features, and the target weather events that last several days have been formulated as weather clusters. Our study has three phases: 1) identify weather clusters in different sizes, 2) handle the imbalance problem within the data, 3) select the most-relevant features through the large feature space. We plan to evaluate our methods with several real world data sets for predicting the heavy precipitation events.", "Heavy precipitation for several days and weeks always leads to some extreme nature disasters. Long-lead term precipitation forecasting plays an important role on the prevision of such calamities. Most works focus on the generation of training labels with allocation of the proper corresponding spatio-temporal information. In this paper, we will provide a different path by performing regression analysis using the precipitation amounts at particular locations. This method is called Hierarchical Clustering based Bayesian Structural Vector Autoregression (HC-BSVAR). The approach for HC-BSVAR is divided into two steps. First, we apply a hierarchical clustering algorithm to identify the Elite locations and then transfer the 3-dimensional data space into a new traditional 2-dimensional data space. Every column of the new data frame is a hydro-meteorological feature of the original data and each row represents a time point (day) in the original space. Secondly, an economic-based multivariate time series model called Bayesian-based Structural Vector Autoregression (BSVAR) is exploited to perform the final prediction result. The prediction quality will be vary for different cut of tree structure which generated by hierarchical clustering. The coefficient for determination of each location by each level of cut is applied to quantize the quality of prediction. 
The relationship between the cut level of the clustering of geographic locations and the regression model performance is also discussed, based on the prediction quality results." ] }
1609.03759
2521863123
Recent trends in robot arm control have seen a shift towards end-to-end solutions, using deep reinforcement learning to learn a controller directly from raw sensor data, rather than relying on a hand-crafted, modular pipeline. However, the high dimensionality of the state space often means that it is impractical to generate sufficient training data with real-world experiments. As an alternative solution, we propose to learn a robot controller in simulation, with the potential of then transferring this to a real robot. Building upon the recent success of deep Q-networks, we present an approach which uses 3D simulations to train a 7-DOF robotic arm in a control task without any prior knowledge. The controller accepts images of the environment as its only input, and outputs motor actions for the task of locating and grasping a cube, over a range of initial configurations. To encourage efficient learning, a structured reward function is designed with intermediate rewards. We also present preliminary results in direct transfer of policies over to a real robot, without any further training.
We believe that training in simulation is a more scalable solution for the most complex manipulation tasks. Simulation has been used for a number of tasks in computer vision and robotics, including object recognition @cite_13 , semantic segmentation @cite_8 , robot grasping @cite_10 , and autonomous driving @cite_17 . For robot arm control, it was recently shown that policies could be learned in simulation for a 2D target reaching task @cite_19 , but this work did not demonstrate the feasibility of transferring to the real world. To address this issue of transfer learning, a cross-domain loss was proposed in @cite_5 to incorporate both simulated and real-world data within the same loss function. An alternative approach has made use of progressive neural networks @cite_4 , which ensure that information from simulation is not forgotten when further training is carried out on a real robot @cite_15 . However, our approach differs in that we do not require any real-world training, and attempt to directly apply policies learned in simulation to a real robot.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_8", "@cite_19", "@cite_5", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "2962724911", "", "2341569833", "2963513913", "2174364281", "2952629144", "2485911221", "2431874326" ], "abstract": [ "A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.", "", "Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data— performance increases in proportion to the amount of data used. 
However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth-based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset.", "This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images and without any prior knowledge of configuration is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. Transferring the network to real hardware and real observation in a naive approach failed, but experiments show that the network works when replacing camera images with synthetic images.", "Real-world robotics problems often occur in domains that differ significantly from the robot's prior training environment. For many robotic control tasks, real world experience is expensive to obtain, but data is easy to collect in either an instrumented environment or in simulation. We propose a novel domain adaptation approach for robot perception that adapts visual representations learned on a large easy-to-obtain source dataset (e.g. synthetic images) to a target real-world domain, without requiring expensive manual data annotation of real world data before policy search. 
Supervised domain adaptation methods minimize cross-domain differences using pairs of aligned images that contain the same object or scene in both the source and target domains, thus learning a domain-invariant representation. However, they require manual alignment of such image pairs. Fully unsupervised adaptation methods rely on minimizing the discrepancy between the feature distributions across domains. We propose a novel, more powerful combination of both distribution and pairwise image alignment, and remove the requirement for expensive annotation by using weakly aligned pairs of images in the source and target domains. Focusing on adapting from simulation to real world data using a PR2 robot, we evaluate our approach on a manipulation task and show that by using weakly paired images, our method compensates for domain shift more effectively than previous techniques, enabling better robot performance in the real world.", "Applying end-to-end learning to solve complex, interactive, pixel-driven control tasks on a robot is an unsolved problem. Deep Reinforcement Learning algorithms are too slow to achieve performance on a real robot, but their potential has been demonstrated in simulated environments. We propose using progressive networks to bridge the reality gap and transfer learned policies from simulation to the real world. The progressive net approach is a general framework that enables reuse of everything from low-level visual features to high-level policies for transfer to new tasks, enabling a compositional, yet simple, approach to building complex skills. We present an early demonstration of this approach with a number of experiments in the domain of robot manipulation that focus on bridging the reality gap. Unlike other proposed approaches, our real-world experiments demonstrate successful task learning from raw visual input on a fully actuated robot manipulator. 
Moreover, rather than relying on model-based trajectory optimisation, the task learning is accomplished using only deep reinforcement learning and sparse rewards.", "This paper presents a new method for parallel-jaw grasping of isolated objects from depth images, under large gripper pose uncertainty. Whilst most approaches aim to predict the single best grasp pose from an image, our method first predicts a score for every possible grasp pose, which we denote the grasp function. With this, it is possible to achieve grasping robust to the gripper's pose uncertainty, by smoothing the grasp function with the pose uncertainty function. Therefore, if the single best pose is adjacent to a region of poor grasp quality, that pose will no longer be chosen, and instead a pose will be chosen which is surrounded by a region of high grasp quality. To learn this function, we train a Convolutional Neural Network which takes as input a single depth image of an object, and outputs a score for each grasp pose across the image. Training data for this is generated by use of physics simulation and depth image simulation with 3D object meshes, to enable acquisition of sufficient data without requiring exhaustive real-world experiments. We evaluate with both synthetic and real experiments, and show that the learned grasp score is more robust to gripper pose uncertainty than when this uncertainty is not accounted for.", "Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images, thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. 
In this paper, we propose to use a virtual world to automatically generate realistic synthetic images with pixel-level annotations. Then, we address the question of how useful such data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task." ] }
1609.03173
2949546727
One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The usage of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for Broadcast/Multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in Broadcasting.
Our focus is on broadcasting for streaming services, where low-complexity decoding algorithms are required to enable reception on energy-constrained devices, together with short decoding delays so that content playback can start as soon as possible without sacrificing throughput. In previous studies, the decoding delay is reduced through the use of systematic encoding constructions with progressive decoding. The complexity is decreased by the use of binary fields for network coding @cite_13 .
{ "cite_N": [ "@cite_13" ], "mid": [ "1882115122" ], "abstract": [ "We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve the theoretical optimal performance. Furthermore, we demonstrate that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics." ] }
1609.03289
2949976481
Heat-Diffusion (HD) routing is our recently-developed queue-aware routing policy for multi-hop wireless networks inspired by Thermodynamics. In the prior theoretical studies, we have shown that HD routing guarantees throughput optimality, minimizes a quadratic routing cost, minimizes queue congestion on the network, and provides a trade-off between routing cost and queueing delay that is Pareto-Optimal. While striking, these guarantees are based on idealized assumptions (including global synchronization, centralized control, and infinite buffers) and heretofore have only been evaluated through simplified numerical simulations. We present here the first practical decentralized Heat-Diffusion Collection Protocol (HDCP) for wireless sensor networks and detail its implementation on Contiki OS. We present a thorough evaluation of HDCP based on real testbed experiments, including a comparative analysis of its performance with respect to the state of the art Collection Tree Protocol (CTP) and Backpressure Collection Protocol (BCP) for wireless sensor networks. We find that HDCP has a significantly higher throughput region and greater resilience to interference compared to CTP. However, we also find that the best performance of HDCP is comparable to the best performance of BCP, due to the similarity in their neighbor rankings, which we verify through a Kendall's-Tau test.
Besides the original Backpressure routing algorithm, other throughput optimal policies @cite_10 @cite_14 @cite_11 have also been proposed in the existing network theory literature. The HD algorithm also provides the same throughput optimality guarantee in theory. However, what motivated us to implement HD is its striking additional expected performance capability (based on our theoretical results): that it also offers a Pareto-optimal trade-off between routing cost and queue congestion.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_11" ], "mid": [ "2103468752", "2059060883", "2136954685" ], "abstract": [ "The input-queued switch architecture is widely used in Internet routers, due to its ability to run at very high line speeds. A central problem in designing an input-queued switch is choosing the scheduling algorithm, i.e. deciding which packets to transfer from ingress ports to egress ports in a given timeslot. Important metrics for evaluating a scheduling algorithm are its throughput and average delay. The well-studied ‘Maximum-Weight’ algorithm has been proved to have maximal throughput [1]; later work [2]–[4] found a wider class of algorithms which also have maximal throughput. The delay performance of these algorithms is less well understood. In this paper, we present a new technique for analysing scheduling algorithms which can explain their delay performance. In particular, we are able to explain the empirical observations in [2] about the average delay in a parameterized class of algorithms akin to Maximum-Weight. We also propose an optimal scheduling algorithm. Our technique is based on critically-balanced fluid model equations.", ".We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89–148] and Williams [Queueing Systems Theory Appl. 
30 (1998b) 5–25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks. 1. Introduction. This paper is a continuation of Dai and Lin (2005), in which maximum pressure policies are shown to be throughput optimal for a class of stochastic processing networks. Throughput optimality is an important, first-order objective for many networks, but it ignores some key secondary performance measures like queueing delays experienced by jobs in these networks. In this paper we show that maximum pressure policies enjoy additional optimality properties; they are asymptotically optimal in minimizing a certain workload or holding cost of a stochastic processing network. Stochastic processing networks have been introduced in a series of three papers by Harrison (2000, 2002, 2003). In Dai and Lin (2005) and this paper we consider a special class of Harrison’s model. This class of stochastic processing networks is much more general than multiclass queueing networks that have been a subject of intensive study in the last 20 years; see, for example, Harrison (1988), Williams", "This paper considers the problem of throughput optimal routing scheduling in a multi-hop constrained queueing network with random connectivity whose special cases include opportunistic multi-hop wireless networks and input-queued switch fabrics. The main challenge in the design of throughput optimal routing policies is closely related to identifying appropriate and universal Lyapunov functions with negative expected drift. The few well-known throughput optimal policies in the literature are constructed using simple quadratic or exponential Lyapunov functions of the queue backlogs and as such they seek to balance the queue backlogs across network independent of the topology. 
By considering a class of continuous, differentiable, and piece-wise quadratic Lyapunov functions, this paper provides a large class of throughput optimal routing policies. The proposed class of Lyapunov functions allow for the routing policy to control the traffic along short paths for a large portion of state-space while ensuring a negative expected drift. This structure enables the design of a large class of routing policies. In particular, and in addition to recovering the throughput optimality of the well-known backpressure routing policy, an opportunistic routing policy with congestion diversity is proved to be throughput optimal." ] }
1609.03289
2949976481
Heat-Diffusion (HD) routing is our recently-developed queue-aware routing policy for multi-hop wireless networks inspired by Thermodynamics. In the prior theoretical studies, we have shown that HD routing guarantees throughput optimality, minimizes a quadratic routing cost, minimizes queue congestion on the network, and provides a trade-off between routing cost and queueing delay that is Pareto-Optimal. While striking, these guarantees are based on idealized assumptions (including global synchronization, centralized control, and infinite buffers) and heretofore have only been evaluated through simplified numerical simulations. We present here the first practical decentralized Heat-Diffusion Collection Protocol (HDCP) for wireless sensor networks and detail its implementation on Contiki OS. We present a thorough evaluation of HDCP based on real testbed experiments, including a comparative analysis of its performance with respect to the state of the art Collection Tree Protocol (CTP) and Backpressure Collection Protocol (BCP) for wireless sensor networks. We find that HDCP has a significantly higher throughput region and greater resilience to interference compared to CTP. However, we also find that the best performance of HDCP is comparable to the best performance of BCP, due to the similarity in their neighbor rankings, which we verify through a Kendall's-Tau test.
Besides BCP, there are a number of other prior works on routing and collection protocols for wireless sensor networks, including the Collection Tree Protocol (CTP) @cite_9 , Glossy @cite_7 , Dozer @cite_19 , Low-power Wireless Bus @cite_0 , ORW @cite_8 and Oppcast @cite_15 . We provide a side by side comparison of HDCP with the well-known CTP and BCP protocols. We believe this provides a meaningful comparison with a state of the art minimum cost quasi-static routing protocol as well as a state of the art queue and cost-aware dynamic routing protocol.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_0", "@cite_19", "@cite_15" ], "mid": [ "", "2129227302", "2160343401", "2132680304", "2148863828", "2343861399" ], "abstract": [ "", "Traditionally, routing in wireless sensor networks consists of two steps: First, the routing protocol selects a next hop, and, second, the MAC protocol waits for the intended destination to wake up and receive the data. This design makes it difficult to adapt to link dynamics and introduces delays while waiting for the next hop to wake up. In this paper we introduce ORW, a practical opportunistic routing scheme for wireless sensor networks. In a duty-cycled setting, packets are addressed to sets of potential receivers and forwarded by the neighbor that wakes up first and successfully receives the packet. This reduces delay and energy consumption by utilizing all neighbors as potential forwarders. Furthermore, this increases resilience to wireless link dynamics by exploiting spatial diversity. Our results show that ORW reduces radio duty-cycles on average by 50% (up to 90% on individual nodes) and delays by 30% to 90% when compared to the state of the art.", "This paper presents and evaluates two principles for wireless routing protocols. The first is datapath validation: data traffic quickly discovers and fixes routing inconsistencies. The second is adaptive beaconing: extending the Trickle algorithm to routing control traffic reduces route repair latency and sends fewer beacons. We evaluate datapath validation and adaptive beaconing in CTP Noe, a sensor network tree collection protocol. We use 12 different testbeds ranging in size from 20--310 nodes, comprising seven platforms, and six different link layers, on both interference-free and interference-prone channels. In all cases, CTP Noe delivers > 90% of packets. Many experiments achieve 99.9%. Compared to standard beaconing, CTP Noe sends 73% fewer beacons while reducing topology repair latency by 99.8%. 
Finally, when using low-power link layers, CTP Noe has duty cycles of 3% while supporting aggregate loads of 30 packets per minute.", "We present the Low-Power Wireless Bus (LWB), a communication protocol that supports several traffic patterns and mobile nodes immersed in static infrastructures. LWB turns a multi-hop low-power wireless network into an infrastructure similar to a shared bus, where all nodes are potential receivers of all data. It achieves this by mapping all traffic demands on fast network floods, and by globally scheduling every flood. As a result, LWB inherently supports one-to-many, many-to-one, and many-to-many traffic. LWB also keeps no topology-dependent state, making it more resilient to link changes due to interference, node failures, and mobility than prior approaches. We compare the same LWB prototype on four testbeds with seven state-of-the-art protocols and show that: (i) LWB performs comparably or significantly better in many-to-one scenarios, and adapts efficiently to varying traffic loads; (ii) LWB outperforms our baselines in many-to-many scenarios, at times by orders of magnitude; (iii) external interference and node failures affect LWB's performance only marginally; (iv) LWB supports mobile nodes acting as sources, sinks, or both without performance loss.", "Environmental monitoring is one of the driving applications in the domain of sensor networks. The lifetime of such systems is envisioned to exceed several years. To achieve this longevity in unattended operation it is crucial to minimize energy consumption of the battery-powered sensor nodes. This paper proposes Dozer, a data gathering protocol meeting the requirements of periodic data collection and ultra-low power consumption. The protocol comprises MAC-layer, topology control, and routing all coordinated to reduce energy wastage of the communication subsystem. Using a tree-based network structure, packets are reliably routed towards the data sink. 
Parents thereby schedule precise rendezvous times for all communication with their children. In a deployed network consisting of 40 TinyOS-enabled sensor nodes, Dozer achieves radio duty cycles in the magnitude of 0.2%.", "ZigBee shares the 2.4 GHz ISM band with a number of wireless technologies like WiFi, Bluetooth, and common household appliances like a microwave and a cordless phone to name a few. Due to the large-scale penetration of these technologies in urban environments, ZigBee communication suffers from severe cross-technology interference (CTI). Data collection in the presence of such highly dynamic CTI is quite challenging. Our work first examines the different deployment environments under the influence of planned and unplanned CTI and later proposes Oppcast, a robust and energy-efficient data collection protocol that carefully exploits a combination of spatial and channel diversity to eliminate the need for performing expensive channel estimation in advance. Our extensive evaluation in both a large-scale testbed (Academic Institution) and various urban environments (Carpark, Residential Complex, Shopping Mall and Cafeteria) shows that Oppcast is not only robust to CTI with reliability consistently maintained above 98.55%, but is also up to 2.4 times more energy efficient than the state-of-the-art data collection protocols. The rationale behind Oppcast exhibiting high robustness in highly dynamic environments is a significant increase in the number of communication opportunities it gets by exploiting multiple routes over multiple channels towards the destination." ] }
1609.03289
2949976481
Heat-Diffusion (HD) routing is our recently-developed queue-aware routing policy for multi-hop wireless networks inspired by Thermodynamics. In the prior theoretical studies, we have shown that HD routing guarantees throughput optimality, minimizes a quadratic routing cost, minimizes queue congestion on the network, and provides a trade-off between routing cost and queueing delay that is Pareto-Optimal. While striking, these guarantees are based on idealized assumptions (including global synchronization, centralized control, and infinite buffers) and heretofore have only been evaluated through simplified numerical simulations. We present here the first practical decentralized Heat-Diffusion Collection Protocol (HDCP) for wireless sensor networks and detail its implementation on Contiki OS. We present a thorough evaluation of HDCP based on real testbed experiments, including a comparative analysis of its performance with respect to the state of the art Collection Tree Protocol (CTP) and Backpressure Collection Protocol (BCP) for wireless sensor networks. We find that HDCP has a significantly higher throughput region and greater resilience to interference compared to CTP. However, we also find that the best performance of HDCP is comparable to the best performance of BCP, due to the similarity in their neighbor rankings, which we verify through a Kendall's-Tau test.
In our prior works, we have presented the idealized Heat Diffusion routing algorithm @cite_20 @cite_1 . All of these are network theory papers that spell out a centralized algorithm, assume global synchronization, assume that at each time step an NP-hard Maximum Weight Independent Set problem can be solved, and assume that all queues are of unlimited size; under these assumptions they prove various properties of the HD algorithm. The only evaluations presented in these works are idealized MATLAB simulations. This work is clearly inspired by and built upon our earlier works on HD routing, but is the first to develop and implement it as a realistic distributed protocol (HDCP) and evaluate it on a real testbed.
{ "cite_N": [ "@cite_1", "@cite_20" ], "mid": [ "2073507812", "2041248965" ], "abstract": [ "Minimum cost routing is considered on multiclass multihop wireless networks influenced by stochastic arrivals, inter-channel interference, and time-varying topology. Endowing each air link with a cost factor, possibly time-varying and different for different classes, we define the Dirichlet routing cost as the square of the link packet transmissions weighted by the link cost-factors. Our recently-proposed Heat-Diffusion (HD) routing protocol [3] is extended to minimize this cost, while ensuring queue stability for all stabilizable traffic demands, and without requiring any information about network topology or packet arrivals. This is the first time in literature that such a multiclass routing penalty can be minimized at network layer subject to queue stability. Further, when all links are of unit cost factor, our protocol here reduces to the one in our recent paper [4], leading to minimum average network delay among all routing protocols that act based only on current queue congestion and current channel states. Our approach is based on mapping a communication network into an electrical network by showing that the fluid limit of wireless network under our routing protocol follows Ohm's law on a nonlinear resistive network.", "This paper presents the expected transmission count metric (ETX), which finds high-throughput paths on multi-hop wireless networks. ETX minimizes the expected total number of packet transmissions (including retransmissions) required to successfully deliver a packet to the ultimate destination. The ETX metric incorporates the effects of link loss ratios, asymmetry in the loss ratios between the two directions of each link, and interference among the successive links of a path. 
In contrast, the minimum hop-count metric chooses arbitrarily among the different paths of the same minimum length, regardless of the often large differences in throughput among those paths, and ignoring the possibility that a longer path might offer higher throughput. This paper describes the design and implementation of ETX as a metric for the DSDV and DSR routing protocols, as well as modifications to DSDV and DSR which allow them to use ETX. Measurements taken from a 29-node 802.11b test-bed demonstrate the poor performance of minimum hop-count, illustrate the causes of that poor performance, and confirm that ETX improves performance. For long paths the throughput improvement is often a factor of two or more, suggesting that ETX will become more useful as networks grow larger and paths become longer." ] }
1609.03433
2548129800
We present a novel motion planning algorithm for transferring a liquid body from a source to a target container. Our approach uses a receding-horizon optimization strategy that takes into account fluid constraints and avoids collisions. In order to efficiently handle the high-dimensional configuration space of a liquid body, we use system identification to learn its dynamics characteristics using a neural network. We generate the training dataset using stochastic optimization in a transfer-problem-specific search space. The runtime feedback motion planner is used for real-time planning and we observe high success rate in our simulated 2D and 3D fluid transfer benchmarks.
A motion planning algorithm searches for a trajectory that satisfies a set of constraints (collision-free, smoothness) and that may also be optimal under a given quality measure. Many early motion planners, such as @cite_29 and its descendants @cite_2 @cite_23 @cite_11 , consider only collision-free constraints. Unlike these methods, which tend to compute a trajectory by sampling in the space of possible trajectories, optimization-based motion planners such as @cite_8 @cite_28 @cite_24 can readily take other constraints into account, such as dynamics and smoothness. Many of these approaches formulate the problem as a spacetime continuous optimization. Such optimization methods have also been used for liquid transfer @cite_3 @cite_10 based on simplified dynamics, but they are limited to static environments.
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_29", "@cite_3", "@cite_24", "@cite_23", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2142224528", "2019965290", "131069610", "2579689856", "61873113", "2102128251", "2166052572", "", "2135677376" ], "abstract": [ "We present a new optimization-based approach for robotic motion planning among obstacles. Like CHOMP (Covariant Hamiltonian Optimization for Motion Planning), our algorithm can be used to find collision-free trajectories from naïve, straight-line initializations that might be in collision. At the core of our approach are (a) a sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary, and (b) an efficient formulation of the no-collisions constraint that directly considers continuous-time safety. Our algorithm is implemented in a software package called TrajOpt. We report results from a series of experiments comparing TrajOpt with CHOMP and randomized planners from OMPL, with regard to planning time and path quality. We consider motion planning for 7 DOF robot arms, 18 DOF full-body robots, statically stable walking motion for the 34 DOF Atlas humanoid robot, and physical experiments with the 18 DOF PR2. We also apply TrajOpt to plan curvature-constrained steerable needle trajectories in the SE(3) configuration space and multiple non-intersecting curved channels within 3D-printed implants for intracavitary brachytherapy. Details, videos, and source code are freely available at: http://rll.berkeley.edu/trajopt/ijrr.", "We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produce an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.", "", "We present a new algorithm to compute a collision-free trajectory for a robot manipulator to pour liquid from one container to the other. Our formulation uses a physical fluid model to simulate its highly deformable motion. We present a simulation guided and optimization based method to automatically compute the transferring trajectory. We use the full-featured and accurate Navier-Stokes model that provides the fine-grained information of velocity distribution inside the liquid body. Moreover, this information is used as an additional guiding energy term for the planner. Our approach includes a tight integration between the fine-grained fluid simulator, liquid transfer controller, and the optimization-based planner. We have implemented the method using a hybrid particle-mesh fluid simulator (FLIP) and demonstrated its performance on 4 benchmarks with different cup shapes and viscosity coefficients.", "We present a novel optimization-based algorithm for motion planning in dynamic environments. Our approach uses a stochastic trajectory optimization framework to avoid collisions and satisfy smoothness and dynamics constraints. Our algorithm does not require a priori knowledge about global motion or trajectories of dynamic obstacles. Rather, we compute a conservative local bound on the position or trajectory of each obstacle over a short time and use the bound to compute a collision-free trajectory for the robot in an incremental manner. Moreover, we interleave planning and execution of the robot in an adaptive manner to balance between the planning horizon and responsiveness to obstacles. We highlight the performance of our planner in a simulated dynamic environment with the 7-DOF PR2 robot arm and dynamic obstacles.", "We explore global randomized joint space path planning for articulated robots that are subject to task space constraints. This paper describes a representation of constrained motion for joint space planners and develops two simple and efficient methods for constrained sampling of joint configurations: Tangent Space Sampling (TS) and First-Order Retraction (FR). Constrained joint space planning is important for many real world problems involving redundant manipulators. On the one hand, tasks are designated in work space coordinates: rotating doors about fixed axes, sliding drawers along fixed trajectories or holding objects level during transport. On the other, joint space planning gives alternative paths that use redundant degrees of freedom to avoid obstacles or satisfy additional goals while performing a task. In simulation, we demonstrate that our methods are faster and significantly more invariant to problem/algorithm parameters than existing techniques.", "We address the problem of real-time navigation in dynamic environments for car-like robots. We present an approach to identify controls that will lead to a collision with a moving obstacle at some point in the future. Our approach generalizes the concept of velocity obstacles, which have been used for navigation among dynamic obstacles, and takes into account the constraints of a car-like robot. We use this formulation to find controls that will allow collision free navigation in dynamic environments. Finally, we demonstrate the performance of our algorithm on a simulated car-like robot among moving obstacles.", "", "We present an approach to path planning for manipulators that uses Workspace Goal Regions (WGRs) to specify goal end-effector poses. Instead of specifying a discrete set of goals in the manipulator's configuration space, we specify goals more intuitively as volumes in the manipulator's workspace. We show that WGRs provide a common framework for describing goal regions that are useful for grasping and manipulation. We also describe two randomized planning algorithms capable of planning with WGRs. The first is an extension of RRT-JT that interleaves exploration using a Rapidly-exploring Random Tree (RRT) with exploitation using Jacobian-based gradient descent toward WGR samples. The second is the IKBiRRT algorithm, which uses a forward-searching tree rooted at the start and a backward-searching tree that is seeded by WGR samples. We demonstrate both simulation and experimental results for a 7DOF WAM arm with a mobile base performing reaching and pick-and-place tasks. Our results show that planning with WGRs provides an intuitive and powerful method of specifying goals for a variety of tasks without sacrificing efficiency or desirable completeness properties." ] }
1609.03433
2548129800
We present a novel motion planning algorithm for transferring a liquid body from a source to a target container. Our approach uses a receding-horizon optimization strategy that takes into account fluid constraints and avoids collisions. In order to efficiently handle the high-dimensional configuration space of a liquid body, we use system identification to learn its dynamics characteristics using a neural network. We generate the training dataset using stochastic optimization in a transfer-problem-specific search space. The runtime feedback motion planner is used for real-time planning, and we observe a high success rate in our simulated 2D and 3D fluid transfer benchmarks.
There is considerable work on feedback motion planning that uses refinement schemes based on feedback control laws. This can be performed using replanning @cite_4 @cite_30 @cite_21 or by formulating the problem as a Markov Decision Process @cite_22 . These ideas have been applied to high-dimensional continuous systems such as humanoid robots @cite_19 @cite_32 . In this work, we present such a feedback motion planning algorithm for liquid transfer.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_22", "@cite_21", "@cite_32", "@cite_19" ], "mid": [ "2091338725", "2108643443", "2099430963", "2113256452", "2091638990", "2121103318" ], "abstract": [ "Replanning is a powerful mechanism for controlling robot motion under hard constraints and unpredictable disturbances, but it involves an inherent tradeoff between the planner's power (e.g., a planning horizon or time cutoff) and its responsiveness to disturbances. This paper presents an adaptive time-stepping architecture for real-time planning with several advantageous properties. By dynamically adapting to the amount of time needed for a sample-based motion planner to make progress toward the goal, the technique is robust to the typically high variance exhibited by replanning queries. The technique is proven to be safe and asymptotically complete in a deterministic environment and a static objective. For unpredictably moving obstacles, the technique can be applied to keep the robot safe more reliably than reactive obstacle avoidance or fixed time-step replanning. It can also be applied in a contingency planning algorithm that achieves simultaneous safety-seeking and goal-seeking motion. These techniques generate responsive and safe motion in both simulated and real robots across a range of difficulties, including applications to bounded-acceleration pursuit-evasion, indoor navigation among moving obstacles, and aggressive collision-free teleoperation of an industrial robot arm.", "We present a replanning algorithm for repairing rapidly-exploring random trees when changes are made to the configuration space. Instead of abandoning the current RRT, our algorithm efficiently removes just the newly-invalid parts and maintains the rest. It then grows the resulting tree until a new solution is found. We use this algorithm to create a probabilistic analog to the widely-used D* family of deterministic algorithms, and demonstrate its effectiveness in a multirobot planning domain.", "IN Proc. Robotics: Science & Systems, 2008 Abstract—Motion planning in uncertain and dynamic environments is an essential capability for autonomous robots. Partially observable Markov decision processes (POMDPs) provide a principled mathematical framework for solving such problems, but they are often avoided in robotics due to high computational complexity. Our goal is to create practical POMDP algorithms and software for common robotic tasks. To this end, we have developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency. In simulation, we successfully applied the algorithm to a set of common robotic tasks, including instances of coastal navigation, grasping, mobile robot exploration, and target tracking, all modeled as POMDPs with a large number of states. In most of the instances studied, our algorithm substantially outperformed one of the fastest existing point-based algorithms. A software package implementing our algorithm will soon be released at http://motion.comp.nus.edu.sg/projects/pomdp/pomdp.html.", "Mobile robots often operate in domains that are only incompletely known, for example, when they have to move from given start coordinates to given goal coordinates in unknown terrain. In this case, they need to be able to replan quickly as their knowledge of the terrain changes. Stentz' Focussed Dynamic A* (D*) is a heuristic search method that repeatedly determines a shortest path from the current robot coordinates to the goal coordinates while the robot moves along the path. It is able to replan faster than planning from scratch since it modifies its previous search results locally. Consequently, it has been extensively used in mobile robotics. In this article, we introduce an alternative to D* that determines the same paths and thus moves the robot in the same way but is algorithmically different. D* Lite is simple, can be rigorously analyzed, extendible in multiple ways, and is at least as efficient as D*. We believe that our results will make D*-like replanning methods even more popular and enable robotics researchers to adapt them to additional applications.", "We present a novel optimization-based motion planning algorithm for high degree-of-freedom (DOF) robots in dynamic environments. Our approach decomposes the high-dimensional motion planning problem into a sequence of low-dimensional sub-problems. We compute collision-free and smooth paths using optimization-based planning and trajectory perturbation for each sub-problem. The overall algorithm does not require a priori knowledge about global motion or trajectories of dynamic obstacles. Rather, we compute a conservative local bound on the position or trajectory of each obstacle over a short time and use the bound to incrementally compute a collision-free trajectory for the robot. The high-DOF robot is treated as a tightly coupled system, and we incrementally use constrained coordination to plan its motion. We highlight the performance of our planner in simulated environments on robots with tens of DOFs.", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation." ] }
1609.03433
2548129800
We present a novel motion planning algorithm for transferring a liquid body from a source to a target container. Our approach uses a receding-horizon optimization strategy that takes into account fluid constraints and avoids collisions. In order to efficiently handle the high-dimensional configuration space of a liquid body, we use system identification to learn its dynamics characteristics using a neural network. We generate the training dataset using stochastic optimization in a transfer-problem-specific search space. The runtime feedback motion planner is used for real-time planning, and we observe a high success rate in our simulated 2D and 3D fluid transfer benchmarks.
The extension of conventional motion planning algorithms to the manipulation of non-rigid objects has been addressed in the context of virtual suturing @cite_12 , cloth folding @cite_25 , and surgical simulation @cite_6 . Non-rigid objects are challenging to handle because of their high-dimensional configuration spaces. This is especially the case for liquid manipulation tasks, where the dimension can be as high as several million (see @cite_3 for a detailed discussion). For certain types of fluids, such as smoke and fire, optimization-based motion planning can be adapted to solve the problem by exploiting the special structure of the resulting fluid simulator @cite_7 @cite_16 . However, it is non-trivial to extend these methods to control liquid bodies with non-smooth, rapidly changing free surfaces. Moreover, prior methods are designed for offline applications and are computationally very costly. Previous work @cite_10 reduced the computational cost by using a greatly simplified liquid model that depends on just two variables.
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_6", "@cite_16", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2105673236", "2579689856", "2021923954", "1971086298", "", "2950229073", "" ], "abstract": [ "We describe a method for controlling smoke simulations through user-specified keyframes. To achieve the desired behavior, a continuous quasi-Newton optimization solves for appropriate \"wind\" forces to be applied to the underlying velocity field throughout the simulation. The cornerstone of our approach is a method to efficiently compute exact derivatives through the steps of a fluid simulation. We formulate an objective function corresponding to how well a simulation matches the user's keyframes, and use the derivatives to solve for force parameters that minimize this function. For animations with several keyframes, we present a novel multiple-shooting approach. By splitting large problems into smaller overlapping subproblems, we greatly speed up the optimization process while avoiding certain local minima.", "We present a new algorithm to compute a collision-free trajectory for a robot manipulator to pour liquid from one container to the other. Our formulation uses a physical fluid model to simulate its highly deformable motion. We present a simulation guided and optimization based method to automatically compute the transferring trajectory. We use the full-featured and accurate Navier-Stokes model that provides the fine-grained information of velocity distribution inside the liquid body. Moreover, this information is used as an additional guiding energy term for the planner. Our approach includes a tight integration between the fine-grained fluid simulator, liquid transfer controller, and the optimization-based planner. We have implemented the method using a hybrid particle-mesh fluid simulator (FLIP) and demonstrated its performance on 4 benchmarks with different cup shapes and viscosity coefficients.", "We present algorithms for simulating and visualizing the insertion and steering of needles through deformable tissues for surgical training and planning. Needle insertion is an essential component of many clinical procedures such as biopsies, injections, neurosurgery, and brachytherapy cancer treatment. The success of these procedures depends on accurate guidance of the needle tip to a clinical target while avoiding vital tissues. Needle insertion deforms body tissues, making accurate placement difficult. Our interactive needle insertion simulator models the coupling between a steerable needle and deformable tissue. We introduce (1) a novel algorithm for local remeshing that quickly enforces the conformity of a tetrahedral mesh to a curvilinear needle path, enabling accurate computation of contact forces, (2) an efficient method for coupling a 3D finite element simulation with a 1D inextensible rod with stick-slip friction, and (3) optimizations that reduce the computation time for physically based simulations. We can realistically and interactively simulate needle insertion into a prostate mesh of 13,375 tetrahedra and 2,763 vertices at a 25 Hz frame rate on an 8-core 3.0 GHz Intel Xeon PC. The simulation models prostate brachytherapy with needles of varying stiffness, steering needles around obstacles, and supports motion planning for robotic needle insertion. We evaluate the accuracy of the simulation by comparing against real-world experiments in which flexible, steerable needles were inserted into gel tissue phantoms.", "During the last decade, sampling-based path planning algorithms, such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g. as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g. showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically optimal, i.e. such that the cost of the returned solution converges almost surely to the optimum. Moreover, it is shown that the computational complexity of the new algorithms is within a constant factor of that of their probabilistically complete (but not asymptotically optimal) counterparts. The analysis in this paper hinges on novel connections between stochastic sampling-based path planning algorithms and the theory of random geometric graphs.", "", "Robotic manipulation of deformable objects remains a challenging task. One such task is folding a garment autonomously. Given start and end folding positions, what is an optimal trajectory to move the robotic arm to fold a garment? Certain trajectories will cause the garment to move, creating wrinkles and gaps; other trajectories will fail altogether. We present a novel solution to find an optimal trajectory that avoids such problematic scenarios. The trajectory is optimized by minimizing a quadratic objective function in an off-line simulator, which includes material properties of the garment and frictional force on the table. The function measures the dissimilarity between a user folded shape and the folded garment in simulation, which is then used as an error measurement to create an optimal trajectory. We demonstrate that our two-arm robot can follow the optimized trajectories, achieving accurate and efficient manipulations of deformable objects.", "" ] }
1609.03433
2548129800
We present a novel motion planning algorithm for transferring a liquid body from a source to a target container. Our approach uses a receding-horizon optimization strategy that takes into account fluid constraints and avoids collisions. In order to efficiently handle the high-dimensional configuration space of a liquid body, we use system identification to learn its dynamics characteristics using a neural network. We generate the training dataset using stochastic optimization in a transfer-problem-specific search space. The runtime feedback motion planner is used for real-time planning, and we observe a high success rate in our simulated 2D and 3D fluid transfer benchmarks.
Reinforcement and imitation learning have been shown to be effective at controlling high-dimensional dynamic systems, e.g., a humanoid robot @cite_19 @cite_15 . Recently, imitation learning has been used to perform liquid manipulation using example container trajectories from a human demonstrator @cite_33 @cite_31 @cite_26 . However, the learning frameworks in these works do not take fluid dynamics constraints into account. Moreover, trajectories of liquid body shapes from real-life experiments have to be captured and digitized to construct the dataset, which is challenging in and of itself (see @cite_20 @cite_13 ). More recently, reinforcement learning has also been used to learn the pouring of granular materials @cite_5 @cite_9 . Our method differs from these approaches in that we use only supervised learning, but we combine it with trajectory optimization to enhance the robustness of our motion planner.
{ "cite_N": [ "@cite_31", "@cite_26", "@cite_33", "@cite_9", "@cite_19", "@cite_5", "@cite_15", "@cite_13", "@cite_20" ], "mid": [ "2003612239", "2076235166", "2086281586", "2419438630", "2121103318", "", "1571530861", "", "2025333109" ], "abstract": [ "This paper focuses on improving performance with practice for tasks that are difficult to model or plan, such as pouring (manipulating a liquid or granular material such as sugar). We are also interested in tasks that involve the possible use of many skills, such as pouring by tipping, shaking, and tapping. Although our ultimate goal is to learn and optimize skills automatically from demonstration and practice, in this paper, we explore manually obtaining skills from human demonstration, and automatically selecting skills and optimizing continuous parameters for these skills. Behaviors such as pouring, shaking, and tapping are modeled with finite state machines. We unify the pouring and the two shaking skills as a general pouring model. The constructed models are verified by implementing them on a PR2 robot. The robot experiments demonstrate that our approach is able to appropriately generalize knowledge about different pouring skills and optimize behavior parameters.", "One of the key challenges for learning manipulation skills is generalizing between different objects. The robot should adapt both its actions and the task constraints to the geometry of the object being manipulated. In this paper, we propose computing geometric parameters of novel objects by warping known objects to match their shape. We refer to the parameters computed in this manner as warped parameters, as they are defined as functions of the warped object's point cloud. The warped parameters form the basis of the features for the motor skill learning process, and they are used to generalize between different objects. The proposed method was successfully evaluated on a pouring task both in simulation and on a real robot.", "We present a motion planning approach for performing a learned task while avoiding obstacles and reacting to the movement of task-relevant objects. We employ a closed-loop sampling-based motion planner that acquires new sensor information, generates new collision-free plans that are based on a learned task model, and replans at an average rate of more than 10 times per second for a 7-DOF manipulator. The task model is learned from expert demonstrations prior to task execution and is represented as a hidden Markov model. During task execution, our motion planner quickly searches in the Cartesian product of the task model and a probabilistic roadmap for a plan with features most similar to the demonstrations given the locations of the task-relevant objects. We improve the replan rate by using a fast bidirectional search and by biasing the sampling distribution using information from the learned task model to construct high-quality roadmaps. We illustrate the efficacy of our approach by performing a simulated navigation task with a 2D point robot and a physical powder transfer task with the Baxter robot.", "We explore a model-based approach to reinforcement learning where partially or totally unknown dynamics are learned and explicit planning is performed. We learn dynamics with neural networks, and plan behaviors with differential dynamic programming (DDP). In order to handle complicated dynamics, such as manipulating liquids (pouring), we consider temporally decomposed dynamics. We start from our recent work [1] where we used locally weighted regression (LWR) to model dynamics. The major contribution of this paper is making use of deep learning in the form of neural networks with stochastic DDP, and showing the advantages of neural networks over LWR. For this purpose, we extend neural networks for: (1) modeling prediction error and output noise, (2) computing an output probability distribution for a given input distribution, and (3) computing gradients of output expectation with respect to an input. Since neural networks have nonlinear activation functions, these extensions were not easy. We provide an analytic solution for these extensions using some simplifying assumptions. We verified this method in pouring simulation experiments. The learning performance with neural networks was better than that of LWR. The amount of spilled materials was reduced. We also present early results of robot experiments using a PR2. Accompanying video: https://youtu.be/aM3hE1J5W98", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.", "", "We consider the problem of system identification of helicopter dynamics. Helicopters are complex systems, coupling rigid body dynamics with aerodynamics, engine dynamics, vibration, and other phenomena. Resultantly, they pose a challenging system identification problem, especially when considering non-stationary flight regimes. We pose the dynamics modeling problem as direct high-dimensional regression, and take inspiration from recent results in Deep Learning to represent the helicopter dynamics with a Rectified Linear Unit (ReLU) Network Model, a hierarchical neural network model. We provide a simple method for initializing the parameters of the model, and optimization details for training. We describe three baseline models and show that they are significantly outperformed by the ReLU Network Model in experiments on real data, indicating the power of the model to capture useful structure in system dynamics across a rich array of aerobatic maneuvers. Specifically, the ReLU Network Model improves 58% overall in RMS acceleration prediction over state-of-the-art methods. Predicting acceleration along the helicopter's up-down axis is empirically found to be the most difficult, and the ReLU Network Model improves by 60% over the prior state-of-the-art. We discuss explanations of these performance gains, and also investigate the impact of hyperparameters in the novel model.", "", "We present an image-based reconstruction framework to model real water scenes captured by stereoscopic video. In contrast to many image-based modeling techniques that rely on user interaction to obtain high-quality 3D models, we instead apply automatically calculated physically-based constraints to refine the initial model. The combination of image-based reconstruction with physically-based simulation allows us to model complex and dynamic objects such as fluid. Using a depth map sequence as initial conditions, we use a physically based approach that automatically fills in missing regions, removes outliers, and refines the geometric shape so that the final 3D model is consistent to both the input video data and the laws of physics. Physically-guided modeling also makes interpolation or extrapolation in the space-time domain possible, and even allows the fusion of depth maps that were taken at different times or viewpoints. We demonstrated the effectiveness of our framework with a number of real scenes, all captured using only a single pair of cameras." ] }
1609.03286
2520305281
Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently, recurrent neural networks (RNN) obtained strong results on NLU due to their superior ability of preserving sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, as the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNN to additionally incorporate non-flat network topologies guided by prior knowledge. There are two characteristics: 1) important substructures can be captured from small training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out the salient substructures that are essential to predict the semantic tags of the given sentences, so that the understanding performance can be improved. The experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism, and outperform state-of-the-art neural network based frameworks.
There is an emerging trend of learning representations at different levels, such as word embeddings @cite_17 , character embeddings @cite_2 , and sentence embeddings @cite_18 @cite_30 . In addition to fully unsupervised embedding learning, knowledge bases have been widely utilized to learn entity embeddings with specific functions or relations @cite_31 @cite_23 . Different from prior work, this paper focuses on learning composable substructure embeddings that are informative for understanding.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_23", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "2136189984", "2949547296", "2951077644", "2949563612", "2471178169", "2950133940" ], "abstract": [ "Latent semantic models, such as LSA, intend to map a query to its relevant documents at the semantic level where keyword-based matching often fails. In this study we strive to develop a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them. The proposed deep structured semantic models are discriminatively trained by maximizing the conditional likelihood of the clicked documents given a query using the clickthrough data. To make our models applicable to large-scale Web search applications, we also use a technique called word hashing, which is shown to effectively scale up our semantic models to handle large vocabularies which are common in such tasks. The new models are evaluated on a Web document ranking task using a real-world data set. Results show that our best model significantly outperforms other latent semantic models, which were considered state-of-the-art in the performance prior to the work presented in this paper.", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. 
Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. 
Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "Unsupervised word embeddings provide rich linguistic and conceptual information about words. However, they may provide weak information about domain specific semantic relations for certain tasks such as semantic parsing of natural language queries, where such information about words or phrases can be valuable. To encode the prior knowledge about the semantic word relations, we extended the neural network based lexical word embedding objective function by incorporating the information about relationship between entities that we extract from knowledge bases [1]. In this paper, we focus on the semantic tagging of conversational utterances as our end task and we investigate two different ways of using these embeddings: as additional features to a linear sequence learning method, Conditional Random Fields (CRF), and as initial embeddings to a convolutional neural networks based CRF model (CNN-CRF) with shared feature layers and globally normalized sequence modeling components. 
While we obtain an average of 2% improvement in F-score compared to the previous baselines when the enriched embeddings are used as additional features for CRF models, we obtain slightly more gains when the embeddings are used as initial word representations for the CNN-based CRF models.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
1609.03286
2520305281
Natural language understanding (NLU) is a core component of a spoken dialogue system. Recently, recurrent neural networks (RNN) obtained strong results on NLU due to their superior ability of preserving sequential information over time. Traditionally, the NLU module tags semantic slots for utterances considering their flat structures, as the underlying RNN structure is a linear chain. However, natural language exhibits linguistic properties that provide rich, structured information for better understanding. This paper introduces a novel model, knowledge-guided structural attention networks (K-SAN), a generalization of RNN to additionally incorporate non-flat network topologies guided by prior knowledge. There are two characteristics: 1) important substructures can be captured from small training data, allowing the model to generalize to previously unseen test data; 2) the model automatically figures out the salient substructures that are essential to predict the semantic tags of the given sentences, so that the understanding performance can be improved. The experiments on the benchmark Air Travel Information System (ATIS) data show that the proposed K-SAN architecture can effectively extract salient knowledge from substructures with an attention mechanism, and outperform state-of-the-art neural network based frameworks.
One of the earliest works with a memory component applied to language processing is memory networks @cite_11 @cite_27, which encode facts into vectors and store them in the memory for question answering (QA). Following their success, dynamic memory networks (DMN) were proposed to additionally capture position and temporality of transitive reasoning steps for different QA tasks. The idea is to encode important knowledge and store it into memory for future usage with attention mechanisms, which allow neural network models to selectively pay attention to specific parts of the input. There are also various tasks showing the effectiveness of attention mechanisms. However, most previous work focused on classification or prediction tasks (predicting a single word given a question), and there are few studies on NLU tasks (slot tagging). Based on the fact that linguistic or knowledge-based substructures can be treated as prior knowledge to benefit language understanding, this work borrows the idea from memory models to improve NLU. Unlike prior NLU work that utilized representations learned from knowledge bases to enrich features of the current sentence, this paper directly learns a sentence representation incorporating memorized substructures with an automatically decided attention mechanism in an end-to-end manner.
{ "cite_N": [ "@cite_27", "@cite_11" ], "mid": [ "2951008357", "2293453011" ], "abstract": [ "We introduce a neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (, 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.", "Neural network architectures with memory and attention mechanisms exhibit certain reasoning capabilities required for question answering. One such architecture, the dynamic memory network (DMN), obtained high accuracy on a variety of language tasks. However, it was not shown whether the architecture achieves strong results for question answering when supporting facts are not marked during training or whether it could be applied to other modalities such as images. Based on an analysis of the DMN, we propose several improvements to its memory and input modules. Together with these changes we introduce a novel input module for images in order to be able to answer visual questions. Our new DMN+ model improves the state of the art on both the Visual Question Answering dataset and the -10k text question-answering dataset without supporting fact supervision." ] }
1609.03204
2951147438
We present a computational analysis of three language varieties: native, advanced non-native, and translation. Our goal is to investigate the similarities and differences between non-native language productions and translations, contrasting both with native language. Using a collection of computational methods we establish three main results: (1) the three types of texts are easily distinguishable; (2) non-native language and translations are closer to each other than each of them is to native language; and (3) some of these characteristics depend on the source or native language, while others do not, reflecting, perhaps, unified principles that similarly affect translations and non-native language.
Corpus-based investigation of translationese has been a prolific field of recent research, laying out an empirical foundation for theoretically motivated hypotheses on the characteristics of translationese. More specifically, identification of translated texts by means of automatic classification has shed light on the manifestation of translation universals and cross-linguistic influences as markers of translated texts, while @cite_1 introduced a dataset for investigation of potential common traits between translations and non-native texts. Such studies prove to be important for the development of parallel corpora, the improvement in quality of plagiarism detection, language modeling, and statistical machine translation.
{ "cite_N": [ "@cite_1" ], "mid": [ "570535444" ], "abstract": [ "Providing an examination of translation as an interpretative process which encompasses its wide-ranging nature, this work presents a descriptive frame and tries to capture the creative potential of the translation process. It outlines the six essential elements and the five stages of translation, recognizes the three main types of translation, and discusses the personal, creative and cultural influences which can affect a translated piece, as well as the possible applications of computer analysis." ] }
1609.02825
2520456074
Tracking Facial Points in unconstrained videos is challenging due to the non-rigid deformation that changes over time. In this paper, we propose to exploit incremental learning for person-specific alignment in wild conditions. Our approach takes advantage of part-based representation and cascade regression for robust and efficient alignment on each frame. Unlike existing methods that usually rely on models trained offline, we incrementally update the representation subspace and the cascade of regressors in a unified framework to achieve personalized modeling on the fly. To alleviate the drifting issue, the fitting results are evaluated using a deep neural network, where well-aligned faces are picked out to incrementally update the representation and fitting models. Both image and video datasets are employed to validate the proposed method. The results demonstrate the superior performance of our approach compared with existing approaches in terms of fitting accuracy and efficiency.
The aforementioned methods have shown impressive results on standard benchmark datasets @cite_36. However, they still suffer from limited performance in the sequential task, as they rely entirely on static models trained offline. To address this limitation, efforts have been made to construct person-specific models that improve the performance of sequential face alignment.
{ "cite_N": [ "@cite_36" ], "mid": [ "2058961190" ], "abstract": [ "Automatic facial point detection plays arguably the most important role in face analysis. Several methods have been proposed which reported their results on databases of both constrained and unconstrained conditions. Most of these databases provide annotations with different mark-ups and in some cases the are problems related to the accuracy of the fiducial points. The aforementioned issues as well as the lack of a evaluation protocol makes it difficult to compare performance between different systems. In this paper, we present the 300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge which is held in conjunction with the International Conference on Computer Vision 2013, Sydney, Australia. The main goal of this challenge is to compare the performance of different methods on a new-collected dataset using the same evaluation protocol and the same mark-up and hence to develop the first standardized benchmark for facial landmark localization." ] }
1609.02825
2520456074
Tracking Facial Points in unconstrained videos is challenging due to the non-rigid deformation that changes over time. In this paper, we propose to exploit incremental learning for person-specific alignment in wild conditions. Our approach takes advantage of part-based representation and cascade regression for robust and efficient alignment on each frame. Unlike existing methods that usually rely on models trained offline, we incrementally update the representation subspace and the cascade of regressors in a unified framework to achieve personalized modeling on the fly. To alleviate the drifting issue, the fitting results are evaluated using a deep neural network, where well-aligned faces are picked out to incrementally update the representation and fitting models. Both image and video datasets are employed to validate the proposed method. The results demonstrate the superior performance of our approach compared with existing approaches in terms of fitting accuracy and efficiency.
Some of them achieve person-specific modeling via joint face alignment. A representative example was proposed in @cite_33 , which used a clean face subspace trained offline to minimize fitting errors of all frames at the same time. However, these methods are usually limited to offline tasks due to their intensive computational costs. Others attempt to incrementally construct personalized models on the fly. For instance, Sung @cite_42 proposed to employ incremental principal component analysis to adapt the holistic AAMs to achieve personalized representation. Asthana @cite_3 further explored SDM in incremental face alignment (IFA) by simultaneously updating regressors in the cascade using incremental least squares. However, faithful personalized models can hardly be achieved without joint adaptation of the representation and fitting models in a unified framework. More importantly, blind model adaptation without correction would inevitably result in model drifting. How to effectively detect misalignment is still a challenging question that has seldom been investigated. To address this issue, we propose a deep neural network for robust fitting evaluation to pick out well-aligned faces from misalignment, which are then used to incrementally update the representation subspace and fitting strategy for robust person-specific modeling on the fly.
{ "cite_N": [ "@cite_42", "@cite_33", "@cite_3" ], "mid": [ "2069897905", "", "2121684305" ], "abstract": [ "The active appearance model (AAM) is a well-known model that can represent a non-rigid object like the face effectively. However, the AAM often fails to converge correctly when the illumination conditions of face images change largely because it uses a set of fixed appearance basis vectors that are usually obtained in a training phase. To overcome this problem, we propose an adaptive AAM that updates the appearance basis vectors with the current face image by the incremental principal component analysis (PCA). However, the update of the appearance basis vectors with ill-fitted face images can worsen the AAM fitting to the forthcoming face images. To avoid this situation, we devise a conditional update method that updates the appearance basis vectors when the AAM fitting is good and the AAM reconstruction error is large. We evaluate the goodness of AAM fitting in terms of the number of outliers. When the AAM fitting is good we update the online appearance model (OAM) parameters, where the OAM is taken to keep the variation of input face image continuously, and also evaluate the goodness of the appearance basis vectors in terms of the magnitude of AAM reconstruction error. When the appearance basis vectors of the current AAM produces a large AAM reconstruction error, we update the appearance basis vectors using the incremental PCA. The proposed conditional update of the appearance basis vectors stabilizes the AAM fitting and improves the face tracking performance especially when the illumination condition changes very dynamically. 
Experimental results show that the adaptive AAM is superior to the conventional AAM in terms of the occurrence rate of fitting error and the fitting accuracy.", "", "The development of facial databases with an abundance of annotated facial data captured under unconstrained 'in-the-wild' conditions have made discriminative facial deformable models the de facto choice for generic facial landmark localization. Even though very good performance for the facial landmark localization has been shown by many recently proposed discriminative techniques, when it comes to the applications that require excellent accuracy, such as facial behaviour analysis and facial motion capture, the semi-automatic person-specific or even tedious manual tracking is still the preferred choice. One way to construct a person-specific model automatically is through incremental updating of the generic model. This paper deals with the problem of updating a discriminative facial deformable model, a problem that has not been thoroughly studied in the literature. In particular, we study for the first time, to the best of our knowledge, the strategies to update a discriminative model that is trained by a cascade of regressors. We propose very efficient strategies to update the model and we show that is possible to automatically construct robust discriminative person and imaging condition specific models 'in-the-wild' that outperform state-of-the-art generic face alignment strategies." ] }
1609.02781
2519898457
Image classification is one of the main research problems in computer vision and machine learning. Since in most real-world image classification applications there is no control over how the images are captured, it is necessary to consider the possibility that these images might be affected by noise (e.g. sensor noise in a low-quality surveillance camera). In this paper we analyse the impact of three different types of noise on descriptors extracted by two widely used feature extraction methods (LBP and HOG) and how denoising the images can help to mitigate this problem. We carry out experiments on two different datasets and consider several types of noise, noise levels, and denoising methods. Our results show that noise can hinder classification performance considerably and make classes harder to separate. Although denoising methods were not able to reach the same performance of the noise-free scenario, they improved classification results for noisy data.
divide image classification into five stages (see Figure ) and show that the method used to convert the images from RGB to grayscale can have a substantial impact on classification performance. They also demonstrate that RGB to grayscale conversion can be used as an effective dimensionality reduction procedure. Their results show that early stages of the classification pipeline -- despite being neglected in most image classification applications -- can directly influence classification performance. Some other papers @cite_1 @cite_3 also point out the importance of these early stages. Nonetheless, as in @cite_8, they only focus on RGB to grayscale conversion and do not consider noisy images. analyse how image quality can hamper the performance of some state-of-the-art deep learning models by using networks trained on noise-free images to classify noisy, blurred and compressed images. Their results show that image classification is directly affected by image quality. Similarly, evaluate noise robustness of several LBP variants. Given that in both these papers the classifiers are trained on noise-free images, it is not possible to infer whether the learned models are unable to deal with noisy images or whether noise makes the classes harder to separate.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_8" ], "mid": [ "2032060170", "2004878341", "1772174758" ], "abstract": [ "In image recognition it is often assumed the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. These methods are tested using a modern descriptor-based image recognition framework, on face, object, and texture datasets, with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures.", "Color features for recognition are often extracted from 8-bit images. However, some studies recommended the use of an arbitrary number of colors, often less than 256 colors. Because the use of less colors can led to a lower computational cost and less power consumption, this paper investigates the extraction of features using images with different pixel depth, i.e., number of colors, and using two resolution settings. We show that it is possible to obtain compact and effective descriptors, by extracting features with images of lower quantization and resolution parameters. Also, we propose a bitwise quantization algorithm that codifies the most significant color features. While it reduces the number of colors, the distances between the feature vectors are kept similar, benefiting mobile image applications.", "The image-based visual recognition pipeline includes a step that converts color images into images with a single channel, obtaining a color-quantized image that can be processed by feature extraction methods. In this paper we explore this step in order to produce compact features that can be used in retrieval and classification systems. 
We show that different quantization methods produce very different results in terms of accuracy. While compared with more complex methods, this procedure allows the feature extraction in order to achieve a significant dimensionality reduction, while preserving or improving system accuracy. The results indicate that quantization simplify images before feature extraction and dimensionality reduction, producing more compact vectors and reducing system complexity." ] }
1609.02907
2519887557
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
Neural networks that operate on graphs have previously been introduced in @cite_1 @cite_0 as a form of recurrent neural network. Their framework requires the repeated application of contraction maps as propagation functions until node representations reach a stable fixed point. This restriction was later alleviated in @cite_5 by introducing modern practices for recurrent neural network training to the original graph neural network framework. @cite_13 introduced a convolution-like propagation rule on graphs and methods for graph-level classification. Their approach requires learning node degree-specific weight matrices, which does not scale to large graphs with wide node degree distributions. Our model instead uses a single weight matrix per layer and deals with varying node degrees through an appropriate normalization of the adjacency matrix (see Section ).
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_1", "@cite_13" ], "mid": [ "", "2244807774", "1501856433", "2406128552" ], "abstract": [ "", "Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (, 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.", "In several applications the information is naturally represented by graphs. Traditional approaches cope with graphical data structures using a preprocessing phase which transforms the graphs into a set of flat vectors. However, in this way, important topological information may be lost and the achieved results may heavily depend on the preprocessing stage. This paper presents a new neural model, called graph neural network (GNN), capable of directly processing graphs. GNNs extends recursive neural networks and can be applied on most of the practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs. A learning algorithm for GNNs is proposed and some experiments are discussed which assess the properties of the model.", "Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. 
These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient." ] }
1609.02907
2519887557
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
A related approach to node classification with a graph-based neural network was recently introduced in @cite_8 . They report @math complexity, limiting the range of possible applications. In a different yet related model, @cite_12 convert graphs locally into sequences that are fed into a conventional 1D convolutional neural network, which requires the definition of a node ordering in a pre-processing step.
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2366141641", "2187089797" ], "abstract": [ "Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.", "We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. 
t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets." ] }
1609.02907
2519887557
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
Our method is based on spectral graph convolutional neural networks, introduced in @cite_3 and later extended by @cite_4 with fast localized convolutions. In contrast to these works, we consider here the task of transductive node classification within networks of significantly larger scale. We show that in this setting, a number of simplifications (see Section ) can be introduced to the original frameworks of @cite_3 and @cite_4 that improve scalability and classification performance in large-scale networks.
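As a quick illustration of the localized first-order spectral propagation rule described above, the sketch below computes one GCN-style layer, H' = ReLU(D̂^{-1/2} (A + I) D̂^{-1/2} H W). This is my own illustrative numpy code under the stated assumption about the propagation rule, not code from any of the cited papers.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One first-order graph-convolution layer:
    H' = relu(D^-1/2 (A + I) D^-1/2 H W), where A is the adjacency
    matrix, H the node-feature matrix, and W a learnable weight matrix."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)                 # degrees of A_hat
    D_inv_sqrt = np.diag(d ** -0.5)       # D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy 3-node path graph, 2 input features, 2 output features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
print(gcn_layer(A, H, W).shape)  # (3, 2): one row of hidden features per node
```

Because the propagation uses only the immediate neighborhood (A + I), the cost of a layer is linear in the number of edges when A is stored sparsely, which is the scalability property the abstract above emphasizes.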
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2095705004", "2964311892" ], "abstract": [ "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Abstract: Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures." ] }
1609.02745
2518806256
Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review's argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM. We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources.
Past approaches use classifiers with expensive hand-crafted features based on n-grams, parts-of-speech, negation words, and sentiment lexica @cite_10 @cite_18 . The model by Zhang and Lan is the only approach we are aware of that considers more than one sentence. However, it is less expressive than ours, as it only extracts features from the preceding and subsequent sentence without any notion of structure. Neural network-based approaches include an LSTM that determines sentiment towards a target word based on its position @cite_7 as well as a recursive neural network that requires parse trees @cite_13 . In contrast, our model requires no feature engineering, no positional information, and no parser outputs, which are often unavailable for low-resource languages. We are also the first -- to our knowledge -- to frame sentiment analysis as a sequence tagging task.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_10", "@cite_7" ], "mid": [ "", "2251294039", "2251648804", "2529550020" ], "abstract": [ "", "SemEval-2015 Task 12, a continuation of SemEval-2014 Task 4, aimed to foster research beyond sentenceor text-level sentiment classification towards Aspect Based Sentiment Analysis. The goal is to identify opinions expressed about specific entities (e.g., laptops) and their aspects (e.g., price). The task provided manually annotated reviews in three domains (restaurants, laptops and hotels), and a common evaluation procedure. It attracted 93 submissions from 16 teams.", "Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, irrespective of the entities mentioned (e.g., laptops) and their aspects (e.g., battery, screen). SemEval2014 Task 4 aimed to foster research in the field of aspect-based sentiment analysis, where the goal is to identify the aspects of given target entities and the sentiment expressed for each aspect. The task provided datasets containing manually annotated reviews of restaurants and laptops, as well as a common evaluation procedure. It attracted 163 submissions from 32 teams.", "Target-dependent sentiment classification remains a challenge: modeling the semantic relatedness of a target with its context words in a sentence. Different context words have different influences on determining the sentiment polarity of a sentence towards the target. Therefore, it is desirable to integrate the connections between target word and context words when building a learning system. In this paper, we develop two target dependent long short-term memory (LSTM) models, where target information is automatically taken into account. We evaluate our methods on a benchmark dataset from Twitter. 
Empirical results show that modeling sentence representation with standard LSTM does not perform well. Incorporating target information into LSTM can significantly boost the classification accuracy. The target-dependent LSTM models achieve state-of-the-art performances without using syntactic parser or external sentiment lexicons." ] }
1609.02948
2519163251
We investigate the reasons why context in object detection has limited utility by isolating and evaluating the predictive power of different context cues under ideal conditions in which context provided by an oracle. Based on this study, we propose a region-based context re-scoring method with dynamic context selection to remove noise and emphasize informative context. We introduce latent indicator variables to select (or ignore) potential contextual regions, and learn the selection strategy with latent-SVM. We conduct experiments to evaluate the performance of the proposed context selection method on the SUN RGB-D dataset. The method achieves a significant improvement in terms of mean average precision (mAP), compared with both appearance based detectors and a conventional context model without the selection scheme.
Many techniques have used context to improve performance for image understanding tasks. For instance, Torralba @cite_24 proposed a framework for modeling the relationship between context and object properties, based on correlations between the statistics of low-level features across the entire scene and the objects that it contains. Divvala @cite_15 defined several context sources and proposed a context re-scoring method that uses a regression model on multiple contextual features. Felzenszwalb @cite_4 proposed a simple context re-scoring model running on appearance-based detections. Graphical models have been widely applied to image segmentation and recognition tasks by jointly modeling appearance, geometry and contextual relations @cite_13 @cite_10 @cite_18 @cite_6 . In @cite_0 , context clues were extended from 2D to 3D object detection.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_4", "@cite_6", "@cite_24", "@cite_0", "@cite_15", "@cite_10" ], "mid": [ "1528789833", "2536208356", "", "1982522767", "2166761907", "2152571752", "2141364309", "2137881638" ], "abstract": [ "This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).", "High-level, or holistic, scene understanding involves reasoning about objects, regions, and the 3D relationships between them. This requires a representation above the level of pixels that can be endowed with high-level attributes such as class of object region, its orientation, and (rough 3D) location within the scene. 
Towards this goal, we propose a region-based model which combines appearance and scene geometry to automatically decompose a scene into semantically meaningful regions. Our model is defined in terms of a unified energy function over scene appearance and structure. We show how this energy function can be learned from data and present an efficient inference technique that makes use of multiple over-segmentations of the image to propose moves in the energy-space. We show, experimentally, that our method achieves state-of-the-art performance on the tasks of both multi-class image segmentation and geometric reasoning. Finally, by understanding region classes and geometry, we show how our model can be used as the basis for 3D reconstruction of the scene.", "", "There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. Context models can efficiently rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit from using context models has been limited because most of these methods were tested on datasets with only a few object categories, in which most images contain only one or two object categories. In this paper, we introduce a new dataset with images that contain many instances of different object categories and propose an efficient model that captures the contextual information among more than a hundred of object categories. We show that our context model can be applied to scene understanding tasks that local detectors alone cannot solve.", "There is general consensus that context can be a rich source of information about an object's identity, location and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. 
Here we introduce a simple framework for modeling the relationship between context and object properties based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context driven focus of attention and automatic scale-selection on real-world scenes.", "In this paper, we tackle the problem of indoor scene understanding using RGBD data. Towards this goal, we propose a holistic approach that exploits 2D segmentation, 3D geometry, as well as contextual relations between scenes and objects. Specifically, we extend the CPMC [3] framework to 3D in order to generate candidate cuboids, and develop a conditional random field to integrate information from different sources to classify the cuboids. With this formulation, scene classification and 3D object recognition are coupled and can be jointly solved through probabilistic inference. We test the effectiveness of our approach on the challenging NYU v2 dataset. The experimental results demonstrate that through effective evidence integration and holistic reasoning, our approach achieves substantial improvement over the state-of-the-art.", "This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. 
While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support.", "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks." ] }
1609.02727
2250299341
Online reviews have increasingly become a very important resource for consumers when making purchases. Though it is becoming more and more difficult for people to make well-informed buying decisions without being deceived by fake reviews. Prior works on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns. They focused on prolific users who write more than a couple of reviews, discarding one-time reviewers. The number of singleton reviewers however is expected to be high for many review websites. While behavioral patterns are effective when dealing with elite users, for one-time reviewers, the review text needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person using multiple names, posting each review under a different name. We propose two methods to detect similar reviews and show the results generally outperform the vectorial similarity measures used in prior works. The first method extends the semantic similarity between words to the reviews level. The second method is based on topic modeling and exploits the similarity of the reviews topic distributions using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset (800 reviews).
The opinion spam problem was first formulated by Jindal and Liu in the context of product reviews @cite_21 . By analyzing Amazon data and using near-duplicate reviews as positive training data, they showed how widespread the problem of fake reviews was at that time.
{ "cite_N": [ "@cite_21" ], "mid": [ "2047756776" ], "abstract": [ "Evaluative texts on the Web have become a valuable source of opinions on products, services, events, individuals, etc. Recently, many researchers have studied such opinion sources as product reviews, forum posts, and blogs. However, existing research has been focused on classification and summarization of opinions using natural language processing and data mining techniques. An important issue that has been neglected so far is opinion spam or trustworthiness of online opinions. In this paper, we study this issue in the context of product reviews, which are opinion rich and are widely used by consumers and product manufacturers. In the past two years, several startup companies also appeared which aggregate opinions from product reviews. It is thus high time to study spam in reviews. To the best of our knowledge, there is still no published study on this topic, although Web spam and email spam have been investigated extensively. We will see that opinion spam is quite different from Web spam and email spam, and thus requires different detection techniques. Based on the analysis of 5.8 million reviews and 2.14 million reviewers from amazon.com, we show that opinion spam in reviews is widespread. This paper analyzes such spam activities and presents some novel techniques to detect them" ] }
1609.02727
2250299341
Online reviews have increasingly become a very important resource for consumers when making purchases. Though it is becoming more and more difficult for people to make well-informed buying decisions without being deceived by fake reviews. Prior works on the opinion spam problem mostly considered classifying fake reviews using behavioral user patterns. They focused on prolific users who write more than a couple of reviews, discarding one-time reviewers. The number of singleton reviewers however is expected to be high for many review websites. While behavioral patterns are effective when dealing with elite users, for one-time reviewers, the review text needs to be exploited. In this paper we tackle the problem of detecting fake reviews written by the same person using multiple names, posting each review under a different name. We propose two methods to detect similar reviews and show the results generally outperform the vectorial similarity measures used in prior works. The first method extends the semantic similarity between words to the reviews level. The second method is based on topic modeling and exploits the similarity of the reviews topic distributions using two models: bag-of-words and bag-of-opinion-phrases. The experiments were conducted on reviews from three different datasets: Yelp (57K reviews), Trustpilot (9K reviews) and Ott dataset (800 reviews).
The first study to tackle opinion spam as a distributional anomaly was described in @cite_6 . It claimed that product reviews are characterized by natural distributions, which hired spammers distort when writing fake reviews. The authors conducted a range of experiments that found a connection between distributional anomalies and the time windows in which spam reviews were written.
{ "cite_N": [ "@cite_6" ], "mid": [ "2202307757" ], "abstract": [ "This paper postulates that there are natural distributions of opinions in product reviews. In particular, we hypothesize that for a given domain, there is a set of representative distributions of review rating scores. A deceptive business entity that hires people to write fake reviews will necessarily distort its distribution of review scores, leaving distributional footprints behind. In order to validate this hypothesis, we introduce strategies to create dataset with pseudo-gold standard that is labeled automatically based on different types of distributional footprints. A range of experiments confirm the hypothesized connection between the distributional anomaly and deceptive reviews. This study also provides novel quantitative insights into the characteristics of natural distributions of opinions in the TripAdvisor hotel review and the Amazon product review domains." ] }
1609.03076
2543862624
Guided policy search is a method for reinforcement learning that trains a general policy for accomplishing a given task by guiding the learning of the policy with multiple guiding distributions. Guided policy search relies on learning an underlying dynamical model of the environment and then, at each iteration of the algorithm, using that model to gradually improve the policy. This model, though, often makes the assumption that the environment dynamics are markovian, e.g., depend only on the current state and control signal. In this paper we apply guided policy search to a problem with non-markovian dynamics. Specifically, we apply it to the problem of pouring a precise amount of liquid from a cup into a bowl, where many of the sensor measurements experience non-trivial amounts of delay. We show that, with relatively simple state augmentation, guided policy search can be extended to non-markovian dynamical systems, where the non-markovianess is caused by delayed sensor readings.
Various aspects of robotic pouring have been investigated in prior work. Some studies have focused on utilizing specialized hardware and algorithms to achieve very precise pouring results @cite_2 , while others have focused on learning the broad motions of pouring from human demonstrations @cite_25 @cite_23 . Okada @cite_18 used a motion planner to manipulate a pouring object, Yamaguchi and Atkeson @cite_11 used differential dynamic programming to pour in a simulator, and Tamosiunaite used dynamic movement primitives to learn the goal and shape of a pouring trajectory; all of these, however, were designed to pour out the entire contents of the container rather than a precise amount. To the authors' knowledge, the only study to combine learning with precise pouring was done by Rozo @cite_20 , who used human demonstrations to learn to pour 100 ml from a bottle into a cup.
{ "cite_N": [ "@cite_18", "@cite_23", "@cite_2", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2540400161", "2158582431", "2118278397", "", "2081857580", "2215032184" ], "abstract": [ "This paper describes integrated intelligent humanoid robot system for daily-life environment tasks. We have realized complex behaviors of a humanoid robot in daily-life environment based on motion planner technique using an environment and manipulation knowledge. However in order to adapt to unknown or dynamic situations, sensor based behavior variation is essentially important. In this paper, we present a design and implementation of sensor based behavior verification system using an environment and manipulation knowledge, which is also used in manipulation motion planner. We also present software architecture that allows us to write a single stream code to perform complex concurrent humanoid motions. By using this architecture, sensor based verification functions are easily integrated in motion generation functions. Finally, we demonstrated a water-pouring task and a dishwashing task of the life-sized humanoid robot HRP2-JSK in a real environment while verifying its own motion.", "Programming new skills on a robot should take minimal time and effort. One approach to achieve this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently caught a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration and feature queries) and discuss how a robot can use these while learning new skills. Then, we present an experiment on human question asking which characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. 
We investigate the ease with which different types of questions are answered and whether or not there is a general preference of one type of question over another. Based on our findings from both experiments we provide guidelines for designing question asking behaviors on a robot learner.", "This paper presents the advanced control of an automatic pouring process, with special attention being paid to the suppression of sloshing (liquid vibration) as well as to high-speed transfer in a 3D space. In order to realize these objectives, a novel automatic pouring robot (APR) is developed. The controllers are designed by a hybrid shape approach, considering both time and frequency characteristics, in order to construct an APR which satisfies various control specifications such as the sloshing suppression, overshooting, the settling time and restriction of the control input's magnitude. The effectiveness of the proposed system is shown through some experiments.", "", "Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not just a motion with a certain force profile needs to be learned, but the motion is subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage for retrieving the suitable robot actions based on the force perceptions. 
Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming other approaches such as the classical hidden Markov models in that it requires less training, yields more compact encodings and shows better generalization capabilities.", "We explore a temporal decomposition of dynamics in order to enhance policy learning with unknown dynamics. There are model-free methods and model-based methods for policy learning with unknown dynamics, but both approaches have problems: in general, model-free methods have less generalization ability, while model-based methods are often limited by the assumed model structure or need to gather many samples to make models. We consider a temporal decomposition of dynamics to make learning models easier. To obtain a policy, we apply differential dynamic programming (DDP). A feature of our method is that we consider decomposed dynamics even when there is no action to be taken, which allows us to decompose dynamics more flexibly. Consequently learned dynamics become more accurate. Our DDP is a first-order gradient descent algorithm with a stochastic evaluation function. In DDP with learned models, typically there are many local maxima. In order to avoid them, we consider multiple criteria evaluation functions. In addition to the stochastic evaluation function, we use a reference value function. This method was verified with pouring simulation experiments where we created complicated dynamics. The results show that we can optimize actions with DDP while learning dynamics models." ] }
1609.03076
2543862624
Guided policy search is a method for reinforcement learning that trains a general policy for accomplishing a given task by guiding the learning of the policy with multiple guiding distributions. Guided policy search relies on learning an underlying dynamical model of the environment and then, at each iteration of the algorithm, using that model to gradually improve the policy. This model, though, often makes the assumption that the environment dynamics are markovian, e.g., depend only on the current state and control signal. In this paper we apply guided policy search to a problem with non-markovian dynamics. Specifically, we apply it to the problem of pouring a precise amount of liquid from a cup into a bowl, where many of the sensor measurements experience non-trivial amounts of delay. We show that, with relatively simple state augmentation, guided policy search can be extended to non-markovian dynamical systems, where the non-markovianess is caused by delayed sensor readings.
Levine @cite_3 @cite_6 have developed a methodology similar to @cite_12 called guided policy search (GPS) that works on real robots in physical environments. Initially, they applied GPS only to simulated problems @cite_3 , but in follow-up work @cite_6 they showed how it can be used to solve tasks on a real robot, such as putting a cap on a bottle, inserting a brick into a block, and hanging a hanger on a rack. Our work in this paper is heavily inspired by the work of Levine. Here, we apply GPS on a real robot in an environment with non-trivial sensor delay, specifically, to the task of pouring a precise amount of liquid into a bowl.
{ "cite_N": [ "@cite_12", "@cite_6", "@cite_3" ], "mid": [ "", "2964161785", "2104733512" ], "abstract": [ "", "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "Direct policy search can effectively scale to high-dimensional systems, but complex policies with hundreds of parameters often present a challenge for such methods, requiring numerous samples and often falling into poor local optima. We present a guided policy search algorithm that uses trajectory optimization to direct policy learning and avoid poor local optima. We show how differential dynamic programming can be used to generate suitable guiding samples, and describe a regularized importance sampled policy optimization that incorporates these samples into the policy search. We evaluate the method by learning neural network controllers for planar swimming, hopping, and walking, as well as simulated 3D humanoid running." ] }
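The state-augmentation idea described in the paragraph above (stacking recent observations so that the augmented state again captures delayed sensor readings, making the dynamics Markovian in that state) can be sketched as follows. This is a minimal illustration, not the paper's exact construction; the buffer length `k` and the stacking order are assumptions:

```python
from collections import deque

import numpy as np


class DelayAugmentedState:
    """Augment the current observation with the last k observations so that a
    system with sensor delay of up to k steps becomes Markovian in the
    augmented state (a sketch; k and the layout are assumptions)."""

    def __init__(self, obs_dim, k):
        self.k = k
        # Buffer pre-filled with zeros until k real observations arrive.
        self.buffer = deque([np.zeros(obs_dim)] * k, maxlen=k)

    def step(self, obs):
        self.buffer.append(np.asarray(obs, dtype=float))
        # Concatenate observations newest-to-oldest into one state vector.
        return np.concatenate(list(self.buffer)[::-1])


aug = DelayAugmentedState(obs_dim=2, k=3)
s = aug.step([1.0, 2.0])
print(s.shape)  # (6,)
```

The policy is then trained on the augmented state `s` instead of the raw observation, at the cost of a k-fold larger observation dimension.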
1609.02687
2519836110
Layouts and sub-layouts constitute an important clue while searching a document on the basis of its structure, or when textual content is unknown/irrelevant. A sub-layout specifies the arrangement of document entities within a smaller portion of the document. We propose an efficient graph-based matching algorithm, integrated with hash-based indexing, to prune a possibly large search space. A user can specify a combination of sub-layouts of interest using sketch-based queries. The system supports partial matching for unspecified layout entities. We handle cases of segmentation pre-processing errors (for text/non-text blocks) with a symmetry maximization-based strategy, and accounting for multiple domain-specific plausible segmentation hypotheses. We show promising results of our system on a database of unstructured entities, containing 4776 newspaper images.
Image retrieval and classification have been interesting research problems for the last two decades ( @cite_6 , @cite_18 , @cite_16 , @cite_2 , @cite_1 , @cite_19 ). Layout information can be used for document image classification as well as retrieval.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_16" ], "mid": [ "40026635", "", "205301110", "2168125451", "2128119911", "1727881885" ], "abstract": [ "Nowadays, Digital Libraries have become a widely used service to store and share both digital born documents and digital versions of works stored by traditional libraries. Document images are intrinsically non-structured and the structure and semantic of the digitized documents is in most part lost during the conversion. Several techniques related to the Document Image Analysis research area have been proposed in the past to deal with document image retrieval applications. In this chapter a survey about the more recent techniques applied in the field of recognition and retrieval of text and graphical documents is presented. In particular we describe techniques related to recognition-free approaches.", "", "In this paper, we describe issues related to the measurement of structural similarity between document images. We define structural similarity, and discuss the benefits of using it as a complement to content similarity for querying document image databases. We present an approach to computing a geometrically invariant structural similarity, and use this measure to search document image databases. Our approach supports both full image matching using query by example (QBE) and sub-image matching using query by sketch (QBS). The similarity measure considers spatial and layout structure, and is computed by aggregating content area overlap measures with respect to their underlying column structures. These techniques are tested within the Intelligent Document Image Retrieval (IDIR) System, and results demonstrating effectiveness and efficiency of structure queries with respect to human relevance judgments are presented.", "The need for content-based access to image and video information from media archives has captured the attention of researchers in recent years. 
Research efforts have led to the development of methods that provide access to image and video data. These methods have their roots in pattern recognition. The methods are used to determine the similarity in the visual information content extracted from low level features. These features are then clustered for generation of database indices. This paper presents a comprehensive survey on the use of these pattern recognition methods which enable image and video retrieval by content. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "The economic feasibility of maintaining large data bases of document images has created a tremendous demand for robust ways to access and manipulate the information these images contain. In an attempt to move toward a paperless office, large quantities of printed documents are often scanned and archived as images, without adequate index information. One way to provide traditional data-base indexing and retrieval capabilities is to fully convert the document to an electronic representation which can be indexed automatically. Unfortunately, there are many factors which prohibit complete conversion including high cost, low document quality, and the fact that many nontext components cannot be adequately represented in a converted form. In such cases, it can be advantageous to maintain a copy of and use the document in image form. In this paper, we provide a survey of methods developed by researchers to access and manipulate document images without the need for complete and accurate conversion. We briefly discuss traditional text indexing techniques on imperfect data and the retrieval of partially converted documents. 
This is followed by a more comprehensive review of techniques for the direct characterization, manipulation, and retrieval, of images of documents containing text, graphics, and scene images.", "Given the phenomenal growth in the variety and quantity of data available to users through electronic media, there is a great demand for efficient and effective ways to organize and search through all this information. Besides speech, our principal means of communication is through visual media, and in particular, through documents. In this paper, we provide an update on Doermann's comprehensive survey (1998) of research results in the broad area of document-based information retrieval. The scope of this survey is also somewhat broader, and there is a greater emphasis on relating document image analysis methods to conventional IR methods. Documents are available in a wide variety of formats. Technical papers are often available as ASCII files of clean, correct, text. Other documents may only be available as hardcopies. These documents have to be scanned and stored as images so that they may be processed by a computer. The textual content of these documents may also be extracted and recognized using OCR methods. Our survey covers the broad spectrum of methods that are required to handle different formats like text and images. The core of the paper focuses on methods that manipulate document images directly, and perform various information processing tasks such as retrieval, categorization, and summarization, without attempting to completely recognize the textual content of the document. We start, however, with a brief overview of traditional IR techniques that operate on clean text. We also discuss research dealing with text that is generated by running OCR on document images. Finally, we also briefly touch on the related problem of content-based image retrieval." ] }
1609.02687
2519836110
Layouts and sub-layouts constitute an important clue while searching a document on the basis of its structure, or when textual content is unknown/irrelevant. A sub-layout specifies the arrangement of document entities within a smaller portion of the document. We propose an efficient graph-based matching algorithm, integrated with hash-based indexing, to prune a possibly large search space. A user can specify a combination of sub-layouts of interest using sketch-based queries. The system supports partial matching for unspecified layout entities. We handle cases of segmentation pre-processing errors (for text/non-text blocks) with a symmetry maximization-based strategy, and accounting for multiple domain-specific plausible segmentation hypotheses. We show promising results of our system on a database of unstructured entities, containing 4776 newspaper images.
@cite_12 compute the distance between image rows after segmenting each page into a grid of equal-sized cells. Each cell is labeled as text or whitespace on the basis of its overlap with the text blocks. Document images are then compared using dynamic programming on a row-based representation of the documents.
{ "cite_N": [ "@cite_12" ], "mid": [ "1986107388" ], "abstract": [ "The paper describes features and methods for document image comparison and classification at the spatial layout level. The methods are useful for visual similarity based document retrieval as well as fast algorithms for initial document type classification without OCR. A novel feature set called interval encoding is introduced to capture elements of spatial layout. This feature set encodes region layout information in fixed-length vectors which can be used for fast page layout comparison. The paper describes experiments and results to rank-order a set of document pages in terms of their layout similarity to a test document. We also demonstrate the usefulness of the features derived from interval encoding in a hidden Markov model based page layout classification system that is trainable and extendible." ] }
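The row-based dynamic-programming comparison described above can be illustrated with a small sketch: each page is reduced to a sequence of per-row labels (text vs. whitespace here; the actual interval-encoding feature set is richer), and two pages are aligned with an edit-distance-style DP. The label names and unit costs are assumptions:

```python
def row_labels(cells):
    """Label each grid row 'T' if any cell in it overlaps text, else 'W'."""
    return ["T" if any(row) else "W" for row in cells]


def layout_distance(rows_a, rows_b, sub=1, gap=1):
    """Edit-distance DP over row-label sequences (unit costs are assumptions)."""
    m, n = len(rows_a), len(rows_b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * gap
    for j in range(1, n + 1):
        d[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if rows_a[i - 1] == rows_b[j - 1] else sub
            d[i][j] = min(d[i - 1][j - 1] + cost,   # match / substitute
                          d[i - 1][j] + gap,        # delete a row
                          d[i][j - 1] + gap)        # insert a row
    return d[m][n]


a = row_labels([[1, 1], [0, 0], [1, 0]])  # -> ['T', 'W', 'T']
b = row_labels([[1, 0], [1, 1]])          # -> ['T', 'T']
print(layout_distance(a, b))  # 1: delete the whitespace row
```

A lower distance indicates layouts whose vertical text/whitespace structure is more similar.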
1609.02687
2519836110
Layouts and sub-layouts constitute an important clue while searching a document on the basis of its structure, or when textual content is unknown/irrelevant. A sub-layout specifies the arrangement of document entities within a smaller portion of the document. We propose an efficient graph-based matching algorithm, integrated with hash-based indexing, to prune a possibly large search space. A user can specify a combination of sub-layouts of interest using sketch-based queries. The system supports partial matching for unspecified layout entities. We handle cases of segmentation pre-processing errors (for text/non-text blocks) with a symmetry maximization-based strategy, and accounting for multiple domain-specific plausible segmentation hypotheses. We show promising results of our system on a database of unstructured entities, containing 4776 newspaper images.
The authors of @cite_5 use layout information for document retrieval. A class of distance measures based on a two-step procedure is introduced. In the first step, the distances between the blocks of the document and query layouts are calculated. Various types of distance measures, such as the Manhattan distance of corner points, the overlapping area of blocks, and the difference in width and height, are used to compute the distance between every pair of blocks in the given layouts. The matching step then assigns the blocks of the query layout to the blocks of the reference layout by minimizing the total distance. Since the method uses absolute distance measures, it is not invariant to the position and shape of the blocks. Moreover, because sub-layouts contain fewer blocks, whose positions and aspect ratios can differ, the total distance between two layouts becomes larger and a correct match may be missed.
{ "cite_N": [ "@cite_5" ], "mid": [ "2170288539" ], "abstract": [ "Most methods for document image retrieval rely solely on text information to find similar documents. This paper describes a way to use layout information for document image retrieval instead. A new class of distance measures is introduced for documents with Manhattan layouts, based on a two-step procedure: First, the distances between the blocks of two layouts are calculated. Then, the blocks of one layout are assigned to the blocks of the other layout in a matching step. Different block distances and matching methods are compared and evaluated using the publicly available MARG database. On this dataset, the layout type can be determined successfully in 92.6 of the cases using the best distance measure in a nearest neighbor classifier. The experiments show that the best distance measure for this task is the overlapping area combined with the Manhattan distance of the corner points as block distance together with the minimum weight edge cover matching." ] }
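The two-step procedure described above can be sketched concretely: compute a pairwise block distance, then find the assignment of query blocks to reference blocks minimizing the total distance. The particular block distance (Manhattan distance of the top-left corner plus width/height difference) and the brute-force matching are assumptions for illustration; the cited work evaluates several block distances and uses minimum-weight edge cover matching:

```python
from itertools import permutations


def block_distance(a, b):
    """Distance between two blocks (x, y, w, h): Manhattan distance of the
    top-left corners plus the differences in width and height (one plausible
    block distance; the exact weighting is an assumption)."""
    return (abs(a[0] - b[0]) + abs(a[1] - b[1])
            + abs(a[2] - b[2]) + abs(a[3] - b[3]))


def match_layouts(query, reference):
    """Assign each query block to a distinct reference block, minimizing the
    total distance (brute force over permutations; real systems use an
    assignment algorithm such as Hungarian matching)."""
    best = float("inf")
    for perm in permutations(range(len(reference)), len(query)):
        total = sum(block_distance(q, reference[j])
                    for q, j in zip(query, perm))
        best = min(best, total)
    return best


q = [(0, 0, 10, 5), (0, 10, 10, 5)]
r = [(1, 0, 10, 5), (0, 11, 10, 5), (50, 50, 5, 5)]
print(match_layouts(q, r))  # 2
```

Because the distance is taken on absolute coordinates and sizes, a translated or rescaled sub-layout scores poorly, which is exactly the limitation pointed out in the paragraph above.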
1609.02687
2519836110
Layouts and sub-layouts constitute an important clue while searching a document on the basis of its structure, or when textual content is unknown/irrelevant. A sub-layout specifies the arrangement of document entities within a smaller portion of the document. We propose an efficient graph-based matching algorithm, integrated with hash-based indexing, to prune a possibly large search space. A user can specify a combination of sub-layouts of interest using sketch-based queries. The system supports partial matching for unspecified layout entities. We handle cases of segmentation pre-processing errors (for text/non-text blocks) with a symmetry maximization-based strategy, and accounting for multiple domain-specific plausible segmentation hypotheses. We show promising results of our system on a database of unstructured entities, containing 4776 newspaper images.
The above systems work on the basis of the layout of the complete document image, not on specific sub-layouts. Shin and Doermann @cite_6 describe a system for sub-layout based document matching. They measure query-to-document similarity by comparing the edges of blocks at approximately the same location in the query and the candidate image, after scale normalization. Thus, a solution is still required for sub-layout-based search in which the query layout can appear at different scales and translations. Fig. shows an example of a query image and the image retrieved from the database using our method, whose layout entities differ in size and aspect ratio, and whose layout appears at a position different from that in the query image. For images in this paper, cyan/blue represents text, pink/red represents non-text/non-background blocks, and grey indicates that the specific block type (text/non-text) is irrelevant.
{ "cite_N": [ "@cite_6" ], "mid": [ "205301110" ], "abstract": [ "In this paper, we describe issues related to the measurement of structural similarity between document images. We define structural similarity, and discuss the benefits of using it as a complement to content similarity for querying document image databases. We present an approach to computing a geometrically invariant structural similarity, and use this measure to search document image databases. Our approach supports both full image matching using query by example (QBE) and sub-image matching using query by sketch (QBS). The similarity measure considers spatial and layout structure, and is computed by aggregating content area overlap measures with respect to their underlying column structures. These techniques are tested within the Intelligent Document Image Retrieval (IDIR) System, and results demonstrating effectiveness and efficiency of structure queries with respect to human relevance judgments are presented." ] }
1609.02687
2519836110
Layouts and sub-layouts constitute an important clue while searching a document on the basis of its structure, or when textual content is unknown/irrelevant. A sub-layout specifies the arrangement of document entities within a smaller portion of the document. We propose an efficient graph-based matching algorithm, integrated with hash-based indexing, to prune a possibly large search space. A user can specify a combination of sub-layouts of interest using sketch-based queries. The system supports partial matching for unspecified layout entities. We handle cases of segmentation pre-processing errors (for text/non-text blocks) with a symmetry maximization-based strategy, and accounting for multiple domain-specific plausible segmentation hypotheses. We show promising results of our system on a database of unstructured entities, containing 4776 newspaper images.
@cite_7 present a graph-based method for document retrieval which can perform sub-graph search and spotting. The authors start from already constructed graphs and do not consider the segmentation of images or the construction of the subsequent graph structure. The paper notes that the overall accuracy depends on the formation of the graph structure.
{ "cite_N": [ "@cite_7" ], "mid": [ "2017844419" ], "abstract": [ "Structural pattern recognition approaches offer the most expressive, convenient, powerful but computational expensive representations of underlying relational information. To benefit from mature, less expensive and efficient state-of-the-art machine learning models of statistical pattern recognition they must be mapped to a low-dimensional vector space. Our method of explicit graph embedding bridges the gap between structural and statistical pattern recognition. We extract the topological, structural and attribute information from a graph and encode numeric details by fuzzy histograms and symbolic details by crisp histograms. The histograms are concatenated to achieve a simple and straightforward embedding of graph into a low-dimensional numeric feature vector. Experimentation on standard public graph datasets shows that our method outperforms the state-of-the-art methods of graph embedding for richly attributed graphs. Highlights? We propose an explicit graph embedding method. ? We perform multilevel analysis of graph to extract global, topological structural and attribute information. ? We use homogeneity of subgraphs in graph for extracting topological structural details. ? We encode numeric information by fuzzy histograms and symbolic information by crisp histograms. ? Our method outperforms graph embedding methods for richly attributed graphs." ] }
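The cited embedding approach maps a graph to a fixed-length vector by concatenating histograms of structural and attribute information. A minimal flavor of that idea, using only a node-degree histogram and a histogram of numeric edge attributes, can be sketched as follows (the bin edges and feature choice are assumptions; the actual method uses fuzzy histograms over many more features):

```python
import numpy as np


def embed_graph(adj, edge_weights, degree_bins=4, weight_bins=4):
    """Embed a graph as a fixed-length vector: a histogram of node degrees
    concatenated with a histogram of numeric edge attributes (a sketch;
    bin counts and ranges are assumptions)."""
    degrees = adj.sum(axis=1)
    dh, _ = np.histogram(degrees, bins=degree_bins, range=(0, degree_bins))
    wh, _ = np.histogram(edge_weights, bins=weight_bins, range=(0.0, 1.0))
    return np.concatenate([dh, wh]).astype(float)


# Triangle-free 3-node graph: node 0 connected to nodes 1 and 2.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
v = embed_graph(adj, edge_weights=[0.2, 0.9])
print(v.shape)  # (8,)
```

Two graphs can then be compared with any vector distance, which is the point of the embedding: it moves graph matching into an inexpensive statistical setting.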
1609.02532
2520256186
Named Data Networks provide a clean-slate redesign of the Future Internet for efficient content distribution. Because Internet of Things are expected to compose a significant part of Future Internet, most content will be managed by constrained devices. Such devices are often equipped with limited CPU, memory, bandwidth, and energy supply. However, the current Named Data Networks design neglects the specific requirements of Internet of Things scenarios and many data structures need to be further optimised. The purpose of this research is to provide an efficient strategy to route in Named Data Networks by constructing a Forwarding Information Base using Iterated Bloom Filters defined as I(FIB)F. We propose the use of content names based on iterative hashes. This strategy leads to reduce the overhead of packets. Moreover, the memory and the complexity required in the forwarding strategy are lower than in current solutions. We compare our proposal with solutions based on hierarchical names and Standard Bloom Filters. We show how to further optimise I(FIB)F by exploiting the structure information contained in hierarchical content names. Finally, two strategies may be followed to reduce: (i) the overall memory for routing or (ii) the probability of false positives.
Furthermore, some name lookup techniques are specifically designed for NDN. In the Name Lookup engine with Adaptive Prefix Bloom filter (NLAPB) @cite_9 , name prefixes are divided into B-prefixes and T-suffixes. Standard BFs match B-prefixes, while a small-scale trie is used for T-suffixes. The division is based on the popularity of names, to speed up the lookup.
{ "cite_N": [ "@cite_9" ], "mid": [ "2142928069" ], "abstract": [ "Complex name constitution plus huge-sized name routing table makes wire speed name lookup a challenging task in Named Data Networking. To overcome this challenge, we propose two techniques to significantly speed up the lookup process. First, we look up name prefixes in an order based on the distribution of prefix length in the forwarding table, which can find the longest match much faster than the linear search of current prototype CCNx. The search order can be dynamically adjusted as the forwarding table changes. Second, we propose a new near-perfect hash table data structure that combines many small sparse perfect hash tables into a larger dense one while keeping the worst-case access time of O(1) and supporting fast update. Also the hash table stores the signature of a key instead of the key itself, which further improves lookup speed and reduces memory use." ] }
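The hybrid lookup described above (a Bloom filter over the popular leading components of a name, followed by an exact structure for the remainder) can be sketched as follows. The fixed split point, filter sizes, and the use of a plain set in place of a real suffix trie are assumptions for illustration:

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter over an integer bitmask (sizes and the
    hash scheme are assumptions)."""

    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits |= 1 << h

    def __contains__(self, item):
        return all(self.bits >> h & 1 for h in self._hashes(item))


# In the spirit of NLAPB: leading name components (B-prefixes) go into the
# Bloom filter; the remaining components (T-suffixes) into a small exact
# structure keyed by prefix (a set here stands in for the trie).
bf = BloomFilter()
suffixes = {}


def insert(name, split=2):
    parts = name.strip("/").split("/")
    prefix = "/" + "/".join(parts[:split])
    bf.add(prefix)
    suffixes.setdefault(prefix, set()).add("/".join(parts[split:]))


def lookup(name, split=2):
    parts = name.strip("/").split("/")
    prefix = "/" + "/".join(parts[:split])
    # Cheap BF check first; only on a (possible) hit consult the exact suffixes.
    return prefix in bf and "/".join(parts[split:]) in suffixes.get(prefix, set())


insert("/edu/univ/videos/intro.mp4")
print(lookup("/edu/univ/videos/intro.mp4"))  # True
print(lookup("/com/other/videos/intro.mp4"))  # False
```

The Bloom filter answers most negative prefix queries in constant time and memory, at the cost of occasional false positives, which the exact suffix check then filters out.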
1609.02667
2519327604
Return-Oriented Programming (ROP) is a software exploit for system compromise. By chaining short instruction sequences from existing code pieces, ROP can bypass static code-integrity checking approaches and non-executable page protections. Existing defenses either require access to source code or binary, a customized compiler or hardware modifications, or suffer from high performance and storage overhead. In this work, we propose SIGDROP, a low-cost approach for ROP detection which uses low-level properties inherent to ROP attacks. Specifically, we observe special patterns of certain hardware events when a ROP attack occurs during program execution. Such hardware event-based patterns form signatures to flag ROP attacks at runtime. SIGDROP leverages Hardware Performance Counters, which are already present in commodity processors, to efficiently capture and extract the signatures. Our evaluation demonstrates that SIGDROP can effectively detect ROP attacks with acceptable performance overhead and negligible storage overhead.
KBouncer @cite_8 and ROPecker @cite_30 use the last branch record (LBR) hardware registers to trace the target addresses of indirect branches and compare them against the golden control flow path of the software. Since LBR registers are only available on Intel platforms, KBouncer and ROPecker are not portable to AMD and ARM platforms. On the other hand, SIGDROP can be adapted to commodity platforms with readily available HPCs.
{ "cite_N": [ "@cite_30", "@cite_8" ], "mid": [ "1968002620", "70478248" ], "abstract": [ "Return-Oriented Programming (ROP) is a sophisticated exploitation technique that is able to drive target applications to perform arbitrary unintended operations by constructing a gadget chain reusing existing small code sequences (gadgets). Existing defense mechanisms either only handle specific types of gadgets, require access to source code and or a customized compiler, break the integrity of application binary, or suffer from high performance overhead. In this paper, we present a novel system, ROPecker, to efficiently and effectively defend against ROP attacks without relying on any other side information (e.g., source code and compiler support) or binary rewriting. ROPecker detects an ROP attack at run-time by checking the presence of a sufficiently long chain of gadgets in past and future execution flow, with the assistance of the taken branches recorded in the Last Branch Record (LBR) registers and an efficient technique combining offline analysis with run-time emulation. We also design a sliding window mechanism to invoke the detection logic in proper timings, which achieves both high detection accuracy and efficiency. We build an ROPecker prototype on x86-based Linux computers and evaluate its security effectiveness, space cost and performance overhead. In our experiment, ROPecker can detect all ROP attacks from real-world examples and generated by the generalpurpose ROP compiler Q. It has small footprints on memory and disk storage, and only incurs acceptable performance overhead on CPU computation, disk I O and network I O.", "Return-oriented programming (ROP) has become the primary exploitation technique for system compromise in the presence of non-executable page protections. 
ROP exploits are facilitated mainly by the lack of complete address space randomization coverage or the presence of memory disclosure vulnerabilities, necessitating additional ROP-specific mitigations. In this paper we present a practical runtime ROP exploit prevention technique for the protection of third-party applications. Our approach is based on the detection of abnormal control transfers that take place during ROP code execution. This is achieved using hardware features of commodity processors, which incur negligible runtime overhead and allow for completely transparent operation without requiring any modifications to the protected applications. Our implementation for Windows 7, named kBouncer, can be selectively enabled for installed programs in the same fashion as user-friendly mitigation toolkits like Microsoft's EMET. The results of our evaluation demonstrate that kBouncer has low runtime overhead of up to 4 , when stressed with specially crafted workloads that continuously trigger its core detection component, while it has negligible overhead for actual user applications. In our experiments with in-the-wild ROP exploits, kBouncer successfully protected all tested applications, including Internet Explorer, Adobe Flash Player, and Adobe Reader." ] }
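The LBR-style check described above (inspecting the most recently recorded branches and flagging returns whose targets are not legitimate, call-preceded addresses) can be sketched in a toy form. The addresses, record format, and threshold are made-up illustrations, not the actual KBouncer/ROPecker logic:

```python
# Toy model of an LBR-based return check: a ret whose target does not
# immediately follow a call site is counted as a violation.
valid_return_targets = {0x4005D2, 0x400710}  # addresses right after call sites


def check_lbr(records, max_violations=0):
    """records: list of (branch_type, target) pairs, oldest first.
    Returns True if the trace looks benign (threshold is an assumption)."""
    violations = sum(1 for kind, target in records
                     if kind == "ret" and target not in valid_return_targets)
    return violations <= max_violations


benign = [("call", 0x400700), ("ret", 0x400710)]
rop_like = [("ret", 0x400666), ("ret", 0x400667)]  # gadget-chain returns
print(check_lbr(benign))    # True
print(check_lbr(rop_like))  # False
```

Real implementations additionally use a sliding window and gadget-length heuristics, since a single mismatched return can occur in benign code (e.g., setjmp/longjmp).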
1609.02667
2519327604
Return-Oriented Programming (ROP) is a software exploit for system compromise. By chaining short instruction sequences from existing code pieces, ROP can bypass static code-integrity checking approaches and non-executable page protections. Existing defenses either require access to source code or binary, a customized compiler or hardware modifications, or suffer from high performance and storage overhead. In this work, we propose SIGDROP, a low-cost approach for ROP detection which uses low-level properties inherent to ROP attacks. Specifically, we observe special patterns of certain hardware events when a ROP attack occurs during program execution. Such hardware event-based patterns form signatures to flag ROP attacks at runtime. SIGDROP leverages Hardware Performance Counters, which are already present in commodity processors, to efficiently capture and extract the signatures. Our evaluation demonstrates that SIGDROP can effectively detect ROP attacks with acceptable performance overhead and negligible storage overhead.
Davi et al. propose two new processor instructions that enforce a golden CFI model @cite_21 . For each direct and indirect @math instruction, a @math instruction is added, where @math is a hard-coded, unique immediate value associated with the @math . @math pushes the label into a protected memory segment. If no @math is found for the @math instruction, the processor assumes a CFI violation. Each @math instruction has an associated @math instruction that verifies whether the @math is in the protected memory segment. If no matching @math is found, a CFI violation is detected. The proposed approach requires changes to different stages of the processor pipeline to incorporate the new instructions and is thus not portable to currently available platforms.
{ "cite_N": [ "@cite_21" ], "mid": [ "2128171167" ], "abstract": [ "Embedded systems have become pervasive and are built into a vast number of devices such as sensors, vehicles, mobile and wearable devices. However, due to resource constraints, they fail to provide sufficient security, and are particularly vulnerable to runtime attacks (code injection and ROP). Previous works have proposed the enforcement of control-flow integrity (CFI) as a general defense against runtime attacks. However, existing solutions either suffer from performance overhead or only enforce coarse-grain CFI policies that a sophisticated adversary can undermine. In this paper, we tackle these limitations and present the design of novel security hardware mechanisms to enable fine-grained CFI checks. Our CFI proposal is based on a state model and a per-function CFI label approach. In particular, our CFI policies ensure that function returns can only transfer control to active call sides (i.e., return landing pads of functions currently executing). Further, we restrict indirect calls to target the beginning of a function, and lastly, deploy behavioral heuristics for indirect jumps." ] }
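The label scheme described above can be simulated in software: a branch pushes its hard-coded label into a protected region, and the legitimate landing site checks and pops it. The names `cfi_insert` and `cfi_check` are hypothetical stand-ins for the instruction names elided (as @math) in the text:

```python
# Software simulation of label-based CFI: a list stands in for the
# hardware-protected memory segment holding pushed labels.
protected_labels = []


def cfi_insert(label):
    """Executed alongside a branch: push its hard-coded label."""
    protected_labels.append(label)


def cfi_check(label):
    """Executed at the branch target: the matching label must be on top."""
    if not protected_labels or protected_labels.pop() != label:
        raise RuntimeError("CFI violation")


cfi_insert(0x42)   # branch with label 0x42 taken
cfi_check(0x42)    # lands at the legitimate target: passes

cfi_insert(0x42)
try:
    cfi_check(0x99)  # hijacked control flow lands at the wrong check
except RuntimeError as e:
    print(e)  # CFI violation
```

In hardware the push/check happens in the pipeline and the label region is inaccessible to software, which is precisely why the scheme needs processor modifications.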
1609.02612
2949099979
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
Our technical approach builds on recent work in generative adversarial networks for image modeling @cite_18 @cite_38 @cite_5 @cite_28 @cite_43 , which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, @cite_20 also uses adversarial networks for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video @cite_2 @cite_20 @cite_7 @cite_11 @cite_27 @cite_19 @cite_37 @cite_35 as well as concurrent work in future generation @cite_48 @cite_36 @cite_17 @cite_9 @cite_29 @cite_8 . Often these works may be viewed as a generative model conditioned on the past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has used generative models in video settings mostly on a single frame, we jointly generate a sequence of frames (32 frames) using spatio-temporal convolutional networks, which may help prevent drift due to accumulating errors.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_29", "@cite_43", "@cite_2", "@cite_5", "@cite_20", "@cite_38", "@cite_18", "@cite_8", "@cite_48", "@cite_17", "@cite_37", "@cite_7", "@cite_28", "@cite_19", "@cite_27", "@cite_9", "@cite_11" ], "mid": [ "2232035143", "", "2951536054", "", "", "2951523806", "", "2173520492", "", "2952390294", "2400532028", "2401640538", "", "2056120433", "2298992465", "", "2422305492", "2470475590", "1537098388" ], "abstract": [ "Given a video of an activity, can we predict what will happen next? In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities. We provide both human experiments to understand how well people can perform on these tasks and computational models for prediction. Experiments indicate that humans and computers can do well on temporal prediction and that personalization to a particular individual or environment provides significantly increased performance. Developing methods for temporal prediction could have far reaching benefits for robots or intelligent agents to anticipate what a person will do, before they do it.", "", "Based on life-long observations of physical, chemical, and biologic phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But, what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. These models explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. 
We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models.", "", "", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.", "", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "", "In a given scene, humans can often easily predict a set of immediate future events that might happen. 
However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. 
To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.", "While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network (\"PredNet\") architecture that is inspired by the concept of \"predictive coding\" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. 
Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.", "", "In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. However, more importantly, because our approach models the prediction framework on these mid-level elements, we can not only predict the possible motion in the scene but also predict visual appearances--how are appearances going to change with time. This yields a visual \"hallucination\" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events, we also show that our approach is comparable to supervised methods for event prediction.", "Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network ( ( S ^2 )-GAN). Our ( S ^2 )-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. 
We show our ( S ^2 )-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations.", "", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.", "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. 
In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-wold videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations.", "When given a single static picture, humans can not only interpret the instantaneous content captured by the image, but also they are able to infer the chain of dynamic events that are likely to happen in the near future. Similarly, when a human observes a short video, it is easy to decide if the event taking place in the video is normal or unexpected, even if the video depicts a an unfamiliar place for the viewer. This is in contrast with work in surveillance and outlier event detection, where the models rely on thousands of hours of video recorded at a single place in order to identify what constitutes an unusual event. In this work we present a simple method to identify videos with unusual events in a large collection of short video clips. The algorithm is inspired by recent approaches in computer vision that rely on large databases. In this work we show how, relying on large collections of videos, we can retrieve other videos similar to the query to build a simple model of the distribution of expected motions for the query. Consequently, the model can evaluate how unusual is the video as well as make event predictions. We show how a very simple retrieval model is able to provide reliable results." ] }
1609.02612
2949099979
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
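The foreground/background untangling described above reduces to a per-pixel mask composition, out = m*f + (1-m)*b, with a static background broadcast over time. A minimal pure-Python sketch — the shapes, values, and function name are illustrative, not the paper's implementation:

```python
def compose_video(mask, fg, bg):
    """Two-stream composition: out = m*f + (1-m)*b per pixel, per frame.

    mask, fg: T x P nested lists (T frames, P pixels each);
    bg: one static background of P pixels, reused for every frame.
    """
    return [[m * f + (1.0 - m) * b
             for m, f, b in zip(m_t, f_t, bg)]
            for m_t, f_t in zip(mask, fg)]

mask = [[1.0, 0.0], [0.0, 1.0]]   # a "moving" foreground mask, T=2, P=2
fg   = [[5.0, 5.0], [5.0, 5.0]]   # foreground stream output
bg   = [1.0, 1.0]                 # static background stream output
print(compose_video(mask, fg, bg))  # [[5.0, 1.0], [1.0, 5.0]]
```

In the actual model, m, f and b are produced by spatio-temporal convolutional networks from a latent code; here they are hand-set so the composition itself is easy to check.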
We leverage approaches for recognizing actions in video with deep networks, but apply them to video generation instead. We use spatio-temporal 3D convolutions to model videos @cite_14 , but with fractionally strided convolutions @cite_34 because we are interested in generation. We also use two streams to model video @cite_53 , but for generation rather than action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks @cite_47 @cite_25 @cite_33 @cite_0 @cite_22 @cite_31 @cite_30 @cite_6 @cite_23 @cite_49 @cite_41 @cite_50 @cite_27 @cite_26 . We instead leverage large amounts of unlabeled video for generation.
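The role of the fractionally strided (transposed) convolutions mentioned above is to upsample — mapping a short latent signal to a longer output, the reverse of the strided convolutions used for recognition. A 1-D pure-Python sketch of the operation (the real model uses 3-D spatio-temporal kernels; names and values here are illustrative):

```python
def fractionally_strided_conv1d(x, kernel, stride=2):
    """Transposed convolution: insert (stride - 1) zeros between input
    samples, then convolve -- each input value 'paints' a scaled copy
    of the kernel into the (longer) output."""
    n, k = len(x), len(kernel)
    out = [0.0] * ((n - 1) * stride + k)
    for i, v in enumerate(x):
        for j, w in enumerate(kernel):
            out[i * stride + j] += v * w
    return out

print(fractionally_strided_conv1d([1.0, 2.0, 3.0], [1.0, 1.0]))
# [1.0, 1.0, 2.0, 2.0, 3.0, 3.0] -- 3 samples upsampled to 6
```

Stacking several such layers is how a generator grows a low-dimensional code into a full video volume.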
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_14", "@cite_26", "@cite_33", "@cite_22", "@cite_41", "@cite_53", "@cite_6", "@cite_0", "@cite_27", "@cite_23", "@cite_49", "@cite_50", "@cite_47", "@cite_34", "@cite_25" ], "mid": [ "2124573631", "2145038566", "", "2544224704", "", "", "", "2952186347", "2950091256", "2198618282", "2422305492", "", "2951353470", "", "2950789693", "", "219040644" ], "abstract": [ "We propose an approach to learn action categories from static images that leverages prior observations of generic human motion to augment its training process. Using unlabeled video containing various human activities, the system first learns how body pose tends to change locally in time. Then, given a small number of labeled static images, it uses that model to extrapolate beyond the given exemplars and generate \"synthetic\" training examples-poses that could link the observed images and or immediately precede or follow them in time. In this way, we expand the training set without requiring additional manually labeled examples. We explore both example-based and manifold-based methods to implement our idea. Applying our approach to recognize actions in both images and video, we show it enhances a state-of-the-art technique when very few labeled training examples are available.", "This work proposes a learning method for deep architectures that takes advantage of sequential data, in particular from the temporal coherence that naturally exists in unlabeled video recordings. That is, two successive frames are likely to contain the same object or objects. This coherence is used as a supervisory signal over the unlabeled data, and is used to improve the performance on a supervised task of interest. We demonstrate the effectiveness of this method on some pose invariant object and face recognition tasks.", "", "We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. 
We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene object classification. Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels.", "", "", "", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. 
Large quantities of unlabeled video data can be easily obtained from the Internet. These videos possess the implicit weak label that they are sequences of temporally and semantically coherent images. We leverage this information to learn temporal embeddings for video frames by associating frames with the temporal context that they appear in. To do this, we propose a scheme for incorporating temporal context based on past and future frames in videos, and compare this to other contextual representations. In addition, we show how data augmentation using multi-resolution samples and hard negatives helps to significantly improve the quality of the learned embeddings. We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order recovery in unconstrained Internet video.", "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. 
In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.", "Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is through readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels yet are automatic to compute. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.", "", "The sound of crashing waves, the roar of fast-moving cars -- sound conveys important information about the objects in our surroundings. In this work, we show that ambient sounds can be used as a supervisory signal for learning visual models. To demonstrate this, we train a convolutional neural network to predict a statistical summary of the sound associated with a video frame. We show that, through this process, the network learns a representation that conveys information about objects and scenes. We evaluate this representation on several recognition tasks, finding that its performance is comparable to that of other state-of-the-art unsupervised learning methods. 
Finally, we show through visualizations that the network learns units that are selective to objects that are often associated with characteristic sounds.", "", "We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art.", "", "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. 
That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation." ] }
1609.02622
2953133609
Most real-world social networks are inherently dynamic, composed of communities that are constantly changing in membership. To track these evolving communities, we need dynamic community detection techniques. This article evaluates the performance of a set of game theoretic approaches for identifying communities in dynamic networks. Our method, D-GT (Dynamic Game Theoretic community detection), models each network node as a rational agent who periodically plays a community membership game with its neighbors. During game play, nodes seek to maximize their local utility by joining or leaving the communities of network neighbors. The community structure emerges after the game reaches a Nash equilibrium. Compared to the benchmark community detection methods, D-GT more accurately predicts the number of communities and finds community assignments with a higher normalized mutual information, while retaining a good modularity.
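The normalized mutual information used above to score community assignments can be computed directly from label counts, NMI = 2*I(A;B) / (H(A) + H(B)). A small pure-Python sketch (the example labelings are made up):

```python
from collections import Counter
from math import log

def nmi(a, b):
    """Normalized mutual information between two label assignments:
    2*I(A;B) / (H(A) + H(B)); 1.0 iff the partitions are identical
    up to relabelling, 0.0 for independent assignments."""
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    def entropy(c):
        return -sum(k / n * log(k / n) for k in c.values())
    mi = sum(k / n * log(n * k / (ca[x] * cb[y]))
             for (x, y), k in cab.items())
    denom = entropy(ca) + entropy(cb)
    return 2 * mi / denom if denom else 1.0

# same partition under different label names -> perfect score
print(round(nmi([0, 0, 1, 1], [1, 1, 0, 0]), 6))  # 1.0
```

Because NMI is invariant to relabelling, it compares a detected community assignment against ground truth without requiring the two to use the same community identifiers.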
Some studies have focused on studying the evolution of communities over time. For instance, @cite_3 identified subsets of nodes, ``natural communities'', that were stable under small perturbations of the input data. Communities detected in later snapshots were matched to earlier snapshots using the natural community tree structure. Palla et al. proposed an innovative method for detecting communities in dynamic networks based on the k-clique percolation technique; in their approach, communities are defined as adjacent k-cliques that share @math nodes. Machine learning has also been employed to model changes in community structure; for instance, @cite_4 predict transitions in community structure by training supervised machine learning classifiers. This requires data on past transitions to train the classifiers, which limits its applicability to certain datasets. @cite_0 adopt a data mining approach to detect clusters on time-evolving graphs; community discovery and change detection are performed using the minimum description length (MDL) paradigm.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_3" ], "mid": [ "2155640700", "2080447039", "2131681506" ], "abstract": [ "How can we find communities in dynamic networks of socialinteractions, such as who calls whom, who emails whom, or who sells to whom? How can we spot discontinuity time-points in such streams of graphs, in an on-line, any-time fashion? We propose GraphScope, that addresses both problems, using information theoretic principles. Contrary to the majority of earlier methods, it needs no user-defined parameters. Moreover, it is designed to operate on large graphs, in a streaming fashion. We demonstrate the efficiency and effectiveness of our GraphScope on real datasets from several diverse domains. In all cases it produces meaningful time-evolving patterns that agree with human intuition.", "Finding patterns of interaction and predicting the future structure of networks has many important applications, such as recommendation systems and customer targeting. Community structure of social networks may undergo different temporal events and transitions. In this paper, we propose a framework to predict the occurrence of different events and transition for communities in dynamic social networks. Our framework incorporates key features related to a community - its structure, history, and influential members, and automatically detects the most predictive features for each event and transition. Our experiments on real world datasets confirms that the evolution of communities can be predicted with a very high accuracy, while we further observe that the most significant features vary for the predictability of each event and transition.", "We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. 
Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks." ] }
1609.02622
2953133609
Most real-world social networks are inherently dynamic, composed of communities that are constantly changing in membership. To track these evolving communities, we need dynamic community detection techniques. This article evaluates the performance of a set of game theoretic approaches for identifying communities in dynamic networks. Our method, D-GT (Dynamic Game Theoretic community detection), models each network node as a rational agent who periodically plays a community membership game with its neighbors. During game play, nodes seek to maximize their local utility by joining or leaving the communities of network neighbors. The community structure emerges after the game reaches a Nash equilibrium. Compared to the benchmark community detection methods, D-GT more accurately predicts the number of communities and finds community assignments with a higher normalized mutual information, while retaining a good modularity.
Optimization can be used to identify minimum-cost community assignments in dynamic graphs. FacetNet is a framework for analyzing communities in dynamic networks based on an optimization of snapshot costs. It is guaranteed to converge to a locally optimal solution; however, its convergence is slow, and it must be initialized with the number of communities, which is usually unknown in practice. @cite_2 modeled dynamic community detection as a multi-objective optimization problem. Their approach is parameter-free and uses evolutionary clustering to optimize a dual objective function: the first objective selects for highly modular structures at the current time step, and the second minimizes the difference between community structures in the current and previous time steps. D-GT also uses a stochastic optimization procedure, but all of the agents individually optimize their utilities based on local network information.
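The join/leave dynamic that D-GT relies on can be sketched as follows: each node repeatedly best-responds to its neighbours' current memberships, and the loop stops when no node wants to move, i.e. at an equilibrium. The stand-in utility here is simply the number of neighbours sharing a label — not the authors' actual utility function — and the graph and names are illustrative:

```python
def local_game_communities(adj, max_rounds=20):
    """Each node plays a membership game: switch to the label that
    maximises a local utility (here: # neighbours with that label),
    keeping the current label on ties. Stops at a fixed point where
    no node can improve -- a pure-strategy Nash equilibrium of this
    toy game."""
    label = {v: v for v in adj}            # start: every node alone
    for _ in range(max_rounds):
        moved = False
        for v in sorted(adj):
            counts = {}
            for u in adj[v]:
                counts[label[u]] = counts.get(label[u], 0) + 1
            best = max(counts, key=lambda c: (counts[c], c == label[v]))
            if counts[best] > counts.get(label[v], 0):
                label[v], moved = best, True
        if not moved:                      # equilibrium reached
            break
    return label

# two disjoint triangles -> two communities emerge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1],
       3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = local_game_communities(adj)
```

In the dynamic setting, such a game would be replayed on each network snapshot, warm-started from the previous equilibrium, so that the community structure evolves smoothly over time.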
{ "cite_N": [ "@cite_2" ], "mid": [ "2022651954" ], "abstract": [ "The discovery of evolving communities in dynamic networks is an important research topic that poses challenging tasks. Evolutionary clustering is a recent framework for clustering dynamic networks that introduces the concept of temporal smoothness inside the community structure detection method. Evolutionary-based clustering approaches try to maximize cluster accuracy with respect to incoming data of the current time step, and minimize clustering drift from one time step to the successive one. In order to optimize both these two competing objectives, an input parameter that controls the preference degree of a user towards either the snapshot quality or the temporal quality is needed. In this paper the detection of communities with temporal smoothness is formulated as a multiobjective problem and a method based on genetic algorithms is proposed. The main advantage of the algorithm is that it automatically provides a solution representing the best trade-off between the accuracy of the clustering obtained, and the deviation from one time step to the successive. Experiments on synthetic data sets show the very good performance of the method when compared with state-of-the-art approaches." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
There have been multiple attempts at defining rumours in the literature. Most of them are complementary to one another, with slight variations depending on the context of their analyses. The core concept that most researchers agree on matches the definition that major dictionaries provide, such as the Oxford English Dictionary http://www.oxforddictionaries.com/definition/english/rumour defining a rumour as . For instance, DiFonzo and Bordia @cite_15 defined rumours as ``unverified and instrumentally relevant information statements in circulation.''
{ "cite_N": [ "@cite_15" ], "mid": [ "1974674099" ], "abstract": [ "The term ‘rumor’ is often used interchangeably with ‘gossip’ and ‘urban legend’ by both laypersons and scholars. In this article we attempt to clarify the construct of rumor by proposing a definition that delineates the situational and motivational contexts from which rumors arise (ambiguous, threatening or potentially threatening situations), the functions that rumors perform (sense-making and threat management), and the contents of rumor statements (unverified and instrumentally relevant information statements in circulation). To further clarify the rumor construct we also investigate the contexts, functions and contents of gossip and urban legends, juxtapose these with rumor, and analyze their similarities and differences." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
In contrast with these three theories, Guerin and Miyazaki @cite_4 state that a rumour is a form of relationship-enhancing talk. Building on their previous work, they recall that many ways of talking serve the purpose of forming and maintaining social relationships. Rumours, they say, can be explained by such means.
{ "cite_N": [ "@cite_4" ], "mid": [ "1603504877" ], "abstract": [ "A conversational approach is developed to explain the ubiquitous presence of rumors, urban legends, and gossip as arising from their conversational properties rather than from side effects of cognitive processing or “effort after meaning.” It is suggested that the primary function of telling rumors, gossip, and urban legends is not to impart information to the listener or alleviate listener anxiety about the topic but to entertain or keep the listener’s attention, thereby enhancing social relationships. In this way, the traditional views of such stories are turned on their head, and an implication is that there is no essential feature of such stories just a range of conversational properties. The model also predicts hybrid forms that cannot be placed into one of the commonly named forms of talk. Some examples of these are given. The wider ramifications for changing “cognitive processing” effects into properties of social relationships are also drawn out." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
In our work, we adhere to the widely accepted fact that rumours are unverified pieces of information. More specifically, following @cite_23 , we regard a rumour in the context of breaking news, as a .
{ "cite_N": [ "@cite_23" ], "mid": [ "2281420995" ], "abstract": [ "As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumours, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumour. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumour threads (4,842 tweets) associated with 9 newsworthy events. We analyse this dataset to understand how users spread, support, or deny rumours that are later proven true or false, by distinguishing two levels of status in a rumour life cycle i.e., before and after its veracity status is resolved. The identification of rumours associated with each event, as well as the tweet that resolved each rumour as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumours that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumours once they have been debunked, users appear to be less capable of distinguishing true from false rumours when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumour. We also analyse the role of different types of users, finding that highly reputable users such as news organisations endeavour to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumours. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumours. 
The findings of our study provide useful insights for achieving this aim." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
Another study that offers insightful conclusions with respect to stance towards rumours is that by @cite_17 . The authors conducted an analysis of a large dataset of tweets related to the riots in the UK, which took place in August 2011. The dataset collected in the riots study is one of the two used in our experiments, and we describe it in more detail in section . After grouping the tweets into topics, where each topic represents a rumour, they were manually categorised into different classes, namely:
{ "cite_N": [ "@cite_17" ], "mid": [ "1990474689" ], "abstract": [ "For social scientists, the widespread adoption of social media presents both an opportunity and a challenge. Data that can shed light on people’s habits, opinions and behaviour is available now on a scale never seen before, but this also means that it is impossible to analyse using conventional methodologies and tools. This article represents an experiment in applying a computationally assisted methodology to the analysis of a large corpus of tweets sent during the August 2011 riots in England." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
This leads the authors to the conclusion that the process of 'inter-subjective sense making' by Twitter users plays a key role in exposing false rumours. This finding, together with subsequent work into the conversational characteristics of microblogging @cite_38 , has motivated our research into automating stance classification as a methodology for accelerating this process.
{ "cite_N": [ "@cite_38" ], "mid": [ "2253306907" ], "abstract": [ "Inspired by a European project, PHEME, that requires the close analysis of Twitter-based conversations in order to look at the spread of rumors via social media, this paper has two objectives. The first of these is to take the analysis of microblogs back to first principles and lay out what microblog analysis should look like as a foundational programme of work. The other is to describe how this is of fundamental relevance to Human-Computer Interaction's interest in grasping the constitution of people's interactions with technology within the social order. Our critical finding is that, despite some surface similarities, Twitter-based conversations are a wholly distinct social phenomenon requiring an independent analysis that treats them as unique phenomena in their own right, rather than as another species of conversation that can be handled within the framework of existing Conversation Analysis. This motivates the argument that Microblog Analysis be established as a foundationally independent programme, examining the organizational characteristics of microblogging from the ground up. We articulate how aspects of this approach have already begun to shape our design activities within the PHEME project." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
@cite_2 conducted early work on rumour stance classification. They introduced a system that analyses a set of tweets associated with a given topic predefined by the user. Their system would then classify each of the tweets as supporting, denying or questioning the rumour. We have adopted this scheme in terms of the different types of stance in the work we report here. However, their work ended up merging the denying and questioning tweets for each rumour into a single class, converting it into a 2-way classification problem of supporting vs denying-or-questioning. Instead, we keep those classes separate and, following @cite_36 , conduct a 3-way classification.
{ "cite_N": [ "@cite_36", "@cite_2" ], "mid": [ "2291319738", "2159981908" ], "abstract": [ "FP7-ICT Collaborative Project ICT-2013-611233 PHEME Deliverable D2.1 (WP2) This document outlines a preliminary definition of an annotation scheme for rumours spread through social media, as well as the code frames that will be used to mark up the corpora collected for PHEME. It has been developed through an iterative process of revisions, and it is intended to encompass the different kinds of rumours that can be spread and discussed in the context of different events and situations. It especially considers the way conversations flow in social media, and has been developed in an interdisciplinary style by building on work in sociolinguistics, including the related approaches of Conversation Analysis [51] and Ethnomethodology [16]. The resulting annotation scheme will be used for the annotation of a larger corpora of social media rumours through a crowdsourcing platform. Annotation guidelines will be defined in following work and tested with small social media corpora before running the large annotation task. Keyword list: rumors, veracity, annotation scheme, social media", "A rumor is commonly defined as a statement whose true value is unverifiable. Rumors may spread misinformation (false information) or disinformation (deliberately false information) on a network of people. Identifying rumors is crucial in online social media where large amounts of information are easily spread across a large network by sources with unverified authority. In this paper, we address the problem of rumor detection in microblogs and explore the effectiveness of 3 categories of features: content-based, network-based, and microblog-specific memes for correctly identifying rumors. Moreover, we show how these features are also effective in identifying disinformers, users who endorse a rumor and further help it to spread. 
We perform our experiments on more than 10,000 manually annotated tweets collected from Twitter and show how our retrieval model achieves more than 0.95 in Mean Average Precision (MAP). Finally, we believe that our dataset is the first large-scale dataset on rumor detection. It can open new dimensions in analyzing online misinformation and other aspects of microblog conversations." ] }
1609.01962
2518413782
Social media tend to be rife with rumours while new reports are released piecemeal during breaking news. Interestingly, one can mine multiple reactions expressed by social media users in those situations, exploring their stance towards rumours, ultimately enabling the flagging of highly disputed rumours as being potentially false. In this work, we set out to develop an automated, supervised classifier that uses multi-task learning to classify the stance expressed in each individual tweet in a rumourous conversation as either supporting, denying or questioning the rumour. Using a classifier based on Gaussian Processes, and exploring its effectiveness on two datasets with very different characteristics and varying distributions of stances, we show that our approach consistently outperforms competitive baseline classifiers. Our classifier is especially effective in estimating the distribution of different types of stance associated with a given rumour, which we set forth as a desired characteristic for a rumour-tracking system that will warn both ordinary users of Twitter and professional news practitioners when a rumour is being rebutted.
The work closest to ours in terms of aims is @cite_18 , who explored the use of three different classifiers for automated rumour stance classification on unseen rumours. In their case, classifiers were set up on a 2-way classification problem dealing with tweets that support or deny rumours. In the present work, we extend this research by performing 3-way classification that also deals with tweets that question the rumours. Moreover, we adopt the three classifiers used in their work, namely Random Forest, Naive Bayes and Logistic Regression, as baselines in our work.
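The three baseline classifiers named above can be sketched as a standard text-classification pipeline. This is our own illustrative stand-in, not the authors' feature set or data: the toy tweets, labels and n-gram features are all assumptions.

```python
# Hypothetical sketch of 3-way stance baselines (Random Forest,
# Naive Bayes, Logistic Regression) over bag-of-words features.
# The tiny dataset below is illustrative, not from the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

tweets = [
    "confirmed, this is true",
    "this is a lie, totally false",
    "is there any source for this?",
    "I believe this report",
    "no way, that's not what happened",
    "can anyone verify this claim?",
]
labels = ["supporting", "denying", "questioning",
          "supporting", "denying", "questioning"]

baselines = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, clf in baselines.items():
    # unigram+bigram counts feed each baseline classifier
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
    model.fit(tweets, labels)
    pred = model.predict(["any source to back this up?"])
    print(name, pred[0])
```

A real pipeline would of course train on annotated rumour threads and evaluate on unseen rumours, as the cited work does.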
{ "cite_N": [ "@cite_18" ], "mid": [ "1971494700" ], "abstract": [ "Purpose – Twitter is a popular microblogging service which has proven, in recent years, its potential for propagating news and information about developing events. The purpose of this paper is to focus on the analysis of information credibility on Twitter. The purpose of our research is to establish if an automatic discovery process of relevant and credible news events can be achieved. Design methodology approach – The paper follows a supervised learning approach for the task of automatic classification of credible news events. A first classifier decides if an information cascade corresponds to a newsworthy event. Then a second classifier decides if this cascade can be considered credible or not. The paper undertakes this effort training over a significant amount of labeled data, obtained using crowdsourcing tools. The paper validates these classifiers under two settings: the first, a sample of automatically detected Twitter “trends” in English, and second, the paper tests how well this model transfers to..." ] }
1609.02036
2952342161
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by the lack of expressive power. This issue is primarily due to the fact that conventional MRFs formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRF in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over state-of-the-arts.
Generative image models generally fall into two categories: parametric models and non-parametric models. Parametric models typically use a compressed representation to capture an image's global appearance. In recent years, deep networks such as autoencoders @cite_50 and adversarial networks @cite_19 @cite_5 have achieved substantial improvements in generating images with regular structures such as faces or digits. Non-parametric models, including @cite_32 @cite_17 @cite_24 and @cite_41 @cite_25 @cite_36 , instead rely on a large set of exemplars to capture local patterns. While these methods can produce high-quality images with local patterns directly sampled from realistic images, exhaustive search over a large exemplar set limits their scalability and often leads to computational difficulties. Our work draws inspiration from both lines of work. By using DNNs to express local interactions in an MRF, our model can capture highly complex patterns while maintaining strong scalability.
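The scalability problem mentioned above comes from the exhaustive nearest-neighbour patch search at the heart of non-parametric methods. A minimal sketch (our own, assumed; the sizes and distance metric are arbitrary) makes the cost visible: every query patch is compared against the entire exemplar set.

```python
# Toy illustration of exhaustive exemplar search in non-parametric
# image synthesis: O(N * d) work per query patch, so cost grows
# linearly with the exemplar set. Values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
patch = 5                                      # patch side length
exemplars = rng.random((1000, patch * patch))  # flattened exemplar patches
query = rng.random(patch * patch)              # patch to be synthesized

# full scan over all exemplars for a single query patch
dists = np.sum((exemplars - query) ** 2, axis=1)
best = int(np.argmin(dists))
print(best, dists[best])
```

Practical systems mitigate this with approximate search structures, but the fundamental dependence on the exemplar set size remains, which is the scalability limit the paragraph refers to.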
{ "cite_N": [ "@cite_41", "@cite_36", "@cite_32", "@cite_24", "@cite_19", "@cite_50", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "1999360130", "", "", "", "2099471712", "", "2951523806", "2171011251", "2232702494" ], "abstract": [ "We present a simple image-based method of generating novel visual appearance in which a new image is synthesized by stitching together small patches of existing images. We call this process image quilting. First, we use quilting as a fast and very simple texture synthesis algorithm which produces surprisingly good results for a wide range of textures. Second, we extend the algorithm to perform texture transfer — rendering an object with a texture taken from a different object. More generally, we demonstrate how an image can be re-rendered in the style of a different image. The method works directly on the images and does not require 3D information.", "", "", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. 
Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40 of the time, compared to 10 for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.", "What can you do with a million images? In this paper we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data-driven, requiring no annotations or labelling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of results for each input image and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.", "Texture synthesis is important for many applications in computer graphics, vision, and image processing. 
However, it remains difficult to design an algorithm that is both efficient and capable of generating high quality results. In this paper, we present an efficient algorithm for realistic texture synthesis. The algorithm is easy to use and requires only a sample texture as input. It generates textures with perceived quality equal to or better than those produced by previous techniques, but runs two orders of magnitude faster. This permits us to apply texture synthesis to problems where it has traditionally been considered impractical. In particular, we have applied it to constrained synthesis for image editing and temporal texture generation. Our algorithm is derived from Markov Random Field texture models and generates textures through a deterministic searching process. We accelerate this synthesis process using tree-structured vector quantization." ] }
1609.02036
2952342161
Markov Random Fields (MRFs), a formulation widely used in generative image modeling, have long been plagued by the lack of expressive power. This issue is primarily due to the fact that conventional MRFs formulations tend to use simplistic factors to capture local patterns. In this paper, we move beyond such limitations, and propose a novel MRF model that uses fully-connected neurons to express the complex interactions among pixels. Through theoretical analysis, we reveal an inherent connection between this model and recurrent neural networks, and thereon derive an approximated feed-forward network that couples multiple RNNs along opposite directions. This formulation combines the expressive power of deep neural networks and the cyclic dependency structure of MRF in a unified model, bringing the modeling capability to a new level. The feed-forward approximation also allows it to be efficiently learned from data. Experimental results on a variety of low-level vision tasks show notable improvement over state-of-the-arts.
Recurrent neural networks (RNNs), a special family of deep models, use a chain of nonlinear units to capture sequential relations. In computer vision, RNNs are primarily used to model sequential changes in videos @cite_29 , visual attention @cite_12 @cite_28 , and hand-written digit recognition @cite_15 . Previous work explores multi-dimensional RNNs @cite_55 for scene labeling @cite_16 as well as object detection @cite_8 . The most related work is perhaps the use of 2D RNNs for generating gray-scale textures @cite_43 or color images @cite_7 . A key distinction of these models from ours is that 2D RNNs rely on an acyclic ordering to model spatial dependency: each pixel depends only on its left and upper neighbors -- this severely limits the spatial coherence. Our model, instead, allows dependencies from all directions via iterative inference unrolling.
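The acyclic left/upper dependency of a 2D RNN can be sketched in a few lines. This is our own toy illustration, not the cited models' architecture: the weight matrices, sizes and tanh update are assumed for demonstration only.

```python
# Toy 2D RNN raster scan: hidden state h[i, j] is computed from the
# input x[i, j] plus only the left (h[i, j-1]) and upper (h[i-1, j])
# neighbours, so information never flows right-to-left or bottom-up.
import numpy as np

rng = np.random.default_rng(0)
H, W, D = 4, 4, 3                      # image height, width, hidden size
x = rng.random((H, W, D))
Wx = rng.random((D, D)) * 0.1          # input-to-hidden weights
Wl = rng.random((D, D)) * 0.1          # left-neighbour weights
Wu = rng.random((D, D)) * 0.1          # upper-neighbour weights

h = np.zeros((H, W, D))
for i in range(H):
    for j in range(W):
        left = h[i, j - 1] if j > 0 else np.zeros(D)
        up = h[i - 1, j] if i > 0 else np.zeros(D)
        h[i, j] = np.tanh(x[i, j] @ Wx + left @ Wl + up @ Wu)

# h[0, 0] sees no context at all; a pixel's state can never depend on
# anything below or to the right of it -- the limitation noted above.
print(h.shape)
```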
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_28", "@cite_55", "@cite_29", "@cite_43", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "2953318193", "2951829713", "1850742715", "2950956209", "2951183276", "2953250761", "", "1909234690", "2951527505" ], "abstract": [ "Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.", "It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. 
In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "Recurrent neural networks (RNNs) have proved effective at one dimensional sequence learning tasks, such as speech and online handwriting recognition. Some of the properties that make RNNs suitable for such tasks, for example robustness to input warping, and the ability to access contextual information, are also desirable in multidimensional domains. However, there has so far been no direct way of applying RNNs to data with more than one spatio-temporal dimension. This paper introduces multi-dimensional recurrent neural networks (MDRNNs), thereby extending the potential applicability of RNNs to vision, video processing, medical imaging and many other areas, while avoiding the scaling problems that have plagued other multi-dimensional models. Experimental results are provided for two image segmentation tasks.", "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. 
We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.", "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. 
We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "", "This paper addresses the problem of pixel-level segmentation and classification of scene images with an entirely learning-based approach using Long Short Term Memory (LSTM) recurrent neural networks, which are commonly used for sequence classification. We investigate two-dimensional (2D) LSTM networks for natural scene images taking into account the complex spatial dependencies of labels. Prior methods generally have required separate classification and image segmentation stages and or pre- and post-processing. In our approach, classification, segmentation, and context integration are all carried out by 2D LSTM networks, allowing texture and spatial model parameters to be learned within a single model. The networks efficiently capture local and global contextual information over raw RGB values and adapt well for complex scene images. Our approach, which has a much lower computational complexity than prior methods, achieved state-of-the-art performance over the Stanford Background and the SIFT Flow datasets. In fact, if no pre- or post-processing is applied, LSTM networks outperform other state-of-the-art approaches. Hence, only with a single-core Central Processing Unit (CPU), the running time of our approach is equivalent or better than the compared state-of-the-art approaches which use a Graphics Processing Unit (GPU). Finally, our networks' ability to visualize feature maps from each layer supports the hypothesis that LSTM networks are overall suited for image processing tasks.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. 
We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
1609.01859
2510121982
A popular approach to semantic image understanding is to manually tag images with keywords and then learn a mapping from visual features to keywords. Manually tagging images is a subjective process and the same or very similar visual contents are often tagged with different keywords. Furthermore, not all tags have the same descriptive power for visual contents and the large vocabulary available from natural language could result in a very diverse set of keywords. In this paper, we propose an unsupervised visual theme discovery framework as a better (more compact, efficient and effective) alternative to semantic representation of visual contents. We first show that tag based annotation lacks consistency and compactness for describing visually similar contents. We then learn the visual similarity between tags based on the visual features of the images containing the tags. At the same time, we use a natural language processing technique (word embedding) to measure the semantic similarity between tags. Finally, we cluster tags into visual themes based on their visual similarity and semantic similarity measures using a spectral clustering algorithm. We conduct user studies to evaluate the effectiveness and rationality of the visual themes discovered by our unsupervised algorithm and obtain promising results. We then design three common computer vision tasks, example based image search, keyword based image search and image labelling, to explore potential applications of our visual theme discovery framework. In experiments, visual themes significantly outperform tags on semantic image understanding and achieve state-of-the-art performance in all three tasks. This again demonstrates the effectiveness and versatility of the proposed framework.
Our definition of visual theme is partly inspired by the naming of visual concepts @cite_6 . A visual concept is denoted as a subset of human language vocabulary that refers to particular visual entities (e.g. fireman, policeman). Visual concepts have long been collected and used by computer vision researchers in multiple domains @cite_17 @cite_4 @cite_21 @cite_19 . An example in image analysis is ImageNet @cite_8 , where visual concepts (only nouns) are selected and organised hierarchically on the basis of WordNet @cite_0 . A drawback of visual concepts is that they are often manually defined, and they may fail to capture complex information within the visual world, which makes them less applicable across domains.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_17" ], "mid": [ "2952202763", "2108598243", "2949769367", "2221837760", "2081580037", "2226011429", "2063153269" ], "abstract": [ "Discovering visual knowledge from weakly labeled data is crucial to scale up computer vision recognition system, since it is expensive to obtain fully labeled data for a large number of concept categories. In this paper, we propose ConceptLearner, which is a scalable approach to discover visual concepts from weakly labeled image collections. Thousands of visual concept detectors are learned automatically, without human in the loop for additional annotation. We show that these learned detectors could be applied to recognize concepts at image-level and to detect concepts at image region-level accurately. Under domain-specific supervision, we further evaluate the learned concepts for scene recognition on SUN database and for object detection on Pascal VOC 2007. ConceptLearner shows promising performance compared to fully supervised and weakly supervised methods.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. 
We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1 . When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34 of the time.", "Humans connect language and vision to perceive the world. How to build a similar connection for computers? One possible way is via visual concepts, which are text terms that relate to visually discriminative entities. 
We propose an automatic visual concept discovery algorithm using parallel text and visual corpora; it filters text terms based on the visual discriminative power of the associated images, and groups them into concepts using visual and semantic similarities. We illustrate the applications of the discovered concepts using a bidirectional image and sentence retrieval task and an image tagging task, and show that the discovered concepts not only outperform several large sets of manually selected concepts significantly, but also achieve state-of-the-art performance in the retrieval task.", "Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].", "Complex event retrieval is a challenging research problem, especially when no training videos are available. An alternative to collecting training videos is to train a large semantic concept bank a priori. Given a text description of an event, event retrieval is performed by selecting concepts linguistically related to the event description and fusing the concept responses on unseen videos. However, defining an exhaustive concept lexicon and pre-training it requires vast computational resources. Therefore, recent approaches automate concept discovery and training by leveraging large amounts of weakly annotated web data. 
Compact visually salient concepts are automatically obtained by the use of concept pairs or, more generally, n-grams. However, not all visually salient n-grams are necessarily useful for an event query--some combinations of concepts may be visually compact but irrelevant--and this drastically affects performance. We propose an event retrieval algorithm that constructs pairs of automatically discovered concepts and then prunes those concepts that are unlikely to be helpful for retrieval. Pruning depends both on the query and on the specific video instance being evaluated. Our approach also addresses calibration and domain adaptation issues that arise when applying concept detectors to unseen videos. We demonstrate large improvements over other vision based systems on the TRECVID MED 13 dataset.", "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2 on KTH (better by 3.3 ), 95.0 on UCF Sports (better by 3.7 ), 57.9 on UCF50 (baseline is 47.9 ), and 26.9 on HMDB51 (baseline is 23.2 ). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier." ] }
1609.02087
2509784253
We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.
Due to the redundant temporal information that exists in video, rain streaks can be more easily identified and removed in this domain @cite_16 @cite_3 @cite_1 @cite_13 . For example, in @cite_16 the authors first propose a rain streak detection algorithm based on a correlation model. After detecting the location of rain streaks, the method uses the average pixel value taken from the neighboring frames to remove streaks. In @cite_3 , the authors analyze the properties of rain and establish a model of the visual effect of rain in frequency space. In @cite_1 , the histogram of streak orientation is used to detect rain and a Gaussian mixture model is used to extract the rain layer. In @cite_13 , based on the minimization of registration error between frames, phase congruency is used to detect and remove the rain streaks. Many of these methods work well, but are significantly aided by the temporal content of video. In this paper we instead focus on removing rain from a single image.
{ "cite_N": [ "@cite_13", "@cite_16", "@cite_1", "@cite_3" ], "mid": [ "2017416107", "2119535410", "2122596619", "1976056157" ], "abstract": [ "In the context of extracting information from video, bad weather conditions like rain can have a detrimental effect. In this paper, a novel framework to detect and remove rain streaks from video is proposed. The first part of the proposed framework for rain removal is a technique to detect rain streaks based on phase congruency features. The variation of features from frame to frame is used to estimate the candidate rain pixels in a frame. In order to reduce the number of false candidates due to global motion, frames are registered using phase correlation. The second part of the proposed framework is a novel reconstruction technique that utilizes information from three different sources, which are intensities of the rain affected pixel, spatial neighbors, and temporal neighbors. An optimal estimate for the actual intensity of the rain affected pixel is made based on the minimization of registration error between frames. An optical flow technique using local phase information is adopted for registration. This part of the proposed framework for removing rain is modeled such that the presence of local motion will not distort the features in the reconstructed video. The proposed framework is evaluated quantitatively and qualitatively on a variety of videos with varying complexities. The effectiveness of the algorithm is quantitatively verified by computing a no-reference image quality measure on individual frames of the reconstructed video. From a variety of experiments that are performed on output videos, it is shown that the proposed technique performs better than state-of-the-art techniques.", "The visual effects of rain are complex. Rain consists of spatially distributed drops falling at high velocities. Each drop refracts and reflects the environment, producing sharp intensity changes in an image. 
A group of such falling drops creates a complex time-varying signal in images and videos. In addition, due to the finite exposure time of the camera, intensities due to rain are motion blurred and hence depend on the background intensities. Thus, the visual manifestations of rain are a combination of both the dynamics of rain and the photometry of the environment. In this paper, we present the first comprehensive analysis of the visual effects of rain on an imaging system. We develop a correlation model that captures the dynamics of rain and a physics-based motion blur model that explains the photometry of rain. Based on these models, we develop efficient algorithms for detecting and removing rain from videos. The effectiveness of our algorithms is demonstrated using experiments on videos of complex scenes with moving objects and time-varying textures. The techniques described in this paper can be used in a wide range of applications including video surveillance, vision based navigation, video/movie editing and video indexing/retrieval.", "The detection of bad weather conditions is crucial for meteorological centers, especially with the demand for air, sea and ground traffic management. In this article, a system based on computer vision is presented which detects the presence of rain or snow. To separate the foreground from the background in image sequences, a classical Gaussian Mixture Model is used. The foreground model serves to detect rain and snow, since these are dynamic weather phenomena. Selection rules based on photometry and size are proposed in order to select the potential rain streaks. Then a Histogram of Orientations of rain or snow Streaks (HOS), estimated with the method of geometric moments, is computed, which is assumed to follow a model of Gaussian-uniform mixture. The Gaussian distribution represents the orientation of the rain or the snow whereas the uniform distribution represents the orientation of the noise. 
An algorithm of expectation maximization is used to separate these two distributions. Following a goodness-of-fit test, the Gaussian distribution is temporally smoothed and its amplitude allows deciding the presence of rain or snow. When the presence of rain or of snow is detected, the HOS makes it possible to detect the pixels of rain or of snow in the foreground images, and to estimate the intensity of the precipitation of rain or of snow. The applications of the method are numerous and include the detection of critical weather conditions, the observation of weather, the reliability improvement of video-surveillance systems and rain rendering.", "Dynamic weather such as rain and snow causes complex spatio-temporal intensity fluctuations in videos. Such fluctuations can adversely impact vision systems that rely on small image features for tracking, object detection and recognition. While these effects appear to be chaotic in space and time, we show that dynamic weather has a predictable global effect in frequency space. For this, we first develop a model of the shape and appearance of a single rain or snow streak in image space. Detecting individual streaks is difficult even with an accurate appearance model, so we combine the streak model with the statistical characteristics of rain and snow to create a model of the overall effect of dynamic weather in frequency space. Our model is then fit to a video and is used to detect rain or snow streaks first in frequency space, and the detection result is then transferred to image space. Once detected, the amount of rain or snow can be reduced or increased. We demonstrate that our frequency analysis allows for greater accuracy in the removal of dynamic weather and in the performance of feature extraction than previous pixel-based or patch-based methods. We also show that unlike previous techniques, our approach is effective for videos with both scene and camera motions." ] }
1609.02087
2509784253
We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.
Compared with video-based methods, removing rain from individual images is much more challenging, since much less information is available for detecting and removing rain streaks. Single-image based methods have been proposed to deal with this challenging problem, but success is less noticeable than in video-based algorithms, and there is still much room for improvement. To give three examples, in @cite_15 rain streak detection and removal is achieved using kernel regression and non-local mean filtering. In @cite_25 , a related work based on deep learning was introduced to remove static raindrops and dirt spots from pictures taken through windows. However, because it focuses on a specific application, this method uses a different physical model from the one in this paper. As our later comparisons show, this physical model limits its ability to transfer to rain streak removal. In @cite_19 , a generalized low-rank model is proposed; both single-image and video rain removal can be achieved through the spatial and temporal correlations learned by this method.
{ "cite_N": [ "@cite_19", "@cite_15", "@cite_25" ], "mid": [ "2154621477", "2054604489", "2154815154" ], "abstract": [ "In this paper, we propose a novel low-rank appearance model for removing rain streaks. Different from previous work, our method needs neither rain pixel detection nor time-consuming dictionary learning stage. Instead, as rain streaks usually reveal similar and repeated patterns on imaging scene, we propose and generalize a low-rank model from matrix to tensor structure in order to capture the spatio-temporally correlated rain streaks. With the appearance model, we thus remove rain streaks from image video (and also other high-order image structure) in a unified way. Our experimental results demonstrate competitive (or even better) visual quality and efficient run-time in comparison with state of the art.", "An adaptive rain streak removal algorithm for a single image is proposed in this work. We observe that a typical rain streak has an elongated elliptical shape with a vertical orientation. Thus, we first detect rain streak regions by analyzing the rotation angle and the aspect ratio of the elliptical kernel at each pixel location. We then perform the nonlocal means filtering on the detected rain streak regions by selecting nonlocal neighbor pixels and their weights adaptively. Experimental results demonstrate that the proposed algorithm removes rain streaks more efficiently and provides higher restored image qualities than conventional algorithms.", "Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. 
Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean/corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions." ] }