aid
stringlengths
9
15
mid
stringlengths
7
10
abstract
stringlengths
78
2.56k
related_work
stringlengths
92
1.77k
ref_abstract
dict
1606.02193
2409923045
Monitoring Wireless Sensor Networks (WSNs) are composed of sensor nodes that report temperature, relative humidity, and other environmental parameters. The time between two successive measurements is a critical parameter to set during the WSN configuration because it can impact the WSN's lifetime, the wireless medium contention and the quality of the reported data. As trends in monitored parameters can significantly vary between scenarios and over time, identifying a sampling interval suitable for several cases is also challenging. In this work, we propose a dynamic sampling rate adaptation scheme based on reinforcement learning, able to tune sensors' sampling interval on-the-fly, according to environmental conditions and application requirements. The primary goal is to set the sampling interval to the best value possible so as to avoid oversampling and save energy, while not missing environmental changes that can be relevant for the application. In simulations, our mechanism could reduce the total number of transmissions by up to 73% compared to a fixed strategy and, simultaneously, keep the average quality of information provided by the WSN. The inherent flexibility of the reinforcement learning algorithm facilitates its use in several scenarios, so as to exploit the broad scope of the Internet of Things.
Our work does not necessarily rely on the computational capacity of sensor nodes, because the incorporation of WSNs into the IoT allows the use of external entities and cloud computing services that can perform powerful machine learning techniques over sensed data if needed @cite_1 . To the best of our knowledge, this is the first approach that dynamically adapts the sampling interval of sensor nodes based on a reinforcement learning technique.
{ "cite_N": [ "@cite_1" ], "mid": [ "2416983539" ], "abstract": [ "Wireless sensor networks (WSNs) have been adopted as merely data producers for years. However, the data collected by WSNs can also be used to manage their operation and avoid unnecessary measurements that do not provide any new knowledge about the environment. The benefits are twofold because wireless sensor nodes may save their limited energy resources and also reduce the wireless medium occupancy. We present a self-managed platform that collects and stores data from sensor nodes, analyzes its contents and uses the built knowledge to adjust the operation of the entire network. The system architecture facilitates the incorporation of traditional WSNs into the Internet of Things by abstracting the lower communication layers and allowing decisions based on the data relevance. Finally, we demonstrate the platform optimizing a WSN's operation at runtime, based on different real-time data analysis." ] }
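The sampling-interval adaptation described in this row's related-work text can be sketched as a small Q-learning loop. The states, candidate intervals, and reward shape below are illustrative assumptions, not the paper's actual formulation: the reward rewards long (energy-saving) intervals but penalises them while the environment is changing.

```python
import random

INTERVALS = [30, 60, 120, 300]          # hypothetical candidate sampling intervals (s)
STATES = ["stable", "changing"]         # coarse summary of recent readings (assumption)

Q = {(s, a): 0.0 for s in STATES for a in INTERVALS}
alpha, gamma, eps = 0.1, 0.9, 0.2       # learning rate, discount, exploration rate

def choose(state):
    # Epsilon-greedy action selection over the candidate intervals.
    if random.random() < eps:
        return random.choice(INTERVALS)
    return max(INTERVALS, key=lambda a: Q[(state, a)])

def reward(state, interval):
    # Long intervals save energy, but are penalised while the signal changes.
    saving = interval / max(INTERVALS)
    missed = 1.0 if (state == "changing" and interval > 60) else 0.0
    return saving - 2.0 * missed

def update(state, action, r, next_state):
    # Standard one-step Q-learning update.
    best_next = max(Q[(next_state, a)] for a in INTERVALS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])

random.seed(0)
state = "stable"
for _ in range(2000):
    a = choose(state)
    next_state = random.choice(STATES)  # stand-in for observed environment dynamics
    update(state, a, reward(state, a), next_state)
    state = next_state
```

After training, the greedy policy picks the longest interval in the "stable" state and a short one while "changing", which is the oversampling-versus-missed-changes trade-off the abstract describes.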
1606.02601
2952928559
This paper presents a joint model for performing unsupervised morphological analysis on words, and learning a character-level composition function from morphemes to word embeddings. Our model splits individual words into segments, and weights each segment according to its ability to predict context words. Our morphological analysis is comparable to dedicated morphological analyzers at the task of morpheme boundary recovery, and also performs better than word-based embedding models at the task of syntactic analogy answering. Finally, we show that incorporating morphology explicitly into character-level models helps them produce embeddings for unseen words which correlate better with human judgments.
Unsupervised morphology induction aims to decide whether two words are morphologically related or to generate a morphological analysis for a word @cite_19 @cite_30 . While such methods may use semantic insights to perform the morphological analysis @cite_10 , they typically are not concerned with obtaining a semantic representation for morphemes, nor for the resulting word.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_10" ], "mid": [ "2101711363", "2159399018", "2294970769" ], "abstract": [ "This study reports the results of using minimum description length (MDL) analysis to model unsupervised learning of the morphological segmentation of European languages, using corpora ranging in size from 5,000 words to 500,000 words. We develop a set of heuristics that rapidly develop a probabilistic morphological grammar, and use MDL as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not. The resulting grammar matches well the analysis that would be developed by a human morphologist.In the final section, we discuss the relationship of this style of MDL grammatical analysis to the notion of evaluation metric in early generative grammar.", "Standard statistical models of language fail to capture one of the most striking properties of natural languages: the power-law distribution in the frequencies of word tokens. We present a framework for developing statistical models that generically produce power-laws, augmenting standard generative models with an adaptor that produces the appropriate pattern of token frequencies. We show that taking a particular stochastic process - the Pitman-Yor process - as an adaptor justifies the appearance of type frequencies in formal analyses of natural language, and improves the performance of a model for unsupervised learning of morphology.", "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages." ] }
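The vector-space idea behind the third cited work (regularities in high-dimensional embedding spaces revealing morphological rules) can be illustrated with synthetic embeddings: word pairs related by the same morphological transformation share a direction, while unrelated pairs do not. The embeddings and the "add -s" rule below are toy assumptions, not data from any of the cited systems.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
plural_dir = rng.normal(size=dim)           # shared "singular -> plural" offset (toy)

emb = {}
for word in ["car", "dog", "cat"]:
    base = rng.normal(size=dim)
    emb[word] = base
    # Plural forms sit near base + plural_dir, plus a little noise.
    emb[word + "s"] = base + plural_dir + 0.05 * rng.normal(size=dim)

def offset(a, b):
    return emb[b] - emb[a]

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Offsets of two plural pairs align closely; an unrelated pair does not.
print(cos(offset("car", "cars"), offset("dog", "dogs")))  # close to 1
print(cos(offset("car", "cars"), offset("cat", "dog")))   # near 0
```

A morphology inducer built on this observation clusters consistent offset directions and reads each cluster as a candidate transformation rule.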
1606.02601
2952928559
This paper presents a joint model for performing unsupervised morphological analysis on words, and learning a character-level composition function from morphemes to word embeddings. Our model splits individual words into segments, and weights each segment according to its ability to predict context words. Our morphological analysis is comparable to dedicated morphological analyzers at the task of morpheme boundary recovery, and also performs better than word-based embedding models at the task of syntactic analogy answering. Finally, we show that incorporating morphology explicitly into character-level models helps them produce embeddings for unseen words which correlate better with human judgments.
Excitingly, character-level models seem to capture morphological effects. Examining nearest neighbours of morphologically complex words in character-aware models often shows other words with the same morphology @cite_0 @cite_12 . Furthermore, morphosyntactic features such as capitalization and suffix information have long been used in tasks such as POS tagging @cite_9 @cite_28 . By explicitly modelling these features, one might expect good performance gains in many NLP tasks.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_28", "@cite_12" ], "mid": [ "1899794420", "2250489794", "1996430422", "" ], "abstract": [ "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form‐function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).", "Recent work on supertagging using a feedforward neural network achieved significant improvements for CCG supertagging and parsing (Lewis and Steedman, 2014). However, their architecture is limited to considering local contexts and does not naturally model sequences of arbitrary length. In this paper, we show how directly capturing sequence information using a recurrent neural network leads to further accuracy improvements for both supertagging (up to 1.9%) and parsing (up to 1% F1), on CCGBank, Wikipedia and biomedical text.", "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "" ] }
1606.02311
2403060820
Giving a formal semantics to a UML Activity Diagram (UML AD) is a hard task. The reason for this difficulty is the ambiguity and the absence of a precise formal semantics for such a semi-formal formalism. A variety of semantics tackling the aspects covered by this language exist in the literature; denotational, functional and compositional semantics are examples. Following the recent tendency to give a heterogeneous semantics to UML diagrams, we aim to define an algebraic presentation of the semantics of UML AD. In this work, we define a formal semantics of UML 2.0 AD based on institution theory. UML AD is a graphical language with no precise formal semantics of its own. We use institution theory to define the intended semantics; thus, the UML AD formalism is defined in its own natural semantics.
The second category of works focuses on the use of institution theory for the specification of graphical formalisms such as UML diagrams. In this category, we mention the work presented in @cite_16 @cite_3 @cite_11 @cite_12 . The approach aims to define a semantics for UML class diagrams, UML interaction diagrams and OCL. Each diagram is described in its natural semantics thanks to the algebraic formalization of each formalism. In addition, relations between diagrams are expressed via institution morphisms and comorphisms. We note that this approach is inspired by Mossakowski's work in the heterogeneous institution setting.
{ "cite_N": [ "@cite_16", "@cite_12", "@cite_3", "@cite_11" ], "mid": [ "1871900403", "2758015226", "", "2758246041" ], "abstract": [ "UML models consist of several diagrams of different types describing different views of a software system ranging from specifications of the static system structure to descriptions of system snapshots and dynamic behaviour. In this paper a heterogeneous approach to the semantics of UML is proposed where each diagram type can be described in its \"natural\" semantics, and the relations between diagram types are expressed by appropriate translations. More formally, the UML family of diagram types is represented as a \"heterogeneous institution environment\": each diagram type is described as an appropriate institution where typically the data structures occurring in a diagram are represented by signature elements whereas the relationships between data and the dynamic behaviour of objects are captured by sentences; in several cases, the diagrams are themselves the sentences. The relationship between two diagram types is described by a so-called institution comorphism, and in case no institution comorphism exists, by a co-span of such comorphisms. Consistency conditions between different diagrams are derived from the comorphism translations. This heterogeneous semantic approach to UML is illustrated by several example diagram types including class diagrams, OCL, and interaction diagrams.", "This work presents the theory of UML 2.0 static structures (or class diagrams), that is proven to define an institution.", "", "This work presents the theory of UML 2.0 interactions, that is proven to define an institution." ] }
1606.02311
2403060820
Giving a formal semantics to a UML Activity Diagram (UML AD) is a hard task. The reason for this difficulty is the ambiguity and the absence of a precise formal semantics for such a semi-formal formalism. A variety of semantics tackling the aspects covered by this language exist in the literature; denotational, functional and compositional semantics are examples. Following the recent tendency to give a heterogeneous semantics to UML diagrams, we aim to define an algebraic presentation of the semantics of UML AD. In this work, we define a formal semantics of UML 2.0 AD based on institution theory. UML AD is a graphical language with no precise formal semantics of its own. We use institution theory to define the intended semantics; thus, the UML AD formalism is defined in its own natural semantics.
The third category of works uses this theory for a specific purpose and a precise case study. The work in @cite_6 is a good representative of this category: its authors define a heterogeneous framework for service-oriented systems using institution theory. They aim at a heterogeneous specification approach for service-oriented architecture (SOA). The developed framework consists of several individual service specifications written in a local logic; the specification of their interactions is written in a global logic. The two logics are described via institution theory, and an institution comorphism links the two institutions. This approach is inspired by the work of Mossakowski. Another work is developed in @cite_5 , where the authors propose to use institutions to represent the logics underlying OWL and Z. They then propose a formal semantics for the transformation of OWL to Z specifications via an institution comorphism.
{ "cite_N": [ "@cite_5", "@cite_6" ], "mid": [ "1513793174", "2053569992" ], "abstract": [ "Checking for properties of Web ontologies is important for the development of reliable Semantic Web systems. Software specification and verification tools can be used to complement the Knowledge Representation tools in reasoning about Semantic Web. The key to this approach is to develop sound transformation techniques from Web ontology to software specifications so that the associated verification tools can be applied to check the transformed specification models. Our previous work has demonstrated a practical approach to translating Web ontologies to Z specifications. However, from a sound engineering point of view, the translation is lacking the theoretical work that can formally relate the respective underlying logical systems of OWL and Z. In this paper, we take the advantage that the logics underlying OWL and Z can be represented as institutions and we show that the institution comorphism provides a formal semantic foundation for the transformation from OWL to Z.", "Service-oriented architecture (SOA) is a relatively new approach to software system development. It divides system functionality to independent, loosely coupled, interoperable services. In this paper we propose a new heterogeneous specification approach for SOA systems where a heterogeneous structured specification consists of a number of specifications of individual services written in a \"local\" logic and where the specification of their interactions is separately described in a \"global\" logic. A main feature of our global logic is the possibility of describing the dynamic change of service communications over time. Our approach is based on the theory of institutions: we show that both logics form institutions and that these institutions are connected by an institution comorphism. 
We illustrate our approach by a simple scenario of an e-university management system and show the power of the heterogeneous specification approach by a compositional refinement of the scenario." ] }
1606.02314
2409026712
The ability to construct domain specific knowledge graphs (KG) and perform question-answering or hypothesis generation is a transformative capability. Despite their value, automated construction of knowledge graphs remains an expensive technical challenge that is beyond the reach for most enterprises and academic institutions. We propose an end-to-end framework for developing custom knowledge graph driven analytics for arbitrary application domains. The uniqueness of our system lies A) in its combination of curated KGs along with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.
Knowledge graphs, with their ability to represent complex relationships between real-world entities, have become the de facto standard for storing KBs. Knowledge graph construction from web data @cite_8 @cite_4 @cite_2 @cite_17 has been studied comprehensively over the last decade. @cite_10 provide an in-depth discussion of the process and associated challenges. Openly available KBs like YAGO @cite_14 , Freebase @cite_18 and NELL @cite_12 provide massive amounts of highly confident triples. We view these general-purpose KBs as complementary, to be used in conjunction with NOUS to build a custom domain KG. Most of these KGs or frameworks are limited in their querying capabilities. While algorithms for querying or mining dynamic graphs have been studied @cite_0 , much of that research does not address knowledge-graph-specific issues.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_8", "@cite_0", "@cite_2", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2094728533", "2022166150", "", "2016753842", "2964293433", "", "2049770245", "1512387364", "" ], "abstract": [ "Freebase is a practical, scalable tuple database used to structure general human knowledge. The data in Freebase is collaboratively created, structured, and maintained. Freebase currently contains more than 125,000,000 tuples, more than 4000 types, and more than 7000 properties. Public read write access to Freebase is allowed through an HTTP-based graph-query API using the Metaweb Query Language (MQL) as a data query and manipulation language. MQL provides an easy-to-use object-oriented interface to the tuple data in Freebase and is designed to facilitate the creation of collaborative, Web-based data-oriented applications.", "We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as HASONEPRIZE). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships - and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. 
Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.", "", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "Cyber security is one of the most significant technical challenges in current times. Detecting adversarial activities, prevention of theft of intellectual properties and customer data is a high priority for corporations and government agencies around the world. Cyber defenders need to analyze massive-scale, high-resolution network flows to identify, categorize, and mitigate attacks involving networks spanning institutional and national boundaries. Many of the cyber attacks can be described as subgraph patterns, with prominent examples being insider infiltrations (path queries), denial of service (parallel paths) and malicious spreads (tree queries). This motivates us to explore subgraph matching on streaming graphs in a continuous setting. 
The novelty of our work lies in using the subgraph distributional statistics collected from the streaming graph to determine the query processing strategy. We introduce a \"Lazy Search\" algorithm where the search strategy is decided on a vertex-to-vertex basis depending on the likelihood of a match in the vertex neighborhood. We also propose a metric named \"Relative Selectivity\" that is used to select between different query processing strategies. Our experiments performed on real online news, network traffic stream and a synthetic social network benchmark demonstrate 10-100x speedups over selectivity agnostic approaches.", "", "A knowledge base (KB) contains a set of concepts, instances, and relationships. Over the past decade, numerous KBs have been built, and used to power a growing array of applications. Despite this flurry of activities, however, surprisingly little has been published about the end-to-end process of building, maintaining, and using such KBs in industry. In this paper we describe such a process. In particular, we describe how we build, update, and curate a large KB at Kosmix, a Bay Area startup, and later at WalmartLabs, a development and research lab of Walmart. We discuss how we use this KB to power a range of applications, including query understanding, Deep Web search, in-context advertising, event monitoring in social media, product search, social gifting, and social mining. Finally, we discuss how the KB team is organized, and the lessons learned. 
Our goal with this paper is to provide a real-world case study, and to contribute to the emerging direction of building, maintaining, and using knowledge bases for data management applications.", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74% after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.", "" ] }
1606.02479
2408862202
In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on similarity between the values taken by the signal at the corresponding vertices. One benefit of using graphs is that they allow the definition of ad hoc operators to perform signal processing. Among these, translations are of paramount importance in many tasks. In this paper we are interested in defining translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match the usual translations on grid graphs.
In @cite_5 , the authors propose to define a convolution operator first. To this end, they use the Laplacian matrix of the graph, defined as @math , where @math is the diagonal matrix such that @math is the number of vertices the @math -th vertex is connected to. Since @math is symmetric, so is @math , and it can be decomposed as @math , where @math is an orthonormal matrix, @math is a diagonal matrix and @math denotes the transpose operator. They then define @math , called the graph Fourier transform of @math , and @math , termed the inverse graph Fourier transform of @math . The authors introduce a convolution operator for two signals @math and @math as @math , where @math denotes the elementwise product of vectors. They particularize the translation by convolving @math with a signal @math in the canonical basis. Thus, a translation is not defined by a "shift" but by a destination vertex @math .
{ "cite_N": [ "@cite_5" ], "mid": [ "2101491865" ], "abstract": [ "In applications such as social, energy, transportation, sensor, and neuronal networks, high-dimensional data naturally reside on the vertices of weighted graphs. The emerging field of signal processing on graphs merges algebraic and spectral graph theoretic concepts with computational harmonic analysis to process such signals on graphs. In this tutorial overview, we outline the main challenges of the area, discuss different ways to define graph spectral domains, which are the analogs to the classical frequency domain, and highlight the importance of incorporating the irregular structures of graph data domains when processing signals on graphs. We then review methods to generalize fundamental operations such as filtering, translation, modulation, dilation, and downsampling to the graph setting and survey the localized, multiscale transforms that have been proposed to efficiently extract information from high-dimensional data on graphs. We conclude with a brief discussion of open issues and possible extensions." ] }
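The construction summarised in this row's related-work text can be sketched numerically on a small cycle graph: build the combinatorial Laplacian L = D - A, diagonalise it to get the graph Fourier transform, define convolution as an elementwise product in the spectral domain, and realise "translation" as convolution with a delta at the destination vertex. This is a generic illustration of that recipe, not code from the cited work.

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                      # adjacency of a 6-cycle
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
D = np.diag(A.sum(axis=1))              # degree matrix
L = D - A                               # combinatorial Laplacian

lam, U = np.linalg.eigh(L)              # L = U diag(lam) U^T, U orthonormal

def gft(x):
    return U.T @ x                      # graph Fourier transform

def igft(xh):
    return U @ xh                       # inverse graph Fourier transform

def conv(x, y):
    return igft(gft(x) * gft(y))        # elementwise product in spectral domain

x = np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.0])
delta = np.zeros(n)
delta[3] = 1.0                          # canonical-basis signal at vertex 3
t = conv(x, delta)                      # "translation" of x toward vertex 3
```

Note that eigenvalues of the cycle Laplacian come in repeated pairs, so the eigenbasis returned by `eigh` is not unique within those eigenspaces; this is one reason the paper argues such spectral translations need not match the usual shift.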
1606.02479
2408862202
In many domains (e.g. Internet of Things, neuroimaging) signals are naturally supported on graphs. These graphs usually convey information on similarity between the values taken by the signal at the corresponding vertices. One benefit of using graphs is that they allow the definition of ad hoc operators to perform signal processing. Among these, translations are of paramount importance in many tasks. In this paper we are interested in defining translations on graphs using a few simple properties. Namely, we propose to define translations as functions from vertices to adjacent ones that preserve neighborhood properties of the graph. We show that our definitions, contrary to other works on the subject, match the usual translations on grid graphs.
In @cite_0 , the authors focus on defining an operator that is isometric with respect to the @math -norm. The translation of a signal they propose consists in multiplying the signal by a matrix that is the exponential of an imaginary diagonal matrix.
{ "cite_N": [ "@cite_0" ], "mid": [ "356566955" ], "abstract": [ "We extend the concept of stationary temporal signals to stationary graph signals. Doing so, we introduce the concept of strict sense stationary and wide sense stationary graph signals as a statistical invariance through an isometric graph translation operator. Using these definitions, we propose a spectral characterisation of WSS graph signals allowing to study stationarity using only the spectral components of a graph signal. Finally, we apply this characterisation on a synthetic graph in order to study a few important stochastic graph signals. Also, using geographic data, we study weather readings on a graph of weather stations and show evidence of stationarity in the temperature readings." ] }
1606.02276
2341242007
The impact of culture on visual emotion perception has recently captured the attention of multimedia research. In this study, we provide powerful computational linguistics tools to explore, retrieve and browse a dataset of 16K multilingual affective visual concepts and 7.3M Flickr images. First, we design an effective crowdsourcing experiment to collect human judgements of sentiment connected to the visual concepts. We then use word embeddings to represent these concepts in a low dimensional vector space, allowing us to expand the meaning around concepts, and thus enabling insight about commonalities and differences among different languages. We compare a variety of concept representations through a novel evaluation task based on the notion of visual semantic relatedness. Based on these representations, we design clustering schemes to group multilingual visual concepts, and evaluate them with novel metrics based on the crowdsourced sentiment annotations as well as visual semantic relatedness. The proposed clustering framework enables us to analyze the full multilingual dataset in-depth and also show an application on a facial data subset, exploring cultural insights of portrait-related affective visual concepts.
Research on distributed word representations @cite_17 @cite_21 @cite_19 @cite_26 has recently been extended to multiple languages, using either bilingual word alignments or parallel corpora to transfer linguistic information across languages. For instance, @cite_6 proposed to learn distributed representations of words across languages using a multilingual corpus from Wikipedia. @cite_0 @cite_25 proposed to learn bilingual embeddings in the context of neural language models utilizing multilingual word alignments. @cite_12 proposed to learn joint-space embeddings across multiple languages without relying on word alignments. Similarly, @cite_15 proposed autoencoder-based methods to learn multilingual word embeddings. A limitation when dealing with many languages is the scarcity of data for all language pairs. In this study, we use a pivot language to align the multiple languages via machine translation.
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_6", "@cite_0", "@cite_19", "@cite_15", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2250539671", "2158899491", "1523296404", "2251033195", "1614298861", "2950682695", "2118090838", "2142074148", "2158139315" ], "abstract": [ "Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. 
This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "Distributed word representations (word embeddings) have recently contributed to competitive performance in language modeling and several NLP tasks. In this work, we train word embeddings for more than 100 languages using their corresponding Wikipedias. We quantitatively demonstrate the utility of our word embeddings by using them as the sole features for training a part of speech tagger for a subset of these languages. We find their performance to be competitive with near state-of-art methods in English, Danish and Swedish. Moreover, we investigate the semantic features captured by these embeddings through the proximity of word groupings. We will release these embeddings publicly to help researchers in the development and enhancement of multilingual applications.", "Distributed representations of words have proven extremely useful in numerous natural language processing tasks. Their appeal is that they can help alleviate data sparsity problems common to supervised learning. Methods for inducing these representations require only unlabeled language data, which are plentiful for many natural languages. In this work, we induce distributed representations for a pair of languages jointly. We treat it as a multitask learning problem where each task corresponds to a single word, and task relatedness is derived from co-occurrence statistics in bilingual parallel data. These representations can be used for a number of crosslingual learning tasks, where a learner can be trained on annotations present in one language and applied to test data in another. We show that our representations are informative by using them for crosslingual document classification, where classifiers trained on these representations substantially outperform strong baselines (e.g. 
machine translation) when applied to a new language.", "", "Cross-language learning allows us to use training data from one language to build models for a different language. Many approaches to bilingual learning require that we have word-level alignment of sentences from parallel corpora. In this work we explore the use of autoencoder-based methods for cross-language learning of vectorial word representations that are aligned between two languages, while not relying on word-level alignments. We show that by simply learning to reconstruct the bag-of-words representations of aligned sentences, within and between languages, we can in fact learn high-quality representations and do without word alignments. Since training autoencoders on word observations presents certain computational issues, we propose and compare different variations adapted to this setting. We also propose an explicit correlation maximizing regularizer that leads to significant improvement in the performance. We empirically investigate the success of our approach on the problem of cross-language test classification, where a classifier trained on a given language (e.g., English) must learn to generalize to a different language (e.g., German). These experiments demonstrate that our approaches are competitive with the state-of-the-art, achieving up to 10-14 percentage point improvements over the best reported results on this task.", "We introduce bilingual word embeddings: semantic embeddings associated across two languages in the context of neural language models. We propose a method to learn bilingual embeddings from a large unlabeled corpus, while utilizing MT word alignments to constrain translational equivalence. The new embeddings significantly out-perform baselines in word semantic similarity. 
A single semantic similarity feature induced with bilingual embeddings adds near half a BLEU point to the results of NIST08 Chinese-English machine translation task.", "We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.", "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs" ] }
1606.02276
2341242007
The impact of culture in visual emotion perception has recently captured the attention of multimedia research. In this study, we provide powerful computational linguistics tools to explore, retrieve and browse a dataset of 16K multilingual affective visual concepts and 7.3M Flickr images. First, we design an effective crowdsourcing experiment to collect human judgements of sentiment connected to the visual concepts. We then use word embeddings to represent these concepts in a low dimensional vector space, allowing us to expand the meaning around concepts, and thus enabling insight about commonalities and differences among different languages. We compare a variety of concept representations through a novel evaluation task based on the notion of visual semantic relatedness. Based on these representations, we design clustering schemes to group multilingual visual concepts, and evaluate them with novel metrics based on the crowdsourced sentiment annotations as well as visual semantic relatedness. The proposed clustering framework enables us to analyze the full multilingual dataset in-depth and also show an application on a facial data subset, exploring cultural insights of portrait-related affective visual concepts.
To our knowledge, the visual context of sentiment concepts for evaluating multilingual embeddings has not been considered before. However, studies on multimodal distributional semantics have combined visual and textual features to learn more informed word embeddings, using notions of semantics @cite_29 @cite_1 and visual similarity to evaluate word embeddings @cite_23 @cite_5 . Furthermore, there are studies which have combined language and vision for image caption generation and retrieval @cite_7 @cite_9 @cite_4 @cite_16 based on multimodal neural language models. We argue that our evaluation metric is a rich resource for learning more informed multimodal embeddings, which can benefit these systems. Perhaps the study most related to ours is @cite_28 , which aimed to learn visually grounded word embeddings that capture visual notions of semantic relatedness using abstract visual scenes. The differences are that we focus on sentiment concepts, and we define visual semantic relatedness based on real-world images annotated by community users of Flickr rather than abstract scenes.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_28", "@cite_29", "@cite_9", "@cite_1", "@cite_23", "@cite_5", "@cite_16" ], "mid": [ "2149557440", "2953276893", "2173294139", "1854884267", "1527575280", "2112184938", "2250742840", "2963871344", "2159243025" ], "abstract": [ "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.", "We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. 
Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.", "We propose a model to learn visually grounded word embeddings (vis-w2v) to capture visual notions of semantic relatedness. While word embeddings trained using text have been extremely successful, they cannot uncover notions of semantic relatedness implicit in our visual world. For instance, although \"eats\" and \"stares at\" seem unrelated in text, they share semantics visually. When people are eating something, they also tend to stare at the food. Grounding diverse relations like \"eats\" and \"stares at\" into vision remains challenging, despite recent progress in vision. We note that the visual grounding of words depends on semantics, and not the literal pixels. We thus use abstract scenes created from clipart to provide the visual grounding. We find that the embeddings we learn capture fine-grained, visually grounded notions of semantic relatedness. We show improvements over text-only word embeddings (word2vec) on three tasks: common-sense assertion classification, visual paraphrasing and text-based image retrieval. Our code and datasets are available online.", "We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness so that pairs of entities that are associated but not actually similar (Freud, psychology) have a low rating. 
We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun, and verb pairs, together with an independent rating of concreteness and free association strength for each pair. This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures.", "Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. 
Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison.", "Distributional semantic models derive computational representations of word meaning from the patterns of co-occurrence of words in text. Such models have been a success story of computational linguistics, being able to provide reliable estimates of semantic relatedness for the many semantic tasks requiring them. However, distributional models extract meaning information exclusively from text, which is an extremely impoverished basis compared to the rich perceptual sources that ground human semantic knowledge. We address the lack of perceptual grounding of distributional models by exploiting computer vision techniques that automatically identify discrete \"visual words\" in images, so that the distributional representation of a word can be extended to also encompass its co-occurrence with the visual words of images it is associated with. We propose a flexible architecture to integrate text- and image-based distributional information, and we show in a set of empirical tests that our integrated model is superior to the purely text-based approach, and it provides somewhat complementary semantic information with respect to the latter.", "In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. 
On both tasks, our approach outperforms baselines and related models.", "We extend the SKIP-GRAM model of (2013a) by taking visual information into account. Like SKIP-GRAM, our multimodal models (MMSKIP-GRAM) build vector-based word representations by learning to predict linguistic contexts in text corpora. However, for a restricted set of words, the models are also exposed to visual representations of the objects they denote (extracted from natural images), and must predict linguistic and visual features jointly. The MMSKIP-GRAM models achieve good performance on a variety of semantic benchmarks. Moreover, since they propagate visual information to all words, we use them to improve image labeling and retrieval in the zero-shot setup, where the test concepts are never seen during model training. Finally, the MMSKIP-GRAM models discover intriguing visual properties of abstract words, paving the way to realistic implementations of embodied theories of meaning.", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval." ] }
1606.02518
2410355540
The multivariate normal density is a monotonic function of the distance to the mean, and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to replace this metric with a locally adaptive, smoothly changing (Riemannian) metric that favors regions of high local density. The resulting locally adaptive normal distribution (LAND) is a generalization of the normal distribution to the "manifold" setting, where data is assumed to lie near a potentially low-dimensional manifold embedded in @math . The LAND is parametric, depending only on a mean and a covariance, and is the maximum entropy distribution under the given metric. The underlying metric is, however, non-parametric. We develop a maximum likelihood algorithm to infer the distribution parameters that relies on a combination of gradient descent and Monte Carlo integration. We further extend the LAND to mixture models, and provide the corresponding EM algorithm. We demonstrate the efficiency of the LAND to fit non-trivial probability distributions over both synthetic data, and EEG measurements of human sleep.
We are not the first to consider Riemannian normal distributions, e.g. gives a theoretical analysis of the distribution, and consider the Riemannian counterpart of probabilistic PCA. Both consider the scenario where the manifold is known a priori. We adapt the distribution to the ``manifold learning'' setting by constructing a Riemannian metric that adapts to the data. This is our overarching contribution. Traditionally, manifold learning is seen as an embedding problem where a low-dimensional representation of the data is sought. This is useful for visualization, clustering @cite_4 , semi-supervised learning @cite_16 and more. However, in embedding approaches, the relation between a new point and the embedded points is less well-defined, and consequently these approaches are less suited for building generative models. In contrast, the Riemannian approach gives the ability to measure continuous geodesics that follow the structure of the data. This makes the learned Riemannian manifold a suitable space for a generative model.
{ "cite_N": [ "@cite_16", "@cite_4" ], "mid": [ "2104290444", "2132914434" ], "abstract": [ "We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods including support vector machines and regularized least squares can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally we have a brief discussion of unsupervised and fully supervised learning within our general framework.", "In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. On the first glance spectral clustering appears slightly mysterious, and it is not obvious to see why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed." ] }
1606.02382
2419518659
Recently there has been an increasing trend to use deep learning frameworks for both 2D consumer images and for 3D medical images. However, there has been little effort to use deep frameworks for volumetric vascular segmentation. We wanted to address this by providing a freely available dataset of 12 annotated two-photon vasculature microscopy stacks. We demonstrated the use of deep learning framework consisting both 2D and 3D convolutional filters (ConvNet). Our hybrid 2D-3D architecture produced promising segmentation result. We derived the architectures from who used the ZNN framework initially designed for electron microscope image segmentation. We hope that by sharing our volumetric vasculature datasets, we will inspire other researchers to experiment with vasculature dataset and improve the used network architectures.
There is relatively more work devoted to natural image processing than to biomedical image analysis. In the natural image processing literature, the application corresponding to our biomedical image segmentation is semantic segmentation @cite_3 @cite_83 @cite_147 @cite_122 , also referred to as scene parsing @cite_118 or scene labeling @cite_109 . Semantic segmentation of natural images tries to answer the question ``What is where in your image?'', for example segmenting the ``driver view'' in autonomous driving into road, lanes and other vehicles @cite_195 . In typical semantic segmentation tasks there are many more possible labels than in our two-label segmentation into vessel and non-vessel voxels, further complicating the segmentation.
{ "cite_N": [ "@cite_195", "@cite_118", "@cite_122", "@cite_109", "@cite_3", "@cite_83", "@cite_147" ], "mid": [ "2103328396", "2951277909", "1923697677", "", "2952632681", "1529410181", "2158865742" ], "abstract": [ "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets.", "Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. 
The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. 
Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. 
We share source code implementing the proposed system at this https URL", "Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014." ] }
1606.02382
2419518659
Recently there has been an increasing trend to use deep learning frameworks for both 2D consumer images and for 3D medical images. However, there has been little effort to use deep frameworks for volumetric vascular segmentation. We wanted to address this by providing a freely available dataset of 12 annotated two-photon vasculature microscopy stacks. We demonstrated the use of deep learning framework consisting both 2D and 3D convolutional filters (ConvNet). Our hybrid 2D-3D architecture produced promising segmentation result. We derived the architectures from who used the ZNN framework initially designed for electron microscope image segmentation. We hope that by sharing our volumetric vasculature datasets, we will inspire other researchers to experiment with vasculature dataset and improve the used network architectures.
Deep learning based approaches have been used extensively for volumetric electron microscopy (EM) segmentation @cite_253 @cite_110 @cite_191 @cite_159 @cite_126 . Other biomedical image segmentation tasks tackled with deep learning frameworks include, for example, brain segmentation @cite_12 @cite_219 @cite_222 @cite_223 , prediction of Alzheimer's disease from magnetic resonance imaging (MRI) scans @cite_181 , microscopic cell segmentation @cite_153 , glaucoma detection @cite_15 , computational mammography @cite_19 , pancreas segmentation @cite_19 , bi-ventricular volume estimation @cite_55 , and carotid artery bifurcation detection @cite_161 .
{ "cite_N": [ "@cite_159", "@cite_161", "@cite_55", "@cite_219", "@cite_191", "@cite_19", "@cite_181", "@cite_253", "@cite_126", "@cite_223", "@cite_110", "@cite_153", "@cite_15", "@cite_222", "@cite_12" ], "mid": [ "2951978378", "", "", "", "619933701", "2547496647", "2132587081", "2153378748", "2952232639", "2949314557", "1721069559", "2269649163", "2402050431", "", "1884191083" ], "abstract": [ "Efforts to automate the reconstruction of neural circuits from 3D electron microscopic (EM) brain images are critical for the field of connectomics. An important computation for reconstruction is the detection of neuronal boundaries. Images acquired by serial section EM, a leading 3D EM technique, are highly anisotropic, with inferior quality along the third dimension. For such images, the 2D max-pooling convolutional network has set the standard for performance at boundary detection. Here we achieve a substantial gain in accuracy through three innovations. Following the trend towards deeper networks for object recognition, we use a much deeper network than previously employed for boundary detection. Second, we incorporate 3D as well as 2D filters, to enable computations that use 3D context. Finally, we adopt a recursively trained architecture in which a first network generates a preliminary boundary map that is provided as input along with the original image to a second network that generates a final boundary map. Backpropagation training is accelerated by ZNN, a new implementation of 3D convolutional networks that uses multicore CPU parallelism for speed. 
Our hybrid 2D-3D architecture could be more generally applicable to other types of anisotropic 3D images, including video, and our recursive framework for any image labeling problem.", "", "", "", "To build the connectomics map of the brain, we developed a new algorithm that can automatically refine the Membrane Detection Probability Maps (MDPM) generated to perform automatic segmentation of electron microscopy (EM) images. To achieve this, we executed supervised training of a convolutional neural network to recover the removed center pixel label of patches sampled from a MDPM. MDPM can be generated from other machine learning based algorithms recognizing whether a pixel in an image corresponds to the cell membrane. By iteratively applying this network over MDPM for multiple rounds, we were able to significantly improve membrane segmentation results.", "Automatic tissue classification from medical images is an important step in pathology detection and diagnosis. Here, we deal with mammography images and present a novel supervised deep learning-based framework for region classification into semantically coherent tissues. The proposed method uses Convolutional Neural Network (CNN) to learn discriminative features automatically. We overcome the difficulty involved in a medium-size database by training the CNN in an overlapping patch-wise manner. In order to accelerate the pixel-wise automatic class prediction, we use convolutional layers instead of the classical fully connected layers. This approach results in significantly faster computation, while preserving the classification accuracy. The proposed method was tested on annotated mammography images and demonstrates promising image segmentation and tissue classification results.", "Pattern recognition methods using neuroimaging data for the diagnosis of Alzheimer's disease have been the subject of extensive research in recent years. 
In this paper, we use deep learning methods, and in particular sparse autoencoders and 3D convolutional neural networks, to build an algorithm that can predict the disease status of a patient, based on an MRI scan of the brain. We report on experiments using the ADNI data set involving 2,265 historical scans. We demonstrate that 3D convolutional neural networks outperform several other classifiers reported in the literature and produce state-of-art results.", "Feedforward multilayer networks trained by supervised learning have recently demonstrated state of the art performance on image labeling problems such as boundary prediction and scene parsing. As even very low error rates can limit practical usage of such systems, methods that perform closer to human accuracy remain desirable. In this work, we propose a new type of network with the following properties that address what we hypothesize to be limiting aspects of existing methods: (1) a wide' structure with thousands of features, (2) a large field of view, (3) recursive iterations that exploit statistical dependencies in label space, and (4) a parallelizable architecture that can be trained in a fraction of the time compared to benchmark multilayer convolutional networks. For the specific image labeling problem of boundary prediction, we also introduce a novel example weighting algorithm that improves segmentation accuracy. Experiments in the challenging domain of connectomic reconstruction of neural circuity from 3d electron microscopy data show that these \"Deep And Wide Multiscale Recursive\" (DAWMR) networks lead to new levels of image labeling performance. The highest performing architecture has twelve layers, interwoven supervised and unsupervised stages, and uses an input field of view of 157,464 voxels ( @math ) to make a prediction at each image location. 
We present an associated open source software package that enables the simple and flexible creation of DAWMR networks.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Convolutional Neural Networks (CNNs) can be shifted across 2D images or 3D videos to segment them. They have a fixed input size and typically perceive only small local contexts of the pixels to be classified as foreground or background. In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive the entire spatio-temporal context of each pixel in a few sweeps through all pixels, especially when the RNN is a Long Short-Term Memory (LSTM). Despite these theoretical advantages, however, unlike CNNs, previous MD-LSTM variants were hard to parallelize on GPUs. Here we re-arrange the traditional cuboid order of computations in MD-LSTM in pyramidal fashion. The resulting PyraMiD-LSTM is easy to parallelize, especially for 3D data such as stacks of brain slice images. 
PyraMiD-LSTM achieved best known pixel-wise brain image segmentation results on MRBrainS13 (and competitive results on EM-ISBI12).", "We introduce a new machine learning approach for image segmentation that uses a neural network to model the conditional energy of a segmentation given an image. Our approach, combinatorial energy learning for image segmentation (CELIS) places a particular emphasis on modeling the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the energy function, and for local optimization of this energy in the space of supervoxel agglomerations. We extensively evaluate our method on a publicly available 3-D microscopy dataset with 25 billion voxels of ground truth data. On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20 as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.", "Motivation: High-content screening (HCS) technologies have enabled large scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day and their utility depends on automated image analysis. Recently, deep learning approaches that learn feature representations directly from pixel intensity values have dominated object recognition challenges. These tasks typically have a single centered object per image and existing models are not directly applicable to microscopy datasets. Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole image level annotations. 
Results: We introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We base our approach on the similarity between the aggregation function used in MIL and pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs using whole microscopy images with image level labels. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps. Availability and implementation: Torch7 implementation available upon request. Contact: ac.otnorotu.liam@suark.nero", "Glaucoma is a chronic and irreversible eye disease in which the optic nerve is progressively damaged, leading to deterioration in vision and quality of life. In this paper, we present an Automatic feature Learning for glAucoma Detection based on Deep LearnINg (ALADDIN), with deep convolutional neural network (CNN) for feature learning. Different from the traditional convolutional layer that uses linear filters followed by a nonlinear activation function to scan the input, the adopted network embeds micro neural networks (multilayer perceptron) with more complex structures to abstract the data within the receptive field. Moreover, a contextualizing deep learning structure is proposed in order to obtain a hierarchical representation of fundus images to discriminate between glaucoma and non-glaucoma pattern, where the network takes the outputs from other CNN as the context information to boost the performance. Extensive experiments are performed on the ORIGA and SCES datasets. 
The results show area under curve (AUC) of the receiver operating characteristic curve in glaucoma detection at 0.838 and 0.898 in the two databases, much better than state-of-the-art algorithms. The method could be used for glaucoma diagnosis.", "", "Abstract In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we’ve found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer which allows a 40 fold speed up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test data-set reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster." ] }
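Several abstracts in the record above (the mammography and boundary-detection entries) describe patch-wise training, where each pixel is classified from a small window centred on it. The following is a minimal plain-Python sketch of that sliding-window scheme; `classify_patch` stands in for the trained CNN, and the replicate padding at image borders and the toy mean-threshold classifier used below are illustrative assumptions, not any cited paper's actual model.

```python
def patchwise_segment(image, classify_patch, k=1):
    """Label every pixel of a 2-D image from the (2k+1) x (2k+1) patch
    centred on it, clamping indices at the borders (replicate padding)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [
                [image[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                 for dj in range(-k, k + 1)]
                for di in range(-k, k + 1)
            ]
            out[i][j] = classify_patch(patch)  # a CNN forward pass in practice
    return out
```

Overlapping-patch training, as in the mammography abstract, corresponds to running this with a stride of one; replacing fully connected layers with convolutional ones, as that abstract notes, avoids the redundant per-patch computation.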
1606.02012
2419292002
We investigate the potential of attention-based neural machine translation in simultaneous translation. We introduce a novel decoding algorithm, called simultaneous greedy decoding, that allows an existing neural machine translation model to begin translating before a full source sentence is received. This approach is unique from previous works on simultaneous translation in that segmentation and translation are done jointly to maximize the translation quality and that translating each segment is strongly conditioned on all the previous segments. This paper presents a first step toward building a full simultaneous translation system based on neural machine translation.
Another major contribution of @cite_8 was to let the policy predict the final verb of a source sentence, which is especially useful when translating from a verb-final language (e.g., German) into another type of language. They do so by building a separate verb-conditioned @math -gram language model over all possible source prefixes. This prediction, explicit in @cite_8 , is done only implicitly in neural machine translation, where the decoder acts as a strong language model.
{ "cite_N": [ "@cite_8" ], "mid": [ "2251955814" ], "abstract": [ "We introduce a reinforcement learningbased approach to simultaneous machine translation—producing a translation while receiving input words— between languages with drastically different word orders: from verb-final languages (e.g., German) to verb-medial languages (English). In traditional machine translation, a translator must “wait” for source material to appear before translation begins. We remove this bottleneck by predicting the final verb in advance. We use reinforcement learning to learn when to trust predictions about unseen, future portions of the sentence. We also introduce an evaluation metric to measure expeditiousness and quality. We show that our new translation model outperforms batch and monotone translation strategies." ] }
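As a concrete picture of the decoding loop described in the two records above: the decoder interleaves READ actions (consume one more source token) with WRITE actions (emit a target token) whenever it is sufficiently confident given the prefix read so far. The sketch below is a heavily simplified, word-level and monotone stand-in — `translate_word` and `confidence` are hypothetical callables, whereas the actual systems condition each segment on all previous segments with a full attention-based model.

```python
def simultaneous_greedy_decode(source_stream, translate_word, confidence,
                               threshold=0.5):
    """Greedy READ/WRITE loop: write while confident, otherwise read more
    source; flush the remainder once the source is exhausted."""
    read, written = [], []
    stream = iter(source_stream)
    exhausted = False
    while True:
        # WRITE while the model is confident given the prefix read so far
        while len(written) < len(read) and confidence(read, written) >= threshold:
            written.append(translate_word(read[len(written)]))
        if exhausted and len(written) == len(read):
            break
        try:
            read.append(next(stream))  # READ one more source token
        except StopIteration:
            exhausted = True
            while len(written) < len(read):  # flush: full source now available
                written.append(translate_word(read[len(written)]))
    return written
```

With a confidence function that is never satisfied, the loop degenerates into conventional batch translation (wait for the whole source, then translate), which is the baseline the simultaneous approach improves on in latency.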
1606.01735
2412879760
Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for perceptual problems together, solving them efficiently and coherently in an . In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call , in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.
MTL has been shown to improve results in many computer vision problems. Typically, researchers incorporate auxiliary tasks into their target tasks, jointly train them in parallel and achieve performance gains in object tracking @cite_25 , object detection @cite_19 and facial landmark detection @cite_0 . Differently, Dai @cite_29 propose multi-task network cascades in which convolutional layer parameters are shared between three tasks and the tasks are predicted sequentially. Unlike @cite_29 , our method can train multiple tasks in parallel and does not require a predefined task execution order.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_29", "@cite_25" ], "mid": [ "1896424170", "", "2949295283", "2102674365" ], "abstract": [ "Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].", "", "Semantic segmentation research has recently witnessed rapid progress, but many leading methods are unable to identify object instances. In this paper, we present Multi-task Network Cascades for instance-aware semantic segmentation. Our model consists of three networks, respectively differentiating instances, estimating masks, and categorizing objects. These networks form a cascaded structure, and are designed to share their convolutional features. We develop an algorithm for the nontrivial end-to-end training of this causal, cascaded structure. Our solution is a clean, single-step training framework and can be generalized to cascades that have more stages. We demonstrate state-of-the-art instance-aware semantic segmentation accuracy on PASCAL VOC. 
Meanwhile, our method takes only 360ms testing an image using VGG-16, which is two orders of magnitude faster than previous systems for this challenging problem. As a by product, our method also achieves compelling object detection results which surpass the competitive Fast Faster R-CNN systems. The method described in this paper is the foundation of our submissions to the MS COCO 2015 segmentation competition, where we won the 1st place.", "In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing @math mixed norms @math and @math we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular @math tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intel 33(11):2259---2272, 2011) is a special case of our MTT formulation (denoted as the @math tracker) when @math Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. 
The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers." ] }
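The contrast drawn in the paragraph above — parallel joint training with shared parameters versus sequential cascades — reduces to a simple pattern: one shared feature extractor feeds independent task heads, and all heads are trained at once by summing their losses. A toy numeric sketch of that hard parameter sharing (the quadratic features, linear heads and squared losses are illustrative assumptions, not any cited architecture):

```python
def shared_features(x):
    # shared "trunk": every task sees the same representation of x
    return (x, x * x)

def task_head(weights, feats):
    # task-specific head: a linear read-out of the shared features
    return sum(wi * fi for wi, fi in zip(weights, feats))

def multitask_loss(x, targets, heads):
    # parallel joint training: the total loss is the sum of per-task losses,
    # so one gradient step updates the shared trunk for all tasks at once
    feats = shared_features(x)
    return sum((task_head(w, feats) - t) ** 2 for w, t in zip(heads, targets))
```

Because the trunk receives gradients from every head, improving one task can regularise the representation used by the others, which is the usual argument for the MTL gains cited above.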
1606.01735
2412879760
Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks in image classification, object and part detection, boundary extraction, etc. However, a major advantage that natural intelligences still have is that they work well for perceptual problems together, solving them efficiently and coherently in an . In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call , in which not only deep image features are shared between tasks, but where tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.
Our work is also related to recurrent neural networks (RNNs) @cite_10 , which have been successfully used in language modelling @cite_12 , speech recognition @cite_26 , handwriting recognition @cite_9 , semantic image segmentation @cite_13 and human pose estimation @cite_18 . Related to our work, Carreira @cite_27 propose an iterative model that progressively updates an initial solution by feeding back an error signal. Najibi @cite_21 propose an efficient grid-based object detector that iteratively refines the predicted object coordinates by minimising the training error. While these methods @cite_27 @cite_21 are also based on an iterative solution-correcting mechanism, our main goal is to improve generalisation performance for multiple tasks by sharing previous predictions across them and learning output correlations.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_10", "@cite_9", "@cite_21", "@cite_27", "@cite_13", "@cite_12" ], "mid": [ "2363162442", "2950689855", "1498436455", "2122585011", "2963873508", "2963474899", "2951277909", "" ], "abstract": [ "We propose a novel ConvNet model for predicting 2D human body poses in an image. The model regresses a heatmap representation for each body keypoint, and is able to learn and represent both the part appearances and the context of the part configuration. We make the following three contributions: (i) an architecture combining a feed forward module with a recurrent module, where the recurrent module can be run iteratively to improve the performance; (ii) the model can be trained end-to-end and from scratch, with auxiliary losses incorporated to improve performance; (iii) we investigate whether keypoint visibility can also be predicted. The model is evaluated on two benchmark datasets. The result is a simple architecture that achieves performance on par with the state of the art, but without the complexity of a graphical model stage (or layers).", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates , which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. 
When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7 on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure1.", "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. 
In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance.", "We introduce G-CNN, an object detection technique based on CNNs which works without proposal algorithms. G-CNN starts with a multi-scale grid of fixed bounding boxes. We train a regressor to move and scale elements of the grid towards objects iteratively. G-CNN models the problem of object detection as finding a path from a fixed grid to boxes tightly surrounding the objects. G-CNN with around 180 boxes in a multi-scale grid performs comparably to Fast R-CNN which uses around 2K bounding boxes generated with a proposal technique. This strategy makes detection faster by removing the object proposal stage as well as reducing the number of boxes to be processed.", "Hierarchical feature extractors such as Convolutional Networks (ConvNets) have achieved impressive performance on a variety of classification tasks using purely feedforward processing. Feedforward architectures can learn rich representations of the input space but do not explicitly model dependencies in the output spaces, that are quite structured for tasks such as articulated human pose estimation or object segmentation. Here we propose a framework that expands the expressive power of hierarchical feature extractors to encompass both input and output spaces, by introducing top-down feedback. 
Instead of directly predicting the outputs in one go, we use a self-correcting model that progressively changes an initial solution by feeding back error predictions, in a process we call Iterative Error Feedback (IEF). IEF shows excellent performance on the task of articulated pose estimation in the challenging MPII and LSP benchmarks, matching the state-of-the-art without requiring ground truth scale annotation.", "Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "" ] }
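The iterative error feedback idea described in the abstract above (a self-correcting model that progressively changes an initial solution) boils down to a loop that feeds the current output back and adds a predicted correction. In the sketch below the corrector is a hand-written half-step rule standing in for the learned network — an assumption for illustration only:

```python
def iterative_error_feedback(x, predict_correction, steps, y0=0.0):
    """IEF-style refinement: start from a coarse estimate y0 and repeatedly
    add a correction predicted from the input and the current output."""
    y = y0
    for _ in range(steps):
        y = y + predict_correction(x, y)  # a learned network in the papers
    return y
```

The grid-based detector cited above follows the same template, with bounding-box coordinates as the state being refined.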
1606.01885
2409744450
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and or the final objective value.
Early work has explored the general theme of speeding up learning with accumulation of learning experience. This line of work, known as ``learning to learn'' or ``meta-learning'' @cite_24 @cite_9 @cite_13 @cite_29 , considers the problem of devising methods that can take advantage of knowledge learned on other related tasks to train faster, a problem that is today better known as multi-task learning and transfer learning. In contrast, the proposed method can learn to accelerate the training procedure itself, without necessarily requiring any training on related auxiliary tasks.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_29", "@cite_13" ], "mid": [ "", "2145680191", "99485931", "116375701" ], "abstract": [ "", "Different researchers hold different views of what the term meta-learning exactly means. The first part of this paper provides our own perspective view in which the goal is to build self-adaptive learners (i.e. learning algorithms that improve their bias dynamically through experience by accumulating meta-knowledge). The second part provides a survey of meta-learning as reported by the machine-learning literature. We find that, despite different views and research lines, a question remains constant: how can we exploit knowledge about learning (i.e. meta-knowledge) to improve the performance of learning algorithms? Clearly the answer to this question is key to the advancement of the field and continues being the subject of intensive research.", "Preface. Part I: Overview Articles. 1. Learning to Learn: Introduction and Overview S. Thrun, L. Pratt. 2. A Survey of Connectionist Network Reuse Through Transfer L. Pratt, B. Jennings. 3. Transfer in Cognition A. Robins. Part II: Prediction. 4. Theoretical Models of Learning to Learn J. Baxter. 5. Multitask Learning R. Caruana. 6. Making a Low-Dimensional Representation Suitable for Diverse Tasks N. Intrator, S. Edelman. 7. The Canonical Distortion Measure for Vector Quantization and Function Approximation J. Baxter. 8. Lifelong Learning Algorithms S. Thrun. Part III: Relatedness. 9. The Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness D.L. Silver, R.E. Mercer. 10. Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge S. Thrun, J. O'Sullivan. Part IV: Control. 11. CHILD: A First Step Towards Continual Learning M.B. Ring. 12. Reinforcement Learning with Self-Modifying Policies J. Schmidhuber, et al 13. Creating Advice-Taking Reinforcement Learners R. Maclin, J.W. Shavlik. Contributing Authors. 
Index.", "Met alearning is the study of principled methods that exploit metaknowledge to obtain efficient models and solutions by adapting machine learning and data mining processes. While the variety of machine learning and data mining techniques now available can, in principle, provide good model solutions, a methodology is still needed to guide the search for the most appropriate model in an efficient way. Met alearning provides one such methodology that allows systems to become more effective through experience. This book discusses several approaches to obtaining knowledge concerning the performance of machine learning and data mining algorithms. It shows how this knowledge can be reused to select, combine, compose and adapt both algorithms and models to yield faster, more effective solutions to data mining problems. It can thus help developers improve their algorithms and also develop learning systems that can improve themselves. The book will be of interest to researchers and graduate students in the areas of machine learning, data mining and artificial intelligence." ] }
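The abstract of this record casts an optimization algorithm as a policy mapping the current optimization state to an update step, so that learning the policy amounts to discovering an algorithm. A minimal sketch of that framing; here the policy is a fixed, hand-engineered gradient-descent rule plugged in as one instance, not a policy learned with guided policy search as in the paper:

```python
def run_learned_optimizer(policy, grad_fn, x0, steps):
    """Execute an optimizer expressed as a policy: at each step the policy
    observes the current iterate and gradient and returns the update."""
    x = x0
    for _ in range(steps):
        g = grad_fn(x)
        x = x + policy(x, g)  # the policy plays the role of the update rule
    return x
```

Any hand-engineered first-order method (gradient descent, momentum, and so on) is one point in this policy space, which is what makes searching the space for a faster-converging policy meaningful.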
1606.01885
2409744450
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.
A different line of work, known as ``programming by demonstration'' @cite_21 , considers the problem of learning programs from examples of input and output. Several different approaches have been proposed: @cite_27 represents programs explicitly using a formal language, constructs a hierarchical Bayesian prior over programs and performs inference using an MCMC sampling procedure, while @cite_25 represents programs implicitly as sequences of memory access operations and trains a recurrent neural net to learn the underlying patterns in the memory access operations. Subsequent work proposes variants of this model that use different primitive memory access operations @cite_22 , more expressive operations @cite_1 @cite_12 or other non-differentiable operations @cite_26 @cite_10 . Others consider building models that permit parallel execution @cite_16 or training models with stronger supervision in the form of execution traces @cite_20 . The aim of this line of work is to replicate the behaviour of simple existing algorithms from examples, rather than to learn a new algorithm that is better than existing algorithms.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_1", "@cite_20", "@cite_27", "@cite_16", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2204302769", "2949626814", "1492324553", "2950898700", "", "2131257014", "2173051530", "2175344611", "2950527759", "2290588870" ], "abstract": [ "The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.", "Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks, algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. 
We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory.", "Part 1 Systems: Pygmalion tinker a predictive calculator rehearsal world smallStar peridot metamouse TELS eager garnet the Turvy experience chimera the geometer's sketchpad tourmaline a history-based macro by example system mondrian triggers the AIDE project. Part 2 Components: a history of editable graphical histories graphical representation and feedback in a PBD system PBD invocation techniques a system-wide macro facility based on aggregate events making programming accessible to visual problem solvers using voice input to disambiguate intent. Part 3 Perspectives: characterizing PBD systems demonstrational interfaces just-in-time programming.", "In this paper, we propose and investigate a new neural network architecture called Neural Random Access Machine. It can manipulate and dereference pointers to an external variable-size random-access memory. The model is trained from pure input-output examples using backpropagation. We evaluate the new model on a number of simple algorithmic tasks whose solutions require pointer manipulation and dereferencing. Our results show that the proposed model can learn to solve algorithmic tasks of such type and is capable of operating on simple data structures like linked-lists and binary trees. For easier tasks, the learned solutions generalize to sequences of arbitrary length. Moreover, memory access during inference can be done in a constant time under some assumptions.", "", "We are interested in learning programs for multiple related tasks given only a few training examples per task. Since the program for a single task is underdetermined by its data, we introduce a nonparametric hierarchical Bayesian prior over programs which shares statistical strength across multiple tasks. The key challenge is to parametrize this multi-task sharing. 
For this, we introduce a new representation of programs based on combinatory logic and provide an MCMC algorithm that can perform safe program transformations on this representation to reveal shared inter-program substructures.", "Learning an algorithm from examples is a fundamental problem that has been widely studied. Recently it has been addressed using neural networks, in particular by Neural Turing Machines (NTMs). These are fully differentiable computers that use backpropagation to learn their own programming. Despite their appeal NTMs have a weakness that is caused by their sequential nature: they are not parallel and are are hard to train due to their large depth when unfolded. We present a neural network architecture to address this problem: the Neural GPU. It is based on a type of convolutional gated recurrent unit and, like the NTM, is computationally universal. Unlike the NTM, the Neural GPU is highly parallel which makes it easier to train and efficient to run. An essential property of algorithms is their ability to handle inputs of arbitrary size. We show that the Neural GPU can be trained on short instances of an algorithmic task and successfully generalize to long instances. We verified it on a number of tasks including long addition and long multiplication of numbers represented in binary. We train the Neural GPU on numbers with upto 20 bits and observe no errors whatsoever while testing it, even on much longer numbers. To achieve these results we introduce a technique for training deep recurrent networks: parameter sharing relaxation. We also found a small amount of dropout and gradient noise to have a large positive effect on learning and generalization.", "We present an approach for learning simple algorithms such as copying, multi-digit addition and single digit multiplication directly from examples. Our framework consists of a set of interfaces, accessed by a controller. 
Typical interfaces are 1-D tapes or 2-D grids that hold the input and output data. For the controller, we explore a range of neural network-based models which vary in their ability to abstract the underlying algorithm from training instances and generalize to test examples with many thousands of digits. The controller is trained using @math -learning with several enhancements and we show that the bottleneck is in the capabilities of the controller rather than in the search incurred by @math -learning.", "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.", "Following the recent trend in explicit neural memory structures, we present a new design of an external memory, wherein memories are stored in an Euclidean key space @math . An LSTM controller performs read and write via specialized read and write heads. It can move a head by either providing a new address in the key space (aka random access) or moving from its previous position via a Lie group action (aka Lie access). In this way, the \"L\" and \"R\" instructions of a traditional Turing Machine are generalized to arbitrary elements of a fixed Lie group action. For this reason, we name this new model the Lie Access Neural Turing Machine, or LANTM. We tested two different configurations of LANTM against an LSTM baseline in several basic experiments. We found the right configuration of LANTM to outperform the baseline in all of our experiments. 
In particular, we trained LANTM on addition of @math -digit numbers for @math , but it was able to generalize almost perfectly to @math , all with the number of parameters 2 orders of magnitude below the LSTM baseline." ] }
1606.01885
2409744450
Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value.
There is a rich body of work on hyperparameter optimization, which studies the optimization of hyperparameters used to train a model, such as the learning rate, the momentum decay factor and regularization parameters. Most methods @cite_28 @cite_0 @cite_8 @cite_6 @cite_15 rely on sequential model-based Bayesian optimization @cite_5 @cite_4 , while others adopt a random search approach @cite_11 or use gradient-based optimization @cite_7 @cite_3 @cite_18 . Because each hyperparameter setting corresponds to a particular instantiation of an optimization algorithm, these methods can be viewed as a way to search over different instantiations of the same optimization algorithm. The proposed method, on the other hand, can search over the space of all possible optimization algorithms. In addition, when presented with a new objective function, hyperparameter optimization needs to conduct multiple trials with different hyperparameter settings to find the optimal hyperparameters. In contrast, once training is complete, the autonomous algorithm knows how to choose hyperparameters on-the-fly without needing to try different hyperparameter settings, even when presented with an objective function that it has not seen during training.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_28", "@cite_6", "@cite_3", "@cite_0", "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "1868018859", "2950338507", "2166107799", "2950182411", "", "", "2187061624", "", "", "2200000192", "" ], "abstract": [ "Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.", "", "Many machine learning algorithms can be formulated as the minimization of a training criterion that involves a hyperparameter. This hyperparameter is usually chosen by trial and error with a model selection criterion. In this article we present a methodology to optimize several hyperparameters, based on the computation of the gradient of a model selection criterion with respect to the hyperparameters. In the case of a quadratic training criterion, the gradient of the selection criterion with respect to the hyperparameters is efficiently computed by backpropagating through a Cholesky decomposition. In the more general case, we show that the implicit function theorem can be used to derive a formula for the hyperparameter gradient involving second derivatives of the training criterion.", "Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a \"black art\" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. 
Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks.", "", "", "“Energy” models for continuous domains can be applied to many problems, but often suffer from high computational expense in training, due to the need to repeatedly minimize the energy function to high accuracy. This paper considers a modified setting, where the model is trained in terms of results after optimization is truncated to a fixed number of iterations. We derive “backpropagating” versions of gradient descent, heavy-ball and LBFGS. These are simple to use, as they require as input only routines to compute the gradient of the energy with respect to the domain and parameters. 
Experimental results on denoising and image labeling problems show that learning with truncated optimization greatly reduces computational expense compared to “full” fitting.", "", "", "Model selection and hyperparameter optimization is crucial in applying machine learning to a novel dataset. Recently, a sub-community of machine learning has focused on solving this problem with Sequential Model-based Bayesian Optimization (SMBO), demonstrating substantial successes in many applications. However, for computationally expensive algorithms the overhead of hyperparameter optimization can still be prohibitive. In this paper we mimic a strategy human domain experts use: speed up optimization by starting from promising configurations that performed well on similar datasets. The resulting initialization technique integrates naturally into the generic SMBO framework and can be trivially applied to any SMBO method. To validate our approach, we perform extensive experiments with two established SMBO frameworks (Spearmint and SMAC) with complementary strengths; optimizing two machine learning frameworks on 57 datasets. Our initialization procedure yields mild improvements for low-dimensional hyperparameter optimization and substantially improves the state of the art for the more complex combined algorithm selection and hyperparameter optimization problem.", "" ] }
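The related-work paragraph above frames each hyperparameter setting as one instantiation of the same optimization algorithm, and notes that hyperparameter optimization must try many settings per objective. A minimal random-search sketch illustrates that view; the quadratic `train_and_evaluate` stand-in and the search ranges are invented for illustration, not taken from any cited method:

```python
import random

def train_and_evaluate(lr, momentum):
    # Hypothetical stand-in for "train a model with these
    # hyperparameters and return the validation loss".
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

def random_search(n_trials, seed=0):
    # Each sampled setting instantiates one concrete optimizer,
    # mirroring the "search over instantiations of the same
    # algorithm" view described in the text.
    rng = random.Random(seed)
    best_loss, best_cfg = float("inf"), None
    for _ in range(n_trials):
        cfg = {"lr": 10 ** rng.uniform(-4, 0),   # log-uniform
               "momentum": rng.uniform(0.0, 1.0)}
        loss = train_and_evaluate(**cfg)
        if loss < best_loss:
            best_loss, best_cfg = loss, cfg
    return best_loss, best_cfg

best_loss, best_cfg = random_search(200)
```

Sequential model-based Bayesian optimization replaces the blind sampling loop with a surrogate model that proposes the next setting, but the per-objective trial-and-error structure is the same, which is exactly the overhead the paper's learned optimizer avoids at test time.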
1606.01933
2950738719
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
Our method is motivated by the central role played by alignment in machine translation @cite_8 and previous approaches to sentence similarity modeling @cite_9 @cite_22 @cite_26 @cite_3 , natural language inference @cite_27 @cite_0 @cite_7 @cite_12 , and semantic parsing @cite_6 . The neural counterpart to alignment, attention @cite_4 , which is a key part of our approach, was originally proposed and has been predominantly used in conjunction with LSTMs @cite_10 @cite_19 and to a lesser extent with CNNs @cite_20 . In contrast, our use of attention is purely based on word embeddings and our method essentially consists of feed-forward networks that operate largely independently of word order.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_6", "@cite_0", "@cite_19", "@cite_27", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "1816599501", "2133564696", "2099884836", "2092543775", "1916559533", "2112609142", "2131726681", "2124204950", "2068206659", "2221711388", "", "2211192759", "2154579312", "2121495183" ], "abstract": [ "This paper proposes a general learning framework for a class of problems that require learning over latent intermediate representations. Many natural language processing (NLP) decision problems are defined over an expressive intermediate representation that is not explicit in the input, leaving the algorithm with both the task of recovering a good intermediate representation and learning to classify correctly. Most current systems separate the learning problem into two stages by solving the first step of recovering the intermediate representation heuristically and using it to learn the final classifier. This paper develops a novel joint learning algorithm for both tasks, that uses the final prediction to guide the selection of the best intermediate representation. We evaluate our algorithm on three different NLP tasks -- transliteration, paraphrase identification and textual entailment -- and show that our joint method significantly improves performance.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "We present a novel approach to deciding whether two sentences hold a paraphrase relationship. We employ a generative model that generates a paraphrase of a given sentence, and we use probabilistic inference to reason about whether two sentences share the paraphrase relationship. The model cleanly incorporates both syntax and lexical semantics using quasi-synchronous dependency grammars (Smith and Eisner, 2006). Furthermore, using a product of experts (Hinton, 2002), we combine the model with a complementary logistic regression model based on state-of-the-art lexical overlap features. We evaluate our models on the task of distinguishing true paraphrase pairs from false ones on a standard corpus, giving competitive state-of-the-art performance.", "In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publicly-held beliefs -- known as discourse commitments -- that can be ascribed to the author of a text or a hypothesis. Once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. 
Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set, without the need for additional sources of training data or other web-based resources.", "This introductory text to statistical machine translation (SMT) provides all of the theories and methods needed to build a statistical machine translator, such as Google Language Tools and Babelfish. In general, statistical techniques allow automatic translation systems to be built quickly for any language-pair using only translated texts and generic software. With increasing globalization, statistical machine translation will be central to communication and commerce. Based on courses and tutorials, and classroom-tested globally, it is ideal for instruction or self-study, for advanced undergraduates and graduate students in computer science and/or computational linguistics, and researchers in natural language processing. The companion website provides open-source corpora and tool-kits.", "We present a system for deciding whether a given sentence can be inferred from text. Each sentence is represented as a directed graph (extracted from a dependency parser) in which the nodes represent words or phrases, and the links represent syntactic and semantic relationships. We develop a learned graph matching approach to approximate entailment using the amount of the sentence's semantic content which is contained in the text. We present results on the Recognizing Textual Entailment dataset (, 2005), and show that our approach outperforms Bag-Of-Words and TF-IDF models. In addition, we explore common sources of errors in our approach and how to remedy them.", "We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions.
Given a large, community-authored, question-paraphrase corpus, we demonstrate that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions. Our approach automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter estimation scheme. Experiments show that our approach more than quadruples the recall of the seed lexicon, with only an 8% loss in precision.", "Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance. Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser. In experiments on the multilingual GeoQuery corpus we find that our parser is competitive with the state of the art, and in some cases achieves higher accuracy than recently proposed purpose-built systems. These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.", "This paper advocates a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. We argue that there are significant weaknesses in this approach, including flawed assumptions of monotonicity and locality. Instead we propose a pipelined approach where alignment is followed by a classification step, in which we extract features representing high-level characteristics of the entailment problem, and pass the resulting feature vector to a statistical classifier trained on development data.
We report results on data from the 2005 Pascal RTE Challenge which surpass previously reported results for alignment-based systems.", "Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for natural language inference (NLI). In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1%, outperforming the state of the art.", "", "How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). Most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences. We make three contributions. (i) The ABCNN can be applied to a wide variety of tasks that require modeling of sentence pairs.
(ii) We propose three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart. These interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) ABCNNs achieve state-of-the-art performance on AS, PI and TE tasks. We release code at: https://github.com/yinwenpeng/Answer_Selection.", "We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service.", "The alignment problem---establishing links between corresponding phrases in two related sentences---is as important in natural language inference (NLI) as it is in machine translation (MT). But the tools and techniques of MT alignment do not readily transfer to NLI, where one cannot assume semantic equivalence, and for which large volumes of bitext are lacking. We present a new NLI aligner, the MANLI system, designed to address these challenges. It uses a phrase-based alignment representation, exploits external lexical resources, and capitalizes on a new set of supervised training data. We compare the performance of MANLI to existing NLI and MT aligners on an NLI alignment task over the well-known Recognizing Textual Entailment data. We show that MANLI significantly outperforms existing aligners, achieving gains of 6.2% in F1 over a representative NLI aligner and 10.5% over GIZA++." ] }
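The related-work paragraph above emphasizes that the attention step operates on word embeddings alone, with no recurrent or convolutional encoder. A rough soft-alignment sketch of that idea follows; the 2-d embedding values are toy numbers invented for illustration, and this is only the alignment step, not the full decomposable model:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(sent_a, sent_b):
    # Soft-align each word of sentence A against sentence B using
    # dot-product scores between raw word embeddings only -- no
    # recurrent or convolutional encoder, and no word-order signal.
    result = []
    dim = len(sent_b[0])
    for ea in sent_a:
        scores = [sum(x * y for x, y in zip(ea, eb)) for eb in sent_b]
        weights = softmax(scores)
        # Weighted average of B's embeddings: the "aligned" phrase.
        attended = [sum(w * eb[d] for w, eb in zip(weights, sent_b))
                    for d in range(dim)]
        result.append((weights, attended))
    return result

# Toy 2-d "embeddings" for two short sentences (made-up values).
sent_a = [[1.0, 0.0], [0.0, 1.0]]
sent_b = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
alignments = attend(sent_a, sent_b)
```

Because each word attends independently, every row of the alignment can be computed in parallel, which is the property the abstract credits for the model's trivial parallelizability.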
1606.01849
2951627844
Device-to-Device (D2D) communication is expected to enable a number of new services and applications in future mobile networks and has attracted significant research interest over the last few years. Remarkably, little attention has been placed on the issue of D2D communication for users belonging to different operators. In this paper, we focus on this aspect for D2D users that belong to different tenants (virtual network operators), assuming virtualized and programmable future 5G wireless networks. Under the assumption of a cross-tenant orchestrator, we show that significant gains can be achieved in terms of network performance by optimizing resource sharing from the different tenants, i.e., slices of the substrate physical network topology. To this end, a sum-rate optimization framework is proposed for optimal sharing of the virtualized resources. Via a wide set of numerical investigations, we prove the efficacy of the proposed solution and the achievable gains compared to legacy approaches.
In order to efficiently leverage the radical merits that D2D offers, the interference patterns that develop due to resource reuse need to be limited. In this direction, several works in the literature have devised efficient resource allocation and mode selection techniques for D2D communications, mainly aiming at improving network throughput as well as spectrum efficiency @cite_1 . Optimal usage of the available radio resources is a well-known nonlinear NP-hard problem that becomes even more complex with the integration of D2D communications that underlay the cellular network @cite_10 . For this reason, a common practice has been to devise low-complexity, heuristic algorithmic solutions or to relax part of the constraints (e.g. power or resource block allocation related) and propose sub-optimal techniques for D2D-aware resource allocation @cite_10 , @cite_15 .
{ "cite_N": [ "@cite_10", "@cite_15", "@cite_1" ], "mid": [ "", "1975186862", "1992942660" ], "abstract": [ "", "Device-to-device (D2D) communications underlaying a cellular infrastructure has recently been proposed as a means of increasing the cellular capacity, improving the user throughput and extending the battery lifetime of user equipments by facilitating the reuse of spectrum resources between D2D and cellular links. In network assisted D2D communications, when two devices are in the proximity of each other, the network can not only help the devices to set the appropriate transmit power and schedule time and frequency resources but also to determine whether communication should take place via the direct D2D link (D2D mode) or via the cellular base station (cellular mode). In this paper we formulate the joint mode selection, scheduling and power control task as an optimization problem that we first solve assuming the availability of a central entity. We also propose a distributed suboptimal joint mode selection and resource allocation scheme that we benchmark with respect to the centralized optimal solution. We find that the distributed scheme performs close to the optimal scheme both in terms of resource efficiency and user fairness.", "Device-to-device (D2D) communications was initially proposed in cellular networks as a new paradigm for enhancing network performance. The emergence of new applications such as content distribution and location-aware advertisement introduced new user cases for D2D communications in cellular networks. The initial studies showed that D2D communications has advantages such as increased spectral efficiency and reduced communication delay. However, this communication mode introduces complications in terms of interference control overhead and protocols that are still open research problems. The feasibility of D2D communications in Long-Term Evolution Advanced is being studied by academia, industry, and standardization bodies. 
To date, there are more than 100 papers available on D2D communications in cellular networks, but there is no survey on this field. In this paper, we provide a taxonomy based on the D2D communicating spectrum and review the available literature extensively under the proposed taxonomy. Moreover, we provide new insights into the over-explored and under-explored areas that lead us to identify open research problems of D2D communications in cellular networks." ] }
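The low-complexity heuristic approach mentioned above can be illustrated with a toy greedy sketch (not taken from any cited paper; the function name, cost model, and inputs are all hypothetical): each D2D pair is assigned the resource block whose accumulated interference load is currently lowest, trading optimality for linear-time allocation.

```python
# Toy greedy heuristic for D2D resource-block reuse (illustrative only).
# interference[p][rb] is the interference cost if pair p reuses block rb.
def greedy_allocate(pairs, num_rbs, interference):
    load = [0.0] * num_rbs          # accumulated interference per block
    assignment = {}
    for p in pairs:
        # pick the block minimizing current load plus this pair's cost
        rb = min(range(num_rbs), key=lambda r: load[r] + interference[p][r])
        assignment[p] = rb
        load[rb] += interference[p][rb]
    return assignment
```

Unlike the NP-hard joint formulation, this sketch fixes an arbitrary pair ordering and never revisits a decision, which is exactly the kind of relaxation such sub-optimal schemes accept.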
1606.01990
2416562333
Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Surface features achieve good performance, but they are not readily applicable to other languages without semantic lexicons. Previous neural models require parses, surface features, or a small label set to work well. Here, we propose neural network models that are based on feedforward and long-short term memory architecture without any surface features. To our surprise, our best configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Under various fine-grained label sets and a cross-linguistic setting, our feedforward models perform consistently better or at least just as well as systems that require hand-crafted surface features. Our models present the first neural Chinese discourse parser in the style of Chinese Discourse Treebank, showing that our results hold cross-linguistically.
The prevailing approach for this task is to use surface features derived from various semantic lexicons @cite_25 , reducing the number of parameters by mapping raw word tokens in the arguments of discourse relations to a limited number of entries in a semantic lexicon such as polarity and verb classes.
{ "cite_N": [ "@cite_25" ], "mid": [ "2109462987" ], "abstract": [ "We present a series of experiments on automatically identifying the sense of implicit discourse relations, i.e. relations that are not marked with a discourse connective such as \"but\" or \"because\". We work with a corpus of implicit relations present in newspaper text and report results on a test set that is representative of the naturally occurring distribution of senses. We use several linguistically informed features, including polarity tags, Levin verb classes, length of verb phrases, modality, context, and lexical features. In addition, we revisit past approaches using lexical pairs from unannotated text as features, explain some of their shortcomings and propose modifications. Our best combination of features outperforms the baseline from data intensive approaches by 4 for comparison and 16 for contingency." ] }
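The lexicon-mapping idea described above (raw tokens mapped to a small number of semantic classes to shrink the feature space) can be sketched as follows; the tiny polarity lexicon and function name are illustrative, not from the cited work.

```python
# Map raw tokens to lexicon classes instead of using word-pair features,
# so the feature space scales with the lexicon, not the vocabulary.
def polarity_features(tokens, lexicon):
    counts = {"pos": 0, "neg": 0, "none": 0}
    for t in tokens:
        counts[lexicon.get(t, "none")] += 1  # unknown words map to "none"
    return counts
```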
1606.01990
2416562333
Inferring implicit discourse relations in natural language text is the most difficult subtask in discourse parsing. Surface features achieve good performance, but they are not readily applicable to other languages without semantic lexicons. Previous neural models require parses, surface features, or a small label set to work well. Here, we propose neural network models that are based on feedforward and long-short term memory architecture without any surface features. To our surprise, our best configured feedforward architecture outperforms LSTM-based model in most cases despite thorough tuning. Under various fine-grained label sets and a cross-linguistic setting, our feedforward models perform consistently better or at least just as well as systems that require hand-crafted surface features. Our models present the first neural Chinese discourse parser in the style of Chinese Discourse Treebank, showing that our results hold cross-linguistically.
Along the same vein, Brown cluster assignments have also been used as a general purpose lexicon that requires no human manual annotation @cite_22 . However, these solutions still suffer from the data sparsity problem and almost always require extensive feature selection to work well @cite_28 @cite_4 @cite_30 . The work we report here explores the use of the expressive power of distributed representations to overcome the data sparsity problem found in the traditional feature engineering paradigm.
{ "cite_N": [ "@cite_28", "@cite_30", "@cite_4", "@cite_22" ], "mid": [ "29849901", "1855867616", "2131861279", "2109318894" ], "abstract": [ "We provide a systematic study of previously proposed features for implicit discourse relation identification, identifying new feature combinations that optimize F1-score. The resulting classifiers achieve the best F1-scores to date for the four top-level discourse relation classes of the Penn Discourse Tree Bank: COMPARISON, CONTINGENCY, EXPANSION, and TEMPORAL. We further identify factors for feature extraction that can have a major impact on performance and determine that some features originally proposed for the task no longer provide performance gains in light of more powerful, recently discovered features. Our results constitute a new set of baselines for future studies of implicit discourse relation identification.", "Discourse relations bind smaller linguistic units into coherent texts. Automatically identifying discourse relations is difficult, because it requires understanding the semantics of the linked arguments. A more subtle challenge is that it is not enough to represent the meaning of each argument of a discourse relation, because the relation may depend on links between lower-level components, such as entity mentions. Our solution computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree. We also perform a downward compositional pass to capture the meaning of coreferent entity mentions. Implicit discourse relations are then predicted from these two representations, obtaining substantial improvements on the Penn Discourse Treebank.", "We present an implicit discourse relation classifier in the Penn Discourse Treebank (PDTB). Our classifier considers the context of the two arguments, word pair information, as well as the arguments' internal constituent and dependency parses. Our results on the PDTB yields a significant 14.1 improvement over the baseline. 
In our error analysis, we discuss four challenges in recognizing implicit relations in the PDTB.", "Sentences form coherent relations in a discourse without discourse connectives more frequently than with connectives. Senses of these implicit discourse relations that hold between a sentence pair, however, are challenging to infer. Here, we employ Brown cluster pairs to represent discourse relation and incorporate coreference patterns to identify senses of implicit discourse relations in naturally occurring text. Our system improves the baseline performance by as much as 25 . Feature analyses suggest that Brown cluster pairs and coreference patterns can reveal many key linguistic characteristics of each type of discourse relation." ] }
1606.02006
2410217169
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.
From the beginning of work on NMT, unknown words that do not exist in the system vocabulary have been focused on as a weakness of these systems. Early methods to handle these unknown words replaced them with appropriate words in the target vocabulary @cite_19 @cite_8 according to a lexicon similar to the one used in this work. In contrast to our work, these only handle unknown words and do not incorporate information from the lexicon in the learning procedure.
{ "cite_N": [ "@cite_19", "@cite_8" ], "mid": [ "2950344723", "2950580142" ], "abstract": [ "Neural machine translation, a recently proposed approach to machine translation based purely on neural networks, has shown promising results compared to the existing approaches such as phrase-based statistical machine translation. Despite its recent success, neural machine translation has its limitation in handling a larger vocabulary, as training complexity as well as decoding complexity increase proportionally to the number of target words. In this paper, we propose a method that allows us to use a very large target vocabulary without increasing training complexity, based on importance sampling. We show that decoding can be efficiently done even with the model having a very large target vocabulary by selecting only a small subset of the whole target vocabulary. The models trained by the proposed approach are empirically found to outperform the baseline models with a small vocabulary as well as the LSTM-based neural machine translation models. Furthermore, when we use the ensemble of a few models with very large target vocabularies, we achieve the state-of-the-art translation performance (measured by BLEU) on the English->German translation and almost as high performance as state-of-the-art English->French translation system.", "Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. 
We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT14 contest task." ] }
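The post-processing step described in the abstract above (each emitted OOV token carries its aligned source position, and a dictionary supplies the translation, copying the source word through when no entry exists) can be sketched like this; the `<unk:i>` tag format and helper name are assumptions for illustration.

```python
# Replace position-tagged <unk:i> tokens using a bilingual dictionary;
# fall back to copying the aligned source word as-is.
def replace_unks(target_tokens, source_tokens, dictionary):
    out = []
    for tok in target_tokens:
        if tok.startswith("<unk:") and tok.endswith(">"):
            i = int(tok[5:-1])                    # aligned source position
            src = source_tokens[i]
            out.append(dictionary.get(src, src))  # dictionary lookup, else copy
        else:
            out.append(tok)
    return out
```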
1606.02006
2410217169
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.
There have also been other approaches that incorporate models that learn when to copy words as-is into the target language @cite_10 @cite_34 @cite_28 . These models are similar to the linear approach of , but are only applicable to words that can be copied as-is into the target language. In fact, these models can be thought of as a subclass of the proposed approach that uses a lexicon assigning all its probability to target words that are the same as the source. On the other hand, while we are simply using a static interpolation coefficient @math , these works generally have a more sophisticated method for choosing the interpolation between the standard and "copy" models. Incorporating these into our linear method is a promising avenue for future work.
{ "cite_N": [ "@cite_28", "@cite_34", "@cite_10" ], "mid": [ "2333611780", "", "2962995178" ], "abstract": [ "The problem of rare and unknown words is an important issue that can potentially influence the performance of many NLP systems, including both the traditional count-based and the deep learning models. We propose a novel way to deal with the rare and unseen words for the neural network models using attention. Our model uses two softmax layers in order to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each time-step, the decision of which softmax layer to use choose adaptively made by an MLP which is conditioned on the context. We motivate our work from a psychological evidence that humans naturally have a tendency to point towards objects in the context or the environment when the name of an object is not known. We observe improvements on two tasks, neural machine translation on the Europarl English to French parallel corpora and text summarization on the Gigaword dataset using our proposed model.", "", "Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension. Often there exist features that are locally translation invariant and would be valuable for directing the model’s attention, but previous attentional architectures are not constructed to learn such features specifically. We introduce an attentional neural network that employs convolution on the input tokens to detect local time-invariant and long-range topical attention features in a context-dependent way. We apply this architecture to the problem of extreme summarization of source code snippets into short, descriptive function name-like summaries. 
Using those features, the model sequentially generates a summary by marginalizing over two attention mechanisms: one that predicts the next summary token based on the attention weights of the input tokens and another that is able to copy a code token as-is directly into the summary. We demonstrate our convolutional attention neural network’s performance on 10 popular Java projects showing that it achieves better performance compared to previous attentional mechanisms." ] }
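The two combination methods named in the abstract above (bias and linear interpolation of a lexicon probability with the NMT softmax) can be sketched over plain probability dictionaries; the function names, `EPS`, and `LAM` are illustrative, and this is a simplification of the paper's attention-weighted lexicon probability.

```python
import math

EPS = 1e-6   # keeps words absent from the lexicon at a nonzero bias
LAM = 0.5    # hypothetical static interpolation coefficient

def combine_interp(p_nmt, p_lex, lam=LAM):
    """Linear interpolation: p = lam * p_lex + (1 - lam) * p_nmt."""
    return {w: lam * p_lex.get(w, 0.0) + (1.0 - lam) * p_nmt[w] for w in p_nmt}

def combine_bias(logits, p_lex, eps=EPS):
    """Bias method: add log(p_lex + eps) to pre-softmax logits, renormalize."""
    biased = {w: l + math.log(p_lex.get(w, 0.0) + eps) for w, l in logits.items()}
    z = sum(math.exp(v) for v in biased.values())
    return {w: math.exp(v) / z for w, v in biased.items()}
```

The bias form pushes probability sharply toward lexicon entries, while interpolation blends the two distributions more gently.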
1606.02006
2410217169
Neural machine translation (NMT) often makes mistakes in translating low-frequency content words that are essential to understanding the meaning of the sentence. We propose a method to alleviate this problem by augmenting NMT systems with discrete translation lexicons that efficiently encode translations of these low-frequency words. We describe a method to calculate the lexicon probability of the next word in the translation candidate by using the attention vector of the NMT model to select which source word lexical probabilities the model should focus on. We test two methods to combine this probability with the standard NMT probability: (1) using it as a bias, and (2) linear interpolation. Experiments on two corpora show an improvement of 2.0-2.3 BLEU and 0.13-0.44 NIST score, and faster convergence time.
Finally, there have been a number of recent works that improve accuracy of low-frequency words using character-based translation models @cite_12 @cite_30 @cite_35 . However, it has been found that even when using character-based models, incorporating information about words allows for gains in translation accuracy, and it is likely that our lexicon-based method could result in improvements in these hybrid systems as well.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_12" ], "mid": [ "", "2311921240", "2292633562" ], "abstract": [ "", "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder-decoder with a subword-level encoder and a character-level decoder on four language pairs--En-Cs, En-De, En-Ru and En-Fi-- using the parallel corpora from WMT'15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.", "Neural Machine Translation (MT) has reached state-of-the-art results. However, one of the main challenges that neural MT still faces is dealing with very large vocabularies and morphologically rich languages. In this paper, we propose a neural MT system using character-based embeddings in combination with convolutional and highway layers to replace the standard lookup-based word representations. The resulting unlimited-vocabulary and affix-aware source word embeddings are tested in a state-of-the-art neural MT based on an attention-based bidirectional recurrent neural network. The proposed MT scheme provides improved results even when the source language is not morphologically rich. Improvements up to 3 BLEU points are obtained in the German-English WMT task." ] }
1606.01269
2412715517
This paper presents a model for end-to-end learning of task-oriented dialog systems. The main component of the model is a recurrent neural network (an LSTM), which maps from raw dialog history directly to a distribution over system actions. The LSTM automatically infers a representation of dialog history, which relieves the system developer of much of the manual feature engineering of dialog state. In addition, the developer can provide software that expresses business rules and provides access to programmatic APIs, enabling the LSTM to take actions in the real world on behalf of the user. The LSTM can be optimized using supervised learning (SL), where a domain expert provides example dialogs which the LSTM should imitate; or using reinforcement learning (RL), where the system improves by interacting directly with end users. Experiments show that SL and RL are complementary: SL alone can derive a reasonable initial policy from a small number of training dialogs; and starting RL optimization with a policy trained with SL substantially accelerates the learning rate of RL.
By contrast, our method automatically infers a representation of dialog history in the recurrent neural network which is optimal for predicting actions to take at future timesteps. This is an important contribution because designing an effective state space can be quite labor intensive: omissions can cause aliasing, and spurious features can slow learning. Worse, as learning progresses, the set of optimal history features may change. Thus, the ability to automatically infer a dialog state representation in tandem with dialog policy optimization simplifies developer work. On the other hand, like past work, the set of possible user goals in our method is hand-crafted -- for many task-oriented systems, this seems desirable in order to support integration with back-end databases, such as a large table of restaurant names, price ranges, etc. Therefore, our method delegates tracking of user goals to the developer-provided code. When entity extraction is reliable -- as it may be in text-based interfaces, which do not have speech recognition errors -- a simple name-value store can track user goals, and this is the approach taken in our example application below. If entity extraction errors are more prevalent, methods from the dialog state tracking literature for tracking user goals could be applied @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2468710617" ], "abstract": [ "In a spoken dialog system, dialog state tracking refers to the task of correctly inferring the state of the conversation -- such as the user's goal -- given all of the dialog history up to that turn. Dialog state tracking is crucial to the success of a dialog system, yet until recently there were no common resources, hampering progress. The Dialog State Tracking Challenge series of 3 tasks introduced the first shared testbed and evaluation metrics for dialog state tracking, and has underpinned three key advances in dialog state tracking: the move from generative to discriminative models; the adoption of discriminative sequential techniques; and the incorporation of the speech recognition results directly into the dialog state tracker. This paper reviews this research area, covering both the challenge tasks themselves and summarizing the work they have enabled." ] }
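The developer-provided name-value store mentioned above can be as simple as the following sketch (class and method names are illustrative): each extracted entity overwrites the previous value for its slot, which suffices when entity extraction is reliable.

```python
# Minimal name-value store for tracking user goals: latest extraction wins.
class GoalStore(dict):
    def update_slot(self, name, value):
        self[name] = value
```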
1606.01269
2412715517
This paper presents a model for end-to-end learning of task-oriented dialog systems. The main component of the model is a recurrent neural network (an LSTM), which maps from raw dialog history directly to a distribution over system actions. The LSTM automatically infers a representation of dialog history, which relieves the system developer of much of the manual feature engineering of dialog state. In addition, the developer can provide software that expresses business rules and provides access to programmatic APIs, enabling the LSTM to take actions in the real world on behalf of the user. The LSTM can be optimized using supervised learning (SL), where a domain expert provides example dialogs which the LSTM should imitate; or using reinforcement learning (RL), where the system improves by interacting directly with end users. Experiments show that SL and RL are complementary: SL alone can derive a reasonable initial policy from a small number of training dialogs; and starting RL optimization with a policy trained with SL substantially accelerates the learning rate of RL.
Another line of research has sought to predict the words of the next utterance directly from the history of the dialog, using a recurrent neural network trained on a large corpus of dialogs @cite_16 . This work does infer a representation of state; however, our approach differs in several respects: first, in our work, entities are tracked separately -- this allows generalization to entities which have not appeared in the training data; second, our approach includes first-class support for action masking and API calls, which allows the agent to encode business rules and take real-world actions on behalf of the system; finally, in addition to supervised learning, we show how our method can also be trained using reinforcement learning.
{ "cite_N": [ "@cite_16" ], "mid": [ "2962854379" ], "abstract": [ "This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response." ] }
1606.01269
2412715517
This paper presents a model for end-to-end learning of task-oriented dialog systems. The main component of the model is a recurrent neural network (an LSTM), which maps from raw dialog history directly to a distribution over system actions. The LSTM automatically infers a representation of dialog history, which relieves the system developer of much of the manual feature engineering of dialog state. In addition, the developer can provide software that expresses business rules and provides access to programmatic APIs, enabling the LSTM to take actions in the real world on behalf of the user. The LSTM can be optimized using supervised learning (SL), where a domain expert provides example dialogs which the LSTM should imitate; or using reinforcement learning (RL), where the system improves by interacting directly with end users. Experiments show that SL and RL are complementary: SL alone can derive a reasonable initial policy from a small number of training dialogs; and starting RL optimization with a policy trained with SL substantially accelerates the learning rate of RL.
Second, action selection may be learned from example dialogs using supervised learning (SL). For example, when a user input is received, a corpus of example dialogs can be searched for the most similar user input and dialog state, and the following system action can be output to the user @cite_15 @cite_1 @cite_5 @cite_16 @cite_23 . The benefit of this approach is that the policy can be improved at any time by adding more example dialogs, and in this respect it is rather easy to make corrections in SL-based systems. However, the system doesn't learn directly from interaction with end users.
{ "cite_N": [ "@cite_1", "@cite_23", "@cite_5", "@cite_15", "@cite_16" ], "mid": [ "1979299372", "2563741217", "2104544334", "2048369091", "2962854379" ], "abstract": [ "This paper proposes a generic dialog modeling framework for a multi-domain dialog system to simultaneously manage goal-oriented and chat dialogs for both information access and entertainment. We developed a dialog modeling technique using an example-based approach to implement multiple applications such as car navigation, weather information, TV program guidance, and chatbot. Example-based dialog modeling (EBDM) is a simple and effective method for prototyping and deploying of various dialog systems. This paper also introduces the system architecture of multi-domain dialog systems using the EBDM framework and the domain spotting technique. In our experiments, we evaluate our system using both simulated and real users. We expect that our approach can support flexible management of multi-domain dialogs on the same framework.", "While example-based dialog is a popular option for the construction of dialog systems, creating example bases for a specific task or domain requires significant human effort. To reduce this human effort, in this paper, we propose an active learning framework to construct example-based dialog systems efficiently. Specifically, we propose two uncertainty sampling strategies for selecting inputs to present to human annotators who create system responses for the selected inputs. We compare performance of these proposed strategies with a random selection strategy in simulation-based evaluation on 6 different domains. 
Evaluation results show that the proposed strategies are good alternatives to random selection in domains where the complexity of system utterances is low.", "We have proposed an expandable dialog scenario description and platform to manage dialog systems using a weighted finite-state transducer (WFST) in which user concept and system action tags are input and output of the transducer, respectively. In this paper, we apply this framework to statistical dialog management in which a dialog strategy is acquired from a corpus of human-to-human conversation for hotel reservation. A scenario WFST for dialog management was automatically created from an N-gram model of a tag sequence that was annotated in the corpus with Interchange Format (IF). Additionally, a word-to-concept WFST for spoken language understanding (SLU) was obtained from the same corpus. The acquired scenario WFST and SLU WFST were composed together and then optimized. We evaluated the proposed WFST-based statistic dialog management in terms of correctness to detect the next system actions and have confirmed the automatically acquired dialog scenario from a corpus can manage dialog reasonably on the WFST-based dialog management platform.", "In this article, we present an approach to the development of a stochastic dialog manager. The model used by this dialog manager to generate its turns takes into account both the last turns of the user and system, and the information supplied by the user throughout the dialog. As the space of situations that can be presented in the dialogs is too large, some techniques for reducing this space have been proposed. This system has been developed in the DIHANA project, whose goal is the design and development of a dialog system to access a railway information system using spontaneous speech in Spanish. A training corpus of 900 dialogs, that was acquired through the Wizard of Oz, was used to learn the models. 
An evaluation of the dialog manager is also presented", "This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This provides a unique resource for research into building dialogue managers based on neural language models that can make use of large amounts of unlabeled data. The dataset has both the multi-turn property of conversations in the Dialog State Tracking Challenge datasets, and the unstructured nature of interactions from microblog services such as Twitter. We also describe two neural learning architectures suitable for analyzing this dataset, and provide benchmark performance on the task of selecting the best next response." ] }
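The example-based selection described above (search the corpus for the most similar stored user input and emit the system action that followed it) can be sketched with a simple token-overlap similarity; the similarity measure and names here are illustrative stand-ins for whatever matching the cited systems use.

```python
# Nearest-example action selection: return the action paired with the
# stored user input most similar to the query (Jaccard token overlap).
def select_action(query, examples):
    def sim(a, b):
        sa, sb = set(a.split()), set(b.split())
        return len(sa & sb) / max(1, len(sa | sb))
    best = max(examples, key=lambda ex: sim(query, ex[0]))
    return best[1]
```

Adding a new `(user input, action)` pair to `examples` immediately changes the policy, which is the correction mechanism the paragraph highlights.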
1606.01269
2412715517
This paper presents a model for end-to-end learning of task-oriented dialog systems. The main component of the model is a recurrent neural network (an LSTM), which maps from raw dialog history directly to a distribution over system actions. The LSTM automatically infers a representation of dialog history, which relieves the system developer of much of the manual feature engineering of dialog state. In addition, the developer can provide software that expresses business rules and provides access to programmatic APIs, enabling the LSTM to take actions in the real world on behalf of the user. The LSTM can be optimized using supervised learning (SL), where a domain expert provides example dialogs which the LSTM should imitate; or using reinforcement learning (RL), where the system improves by interacting directly with end users. Experiments show that SL and RL are complementary: SL alone can derive a reasonable initial policy from a small number of training dialogs; and starting RL optimization with a policy trained with SL substantially accelerates the learning rate of RL.
Finally, action selection may be learned through reinforcement learning (RL). In RL, the agent receives a reward signal that indicates the quality of an entire dialog, but does not indicate what actions should have been taken. Action selection via RL was originally framed as a Markov decision process @cite_25 , and later as a partially observable Markov decision process @cite_17 . If the reward signal naturally occurs, such as whether the user successfully completed a task, then RL has the benefit that it can learn directly from interaction with users, without additional labeling. Business rules can be incorporated, in a similar manner to our approach @cite_8 . However, debugging an RL system is very difficult -- corrections are made via the reward signal, which many designers are unfamiliar with, and which can have non-obvious effects on the resulting policy. In addition, in early stages of learning, RL performance tends to be quite poor, requiring the use of practice users like crowd-workers or simulated users.
{ "cite_N": [ "@cite_8", "@cite_25", "@cite_17" ], "mid": [ "141751795", "2132997613", "2181858714" ], "abstract": [ "A system for producing color water and colored ice cubes in various shapes, having a control unit with front panel buttons which allow a user to select a color. Dye reservoirs are provided which are selectively dispensed into a mixing chamber to provide the selected color, where they are combined with water and chilled. The colored water is provided to a dispensing unit which selectively dispenses the colored water, or creates colored ice cubes for subsequent dispensing. Ice trays are removably secured within the dispensing unit to allow various shaped ice cubes to be produced.", "We propose a quantitative model for dialog systems that can be used for learning the dialog strategy. We claim that the problem of dialog design can be formalized as an optimization problem with an objective function reflecting different dialog dimensions relevant for a given application. We also show that any dialog system can be formally described as a sequential decision process in terms of its state space, action set, and strategy. With additional assumptions about the state transition probabilities and cost assignment, a dialog system can be mapped to a stochastic model known as Markov decision process (MDP). A variety of data driven algorithms for finding the optimal strategy (i.e., the one that optimizes the criterion) is available within the MDP framework, based on reinforcement learning. For an effective use of the available training data we propose a combination of supervised and reinforcement learning: the supervised learning is used to estimate a model of the user, i.e., the MDP parameters that quantify the user's behavior. Then a reinforcement learning algorithm is used to estimate the optimal strategy while the system interacts with the simulated user. This approach is tested for learning the strategy in an air travel information system (ATIS) task. 
The experimental results we present in this paper show that it is indeed possible to find a simple criterion, a state space representation, and a simulated user parameterization in order to automatically learn a relatively complex dialog behavior, similar to one that was heuristically designed by several research groups.", "Statistical dialog systems (SDSs) are motivated by the need for a data-driven framework that reduces the cost of laboriously handcrafting complex dialog managers and that provides robustness against the errors created by speech re- cognizers operating in noisy environments. By including an explicit Bayesian model of uncertainty and by optimizing the policy via a reward-driven process, partially observable Markov decision processes (POMDPs) provide such a frame- work. However, exact model representation and optimization is computationally intractable. Hence, the practical application of POMDP-based systems requires efficient algorithms and carefully constructed approximations. This review article pro- vides an overview of the current state of the art in the devel- opment of POMDP-based spoken dialog systems." ] }
1606.01364
2417911764
Measurement capabilities are essential for a variety of network applications, such as load balancing, routing, fairness and intrusion detection. These capabilities require large counter arrays in order to monitor the traffic of all network flows. While commodity SRAM memories are capable of operating at line speed, they are too small to accommodate large counter arrays. Previous works suggested estimators, which trade precision for reduced space. However, in order to accurately estimate the largest counter, these methods compromise the accuracy of the smaller counters. In this work, we present a closed form representation of the optimal estimation function. We then introduce Independent Counter Estimation Buckets (ICE-Buckets), a novel algorithm that improves estimation accuracy for all counters. This is achieved by separating the flows to buckets and configuring the optimal estimation function according to each bucket's counter scale. We prove a tighter upper bound on the relative error and demonstrate an accuracy improvement of up to 57 times on real Internet packet traces.
Estimators are able to represent large values with small symbols at the price of a small error. They can therefore be used to implement online counter arrays. This idea was first introduced in @cite_37 and was recently adapted to networking in @cite_34 . It was later improved by @cite_7 in order to provide better accuracy and support variable-sized increments.
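The small-counter idea of @cite_37 can be sketched as a Morris-style probabilistic counter: a register of a few bits represents a count far beyond its nominal range by incrementing only probabilistically. The seed values and the averaging over 32 independent counters below are illustrative choices, not details from the cited papers.

```python
import random

class MorrisCounter:
    """Probabilistic counter: a small register c approximates n ~ 2^c - 1.

    Each arriving event increments c only with probability 2^-c, so an
    8-bit register can track counts far beyond 255, trading space for a
    bounded relative error (the estimate 2^c - 1 is unbiased).
    """
    def __init__(self, rng):
        self.c = 0
        self.rng = rng

    def increment(self):
        if self.rng.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        return 2 ** self.c - 1

n = 50_000
# Averaging several independent counters tightens the estimate.
counters = [MorrisCounter(random.Random(seed)) for seed in range(32)]
for _ in range(n):
    for m in counters:
        m.increment()
est = sum(m.estimate() for m in counters) / len(counters)
print(f"true={n} estimate={est:.0f}")
```

A single counter has a relative error on the order of 70%; averaging 32 of them brings it down by roughly a factor of sqrt(32), which is the same precision-for-space trade-off the counter-array estimators above exploit at scale.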
{ "cite_N": [ "@cite_37", "@cite_34", "@cite_7" ], "mid": [ "2064710146", "2101583649", "" ], "abstract": [ "It is possible to use a small counter to keep approximate counts of large numbers. The resulting expected error can be rather precisely controlled. An example is given in which 8-bit counters (bytes) are used to keep track of as many as 130,000 events with a relative error which is substantially independent of the number n of events. This relative error can be expected to be 24 percent or less 95 percent of the time (i.e. s = n 8). The techniques could be used to advantage in multichannel counting hardware or software used for the monitoring of experiments or processes.", "The need for efficient counter architecture has arisen for the following two reasons. Firstly, a number of data streaming algorithms and network management applications require a large number of counters in order to identify important traffic characteristics. And secondly, at high speeds, current memory devices have significant limitations in terms of speed (DRAM) and size (SRAM). For some applications no information on counters is needed on a per-packet basis and several methods have been proposed to handle this problem with low SRAM memory requirements. However, for a number of applications it is essential to have the counter information on every packet arrival. In this paper we propose two, computationally and memory efficient, randomized algorithms for approximating the counter values. We prove that proposed estimators are unbiased and give variance bounds. A case study on multistage filters (MSF) over the real Internet traces shows a significant improvement by using the active counters architecture.", "" ] }
1606.01280
2413965833
Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model, which we call DeNSe (as shorthand for Dependency Neural Selection), produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, DeNSe generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DeNSe on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.
Graph-based dependency parsers employ a model for scoring possible dependency graphs for a given sentence. The graphs are typically factored into their component arcs, and the score of a tree is defined as the sum of its arc scores. This factorization enables tractable search for the highest scoring graph structure, which is commonly formulated as the search for the maximum spanning tree (MST). The Chu-Liu-Edmonds algorithm @cite_46 @cite_41 @cite_13 is often used to extract the MST in the case of non-projective trees, and the Eisner algorithm @cite_26 @cite_54 in the case of projective trees. During training, weight parameters of the scoring function can be learned with margin-based algorithms @cite_27 @cite_23 or the structured perceptron @cite_53 @cite_40 . Beyond basic first-order models, the literature offers a few examples of higher-order models involving sibling and grandparent relations @cite_48 @cite_36 @cite_50 . Although more expressive, such models render both training and inference more challenging.
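The arc-factored scoring described above can be sketched with a toy exhaustive search: each candidate head vector is checked for acyclicity and scored as the sum of its arc scores. Real parsers replace the enumeration with Chu-Liu-Edmonds or Eisner decoding; the scores below are made up for illustration.

```python
from itertools import product

def best_tree(score):
    """Exhaustive arc-factored decoding (a sketch): score[h][d] is the
    score of an arc from head h to dependent d, with node 0 as the
    artificial ROOT. A tree's score is the sum of its arc scores, as in
    first-order graph-based parsing; real parsers use the Chu-Liu-Edmonds
    or Eisner algorithms instead of enumerating all head assignments.
    """
    n = len(score)  # nodes 0..n-1
    best, best_heads = float("-inf"), None
    for heads in product(range(n), repeat=n - 1):
        # heads[d - 1] is the proposed head of dependent d
        ok = True
        for d in range(1, n):  # reject cycles: every word must reach ROOT
            seen, node = set(), d
            while node != 0:
                if node in seen:
                    ok = False
                    break
                seen.add(node)
                node = heads[node - 1]
            if not ok:
                break
        if not ok:
            continue
        total = sum(score[heads[d - 1]][d] for d in range(1, n))
        if total > best:
            best, best_heads = total, list(heads)
    return best_heads, best

# Toy 3-word sentence with made-up arc scores, score[h][d].
score = [
    [0, 5, 1, 1],  # ROOT -> w1 is strong
    [0, 0, 6, 2],  # w1 -> w2 is strong
    [0, 1, 0, 4],  # w2 -> w3 is strong
    [0, 1, 1, 0],
]
heads, total = best_tree(score)
print(heads, total)  # -> [0, 1, 2] 15
```

Because the objective decomposes over arcs, the winning tree is exactly the set of compatible arcs with maximal total score, which is why MST algorithms apply directly.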
{ "cite_N": [ "@cite_26", "@cite_41", "@cite_48", "@cite_54", "@cite_53", "@cite_36", "@cite_27", "@cite_40", "@cite_23", "@cite_50", "@cite_46", "@cite_13" ], "mid": [ "", "", "47641349", "1586368951", "", "1903393809", "2157791002", "2008652694", "1714704734", "1639395665", "1602191431", "2122922578" ], "abstract": [ "", "", "We present experiments with a dependency parsing model defined on rich factors. Our model represents dependency trees with factors that include three types of relations between the tokens of a dependency and their children. We extend the projective parsing algorithm of Eisner (1996) for our case, and train models using the averaged perceptron. Our experiments show that considering higher-order information yields significant improvements in parsing accuracy, but comes at a high cost in terms of both time and memory consumption. In the multilingual exercise of the CoNLL-2007 shared task (, 2007), our system obtains the best accuracy for English, and the second best accuracies for Basque and Czech.", "This chapter introduces weighted bilexical grammars, a formalism in which individual lexical items, such as verbs and their arguments, can have idiosyncratic selectional influences on each other. Such ‘bilexicalism’ has been a theme of much current work in parsing. The new formalism can be used to describe bilexical approaches to both dependency and phrase-structure grammars, and a slight modification yields link grammars. Its scoring approach is compatible with a wide variety of probability models.", "", "This paper introduces algorithms for non-projective parsing based on dual decomposition. We focus on parsing algorithms for non-projective head automata, a generalization of head-automata models to non-projective structures. The dual decomposition algorithms are simple and efficient, relying on standard dynamic programming and minimum spanning tree algorithms. They provably solve an LP relaxation of the non-projective parsing problem. 
Empirically the LP relaxation is very often tight: for many languages, exact solutions are achieved on over 98 of test sentences. The accuracy of our models is higher than previous work on a broad range of datasets.", "In this paper we describe the algorithmic implementation of multiclass kernel-based vector machines. Our starting point is a generalized notion of the margin to multiclass problems. Using this notion we cast multiclass categorization problems as a constrained optimization problem with a quadratic objective function. Unlike most of previous approaches which typically decompose a multiclass problem into multiple independent binary classification tasks, our notion of margin yields a direct method for training multiclass predictors. By using the dual of the optimization problem we are able to incorporate kernels with a compact set of constraints and decompose the dual problem into multiple optimization problems of reduced size. We describe an efficient fixed-point algorithm for solving the reduced optimization problems and prove its convergence. We then discuss technical details that yield significant running time improvements for large datasets. Finally, we describe various experiments with our approach comparing it to previously studied kernel-based methods. Our experiments indicate that for multiclass problems we attain state-of-the-art accuracy.", "We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. 
We give experimental results on part-of-speech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximum-entropy tagger.", "In this paper we study a paradigm to generalize online classification algorithms for binary classification problems to multiclass problems. The particular hypotheses we investigate maintain one prototype vector per class. Given an input instance, a multiclass hypothesis computes a similarity-score between each prototype and the input instance and sets the predicted label to be the index of the prototype achieving the highest similarity. To design and analyze the learning algorithms in this paper we introduce the notion of ultraconservativeness. Ultraconservative algorithms are algorithms that update only the prototypes attaining similarity-scores which are higher than the score of the correct label's prototype. We start by describing a family of additive ultraconservative algorithms where each algorithm in the family updates its prototypes by finding a feasible solution for a set of linear constraints that depend on the instantaneous similarity-scores. We then discuss a specific online algorithm that seeks a set of prototypes which have a small norm. The resulting algorithm, which we term MIRA (for Margin Infused Relaxed Algorithm) is ultraconservative as well. We derive mistake bounds for all the algorithms and provide further analysis of MIRA using a generalized notion of the margin for multiclass problems. We discuss the form the algorithms take in the binary case and show that all the algorithms from the first family reduce to the Perceptron algorithm while MIRA provides a new Perceptron-like algorithm with a margin-dependent learning rate. We then return to multiclass problems and describe an analogous multiplicative family of algorithms with corresponding mistake bounds. We end the formal part by deriving and analyzing a multiclass version of Li and Long's ROMMA algorithm. 
We conclude with a discussion of experimental results that demonstrate the merits of our algorithms.", "State-of-the-art graph-based parsers use features over higher-order dependencies that rely on decoding algorithms that are slow and difficult to generalize. On the other hand, transition-based dependency parsers can easily utilize such features without increasing the linear complexity of the shift-reduce system beyond a constant. In this paper, we attempt to address this imbalance for graph-based parsing by generalizing the Eisner (1996) algorithm to handle arbitrary features over higher-order dependencies. The generalization is at the cost of asymptotic efficiency. To account for this, cube pruning for decoding is utilized (Chiang, 2007). For the first time, label tuple and structural features such as valencies can be scored efficiently with third-order features in a graph-based parser. Our parser achieves the state-of-art unlabeled accuracy of 93.06 and labeled accuracy of 91.86 on the standard test set for English, at a faster speed than a reimplementation of the third-order model of (2010).", "", "We formalize weighted dependency parsing as searching for maximum spanning trees (MSTs) in directed graphs. Using this representation, the parsing algorithm of Eisner (1996) is sufficient for searching over all projective trees in O(n3) time. More surprisingly, the representation is extended naturally to non-projective parsing using Chu-Liu-Edmonds (Chu and Liu, 1965; Edmonds, 1967) MST algorithm, yielding an O(n2) parsing algorithm. We evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (, 2003; , 2005) and show that MST parsing increases efficiency and accuracy for languages with non-projective dependencies." ] }
1606.01280
2413965833
Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model, which we call DeNSe (as shorthand for Dependency Neural Selection), produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, DeNSe generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DeNSe on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.
As the term implies, transition-based parsers conceptualize the process of transforming a sentence into a dependency tree as a sequence of transitions. A transition system typically includes a stack for storing partially processed tokens, a buffer containing the remaining input, and a set of arcs containing all dependencies between tokens that have been added so far @cite_52 @cite_12 . A dependency tree is constructed by manipulating the stack and buffer, and appending arcs with predetermined operations. Most popular parsers employ an arc-standard @cite_32 @cite_43 or arc-eager transition system @cite_25 . Extensions of the latter include the use of non-local training methods to avoid greedy error propagation @cite_37 @cite_39 @cite_5 @cite_15 .
{ "cite_N": [ "@cite_37", "@cite_32", "@cite_52", "@cite_39", "@cite_43", "@cite_5", "@cite_15", "@cite_25", "@cite_12" ], "mid": [ "1598566484", "181643614", "2251949442", "2124861326", "2030904529", "2114887620", "595069947", "2105847779", "" ], "abstract": [ "A computer for implementing an event driven algorithm which is used in conjunction with a master computer is disclosed. The computer includes a plurality of processors coupled in a ring arrangement each of which is microprogrammable. Each processor includes a memory and a memory address generator. The generator can generate addresses based on a combination of signals from both the microcode and signals on the data bus.", "In this paper, we propose a method for analyzing word-word dependencies using deterministic bottom-up manner using Support Vector machines. We experimented with dependency trees converted from Penn treebank data, and achieved over 90 accuracy of word-word dependency. Though the result is little worse than the most up-to-date phrase structure based parsers, it looks satisfactorily accurate considering that our parser uses no information from phrase structures.", "Most existing graph-based parsing models rely on millions of hand-crafted features, which limits their generalization ability and slows down the parsing speed. In this paper, we propose a general and effective Neural Network model for graph-based dependency parsing. Our model can automatically learn high-order feature combinations using only atomic features by exploiting a novel activation function tanhcube. Moreover, we propose a simple yet effective way to utilize phrase-level information that is expensive to use in conventional graph-based parsers. 
Experiments on the English Penn Treebank show that parsers based on our model perform better than conventional graph-based parsers.", "Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging \"equivalent\" stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster.", "Deterministic dependency parsing is a robust and efficient approach to syntactic parsing of unrestricted natural language text. In this paper, we analyze its potential for incremental processing and conclude that strict incrementality is not achievable within this framework. However, we also show that it is possible to minimize the number of structures that require non-incremental processing by choosing an optimal parsing algorithm. This claim is substantiated with experimental evidence showing that the algorithm achieves incremental parsing for 68.9 of the input when tested on a random sample of Swedish text. When restricted to sentences that are accepted by the parser, the degree of incrementality increases to 87.9 .", "Transition-based dependency parsers generally use heuristic decoding algorithms but can accommodate arbitrarily rich feature representations. In this paper, we show that we can improve the accuracy of such parsers by considering even richer feature sets than those employed in previous systems. 
In the standard Penn Treebank setup, our novel features improve attachment score form 91.4 to 92.9 , giving the best results so far for transition-based parsing and rivaling the best results overall. For the Chinese Treebank, they give a signficant improvement of the state of the art. An open source release of our parser is freely available.", "The standard training regime for transition-based dependency parsers makes use of an oracle, which predicts an optimal transition sequence for a sentence and its gold tree. We present an improved oracle for the arc-eager transition system, which provides a set of optimal transitions for every valid parser configuration, including configurations from which the gold tree is not reachable. In such cases, the oracle provides transitions that will lead to the best reachable tree from the given configuration. The oracle is efficient to implement and provably correct. We use the oracle to train a deterministic left-to-right dependency parser that is less sensitive to error propagation, using an online training procedure that also explores parser configurations resulting from non-optimal sequences of transitions. This new parser outperforms greedy parsers trained using conventional oracles on a range of data sets, with an average improvement of over 1.2 LAS points and up to almost 3 LAS points on some data sets.", "Parsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammars. Nevertheless, it has been shown that such algorithms, combined with treebank-induced classifiers, can be used to build highly accurate disambiguating parsers, in particular for dependency-based syntactic representations. 
In this article, we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing, formalized as transition systems. We then describe and analyze two families of such algorithms: stack-based and list-based algorithms. In the former family, which is restricted to projective dependency structures, we describe an arc-eager and an arc-standard variant; in the latter family, we present a projective and a non-projective variant. For each of the four algorithms, we give proofs of correctness and complexity. In addition, we perform an experimental evaluation of all algorithms in combination with SVM classifiers for predicting the next parsing action, using data from thirteen languages. We show that all four algorithms give competitive accuracy, although the non-projective list-based algorithm generally outperforms the projective algorithms for languages with a non-negligible proportion of non-projective constructions. However, the projective algorithms often produce comparable results when combined with the technique known as pseudo-projective parsing. The linear time complexity of the stack-based algorithms gives them an advantage with respect to efficiency both in learning and in parsing, but the projective list-based algorithm turns out to be equally efficient in practice. Moreover, when the projective algorithms are used to implement pseudo-projective parsing, they sometimes become less efficient in parsing (but not in learning) than the non-projective list-based algorithm. Although most of the algorithms have been partially described in the literature before, this is the first comprehensive analysis and evaluation of the algorithms within a unified framework.", "" ] }
1606.01280
2413965833
Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model, which we call DeNSe (as shorthand for Dependency Neural Selection), produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, DeNSe generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DeNSe on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.
In an arc-standard system @cite_32 @cite_43 , the transitions include a shift operation, which removes the first word in the buffer and pushes it onto the stack; a left-arc operation, which adds an arc from the word at the beginning of the buffer to the word on top of the stack; and a right-arc operation, which adds an arc from the word on top of the stack to the word at the beginning of the buffer. During parsing, the transition from one configuration to the next is greedily scored with a linear classifier whose features are defined according to the stack and buffer. The above arc-standard system builds a projective dependency tree bottom up, with the assumption that an arc is only added when the dependent node has already found all its dependents. Extensions include the arc-eager system @cite_25 , which always adds an arc at the earliest possible stage, a more elaborate (reduce) action space to handle non-projective parsing @cite_14 , and the use of non-local training methods to avoid greedy error propagation @cite_37 @cite_39 @cite_5 @cite_15 .
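The shift/left-arc/right-arc system described above can be sketched with a static oracle that follows a gold projective tree; a real parser would instead predict each transition with a classifier over stack and buffer features. The toy sentence and the ROOT-at-end-of-buffer bookkeeping are illustrative choices, not details from the cited systems.

```python
def parse(words, gold_heads):
    """Arc-standard parsing driven by a static oracle (a sketch).

    Words are 1-indexed; head 0 is the artificial ROOT, placed at the
    end of the buffer so the root word attaches with a final left-arc.
    Assumes a projective gold tree.
    """
    stack = []
    buffer = list(range(1, len(words) + 1)) + [0]
    arcs = {}  # dependent -> head

    def has_all_children(h):
        # bottom-up constraint: h may become a dependent only after all
        # of its own gold dependents have been attached to it
        return all(arcs.get(d) == h
                   for d in range(1, len(words) + 1)
                   if gold_heads[d - 1] == h)

    while buffer:
        b = buffer[0]
        if stack and gold_heads[stack[-1] - 1] == b and has_all_children(stack[-1]):
            arcs[stack.pop()] = b        # left-arc: buffer front heads stack top
        elif (stack and b != 0 and gold_heads[b - 1] == stack[-1]
              and has_all_children(b)):
            arcs[b] = stack[-1]          # right-arc: stack top heads buffer front
            buffer[0] = stack.pop()      # the head returns to the buffer
        elif b == 0:
            break                        # only ROOT remains
        else:
            stack.append(buffer.pop(0))  # shift

    return [arcs[d] for d in range(1, len(words) + 1)]

# "She reads books": reads (2) is the root; She and books depend on it.
heads = parse(["She", "reads", "books"], gold_heads=[2, 0, 2])
print(heads)  # -> [2, 0, 2]
```

Running the oracle on a treebank in this way is also how training data for the greedy transition classifier is generated: each (configuration, transition) pair becomes one classification example.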
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_32", "@cite_39", "@cite_43", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "1598566484", "", "181643614", "2124861326", "2030904529", "2114887620", "595069947", "2105847779" ], "abstract": [ "A computer for implementing an event driven algorithm which is used in conjunction with a master computer is disclosed. The computer includes a plurality of processors coupled in a ring arrangement each of which is microprogrammable. Each processor includes a memory and a memory address generator. The generator can generate addresses based on a combination of signals from both the microcode and signals on the data bus.", "", "In this paper, we propose a method for analyzing word-word dependencies using deterministic bottom-up manner using Support Vector machines. We experimented with dependency trees converted from Penn treebank data, and achieved over 90 accuracy of word-word dependency. Though the result is little worse than the most up-to-date phrase structure based parsers, it looks satisfactorily accurate considering that our parser uses no information from phrase structures.", "Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging \"equivalent\" stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster.", "Deterministic dependency parsing is a robust and efficient approach to syntactic parsing of unrestricted natural language text. 
In this paper, we analyze its potential for incremental processing and conclude that strict incrementality is not achievable within this framework. However, we also show that it is possible to minimize the number of structures that require non-incremental processing by choosing an optimal parsing algorithm. This claim is substantiated with experimental evidence showing that the algorithm achieves incremental parsing for 68.9 of the input when tested on a random sample of Swedish text. When restricted to sentences that are accepted by the parser, the degree of incrementality increases to 87.9 .", "Transition-based dependency parsers generally use heuristic decoding algorithms but can accommodate arbitrarily rich feature representations. In this paper, we show that we can improve the accuracy of such parsers by considering even richer feature sets than those employed in previous systems. In the standard Penn Treebank setup, our novel features improve attachment score form 91.4 to 92.9 , giving the best results so far for transition-based parsing and rivaling the best results overall. For the Chinese Treebank, they give a signficant improvement of the state of the art. An open source release of our parser is freely available.", "The standard training regime for transition-based dependency parsers makes use of an oracle, which predicts an optimal transition sequence for a sentence and its gold tree. We present an improved oracle for the arc-eager transition system, which provides a set of optimal transitions for every valid parser configuration, including configurations from which the gold tree is not reachable. In such cases, the oracle provides transitions that will lead to the best reachable tree from the given configuration. The oracle is efficient to implement and provably correct. 
We use the oracle to train a deterministic left-to-right dependency parser that is less sensitive to error propagation, using an online training procedure that also explores parser configurations resulting from non-optimal sequences of transitions. This new parser outperforms greedy parsers trained using conventional oracles on a range of data sets, with an average improvement of over 1.2 LAS points and up to almost 3 LAS points on some data sets.", "Parsing algorithms that process the input from left to right and construct a single derivation have often been considered inadequate for natural language parsing because of the massive ambiguity typically found in natural language grammars. Nevertheless, it has been shown that such algorithms, combined with treebank-induced classifiers, can be used to build highly accurate disambiguating parsers, in particular for dependency-based syntactic representations. In this article, we first present a general framework for describing and analyzing algorithms for deterministic incremental dependency parsing, formalized as transition systems. We then describe and analyze two families of such algorithms: stack-based and list-based algorithms. In the former family, which is restricted to projective dependency structures, we describe an arc-eager and an arc-standard variant; in the latter family, we present a projective and a non-projective variant. For each of the four algorithms, we give proofs of correctness and complexity. In addition, we perform an experimental evaluation of all algorithms in combination with SVM classifiers for predicting the next parsing action, using data from thirteen languages. We show that all four algorithms give competitive accuracy, although the non-projective list-based algorithm generally outperforms the projective algorithms for languages with a non-negligible proportion of non-projective constructions. 
However, the projective algorithms often produce comparable results when combined with the technique known as pseudo-projective parsing. The linear time complexity of the stack-based algorithms gives them an advantage with respect to efficiency both in learning and in parsing, but the projective list-based algorithm turns out to be equally efficient in practice. Moreover, when the projective algorithms are used to implement pseudo-projective parsing, they sometimes become less efficient in parsing (but not in learning) than the non-projective list-based algorithm. Although most of the algorithms have been partially described in the literature before, this is the first comprehensive analysis and evaluation of the algorithms within a unified framework." ] }
1606.01280
2413965833
Conventional graph-based dependency parsers guarantee a tree structure both during training and inference. Instead, we formalize dependency parsing as the problem of independently selecting the head of each word in a sentence. Our model, which we call DeNSe (as shorthand for Dependency Neural Selection), produces a distribution over possible heads for each word using features obtained from a bidirectional recurrent neural network. Without enforcing structural constraints during training, DeNSe generates (at inference time) trees for the overwhelming majority of sentences, while non-tree outputs can be adjusted with a maximum spanning tree algorithm. We evaluate DeNSe on four languages (English, Chinese, Czech, and German) with varying degrees of non-projectivity. Despite the simplicity of the approach, our parsers are on par with the state of the art.
Neural network representations have a long history in syntactic parsing @cite_21 @cite_56 @cite_20 . Recent work uses neural networks in lieu of the linear classifiers typically employed in conventional transition- or graph-based dependency parsers. For example, one line of work uses a feed-forward neural network to learn features for a transition-based parser, whereas another does the same for a graph-based parser. A further approach applies tensor decomposition to obtain word embeddings in their syntactic roles, which are subsequently used in a graph-based parser. Other work redesigns components of a transition-based system so that the buffer, stack, and action sequences are modeled separately with stack long short-term memory networks (LSTMs); the hidden states of these LSTMs are concatenated and used as features for a final transition classifier. Bidirectional LSTMs have also been used to extract features for transition- and graph-based parsers, as well as to build a greedy arc-standard parser from similar features.
{ "cite_N": [ "@cite_21", "@cite_20", "@cite_56" ], "mid": [ "", "2149337887", "2098050104" ], "abstract": [ "", "We introduce a framework for syntactic parsing with latent variables based on a form of dynamic Sigmoid Belief Networks called Incremental Sigmoid Belief Networks. We demonstrate that a previous feed-forward neural network parsing model can be viewed as a coarse approximation to inference with this class of graphical model. By constructing a more accurate but still tractable approximation, we significantly improve parsing accuracy, suggesting that ISBNs provide a good idealization for parsing. This generative model of parsing achieves state-of-theart results on WSJ text and 8 error reduction over the baseline neural network parser.", "Discriminative methods have shown significant improvements over traditional generative methods in many machine learning applications, but there has been difficulty in extending them to natural language parsing. One problem is that much of the work on discriminative methods conflates changes to the learning method with changes to the parameterization of the problem. We show how a parser can be trained with a discriminative learning method while still parameterizing the problem according to a generative probability model. We present three methods for training a neural network to estimate the probabilities for a statistical parser, one generative, one discriminative, and one where the probability model is generative but the training criteria is discriminative. The latter model outperforms the previous two, achieving state-of-the-art levels of performance (90.1 F-measure on constituents)." ] }
1606.01584
2410216425
Machine learning is used in a number of security-related applications such as biometric user authentication and speaker identification. A type of causative integrity attack against machine learning, called a poisoning attack, works by injecting specially crafted data points into the training data so as to increase the false positive rate of the classifier. In the context of biometric authentication, this means that more intruders will be classified as valid users, and in the case of a speaker identification system, user A will be classified as user B. In this paper, we examine poisoning attacks against SVM and introduce Curie, a method to protect the SVM classifier from the poisoning attack. The basic idea of our method is to identify the poisoned data points injected by the adversary and filter them out. Our method is lightweight and can be easily integrated into existing systems. Experimental results show that it works very well in filtering out the poisoned data.
In this section, we will discuss some background related to poisoning attacks against machine learning algorithms. We will also discuss the poisoning attacks against SVM proposed in @cite_2 and by Xiao @cite_5 , and some proposed solutions to protect against such attacks.
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2399587145", "2949506549" ], "abstract": [ "To develop a robust classification algorithm in the adversarial setting, it is important to understand the adversary's strategy. We address the problem of label flips attack where an adversary contaminates the training set through flipping labels. By analyzing the objective of the adversary, we formulate an optimization framework for finding the label flips that maximize the classification error. An algorithm for attacking support vector machines is derived. Experiments demonstrate that the accuracy of classifiers is significantly degraded under the attack.", "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM's test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM's decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier's test error." ] }
1606.01621
2951262072
Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics is directly modeled in the loss function. Our model incorporates joint learning of meaningful photographic attributes and image content information which can help regularize the complicated photo aesthetics rating problem. To train and analyze this model, we have assembled a new aesthetics and attributes database (AADB) which contains aesthetic scores and meaningful attributes assigned to each image by multiple human raters. Anonymized rater identities are recorded across images, allowing us to exploit intra-rater consistency using a novel sampling strategy when computing the ranking loss of training image pairs. We show the proposed sampling strategy is very effective and robust in the face of subjective judgement of image aesthetics by individuals with different aesthetic tastes. Experiments demonstrate that our unified model can generate aesthetic rankings that are more consistent with human ratings. To further validate our model, we show that by simply thresholding the estimated aesthetic scores, we are able to achieve state-of-the-art classification performance on the existing AVA dataset benchmark.
In @cite_17 @cite_19 @cite_8 , CNN-based methods are proposed for classifying images into high- or low-aesthetic categories. The authors also show that using patches from the original high-resolution images largely improves the performance. In contrast, our approach formulates aesthetic prediction as a combined regression and ranking problem. Rather than using patches, our architecture warps the whole input image in order to minimize the overall network size and computational workload, while retaining compositional elements in the image, such as the rule of thirds, which are lost in patch-based approaches.
{ "cite_N": [ "@cite_19", "@cite_8", "@cite_17" ], "mid": [ "2051596736", "2217895792", "2009678853" ], "abstract": [ "In this work we describe a Convolutional Neural Network (CNN) to accurately predict image quality without a reference image. Taking image patches as input, the CNN works in the spatial domain without using hand-crafted features that are employed by most previous methods. The network consists of one convolutional layer with max and min pooling, two fully connected layers and an output node. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating image quality. This approach achieves state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments. Further experiments on images with local distortions demonstrate the local quality estimation ability of our CNN, which is rarely reported in previous literature.", "This paper investigates problems of image style, aesthetics, and quality estimation, which require fine-grained details from high-resolution images, utilizing deep neural network training approach. Existing deep convolutional neural networks mostly extracted one patch such as a down-sized crop from each image as a training example. However, one patch may not always well represent the entire image, which may cause ambiguity during training. We propose a deep multi-patch aggregation network training approach, which allows us to train models using multiple patches generated from one image. We achieve this by constructing multiple, shared columns in the neural network and feeding multiple patches to each of the columns. More importantly, we propose two novel network layers (statistics and sorting) to support aggregation of those patches. The proposed deep multi-patch aggregation network integrates shared feature learning and aggregation function learning into a unified framework. 
We demonstrate the effectiveness of the deep multi-patch aggregation network on the three problems, i.e., image style recognition, aesthetic quality categorization, and image quality estimation. Our models trained using the proposed networks significantly outperformed the state of the art in all three applications.", "Effective visual features are essential for computational aesthetic quality rating systems. Existing methods used machine learning and statistical modeling techniques on handcrafted features or generic image descriptors. A recently-published large-scale dataset, the AVA dataset, has further empowered machine learning based approaches. We present the RAPID (RAting PIctorial aesthetics using Deep learning) system, which adopts a novel deep neural network approach to enable automatic feature learning. The central idea is to incorporate heterogeneous inputs generated from the image, which include a global view and a local view, and to unify the feature learning and classifier training using a double-column deep convolutional neural network. In addition, we utilize the style attributes of images to help improve the aesthetic quality categorization accuracy. Experimental results show that our approach significantly outperforms the state of the art on the AVA dataset." ] }
1606.01621
2951262072
Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics is directly modeled in the loss function. Our model incorporates joint learning of meaningful photographic attributes and image content information which can help regularize the complicated photo aesthetics rating problem. To train and analyze this model, we have assembled a new aesthetics and attributes database (AADB) which contains aesthetic scores and meaningful attributes assigned to each image by multiple human raters. Anonymized rater identities are recorded across images, allowing us to exploit intra-rater consistency using a novel sampling strategy when computing the ranking loss of training image pairs. We show the proposed sampling strategy is very effective and robust in the face of subjective judgement of image aesthetics by individuals with different aesthetic tastes. Experiments demonstrate that our unified model can generate aesthetic rankings that are more consistent with human ratings. To further validate our model, we show that by simply thresholding the estimated aesthetic scores, we are able to achieve state-of-the-art classification performance on the existing AVA dataset benchmark.
Some recent works have explored the use of high-level describable attributes @cite_3 @cite_2 @cite_17 for image aesthetics classification. In early work, these attributes were modeled using hand-crafted features @cite_3 . This introduces some intrinsic problems, since (1) engineering features that capture high-level semantic attributes is a difficult task, and (2) the choice of describable attributes may ignore some aspects of the image which are relevant to the overall image aesthetics. For these reasons, Marchesotti et al. propose to automatically select a large number of useful attributes based on textual comments from raters @cite_27 and model these attributes using generic features @cite_0 . Despite good performance, many of the discovered textual attributes ( @math , @math , @math , @math , @math ) do not correspond to well-defined visual characteristics, which hinders their detectability and utility in applications. Perhaps the closest work to our approach is that of Lu et al., who propose to learn several meaningful style attributes @cite_17 in a CNN framework and use the hidden features to regularize aesthetics classification network training.
{ "cite_N": [ "@cite_3", "@cite_0", "@cite_27", "@cite_2", "@cite_17" ], "mid": [ "2080754665", "2048835603", "2397855462", "2080948161", "2009678853" ], "abstract": [ "With the rise in popularity of digital cameras, the amount of visual data available on the web is growing exponentially. Some of these pictures are extremely beautiful and aesthetically pleasing, but the vast majority are uninteresting or of low quality. This paper demonstrates a simple, yet powerful method to automatically select high aesthetic quality images from large image collections. Our aesthetic quality estimation method explicitly predicts some of the possible image cues that a human might use to evaluate an image and then uses them in a discriminative approach. These cues or high level describable image attributes fall into three broad types: 1) compositional attributes related to image layout or configuration, 2) content attributes related to the objects or scene types depicted, and 3) sky-illumination attributes related to the natural lighting conditions. We demonstrate that an aesthetics classifier trained on these describable attributes can provide a significant improvement over baseline methods for predicting human quality judgments. We also demonstrate our method for predicting the “interestingness” of Flickr photos, and introduce a novel problem of estimating query specific “interestingness”.", "In this paper, we automatically assess the aesthetic properties of images. In the past, this problem has been addressed by hand-crafting features which would correlate with best photographic practices (e.g. “Does this image respect the rule of thirds?”) or with photographic techniques (e.g. “Is this image a macro?”). We depart from this line of research and propose to use generic image descriptors to assess aesthetic quality. 
We experimentally show that the descriptors we use, which aggregate statistics computed from low-level local features, implicitly encode the aesthetic properties explicitly used by state-of-the-art methods and outperform them by a significant margin.", "Current approaches to aesthetic image analysis either provide accurate or interpretable results. To get both accuracy and interpretability, we advocate the use of learned visual attributes as mid-level features. For this purpose, we propose to discover and learn the visual appearance of attributes automatically, using the recently introduced AVA database which contains more than 250,000 images together with their user ratings and textual comments. These learned attributes have many applications including aesthetic quality prediction, image classification and retrieval.", "Aesthetic image analysis is the study and assessment of the aesthetic properties of images. Current computational approaches to aesthetic image analysis either provide accurate or interpretable results. To obtain both accuracy and interpretability by humans, we advocate the use of learned and nameable visual attributes as mid-level features. For this purpose, we propose to discover and learn the visual appearance of attributes automatically, using a recently introduced database, called AVA, which contains more than 250,000 images together with their aesthetic scores and textual comments given by photography enthusiasts. We provide a detailed analysis of these annotations as well as the context in which they were given. We then describe how these three key components of AVA--images, scores, and comments--can be effectively leveraged to learn visual attributes. Lastly, we show that these learned attributes can be successfully used in three applications: aesthetic quality prediction, image tagging and retrieval.", "Effective visual features are essential for computational aesthetic quality rating systems. 
Existing methods used machine learning and statistical modeling techniques on handcrafted features or generic image descriptors. A recently-published large-scale dataset, the AVA dataset, has further empowered machine learning based approaches. We present the RAPID (RAting PIctorial aesthetics using Deep learning) system, which adopts a novel deep neural network approach to enable automatic feature learning. The central idea is to incorporate heterogeneous inputs generated from the image, which include a global view and a local view, and to unify the feature learning and classifier training using a double-column deep convolutional neural network. In addition, we utilize the style attributes of images to help improve the aesthetic quality categorization accuracy. Experimental results show that our approach significantly outperforms the state of the art on the AVA dataset." ] }
1606.01621
2951262072
Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics in which the relative ranking of photo aesthetics is directly modeled in the loss function. Our model incorporates joint learning of meaningful photographic attributes and image content information which can help regularize the complicated photo aesthetics rating problem. To train and analyze this model, we have assembled a new aesthetics and attributes database (AADB) which contains aesthetic scores and meaningful attributes assigned to each image by multiple human raters. Anonymized rater identities are recorded across images, allowing us to exploit intra-rater consistency using a novel sampling strategy when computing the ranking loss of training image pairs. We show the proposed sampling strategy is very effective and robust in the face of subjective judgement of image aesthetics by individuals with different aesthetic tastes. Experiments demonstrate that our unified model can generate aesthetic rankings that are more consistent with human ratings. To further validate our model, we show that by simply thresholding the estimated aesthetic scores, we are able to achieve state-of-the-art classification performance on the existing AVA dataset benchmark.
To make use of image content information such as scene categories or choice of photographic subject, Luo et al. propose to segment regions and extract visual features based on the categorization of photo content @cite_28 . Other work, such as @cite_16 @cite_4 , has also demonstrated that image content is useful for aesthetics analysis. However, it has been assumed that the category labels are provided both during training and testing. To our knowledge, there is only one paper @cite_29 that attempts to jointly predict content semantics and aesthetics labels. In @cite_29 , Murray et al. propose to rank images w.r.t. aesthetics in a three-way classification problem (high-, medium- and low-aesthetics quality). However, their work has some limitations because (1) deciding the thresholds between nearby classes is non-trivial, and (2) the final classification model outputs a hard label, which is less useful than a continuous rating.
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_16", "@cite_4" ], "mid": [ "1997095443", "2036878610", "2078807908", "" ], "abstract": [ "Automatically assessing photo quality from the perspective of visual aesthetics is of great interest in high-level vision research and has drawn much attention in recent years. In this paper, we propose content-based photo quality assessment using regional and global features. Under this framework, subject areas, which draw the most attentions of human eyes, are first extracted. Then regional features extracted from subject areas and the background regions are combined with global features to assess the photo quality. Since professional photographers may adopt different photographic techniques and may have different aesthetic criteria in mind when taking different types of photos (e.g. landscape versus portrait), we propose to segment regions and extract visual features in different ways according to the categorization of photo content. Therefore we divide the photos into seven categories based on their content and develop a set of new subject area extraction methods and new visual features, which are specially designed for different categories. This argument is supported by extensive experimental comparisons of existing photo quality assessment approaches as well as our new regional and global features over different categories of photos. Our new features significantly outperform the state-of-the-art methods. Another contribution of this work is to construct a large and diversified benchmark database for the research of photo quality assessment. It includes 17, 613 photos with manually labeled ground truth.", "Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. 
In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently published large-scale dataset and we show that the second family of techniques significantly outperforms the first one.", "With the ever-expanding volume of visual content available, the ability to organize and navigate such content by aesthetic preference is becoming increasingly important. While still in its nascent stage, research into computational models of aesthetic preference already shows great potential. However, to advance research, realistic, diverse and challenging databases are needed. To this end, we introduce a new large-scale database for conducting Aesthetic Visual Analysis: AVA. It contains over 250,000 images along with a rich variety of meta-data including a large number of aesthetic scores for each image, semantic labels for over 60 categories as well as labels related to photographic style. We show the advantages of AVA with respect to existing databases in terms of scale, diversity, and heterogeneity of annotations. We then describe several key insights into aesthetic preference afforded by AVA. Finally, we demonstrate, through three applications, how the large scale of AVA can be leveraged to improve performance on existing preference tasks.", "" ] }
1606.00950
2416803784
How can we find a good graph clustering of a real-world network, that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance.
During the past several decades, many approaches have been proposed for graph clustering, such as @cite_14 , @cite_0 , and @cite_1 . Due to space limitations, we can only review the closest approaches from the literature. For detailed reviews of graph clustering, please refer to @cite_7 .
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_1", "@cite_7" ], "mid": [ "2121947440", "2070232376", "2151936673", "" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445--452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993]. From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. 
In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan--Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.", "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.", "" ] }
1606.00950
2416803784
How can we find a good graph clustering of a real-world network, that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance.
Minimum-cut criterion based graph clustering refers to a class of well-known techniques which seek to partition a graph into disjoint subgraphs such that the number of cuts across the subgraphs is minimized. Wu and Leahy @cite_4 have proposed a clustering method based on such a minimum-cut criterion, where the cut between two subgraphs is computed as the total weight of the edges that have been removed. @math disjoint subgraphs are obtained by recursively finding the minimum cuts that bisect the existing segments. To avoid an unnatural bias towards splitting small-sized subgraphs based on the minimum-cut criterion, @cite_19 has been introduced, which uses the second smallest eigenvalue of the similarity matrix to find a suitable cut. In the same spirit, Shi and Malik @cite_0 have proposed to compute the cut cost as a fraction of the total edge connections to all the nodes in a graph. To optimize this criterion, a generalized eigenvalue decomposition was used to speed up computation time. In many cases, this class of graph clustering algorithms, which relies on the eigenvector decomposition of a similarity matrix, is also called spectral clustering.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_4" ], "mid": [ "2121947440", "2125531986", "2132603077" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "Partitioning of circuit netlists in VLSI design is considered. It is shown that the second smallest eigenvalue of a matrix derived from the netlist gives a provably good approximation of the optimal ratio cut partition cost. It is also demonstrated that fast Lanczos-type methods for the sparse symmetric eigenvalue problem are a robust basis for computing heuristic ratio cuts based on the eigenvector of this second eigenvalue. Effective clustering methods are an immediate by-product of the second eigenvector computation and are very successful on the difficult input classes proposed in the CAD literature. The intersection graph representation of the circuit netlist is considered, as a basis for partitioning, a heuristic based on spectral ratio cut partitioning of the netlist intersection graph is proposed. The partitioning heuristics were tested on industry benchmark suites, and the results were good in terms of both solution quality and runtime. Several types of algorithmic speedups and directions for future work are discussed. 
", "A novel graph theoretic approach for data clustering is presented and its application to the image segmentation problem is demonstrated. The data to be clustered are represented by an undirected adjacency graph G with arc capacities assigned to reflect the similarity between the linked vertices. Clustering is achieved by removing arcs of G to form mutually exclusive subgraphs such that the largest inter-subgraph maximum flow is minimized. For graphs of moderate size ( approximately 2000 vertices), the optimal solution is obtained through partitioning a flow and cut equivalent tree of G, which can be efficiently constructed using the Gomory-Hu algorithm (1961). However for larger graphs this approach is impractical. New theorems for subgraph condensation are derived and are then used to develop a fast algorithm which hierarchically constructs and partitions a partially equivalent tree of much reduced size. This algorithm results in an optimal solution equivalent to that obtained by partitioning the complete equivalent tree and is able to handle very large graphs with several hundred thousand vertices. The new clustering algorithm is applied to the image segmentation problem. The segmentation is achieved by effectively searching for closed contours of edge elements (equivalent to minimum cuts in G), which consist mostly of strong edges, while rejecting contours containing isolated strong edges. This method is able to accurately locate region boundaries and at the same time guarantees the formation of closed edge contours." ] }
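The normalized-cut relaxation described above reduces to an eigenvector computation. Below is a minimal, hypothetical sketch (not the cited authors' implementation) of spectral bisection with NumPy: build the normalized Laplacian from a similarity matrix and split the vertices by the sign of the eigenvector of the second-smallest eigenvalue (the Fiedler vector).

```python
import numpy as np

def spectral_bisect(W):
    """Bisect a weighted graph by the sign of the Fiedler vector.

    W is a symmetric (n, n) similarity matrix with zero diagonal. We form
    the normalized Laplacian I - D^{-1/2} W D^{-1/2}; its eigenvector for
    the second-smallest eigenvalue is the relaxed normalized-cut solution.
    """
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(W)) - d_isqrt[:, None] * W * d_isqrt[None, :]
    _, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1] >= 0            # sign split of the Fiedler vector

# Two triangles joined by one weak edge: the bisection should cut the bridge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1
side = spectral_bisect(W)
```

In practice one would use a sparse eigensolver rather than a dense `eigh`, but the two-cluster toy graph makes the sign split visible directly.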
1606.00950
2416803784
How can we find a good graph clustering of a real-world network, that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance.
Recently, modularity has been developed to measure the quality of a division of a network into communities. Unlike minimum-cut related approaches, which investigate the number of edges or the total edge weight between two subgroups, modularity identifies a good division by comparing the number of edges within communities against the number expected under a random null model. Modularity-based graph clustering methods @cite_12 , @cite_1 partition a network into groups such that the number of edges between two groups is significantly less than the number expected at random.
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2151936673", "2089458547" ], "abstract": [ "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as “modularity” over the possible divisions of a network. Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets.", "Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists." ] }
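The modularity criterion mentioned above has a simple closed form, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). As an illustrative sketch (not the optimization algorithms of @cite_12 or @cite_1, which search over divisions), it can be evaluated directly for a given partition:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q = (1/2m) sum_ij (A_ij - k_i k_j / 2m) [c_i == c_j]."""
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                    # degrees
    two_m = k.sum()                      # 2m: every edge counted twice
    expected = np.outer(k, k) / two_m    # null-model expectation
    same = labels[:, None] == labels[None, :]
    return ((A - expected) * same).sum() / two_m

# Two triangles plus one bridge edge, split along the bridge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
q_split = modularity(A, [0, 0, 0, 1, 1, 1])   # natural division
q_whole = modularity(A, [0, 0, 0, 0, 0, 0])   # trivial division gives Q = 0
```

Cutting along the bridge yields Q = 5/14 ≈ 0.357, while the trivial all-in-one division scores exactly zero, which is the sense in which modularity rewards denser-than-expected groups.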
1606.00950
2416803784
How can we find a good graph clustering of a real-world network, that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance.
Metis is a class of multi-level partitioning techniques proposed by Karypis and Kumar @cite_10 , @cite_14 . Graph clustering starts by constructing a sequence of successively smaller (coarser) graphs, and a bisection of the coarsest graph is computed. The bisection is then projected back to the finer graph at the next level, and at each level an iterative refinement algorithm such as Kernighan-Lin (KL) or Fiduccia-Mattheyses (FM) is used to further improve it. A more robust overall multilevel paradigm was introduced by Karypis and Kumar @cite_14 , which presents a powerful graph coarsening scheme. It uses simplified variants of KL and FM to speed up refinement without compromising overall quality.
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "2070232376", "2004951603" ], "abstract": [ "Recently, a number of researchers have investigated a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, partition the smaller graph, and then uncoarsen it to construct a partition for the original graph [Bui and Jones, Proc. of the 6th SIAM Conference on Parallel Processing for Scientific Computing, 1993, 445--452; Hendrickson and Leland, A Multilevel Algorithm for Partitioning Graphs, Tech. report SAND 93-1301, Sandia National Laboratories, Albuquerque, NM, 1993]. From the early work it was clear that multilevel techniques held great promise; however, it was not known if they can be made to consistently produce high quality partitions for graphs arising in a wide range of application domains. We investigate the effectiveness of many different choices for all three phases: coarsening, partition of the coarsest graph, and refinement. In particular, we present a new coarsening heuristic (called heavy-edge heuristic) for which the size of the partition of the coarse graph is within a small factor of the size of the final partition obtained after multilevel refinement. We also present a much faster variation of the Kernighan--Lin (KL) algorithm for refining during uncoarsening. We test our scheme on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that our scheme produces partitions that are consistently better than those produced by spectral partitioning schemes in substantially smaller time. 
Also, when our scheme is used to compute fill-reducing orderings for sparse matrices, it produces orderings that have substantially smaller fill than the widely used multiple minimum degree algorithm.", "In this paper, we present and study a class of graph partitioning algorithms that reduces the size of the graph by collapsing vertices and edges, we find ak-way partitioning of the smaller graph, and then we uncoarsen and refine it to construct ak-way partitioning for the original graph. These algorithms compute ak-way partitioning of a graphG= (V,E) inO(|E|) time, which is faster by a factor ofO(logk) than previously proposed multilevel recursive bisection algorithms. A key contribution of our work is in finding a high-quality and computationally inexpensive refinement algorithm that can improve upon an initialk-way partitioning. We also study the effectiveness of the overall scheme for a variety of coarsening schemes. We present experimental results on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that this new scheme produces partitions that are of comparable or better quality than those produced by the multilevel bisection algorithm and requires substantially smaller time. Graphs containing up to 450,000 vertices and 3,300,000 edges can be partitioned in 256 domains in less than 40 s on a workstation such as SGI's Challenge. Compared with the widely used multilevel spectral bisection algorithm, our new algorithm is usually two orders of magnitude faster and produces partitions with substantially smaller edge-cut." ] }
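The coarsening phase of the multilevel scheme can be illustrated with the heavy-edge heuristic: visit each unmatched vertex, match it with the unmatched neighbour on its heaviest incident edge, and contract matched pairs into coarse vertices, accumulating parallel edge weights. The following is a simplified, hypothetical sketch of a single coarsening level, not Metis itself (which adds multiple levels, initial partitioning, and refinement):

```python
import numpy as np

def heavy_edge_coarsen(W):
    """One coarsening level via heavy-edge matching (Metis-style sketch).

    W: symmetric weight matrix. Returns (W_coarse, fine_to_coarse map).
    """
    n = len(W)
    match = np.full(n, -1)
    for v in range(n):
        if match[v] != -1:
            continue
        nbrs = [u for u in range(n) if u != v and W[v, u] > 0 and match[u] == -1]
        if not nbrs:
            match[v] = v                            # stays a singleton
        else:
            u = max(nbrs, key=lambda u: W[v, u])    # heaviest edge wins
            match[v], match[u] = u, v
    coarse, fine_to_coarse = {}, np.empty(n, dtype=int)
    for v in range(n):
        key = tuple(sorted((int(v), int(match[v]))))
        fine_to_coarse[v] = coarse.setdefault(key, len(coarse))
    # Accumulate edge weights between distinct coarse vertices.
    Wc = np.zeros((len(coarse), len(coarse)))
    for i in range(n):
        for j in range(n):
            ci, cj = fine_to_coarse[i], fine_to_coarse[j]
            if ci != cj:
                Wc[ci, cj] += W[i, j]
    return Wc, fine_to_coarse

# Weighted path 0-1-2-3 with a weak middle edge: the heavy edges contract.
W = np.zeros((4, 4))
W[0, 1] = W[1, 0] = 5.0
W[1, 2] = W[2, 1] = 1.0
W[2, 3] = W[3, 2] = 5.0
Wc, f2c = heavy_edge_coarsen(W)
```

Note the point the abstracts make about this heuristic: a partition of the coarse graph has the same cut weight as the induced partition of the fine graph, so good coarse cuts transfer back cheaply.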
1606.00950
2416803784
How can we find a good graph clustering of a real-world network, that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance.
The Markov Cluster algorithm (MCL) @cite_13 is a popular algorithm used in life sciences based on the simulation of (stochastic) flow in graphs. The basic idea is that dense regions in sparse graphs correspond to regions in which the number of random walks of length @math is relatively large. MCL basically identifies high-flowing regions representing the graph clusters by using an inflation parameter to separate regions of weak and strong flow.
{ "cite_N": [ "@cite_13" ], "mid": [ "2166557397" ], "abstract": [ "A cluster algorithm for graphs called the (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed. Let @math be such a graph. The MCL algorithm simulates flow in @math by first identifying @math in a canonical way with a Markov graph @math . Flow is then alternatingly expanded and contracted, leading to a row of Markov Graphs @math . Flow expansion corresponds with taking the @math power of a stochastic matrix, where @math . Flow contraction corresponds with a parametrized operator @math , @math , which maps the set of (column) stochastic matrices onto itself. The image @math is obtained by raising each entry in @math to the @math power and rescaling each column to have sum @math again. The heuristic underlying this approach is the expectation that flow between dense regions which are sparsely connected will evaporate. The invariant limits of the process are easily derived and in practice the process converges very fast to such a limit, the structure of which has a generic interpretation as an overlapping clustering of the graph @math . Overlap is limited to cases where the input graph has a symmetric structure inducing it. The contraction and expansion parameters of the MCL process influence the granularity of the output. The algorithm is space and time efficient and lends itself to drastic scaling. This report describes the MCL algorithm and process, convergence towards equilibrium states, interpretation of the states as clusterings, and implementation and scalability. The algorithm is introduced by first considering several related proposals towards graph clustering, of both combinatorial and probabilistic nature." ] }
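A minimal version of the MCL process alternates expansion (squaring the column-stochastic flow matrix, which spreads flow along walks) with inflation (entrywise powering followed by column renormalization, which boosts strong flow and evaporates weak flow). The sketch below is a toy illustration of this idea; the parameter choices (inflation 2, 50 iterations, a 1e-6 attractor threshold) are assumptions for the demo, not values from @cite_13:

```python
import numpy as np

def mcl(A, inflation=2.0, iters=50):
    """Markov Cluster (MCL) process sketch.

    A: symmetric weighted adjacency. Self-loops are added and columns
    normalized to obtain a column-stochastic flow matrix. After the
    expansion/inflation iterations, clusters are read off as the groups of
    nodes whose columns share the same surviving attractor set.
    """
    M = A + np.eye(len(A))                     # add self-loops
    M = M / M.sum(axis=0, keepdims=True)
    for _ in range(iters):
        M = M @ M                              # expansion: flow spreads
        M = M ** inflation                     # inflation: strong flow wins
        M = M / M.sum(axis=0, keepdims=True)
    clusters = {}
    for i in range(len(M)):
        attractors = tuple(np.flatnonzero(M[:, i] > 1e-6))
        clusters.setdefault(attractors, []).append(i)
    return list(clusters.values())

# Two triangles joined by one weak edge should come out as two clusters:
# flow across the sparse bridge evaporates under inflation.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.1
clusters = mcl(A)
```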
1606.01025
2732257506
In this paper, a regularization of Wasserstein barycenters for random measures supported on @math is introduced via convex penalization. The existence and uniqueness of such barycenters is first proved for a large class of penalization functions. The Bregman divergence associated to the penalization term is then considered to obtain a stability result on penalized barycenters. This allows the comparison of data made of @math absolutely continuous probability measures, within the more realistic setting where one only has access to a dataset of random variables sampled from unknown distributions. The convergence of the penalized empirical barycenter of a set of @math iid random probability measures towards its population counterpart is finally analyzed. This approach is shown to be appropriate for the statistical analysis of either discrete or absolutely continuous random measures. It also allows to construct, from a set of discrete measures, consistent estimators of population Wasserstein barycenters that are absolutely continuous.
* Statistical inference using optimal transport. The penalized barycenter problem is motivated by the nonparametric method introduced in @cite_2 for the classical problem of density estimation from discrete samples. It is based on a variational regularization approach involving the Wasserstein distance as a data-fidelity term. However, the adaptation of this work to the penalization of Wasserstein barycenters has not been considered so far.
{ "cite_N": [ "@cite_2" ], "mid": [ "2321750535" ], "abstract": [ "The work of M.B. has been supported by the German Science Foundation (DFG) through project Regularization with Singular Energies. C.B.S acknowledges the financial support provided by the Cambridge Centre for Analysis (CCA), the DFG Graduiertenkolleg 1023 Identification in Mathematical Models: Synergy of Stochastic and Numerical Methods and the project WWTF Five senses-Call 2006, Mathematical Methods for Image Analysis and Processing in the Visual Arts. Further, this publication is based on work supported by Award No. KUK-I1-007-43, made by King Abdullah University of Science and Technology (KAUST)." ] }
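The Wasserstein data-fidelity term used in this line of work has a closed form on the real line: optimal transport between measures on R is monotone, so the 2-Wasserstein distance between two empirical measures with equally many atoms reduces to matching order statistics. A small sketch under that assumption (not the variational scheme of @cite_2 itself):

```python
import numpy as np

def w2_squared_1d(x, y):
    """Squared 2-Wasserstein distance between two empirical measures on R
    with the same number of atoms: sort both samples and match them in
    order, since 1-D optimal transport is monotone."""
    xs = np.sort(np.asarray(x, dtype=float))
    ys = np.sort(np.asarray(y, dtype=float))
    return float(np.mean((xs - ys) ** 2))

x = np.array([0.0, 1.0, 2.0])
d = w2_squared_1d(x, x + 3.0)   # translating a measure by c costs c**2
```

The translation identity W2²(μ, μ(· − c)) = c² is a quick sanity check for any implementation of this distance.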
1606.01025
2732257506
In this paper, a regularization of Wasserstein barycenters for random measures supported on @math is introduced via convex penalization. The existence and uniqueness of such barycenters is first proved for a large class of penalization functions. The Bregman divergence associated to the penalization term is then considered to obtain a stability result on penalized barycenters. This allows the comparison of data made of @math absolutely continuous probability measures, within the more realistic setting where one only has access to a dataset of random variables sampled from unknown distributions. The convergence of the penalized empirical barycenter of a set of @math iid random probability measures towards its population counterpart is finally analyzed. This approach is shown to be appropriate for the statistical analysis of either discrete or absolutely continuous random measures. It also allows to construct, from a set of discrete measures, consistent estimators of population Wasserstein barycenters that are absolutely continuous.
A detailed characterization of empirical Wasserstein barycenters in terms of existence, uniqueness and regularity for probability measures with support in @math is given in the seminal paper @cite_26 . The relation of such barycenters to the solution of the multi-marginal problem is also studied in both @cite_26 and @cite_31 . The notion of Wasserstein barycenter was first generalized in @cite_16 , by establishing existence, uniqueness and consistency for random probability measures supported on a locally compact geodesic space. The more general case of probability measures supported on Riemannian manifolds has then been studied in @cite_24 . Subsequently, trimmed barycenters in the Wasserstein space @math were introduced in @cite_20 for the purpose of combining information from different experimental units in a parallelized or distributed estimation setting. The framework of optimal transport has recently been adapted to nonnegative measures supported on a compact subset of @math with different masses, independently by @cite_12 and @cite_33 . However, in none of these papers has the incorporation of regularization into the computation of Wasserstein barycenters been considered, which is of interest when the data are irregular probability measures.
{ "cite_N": [ "@cite_26", "@cite_33", "@cite_24", "@cite_31", "@cite_16", "@cite_12", "@cite_20" ], "mid": [ "", "2064090547", "2963587665", "2962749223", "2264931217", "2528005577", "" ], "abstract": [ "", "Abstract In this paper we introduce a new transportation distance between non-negative measures inside a domain Ω . This distance enjoys many nice properties, for instance it makes the space of non-negative measures inside Ω a geodesic space without any convexity assumption on the domain. Moreover we will show that the gradient flow of the entropy functional ∫ Ω [ ρ log ( ρ ) − ρ ] d x with respect to this distance coincides with the heat equation, subject to the Dirichlet boundary condition equal to 1.", "We study barycenters in the space of probability measures on a Riemannian manifold, equipped with the Wasserstein metric. Under reasonable assumptions, we establish absolute continuity of the barycenter of general measures Ω∈P(P(M))Ω∈P(P(M)) on Wasserstein space, extending on one hand, results in the Euclidean case (for barycenters between finitely many measures) of Agueh and Carlier [1] to the Riemannian setting, and on the other hand, results in the Riemannian case of Cordero-Erausquin, McCann, Schmuckenschlager [12] for barycenters between two measures to the multi-marginal setting. Our work also extends these results to the case where Ω is not finitely supported. As applications, we prove versions of Jensen's inequality on Wasserstein space and a generalized Brunn–Minkowski inequality for a random measurable set on a Riemannian manifold.", "Abstract We formulate and study an optimal transportation problem with infinitely many marginals; this is a natural extension of the multi-marginal problem studied by Gangbo and Świȩch (1998) [15] . We prove results on the existence, uniqueness and characterization of the optimizer, which are natural extensions of the results in Gangbo and Świȩch (1998) [15] . 
The proof relies on a relationship between this problem and the problem of finding barycenters in the Wasserstein space, a connection first observed for finitely many marginals by Agueh and Carlier (2011) [1] .", "In this paper, based on the Frechet mean, we define a notion of barycenter corresponding to a usual notion of statistical mean. We prove the existence of Wasserstein barycenters of random distributions defined on a geodesic space (E, d). We also prove the consistency of this barycenter in a general setting, that includes taking barycenters of empirical versions of the distributions or of a growing set of distributions.", "This paper defines a new transport metric over the space of nonnegative measures. This metric interpolates between the quadratic Wasserstein and the Fisher–Rao metrics and generalizes optimal transport to measures with different masses. It is defined as a generalization of the dynamical formulation of optimal transport of Benamou and Brenier, by introducing a source term in the continuity equation. The influence of this source term is measured using the Fisher–Rao metric and is averaged with the transportation term. This gives rise to a convex variational problem defining the new metric. Our first contribution is a proof of the existence of geodesics (i.e., solutions to this variational problem). We then show that (generalized) optimal transport and Hellinger metrics are obtained as limiting cases of our metric. Our last theoretical contribution is a proof that geodesics between mixtures of sufficiently close Dirac measures are made of translating mixtures of Dirac masses. Lastly, we propose a numerical scheme making use of first-order proximal splitting methods and we show an application of this new distance to image interpolation.", "" ] }
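In the one-dimensional case, the Wasserstein barycenter admits an explicit description: its quantile function is the weighted average of the input quantile functions. The following sketch illustrates that special case for empirical measures with equal atom counts; it is a toy instance, not the general constructions of the papers cited above:

```python
import numpy as np

def w2_barycenter_1d(samples, weights=None):
    """W2 barycenter of empirical measures on the real line.

    With equal atom counts, averaging the sorted samples (order statistics)
    averages the quantile functions, which characterizes the 1-D barycenter.
    Returns the sorted support of the barycenter measure.
    """
    q = np.stack([np.sort(np.asarray(s, dtype=float)) for s in samples])
    if weights is None:
        weights = np.full(len(samples), 1.0 / len(samples))
    return np.average(q, axis=0, weights=weights)

# Barycenter of a measure and its translate is the half-way translate.
bary = w2_barycenter_1d([[2.0, 0.0, 1.0], [4.0, 2.0, 3.0]])
```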
1606.01025
2732257506
In this paper, a regularization of Wasserstein barycenters for random measures supported on @math is introduced via convex penalization. The existence and uniqueness of such barycenters is first proved for a large class of penalization functions. The Bregman divergence associated to the penalization term is then considered to obtain a stability result on penalized barycenters. This allows the comparison of data made of @math absolutely continuous probability measures, within the more realistic setting where one only has access to a dataset of random variables sampled from unknown distributions. The convergence of the penalized empirical barycenter of a set of @math iid random probability measures towards its population counterpart is finally analyzed. This approach is shown to be appropriate for the statistical analysis of either discrete or absolutely continuous random measures. It also allows to construct, from a set of discrete measures, consistent estimators of population Wasserstein barycenters that are absolutely continuous.
* Penalization of the transport map. Alternatively, regularized barycenters may be obtained by adding a convex regularization on optimal transport plans (that is on @math in ) when computing the Wasserstein distance between probability measures. This approach leads to the notion of regularized transportation problems and Wasserstein costs. It has recently gained popularity in the literature of image processing and machine learning and has been considered to compute smoothed Wasserstein barycenters of discrete measures @cite_19 @cite_21 . Such penalizations acting on transport plans nevertheless have an indirect influence on the regularity of the Wasserstein barycenter.
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2161086299", "1629559917" ], "abstract": [ "This article introduces a generalization of discrete Optimal Transport that includes a regularity penalty and a relaxation of the bijectivity constraint. The corresponding transport plan is solved by minimizing an energy which is a convexification of an integer optimization problem. We propose to use a proximal splitting scheme to perform the minimization on large scale imaging problems. For un-regularized relaxed transport, we show that the relaxation is tight and that the transport plan is an assignment. In the general case, the regularization prevents the solution from being an assignment, but we show that the corresponding map can be used to solve imaging problems. We show an illustrative application of this discrete regularized transport to color transfer between images. This imaging problem cannot be solved in a satisfying manner without relaxing the bijective assignment constraint because of mass variation across image color palettes. Furthermore, the regularization of the transport plan helps remove colorization artifacts due to noise amplification.", "We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric. This mean, known as the Wasserstein barycenter, is the measure that minimizes the sum of its Wasserstein distances to each element in that set. We propose two original algorithms to compute Wasserstein barycenters that build upon the subgradient method. A direct implementation of these algorithms is, however, too costly because it would require the repeated resolution of large primal and dual optimal transport problems to compute subgradients. 
Extending the work of Cuturi (2013), we propose to smooth the Wasserstein distance used in the definition of Wasserstein barycenters with an entropic regularizer and recover in doing so a strictly convex objective whose gradients can be computed for a considerably cheaper computational cost using matrix scaling algorithms. We use these algorithms to visualize a large family of images and to solve a constrained clustering problem." ] }
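The entropic penalization of transport plans mentioned here leads to the Sinkhorn matrix-scaling iterations popularized by Cuturi. Below is a minimal, hypothetical sketch of those iterations; the regularization strength and iteration count are arbitrary demo choices, not values from @cite_19 or @cite_21:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=1000):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b: source/target histograms summing to one; C: cost matrix;
    eps: entropic regularization. Returns the smoothed transport plan
    diag(u) K diag(v) with K = exp(-C / eps).
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(iters):                 # alternate marginal rescalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Three points on a line, squared-distance cost, uniform marginals.
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2
a = b = np.full(3, 1.0 / 3.0)
P = sinkhorn(a, b, C)
```

The strict convexity added by the entropy term is what makes the barycenter objective of @cite_21 amenable to cheap gradient computations via exactly these scaling steps; a production implementation would work in log-domain for small eps to avoid underflow.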
1606.01001
2951005488
Current object recognition methods fail on object sets that include both diffuse, reflective and transparent materials, although they are very common in domestic scenarios. We show that a combination of cues from multiple sensor modalities, including specular reflectance and unavailable depth information, allows us to capture a larger subset of household objects by extending a state of the art object recognition method. This leads to a significant increase in robustness of recognition over a larger set of commonly used objects.
@cite_9 fuse sensor data from an RGB-D camera, a time-of-flight camera and a thermal sensor at a low level, although their categorization accuracy was only around 23%.
{ "cite_N": [ "@cite_9" ], "mid": [ "2144891935" ], "abstract": [ "In this paper, we investigate the problem of 3D object categorization of objects typically present in kitchen environments, from data acquired using a composite sensor. Our framework combines different sensing modalities and defines descriptive features in various spaces for the purpose of learning good object models. By fusing the 3D information acquired from a composite sensor that includes a color stereo camera, a time-of-flight (TOF) camera, and a thermal camera, we augment 3D depth data with color and temperature information which helps disambiguate the object categorization process. We make use of statistical relational learning methods (Markov Logic Networks and Bayesian Logic Networks) to capture complex interactions between the different feature spaces. To show the effectiveness of our approach, we analyze and validate the proposed system for the problem of recognizing objects in table settings scenarios." ] }
1606.00868
2413729833
Quantification is the machine learning task of estimating test-data class proportions that are not necessarily similar to those in training. Apart from its intrinsic value as an aggregate statistic, quantification output can also be used to optimize classifier probabilities, thereby increasing classification accuracy. We unify major quantification approaches under a constrained multi-variate regression framework, and use mathematical programming to estimate class proportions for different loss functions. With this modeling approach, we extend existing binary-only quantification approaches to multi-class settings as well. We empirically verify our unified framework by experimenting with several multi-class datasets including the Stanford Sentiment Treebank and CIFAR-10.
The quantification approach by @cite_1 is different from the above-mentioned methods in that it does not require a classifier to come up with the class proportion estimates. Yet this method can be seen as quite similar to the mixture-model approach mentioned above, with the exception that, instead of classifier probabilities, feature distributions are used directly with a feature sampling strategy. The feature distribution is modeled as a univariate multinomial distribution with @math possible outcomes ( @math = sampled feature size), each of which is a possible feature profile. The HDx method outlined in @cite_2 similarly works directly with features, without a classifier.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2171060319", "2084933300" ], "abstract": [ "The increasing availability of digitized text presents enormous opportunities for social scientists. Yet hand coding many blogs, speeches, government records, newspapers, or other sources of unstructured text is infeasible. Although computer scientists have methods for automated content analysis, most are optimized to classify individual documents, whereas social scientists instead want generalizations about the population of documents, such as the proportion in a given category. Unfortunately, even a method with a high percent of individual documents correctly classified can be hugely biased when estimating category proportions. By directly optimizing for this social science goal, we develop a method that gives approximately unbiased estimates of category proportions even when the optimal classifier performs poorly. We illustrate with diverse data sets, including the daily expressed opinions of thousands of people about the U.S. presidency. We also make available software that implements our methods and large corpora of text for further analysis.", "Class distribution estimation (quantification) plays an important role in many practical classification problems. Firstly, it is important in order to adapt the classifier to the operational conditions when they differ from those assumed in learning. Additionally, there are some real domains where the quantification task is itself valuable due to the high variability of the class prior probabilities. Our novel quantification approach for two-class problems is based on distributional divergence measures. The mismatch between the test data distribution and validation distributions generated in a fully controlled way is measured by the Hellinger distance in order to estimate the prior probability that minimizes this divergence. 
Experimental results on several binary classification problems show the benefits of this approach when compared to such approaches as counting the predicted class labels and other methods based on the classifier confusion matrix or on posterior probability estimations. We also illustrate these techniques as well as their robustness against the base classifier performance (a neural network) with a boar semen quality control setting. Empirical results show that the quantification can be conducted with a mean absolute error lower than 0.008, which seems very promising in this field." ] }
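The classifier-free idea behind HDx can be sketched in a few lines: treat the observed test feature histogram as a mixture of the per-class training histograms, and grid-search the class prior that minimizes the Hellinger distance between the mixture and the observation. This is an illustrative simplification of @cite_2 (binary case, a single feature histogram, made-up numbers):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def quantify_hdx(p_pos, p_neg, p_test, grid=1001):
    """HDx-flavoured binary quantification sketch: pick the prior alpha
    whose mixture alpha * p_pos + (1 - alpha) * p_neg is Hellinger-closest
    to the observed test feature histogram (no classifier involved)."""
    alphas = np.linspace(0.0, 1.0, grid)
    dists = [hellinger(a * p_pos + (1 - a) * p_neg, p_test) for a in alphas]
    return float(alphas[int(np.argmin(dists))])

p_pos = np.array([0.8, 0.2])          # feature histogram, positive class
p_neg = np.array([0.2, 0.8])          # feature histogram, negative class
p_test = 0.3 * p_pos + 0.7 * p_neg    # test data drawn with a shifted prior
alpha_hat = quantify_hdx(p_pos, p_neg, p_test)
```

The example recovers the shifted prior 0.3 from the histograms alone, which is exactly the distribution-matching view that the paper's constrained-regression framework generalizes.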
1606.01178
2411802338
Future advancements in robot autonomy and sophistication of robotics tasks rest on robust, efficient, and task-dependent semantic understanding of the environment. Semantic segmentation is the problem of simultaneous segmentation and categorization of a partition of sensory data. The majority of current approaches tackle this using multi-class segmentation and labeling in a Conditional Random Field (CRF) framework or by generating multiple object hypotheses and combining them sequentially. In practical settings, the subset of semantic labels that are needed depend on the task and particular scene and labelling every single pixel is not always necessary. We pursue these observations in developing a more modular and flexible approach to multi-class parsing of RGBD data based on learning strategies for combining independent binary object-vs-background segmentations in place of the usual monolithic multi-label CRF approach. Parameters for the independent binary segmentation models can be learned very efficiently, and the combination strategy---learned using reinforcement learning---can be set independently and can vary over different tasks and environments. Accuracy is comparable to state-of-art methods on a subset of the NYU-V2 dataset of indoor scenes, while providing additional flexibility and modularity.
A body of work has explored reinforcement learning techniques for robotics and image understanding problems @cite_28 @cite_24 . Karayev and colleagues @cite_28 learn a policy that optimizes object detection in a time-restricted manner. @cite_24 introduced an augmented MDP formulation for the task of picking the best sensing strategies in a robotics experiment and used least-squares policy iteration to solve for the optimal policy.
{ "cite_N": [ "@cite_28", "@cite_24" ], "mid": [ "2101474437", "2140219596" ], "abstract": [ "In a large visual multi-class detection framework, the timeliness of results can be crucial. Our method for timely multi-class detection aims to give the best possible performance at any single point after a start time; it is terminated at a deadline time. Toward this goal, we formulate a dynamic, closed-loop policy that infers the contents of the image in order to decide which detector to deploy next. In contrast to previous work, our method significantly diverges from the predominant greedy strategies, and is able to learn to take actions with deferred values. We evaluate our method with a novel timeliness measure, computed as the area under an Average Precision vs. Time curve. Experiments are conducted on the PASCAL VOC object detection dataset. If execution is stopped when only half the detectors have been run, our method obtains 66 better AP than a random ordering, and 14 better performance than an intelligent baseline. On the timeliness measure, our method obtains at least 11 better performance. Our method is easily extensible, as it treats detectors and classifiers as black boxes and learns from execution traces using reinforcement learning.", "Since sensors have limited range and coverage, mobile robots often have to make decisions on where to point their sensors. A good sensing strategy allows a robot to collect information that is useful for its tasks. Most existing solutions to this active sensing problem choose the direction that maximally reduces the uncertainty in a single state variable. In more complex problem domains, however, uncertainties exist in multiple state variables, and they affect the performance of the robot in different ways. The robot thus needs to have more sophisticated sensing strategies in order to decide which uncertainties to reduce, and to make the correct trade-offs. 
In this work, we apply a least squares reinforcement learning method to solve this problem. We implemented and tested the learning approach in the RoboCup domain, where the robot attempts to reach a ball and accurately kick it into the goal. We present experimental results that suggest our approach is able to learn highly effective sensing strategies." ] }
1606.00622
2604512277
We consider the problem of estimating the number of hidden states (the order) of a nonparametric hidden Markov model (HMM). We propose two different methods and prove their almost sure consistency without any prior assumption, be it on the order or on the emission distributions. This is the first time a consistency result is proved in such a general setting without using restrictive assumptions such as a priori upper bounds on the order or parametric restrictions on the emission distributions. Our main method relies on the minimization of a penalized least squares criterion. In addition to the consistency of the order estimation, we also prove that this method yields rate minimax adaptive estimators of the parameters of the HMM - up to a logarithmic factor. Our second method relies on estimating the rank of a matrix obtained from the distribution of two consecutive observations. Finally, numerical experiments are used to compare both methods and study their ability to select the right order in several situations.
From a theoretical point of view, the order estimation problem remains widely open in the HMM framework. One can distinguish two kinds of results. The first kind does not need an upper bound on the order, but is only applicable to restrictive cases. For instance, using tools from coding theory, @cite_20 introduced a penalized maximum likelihood order estimator for which they prove strong consistency without an upper bound on the order of the HMM. Nevertheless, their result is restricted to a finite observation space and they have to use heavy penalties that grow as a power of the order. For the special case of Gaussian or Poisson emission distributions, @cite_17 showed that the penalized maximum likelihood estimator is strongly consistent without any upper bound on the order. The second kind of results is more general but requires an upper bound on the order just to get weak consistency of order estimators, for penalized likelihood criteria ( @cite_6 ) as well as Bayesian approaches ( @cite_0 @cite_3 ).
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_0", "@cite_20", "@cite_17" ], "mid": [ "", "2043862650", "2119525808", "2155805386", "1996769950" ], "abstract": [ "", "Abstract We give two simple inequalities on likelihood ratios. A first application is the consistency of the maximum-penalized marginal-likelihood estimator of the number of populations in a mixture with Markov regime. The second application is the derivation of the asymptotic power of the likelihood ratio test under loss of identifiability for contiguous alternatives. Finally, we propose self-normalized score tests that have exponentially decreasing level and asymptotic power 1.", "We consider finite state space stationary hidden Markov models (HMMs) in the situation where the number of hidden states is unknown. We provide a frequentist asymptotic evaluation of Bayesian analysis methods. Our main result gives posterior concentration rates for the marginal densities, that is for the density of a fixed number of consecutive observations. Using conditions on the prior, we are then able to define a consistent Bayesian estimator of the number of hidden states. It is known that the likelihood ratio test statistic for overfitted HMMs has a nonstandard behaviour and is unbounded. Our conditions on the prior may be seen as a way to penalize parameters to avoid this phenomenon. Inference of parameters is a much more difficult task than inference of marginal densities, we still provide a precise description of the situation when the observations are i.i.d. and we allow for 2 possible hidden states.", "We consider the estimation of the number of hidden states (the order) of a discrete-time finite-alphabet hidden Markov model (HMM). The estimators we investigate are related to code-based order estimators: penalized maximum-likelihood (ML) estimators and penalized versions of the mixture estimator introduced by Liu and Narayan (1994). 
We prove strong consistency of those estimators without assuming any a priori upper bound on the order and smaller penalties than previous works. We prove a version of Stein's lemma for HMM order estimation and derive an upper bound on underestimation exponents. Then we prove that this upper bound can be achieved by the penalized ML estimator and by the penalized mixture estimator. The proof of the latter result gets around the elusive nature of the ML in HMM by resorting to large-deviation techniques for empirical processes. Finally, we prove that for any consistent HMM order estimator, for most HMM, the overestimation exponent is null.", "Abstract We address the issue of order identification for hidden Markov models with Poisson and Gaussian emissions. We prove information-theoretic BIC-like mixture inequalities in the spirit of Finesso [1991. Consistent estimation of the order for Markov and hidden Markov chains. Ph.D. Thesis, University of Maryland]; Liu and Narayan [1994. Order estimation and sequential universal data compression of a hidden Markov source by the method of mixtures. Canad. J. Statist. 30(4), 573–589]; Gassiat and Boucheron [2003. Optimal error exponents in hidden Markov models order estimation. IEEE Trans. Inform. Theory 49(4), 964–980]. These inequalities lead to consistent penalized estimators that need no prior bound on the order. A simulation study and an application to postural analysis in humans are provided." ] }
1606.00622
2604512277
We consider the problem of estimating the number of hidden states (the order) of a nonparametric hidden Markov model (HMM). We propose two different methods and prove their almost sure consistency without any prior assumption, be it on the order or on the emission distributions. This is the first time a consistency result is proved in such a general setting without using restrictive assumptions such as a priori upper bounds on the order or parametric restrictions on the emission distributions. Our main method relies on the minimization of a penalized least squares criterion. In addition to the consistency of the order estimation, we also prove that this method yields rate minimax adaptive estimators of the parameters of the HMM - up to a logarithmic factor. Our second method relies on estimating the rank of a matrix obtained from the distribution of two consecutive observations. Finally, numerical experiments are used to compare both methods and study their ability to select the right order in several situations.
On the practical side, several order estimation methods using penalized likelihood criteria have been studied numerically, see for instance @cite_5 when the emission distributions are a mixture of parametric densities or @cite_14 for parametric HMMs. The latter also introduced cross-validation procedures that aimed to circumvent the lack of independence of the observations. In the case of nonparametric HMMs, @cite_28 studied a method using P-splines with a custom penalization.
{ "cite_N": [ "@cite_28", "@cite_5", "@cite_14" ], "mid": [ "2129784757", "1972250859", "2058177657" ], "abstract": [ "Summary Hidden Markov models (HMMs) are flexible time series models in which the distribution of the observations depends on unobserved serially correlated states. The state-dependent distributions in HMMs are usually taken from some class of parametrically specified distributions. The choice of this class can be difficult, and an unfortunate choice can have serious consequences for example on state estimates, and more generally on the resulting model complexity and interpretation. We demonstrate these practical issues in a real data application concerned with vertical speeds of a diving beaked whale, where we demonstrate that parametric approaches can easily lead to overly complex state processes, impeding meaningful biological inference. In contrast, for the dive data, HMMs with nonparametrically estimated state-dependent distributions are much more parsimonious in terms of the number of states and easier to interpret, while fitting the data equally well. Our nonparametric estimation approach is based on the idea of representing the densities of the state-dependent distributions as linear combinations of a large number of standardized B-spline basis functions, imposing a penalty term on non-smoothness in order to maintain a good balance between goodness-of-fit and smoothness.", "In unsupervised classification, Hidden Markov Models (HMM) are used to account for a neighborhood structure between observations. The emission distributions are often supposed to belong to some parametric family. In this paper, a semiparametric model where the emission distributions are a mixture of parametric distributions is proposed to get a higher flexibility. We show that the standard EM algorithm can be adapted to infer the model parameters. 
For the initialization step, starting from a large number of components, a hierarchical method to combine them into the hidden states is proposed. Three likelihood-based criteria to select the components to be combined are discussed. To estimate the number of hidden states, BIC-like criteria are derived. A simulation study is carried out both to determine the best combination between the combining criteria and the model selection criteria and to evaluate the accuracy of classification. The proposed method is also illustrated using a biological dataset from the model plant Arabidopsis thaliana. A R package HMMmix is freely available on the CRAN.", "The problem of estimating the number of hidden states in a hidden Markov model is considered. Emphasis is placed on cross-validated likelihood criteria. Using cross-validation to assess the number of hidden states allows to circumvent the well-documented technical difficulties of the order identification problem in mixture models. Moreover, in a predictive perspective, it does not require that the sampling distribution belongs to one of the models in competition. However, computing cross-validated likelihood for hidden Markov models for which only one training sample is available, involves difficulties since the data are not independent. Two approaches are proposed to compute cross-validated likelihood for a hidden Markov model. The first one consists of using a deterministic half-sampling procedure, and the second one consists of an adaptation of the EM algorithm for hidden Markov models, to take into account randomly missing values induced by cross-validation. Numerical experiments on both simulated and real data sets compare different versions of cross-validated likelihood criterion and penalised likelihood criteria, including BIC and a penalised marginal likelihood criterion. Those numerical experiments highlight a promising behaviour of the deterministic half-sampling criterion." ] }
1606.00622
2604512277
We consider the problem of estimating the number of hidden states (the order) of a nonparametric hidden Markov model (HMM). We propose two different methods and prove their almost sure consistency without any prior assumption, be it on the order or on the emission distributions. This is the first time a consistency result is proved in such a general setting without using restrictive assumptions such as a priori upper bounds on the order or parametric restrictions on the emission distributions. Our main method relies on the minimization of a penalized least squares criterion. In addition to the consistency of the order estimation, we also prove that this method yields rate minimax adaptive estimators of the parameters of the HMM - up to a logarithmic factor. Our second method relies on estimating the rank of a matrix obtained from the distribution of two consecutive observations. Finally, numerical experiments are used to compare both methods and study their ability to select the right order in several situations.
Then comes the question of estimating the parameters of the HMM once its order is known. In the parametric setting, the asymptotic behaviour of the maximum likelihood estimator is rather well understood (see for instance @cite_13 or @cite_22 using techniques from @cite_8 ), but so far the question of its nonasymptotic behaviour remains open. @cite_32 and @cite_36 proposed a spectral method for parametric HMMs based on the joint diagonalization of a set of matrices and controlled its nonasymptotic error. @cite_34 and @cite_16 extended this method to the nonparametric setting, and @cite_2 used the latter to obtain an estimator of the transition matrix of the hidden chain and a quasi rate-minimax adaptive least squares estimator of the emission densities. Our least squares estimation method is a generalization of their procedure that is able to deal with all parameters at once and does not require auxiliary estimators.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_36", "@cite_32", "@cite_16", "@cite_2", "@cite_34", "@cite_13" ], "mid": [ "2016529189", "2157721994", "2149655761", "1825640910", "2201644665", "2003328149", "1516237126", "1968668741" ], "abstract": [ "An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.", "We consider a hidden Markov model with multidimensional observations and with misspecification, i.e. the assumed coefficients (transition probability matrix and observation conditional densities) are possibly different from the true coefficients. Under mild assumptions on the coefficients of both the true and the assumed models, we prove that: 1) the prediction filter forgets almost surely their initial condition exponentially fast; and 2) the extended Markov chain, whose components are the unobserved Markov chain, the observation sequence and the prediction filter, is geometrically ergodic, and has a unique invariant probability distribution.", "Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations. The current practice for estimating the parameters of such models relies on local search heuristics (e.g., the EM algorithm) which are prone to failure, and existing consistent methods are unfavorable due to their high computational and sample complexity which typically scale exponentially with the number of mixture components. 
This work develops an efficient method of moments approach to parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians (such as mixtures of axis-aligned Gaussians) and hidden Markov models. The new method leads to rigorous unsupervised learning results for mixture models that were not achieved by previous works; and, because of its simplicity, it offers a viable alternative to EM for practical deployment.", "Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations-it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. The algorithm is also simple, employing only a singular value decomposition and matrix multiplications.", "In this paper, we consider the filtering and smoothing recursions in nonparametric finite state space hidden Markov models (HMMs) when the parameters of the model are unknown and replaced by estimators. We provide an explicit and time uniform control of the filtering and smoothing errors in total variation norm as a function of the parameter estimation errors. 
We prove that the risk for the filtering and smoothing errors may be uniformly upper bounded by the risk of the estimators. It has been proved very recently that statistical inference for finite state space nonparametric HMMs is possible. We study how the recent spectral methods developed in the parametric setting may be extended to the nonparametric framework and we give explicit upper bounds for the L2-risk of the nonparametric spectral estimators. When the observation space is compact, this provides explicit rates for the filtering and smoothing errors in total variation norm. The performance of the spectral method is assessed with simulated data for both the estimation of the (nonparametric) conditional distribution of the observations and the estimation of the marginal smoothing distributions.", "We consider stationary hidden Markov models with finite state space and nonparametric modeling of the emission distributions. It has remained unknown until very recently that such models are identifiable. In this paper, we propose a new penalized least-squares estimator for the emission distributions which is statistically optimal and practically tractable. We prove a non asymptotic oracle inequality for our nonparametric estimator of the emission distributions. A consequence is that this new estimator is rate minimax adaptive up to a logarithmic term. Our methodology is based on projections of the emission distributions onto nested subspaces of increasing complexity. The popular spectral estimators are unable to achieve the optimal rate but may be used as initial points in our procedure. Simulations are given that show the improvement obtained when applying the least-squares minimization consecutively to the spectral estimation.", "A constructive proof of identification of multilinear decompositions of multiway arrays is presented. It can be applied to show identification in a variety of multivariate latent structures. 
Examples are finite-mixture models and hidden Markov models. The key step to show identification is the joint diagonalization of a set of matrices in the same non-orthogonal basis. An estimator of the latent-structure model may then be based on a sample version of this simultaneous-diagonalization problem. Simple algorithms are available for computation. Asymptotic theory is derived for this joint approximate-diagonalization estimator.", "Hidden Markov models (HMMs) have during the last decade become a widespread tool for modeling sequences of dependent random variables. Inference for such models is usually based on the maximum-likelihood estimator (MLE), and consistency of the MLE for general HMMs was recently proved by Leroux. In this paper we show that under mild conditions the MLE is also asymptotically normal and prove that the observed information matrix is a consistent estimator of the Fisher information." ] }
1606.00527
2418973910
This paper is a survey about recent progress in measure rigidity and global rigidity of Anosov actions, and celebrates the profound contributions by Federico Rodriguez Hertz to rigidity in dynamical systems.
A. Katok and F. Rodriguez Hertz established the following related result in @cite_43 , which they call an arithmeticity theorem .
{ "cite_N": [ "@cite_43" ], "mid": [ "2962741677" ], "abstract": [ "We prove that any smooth action of @math , @math , on an @math -dimensional manifold that preserves a measure such that all non-identity elements of the suspension have positive entropy is essentially algebraic, i.e., isomorphic up to a finite permutation to an affine action on the torus or on its factor by @math . Furthermore this isomorphism has nice geometric properties; in particular, it is smooth in the sense of Whitney on a set whose complement has arbitrarily small measure. We further derive restrictions on topology of manifolds that may admit such actions, for example, excluding spheres and obtaining lower estimate on the first Betti number in the odd-dimensional case." ] }
1606.00527
2418973910
This paper is a survey about recent progress in measure rigidity and global rigidity of Anosov actions, and celebrates the profound contributions by Federico Rodriguez Hertz to rigidity in dynamical systems.
In @cite_29 , A. Katok and F. Rodriguez Hertz obtained the classification of a real analytic action @math of @math on the @math -torus @math assuming the induced map on homology is standard: on the complement of a finite set, there is a real analytic conjugacy from the linearized action to @math whose image has full measure. Very recently, A. Brown, F. Rodriguez Hertz and Z. Wang obtained a major improvement @cite_9 . We will discuss it below in Section .
{ "cite_N": [ "@cite_9", "@cite_29" ], "mid": [ "2208813460", "2963266144" ], "abstract": [ "In this article we prove global rigidity results for hyperbolic actions of higher-rank lattices. Suppose @math is a lattice in semisimple Lie group, all of whose factors have rank @math or higher. Let @math be a smooth @math -action on a compact nilmanifold @math that lifts to an action on the universal cover. If the linear data @math of @math contains a hyperbolic element, then there is a continuous semiconjugacy intertwining the actions of @math and @math , on a finite-index subgroup of @math . If @math is a @math action and contains an Anosov element, then the semiconjugacy is a @math conjugacy. As a corollary, we obtain @math global rigidity for Anosov actions by cocompact lattices in semisimple Lie group with all factors rank @math or higher. We also obtain global rigidity of Anosov actions of @math on @math for @math and probability-preserving Anosov actions of arbitrary higher-rank lattices on nilmanifolds.", "We prove that any real-analytic action of @math with standard homotopy data that preserves an ergodic measure @math whose support is not contained in a ball, is analytically conjugate on an open invariant set to the standard linear action on the complement to a finite union of periodic orbits." ] }
1606.00499
2411934291
Language models (LMs) are statistical models that calculate probabilities over sequences of words or other discrete symbols. Currently two major paradigms for language modeling exist: count-based n-gram models, which have advantages of scalability and test-time speed, and neural LMs, which often achieve superior modeling performance. We demonstrate how both varieties of models can be unified in a single modeling framework that defines a set of probability distributions over the vocabulary of words, and then dynamically calculates mixture weights over these distributions. This formulation allows us to create novel hybrid models that combine the desirable features of count-based and neural LMs, and experiments demonstrate the advantages of these approaches.
A number of alternative methods focus on interpolating LMs of multiple varieties such as in-domain and out-of-domain LMs @cite_32 @cite_12 @cite_17 . Perhaps most relevant is 's work on learning to interpolate multiple LMs using log-linear models. This differs from our work in that it learns functions to estimate the fallback probabilities @math in Eq. instead of @math , and does not cover interpolation of @math -gram components, non-linearities, or the connection with neural network LMs. Also conceptually similar is work on adaptation of @math -gram LMs, which start with @math -gram probabilities @cite_26 @cite_18 @cite_13 @cite_5 and adapt them based on the distribution of the current document, albeit in a linear model. There has also been work incorporating binary @math -gram features into neural language models, which allows for more direct learning of @math -gram weights @cite_33 , but does not afford many of the advantages of the proposed model such as the incorporation of count-based probability estimates. Finally, recent works have compared @math -gram and neural models, finding that neural models often perform better in perplexity, but @math -grams have their own advantages such as effectiveness in extrinsic tasks @cite_35 and better modeling of rare words @cite_15 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_26", "@cite_33", "@cite_32", "@cite_5", "@cite_15", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "1573488949", "2151315616", "2159077101", "1965154800", "2094971681", "2159518412", "", "1996903695", "2069699492", "" ], "abstract": [ "This paper presents an in-depth investigation on integrating neural language models in translation systems. Scaling neural language models is a difficult task, but crucial for real-world applications. This paper evaluates the impact on end-to-end MT quality of both new and existing scaling techniques. We show when explicitly normalising neural models is necessary and what optimisation tricks one should use in such scenarios. We also focus on scalable training algorithms and investigate noise contrastive estimation and diagonal contexts as sources for further speed improvements. We explore the trade-offs between neural models and back-off n-gram models and find that neural models make strong candidates for natural language applications in memory constrained environments, yet still lag behind traditional models in raw translation quality. We conclude with a set of recommendations one should follow to build a scalable neural language model for MT.", "A simple and general scheme for the adaptation of stochastic language models to changing text styles is introduced. For each word in the running text, the adapted model is a linear combination of specific models, the interpolation parameters being estimated on the preceding text passage. Experiments on a 1.1-million English word corpus show the validity of the approach. The adaptation method improves a bigram language model by 10 in terms of test-set perplexity. >", "The authors present an algorithm to adapt a n-gram language model to a document as it is dictated. The observed partial document is used to estimate a unigram distribution for the words that already occurred. 
Then, they find the closest n-gram distribution to the static n-gram distribution (using the discrimination information distance measure) that satisfies the marginal constraints derived from the document. The resulting minimum discrimination information model results in a perplexity of 208 instead of 290 for the static trigram model on a document of 321 words. >", "We describe how to effectively train neural network based language models on large data sets. Fast convergence during training and better overall performance is observed when the training data are sorted by their relevance. We introduce hash-based implementation of a maximum entropy model, that can be trained as a part of the neural network model. This leads to significant reduction of computational complexity. We achieved around 10 relative reduction of word error rate on English Broadcast News speech recognition task, against large 4-gram model trained on 400M tokens.", "Sources of training data suitable for language modeling of conversational speech are limited. In this paper, we show how training data can be supplemented with text from the web filtered to match the style and or topic of the target recognition task, but also that it is possible to get bigger performance gains from the data by using class-dependent interpolation of N-grams.", "We investigate a new statistical language model which captures topic-related dependencies of words within and across sentences. First, we develop a sentence-level mixture language model that takes advantage of the topic constraints in a sentence or article. Second, we introduce topic-dependent dynamic cache adaptation techniques in the framework of the mixture model. Experiments with the static (or unadapted) mixture model on the 1994 WSJ task indicated a 21 reduction in perplexity and a 3-4 improvement in recognition accuracy over a general n-gram model. The static mixture model also improved recognition performance over an adapted n-gram model. 
Mixture adaptation techniques contributed a further 14 reduction in perplexity and a small improvement in recognition accuracy.", "", "An adaptive statistical language model is described, which successfully integrates long distance linguistic information with other knowledge sources. Most existing statistical language models exploit only the immediate history of a text. To extract information from further back in the document's history, we propose and usetrigger pairsas the basic information bearing elements. This allows the model to adapt its expectations to the topic of discourse. Next, statistical evidence from multiple sources must be combined. Traditionally, linear interpolation and its variants have been used, but these are shown here to be seriously deficient. Instead, we apply the principle of Maximum Entropy (ME). Each information source gives rise to a set of constraints, to be imposed on the combined estimate. The intersection of these constraints is the set of probability functions which are consistent with all the information sources. The function with the highest entropy within that set is the ME solution. Given consistent statistical evidence, a unique ME solution is guaranteed to exist, and an iterative algorithm exists which is guaranteed to converge to it. The ME framework is extremely general: any phenomenon that can be described in terms of statistics of the text can be readily incorporated. An adaptive language model based on the ME approach was trained on theWall Street Journalcorpus, and showed a 32–39 perplexity reduction over the baseline. When interfaced to SPHINX-II, Carnegie Mellon's speech recognizer, it reduced its error rate by 10–14 . 
This thus illustrates the feasibility of incorporating many diverse knowledge sources in a single, unified statistical framework.", "This paper investigates supervised and unsupervised adaptation of stochastic grammars, including n-gram language models and probabilistic context-free grammars (PCFGs), to a new domain. It is shown that the commonly used approaches of count merging and model interpolation are special cases of a more general maximum a posteriori (MAP) framework, which additionally allows for alternate adaptation approaches. This paper investigates the effectiveness of different adaptation strategies, and, in particular, focuses on the need for supervision in the adaptation process. We show that n-gram models as well as PCFGs benefit from either supervised or unsupervised MAP adaptation in various tasks. For n-gram models, we compare the benefit from supervised adaptation with that of unsupervised adaptation on a speech recognition task with an adaptation sample of limited size (about 17h), and show that unsupervised adaptation can obtain 51% of the 7.7% adaptation gain obtained by supervised adaptation. We also investigate the benefit of using multiple word hypotheses (in the form of a word lattice) for unsupervised adaptation on a speech recognition task for which there was a much larger adaptation sample available. The use of word lattices for adaptation required the derivation of a generalization of the well-known Good-Turing estimate. Using this generalization, we derive a method that uses Monte Carlo sampling for building Katz backoff models. The adaptation results show that, for adaptation samples of limited size (several tens of hours), unsupervised adaptation on lattices gives a performance gain over using transcripts. The experimental results also show that with a very large adaptation sample (1050h), the benefit from transcript-based adaptation matches that of lattice-based adaptation. 
Finally, we show that PCFG domain adaptation using the MAP framework provides similar gains in F-measure accuracy on a parsing task as was seen in ASR accuracy improvements with n-gram adaptation. Experimental results show that unsupervised adaptation provides 37% of the 10.35% gain obtained by supervised adaptation.", "" ] }
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
proposed a data-driven approach for building dialogue systems, and they extracted 1.3 million conversations from Twitter with the aim of discovering dialogue acts. Building on the distributional similarities of the vector space model framework, Banchs and Li built a search engine to retrieve the most suitable response for any input message. Other approaches focused on domain specific tasks such as games @cite_24 and restaurants @cite_27 @cite_0 .
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_0" ], "mid": [ "2949801941", "2340944142", "2294065713" ], "abstract": [ "In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations.", "Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.", "This article presents SimpleDS, a simple and publicly available dialogue system trained with deep reinforcement learning. 
In contrast to previous reinforcement learning dialogue systems, this system avoids manual feature engineering by performing action selection directly from raw text of the last system and (noisy) user responses. Our initial results, in the restaurant domain, report that it is indeed possible to induce reasonable behaviours with such an approach that aims for higher levels of automation in dialogue control for intelligent interactive systems and robots." ] }
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
Personalizing dialogue systems requires sufficient information from each user and a sufficient user population to define the space. Writing styles quantified by word length, verb strength, polarity, and distribution of dialogue acts have been used to model users @cite_15 . Other efforts focused on building a user profile based on demographics, such as gender, income, age, and marital status @cite_18 . Because Reddit users are pseudo-anonymous, we differ from these approaches by learning the relevant features to model the users' dialogue behavior through embedding each user into a distributed representation.
{ "cite_N": [ "@cite_15", "@cite_18" ], "mid": [ "2250595267", "2167503225" ], "abstract": [ "Domestic abuse affects people of every race, class, age, and nation. There is significant research on the prevalence and effects of domestic abuse; however, such research typically involves population-based surveys that have high financial costs. This work provides a qualitative analysis of domestic abuse using data collected from the social and news-aggregation website reddit.com. We develop classifiers to detect submissions discussing domestic abuse, achieving accuracies of up to 92%, a substantial error reduction over its baseline. Analysis of the top features used in detecting abuse discourse provides insight into the dynamics of abusive relationships.", "This paper presents a context-aware NLP approach to automatically detect noteworthy information in spontaneous mobile phone conversations. The proposed method uses a supervised modeling strategy which considers both features from the content of the conversation as well as contextual information from the call. We empirically analyze the predictive performance of features of different nature on a corpus of mobile phone conversations. The results of this study reveal that the context of the conversation plays a crucial role on boosting the predictive performance of the model." ] }
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
With the introduction of the sequence-to-sequence framework @cite_20 , many recent learning systems have used recurrent neural networks (RNNs) to generate novel responses given an input message or sentence. For example, Vinyals and Le proposed using IT desk chat logs as a dataset to train an LSTM network to generate new sentences. constructed Twitter conversations limiting the history context to one message. With the help of pre-trained RNN language models, they encoded each message into a vector representation. To eliminate the need for a language model, tried end-to-end training on an RNN encoder-decoder network. They also bootstrapped their system with pre-trained word embeddings.
{ "cite_N": [ "@cite_20" ], "mid": [ "2949888546" ], "abstract": [ "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
While these systems are able to produce novel responses, it is difficult to understand how much capacity is consumed by modeling natural language versus modeling discourse and the coherence of the conversation. Often responses gravitate to the most frequent sentences observed in the training corpus @cite_12 .
{ "cite_N": [ "@cite_12" ], "mid": [ "1958706068" ], "abstract": [ "Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., \"I don't know\") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations." ] }
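The MMI reranking idea in the abstract above can be illustrated with a toy sketch: score each candidate response by log p(T|S) - λ·log p(T), so that generic, high-frequency responses are demoted. The candidate log-probabilities, the λ weight, and the name `mmi_rerank` below are invented for illustration, not taken from the paper.

```python
def mmi_rerank(candidates, lam=0.5):
    """Rerank candidate responses by the MMI-style score
    log p(T|S) - lam * log p(T), which penalizes generic
    responses (e.g. "I don't know") that have a high
    unconditional probability p(T)."""
    return sorted(
        candidates,
        key=lambda c: c["logp_t_given_s"] - lam * c["logp_t"],
        reverse=True,
    )

# Toy candidates with made-up log-probabilities: the generic
# response has high log p(T), so MMI demotes it.
candidates = [
    {"text": "I don't know.",        "logp_t_given_s": -2.0, "logp_t": -1.0},
    {"text": "The meeting is at 3.", "logp_t_given_s": -2.5, "logp_t": -6.0},
]
best = mmi_rerank(candidates)[0]["text"]
```

With λ = 0 the score reduces to plain likelihood and the generic response wins; the mutual-information penalty is what flips the ranking.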
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
Perplexity, BLEU, and deltaBLEU, adapted from the language modeling and machine translation communities, have been used for evaluating novel responses @cite_1 @cite_25 @cite_22 . These metrics only measure the response's lexical fluency and do not penalize candidates that are incoherent with regard to the conversational discourse. While the search for better metrics is still ongoing, automatic evaluation of response generation remains an open problem @cite_3 .
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_25", "@cite_3" ], "mid": [ "2951813108", "1847211030", "2951580200", "2159640018" ], "abstract": [ "We introduce Discriminative BLEU (deltaBLEU), a novel metric for intrinsic evaluation of generated text in tasks that admit a diverse range of possible outputs. Reference strings are scored for quality by human raters on a scale of [-1, +1] to weight multi-reference BLEU. In tasks involving generation of conversational responses, deltaBLEU correlates reasonably with human judgments and outperforms sentence-level and IBM BLEU in terms of both Spearman's rho and Kendall's tau.", "In a conversation or a dialogue process, attention and intention play intrinsic roles. This paper proposes a neural network based approach that models the attention and intention processes. It essentially consists of three recurrent networks. The encoder network is a word-level model representing source side sentences. The intention network is a recurrent network that models the dynamics of the intention process. The decoder network is a recurrent network produces responses to the input from the source side. It is a language model that is dependent on the intention and has an attention mechanism to attend to particular source side words, when predicting a symbol in the response. The model is trained end-to-end without labeling data. Experiments show that this model generates natural responses to user inputs.", "We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. 
Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.", "We propose Neural Responding Machine (NRM), a neural network-based response generator for Short-Text Conversation. NRM takes the general encoder-decoder framework: it formalizes the generation of response as a decoding process based on the latent representation of the input text, while both encoding and decoding are realized with recurrent neural networks (RNN). The NRM is trained with a large amount of one-round conversation data collected from a microblogging service. Empirical study shows that NRM can generate grammatically correct and content-wise appropriate responses to over 75% of the input text, outperforming state-of-the-arts in the same setting, including retrieval-based and SMT-based models." ] }
1606.00372
2413533759
We investigate the task of modeling open-domain, multi-turn, unstructured, multi-participant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant's history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.
Ranking metrics are commonly used for measuring a ranker's performance on the task of response selection. Typically, a positive response is mixed with random responses, and then the system is asked to score the right response higher than the others @cite_11 @cite_29 . This task measures the model's ability to discriminate what goes together and what does not. As these metrics are better understood, we focus on the response selection task in our modeling effort.
{ "cite_N": [ "@cite_29", "@cite_11" ], "mid": [ "2197546379", "2951359136" ], "abstract": [ "This paper presents results of our experiments for the next utterance ranking on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog corpus. First, we use an in-house implementation of previously reported models to do an independent evaluation using the same data. Second, we evaluate the performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we create an ensemble by averaging predictions of multiple models. The ensemble further improves the performance and it achieves a state-of-the-art result for the next utterance ranking on this dataset. Finally, we discuss our future plans using this corpus.", "Semantic matching is of central importance to many natural language tasks bordes2014semantic,RetrievalQA . A successful matching algorithm needs to adequately model the internal structures of language objects and the interaction between them. As a step toward this goal, we propose convolutional neural network models for matching two sentences, by adapting the convolutional strategy in vision and speech. The proposed models not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at different levels. Our models are rather generic, requiring no prior knowledge on language, and can hence be applied to matching tasks of different nature and in different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed model on a variety of matching tasks and its superiority to competitor models." ] }
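The response-selection protocol described above (one positive response mixed with random distractors, scored by the ranker) reduces to a small Recall@k routine. The word-overlap scorer below is a hypothetical stand-in for a learned model; the function and variable names are illustrative only.

```python
def recall_at_k(score, positive, distractors, k=1):
    """Mix the true response with sampled distractors, rank the
    pool by the scorer, and count a hit if the positive response
    lands in the top k."""
    pool = [positive] + distractors
    ranked = sorted(pool, key=score, reverse=True)
    return positive in ranked[:k]

# Hypothetical scorer: word overlap with the conversation context
# (a real system would use a learned matching model instead).
context = {"what", "time", "is", "the", "meeting"}
score = lambda response: len(context & set(response.split()))

positive = "the meeting is at three"
distractors = ["i like turtles", "no thanks"]
hit = recall_at_k(score, positive, distractors, k=1)
```

Averaging `hit` over many (context, positive, distractors) triples yields the Recall@1-in-n numbers reported for this task.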
1606.00285
2413295869
RoboCup soccer competitions are considered among the most challenging multi-robot adversarial environments, due to their high dynamism and the partial observability of the environment. In this paper we introduce a method based on a combination of Monte Carlo search and data aggregation (MCSDA) to adapt discrete-action soccer policies for a defender robot to the strategy of the opponent team. By exploiting a simple representation of the domain, a supervised learning algorithm is trained over an initial collection of data consisting of several simulations of human expert policies. Monte Carlo policy rollouts are then generated and aggregated to previous data to improve the learned policy over multiple epochs and games. The proposed approach has been extensively tested both on a soccer-dedicated simulator and on real robots. Using this method, our learning robot soccer team achieves an improvement in ball interceptions, as well as a reduction in the number of opponents' goals. Together with a better performance, an overall more efficient positioning of the whole team within the field is achieved.
Building on the idea of adopting a combination of techniques similar to @cite_5 , our work mostly relates to the and NRPI algorithms by Ross and Bagnell @cite_7 . The former leverages cost-to-go information -- in addition to correct demonstration -- and data aggregation; the latter extends the idea of no-regret learners to Approximate Policy Iteration variants for reinforcement learning. However, differently from previous work, our algorithm (MCSDA) uses shorter Monte Carlo roll-outs to evaluate policy improvements. By avoiding always estimating the full cost-to-go of the policy, MCSDA is more practical -- and usable in robotics. Additionally, as explained in , the policy generated by our algorithm can be seen as a combination of expert and learned policies, allowing us to directly leverage results from @cite_12 .
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_7" ], "mid": [ "2257979135", "1850531616", "112666333" ], "abstract": [ "The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.", "Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. 
This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.", "Recent work has demonstrated that problems-- particularly imitation learning and structured prediction-- where a learner's predictions influence the input-distribution it is tested on can be naturally addressed by an interactive approach and analyzed using no-regret online learning. These approaches to imitation learning, however, neither require nor benefit from information about the cost of actions. We extend existing results in two directions: first, we develop an interactive imitation learning approach that leverages cost information; second, we extend the technique to address reinforcement learning. The results provide theoretical support to the commonly observed successes of online approximate policy iteration. Our approach suggests a broad new family of algorithms and provides a unifying view of existing techniques for imitation and reinforcement learning." ] }
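The shorter Monte Carlo roll-outs that MCSDA uses in place of full cost-to-go estimates can be sketched generically: take a candidate action, then follow the current policy for a fixed short horizon and average the accumulated reward. The 1-D toy environment and every name below are my own; only the roll-out structure mirrors the idea.

```python
def mc_action_value(step, state, action, policy, horizon, n_rollouts=1):
    """Estimate Q(state, action) with short Monte Carlo roll-outs:
    apply `action`, then follow `policy` for horizon-1 more steps,
    averaging accumulated reward over the roll-outs (this toy env
    is deterministic, so one roll-out suffices)."""
    total = 0.0
    for _ in range(n_rollouts):
        s, ret = step(state, action)
        for _ in range(horizon - 1):
            s, r = step(s, policy(s))
            ret += r
        total += ret
    return total / n_rollouts

# Toy 1-D world: reward is minus the distance to the origin.
step = lambda s, a: (s + a, -abs(s + a))
policy = lambda s: -1 if s > 0 else 1      # drift toward 0

q_toward = mc_action_value(step, 5, -1, policy, horizon=3)
q_away   = mc_action_value(step, 5, +1, policy, horizon=3)
```

Comparing `q_toward` and `q_away` is the policy-improvement signal: the action whose short roll-out scores higher is preferred, without ever estimating the full cost-to-go.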
1606.00399
2413939744
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization. The pruning is applied via a "submodularity graph" over the @math ground elements, where each directed edge is associated with a pairwise dependency defined by the submodular function. In each step, SS prunes a @math (for @math ) fraction of the nodes using weights on edges computed based on only a small number ( @math ) of randomly sampled nodes. The algorithm requires @math steps with a small and highly parallelizable per-step computation. An accuracy-speed tradeoff parameter @math , set as @math , leads to a fast shrink rate @math and small iteration complexity @math . Analysis shows that w.h.p., the greedy algorithm on the pruned set of size @math can achieve a guarantee similar to that of processing the original dataset. In news and video summarization tasks, SS is able to substantially reduce both computational costs and memory usage, while maintaining (or even slightly exceeding) the quality of the original (and much more costly) greedy algorithm.
Approximate greedy algorithms further reduce the number of function evaluations per step at a cost of a worse approximation factor. In @cite_15 @cite_6 , each step only approximates identifying the element with the largest marginal gain @math by finding any element whose marginal gain is larger than a fraction @math of @math of its upper bound. The "lazier than lazy greedy" approach @cite_13 selects the element from a smaller random subset @math each step, so only the marginal gains of @math need be computed. A similar algorithm in @cite_21 randomly selects an element from a reasonably good subset @math per step, and extends to the non-monotone case.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_13", "@cite_6" ], "mid": [ "52746146", "2952359104", "2950865888", "" ], "abstract": [ "Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MULTGREED), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target sub-modular function but require less memory and are easy to evaluate. We theoretically analyze the performance guarantee of the multi-stage framework and give examples on how to design instances of MULTGREED for a broad range of natural submodular functions. We show that MULTGREED performs very closely to the standard greedy algorithm given appropriate surrogate functions and argue how our framework can easily be integrated with distributive algorithms for further optimization. We complement our theory by empirically evaluating on several real-world problems, including data subset selection on millions of speech samples where MULTGREED yields at least a thousand times speedup and superior results over the state-of-the-art selection methods.", "Submodular function maximization has been studied extensively in recent years under various constraints and models. The problem plays a major role in various disciplines. We study a natural online variant of this problem in which elements arrive one-by-one and the algorithm has to maintain a solution obeying certain constraints at all times. Upon arrival of an element, the algorithm has to decide whether to accept the element into its solution and may preempt previously chosen elements. The goal is to maximize a submodular function over the set of elements in the solution. We study two special cases of this general problem and derive upper and lower bounds on the competitive ratio. 
Specifically, we design a @math -competitive algorithm for the unconstrained case in which the algorithm may hold any subset of the elements, and constant competitive ratio algorithms for the case where the algorithm may hold at most @math elements in its solution.", "Is it possible to maximize a monotone submodular function faster than the widely used lazy greedy algorithm (also known as accelerated greedy), both in theory and practice? In this paper, we develop the first linear-time algorithm for maximizing a general monotone submodular function subject to a cardinality constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can achieve a @math approximation guarantee, in expectation, to the optimum solution in time linear in the size of the data and independent of the cardinality constraint. We empirically demonstrate the effectiveness of our algorithm on submodular functions arising in data summarization, including training large-scale kernel methods, exemplar-based clustering, and sensor placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate the whole fraction of data points even once and still achieves indistinguishable results compared to lazy greedy.", "" ] }
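The "lazier than lazy greedy" step from @cite_13 — evaluating marginal gains only on a random subset of size (n/k)·log(1/ε) per iteration — fits in a short sketch. The toy coverage function is my own; note that on this tiny ground set the sample covers all remaining elements, so the run degenerates to exact greedy.

```python
import math
import random

def stochastic_greedy(f, ground, k, eps=0.1, seed=0):
    """'Lazier than lazy greedy': each of the k steps evaluates
    marginal gains only on a random subset of size
    (n/k) * log(1/eps), giving a (1 - 1/e - eps) guarantee in
    expectation for monotone submodular f."""
    rng = random.Random(seed)
    n = len(ground)
    sample_size = max(1, math.ceil((n / k) * math.log(1.0 / eps)))
    selected, remaining = [], list(ground)
    for _ in range(k):
        pool = rng.sample(remaining, min(sample_size, len(remaining)))
        best = max(pool, key=lambda e: f(selected + [e]) - f(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy monotone submodular function: set coverage.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1}}
f = lambda S: len(set().union(*[sets[e] for e in S])) if S else 0
chosen = stochastic_greedy(f, list(sets), k=2)
```

On large ground sets the per-step cost drops from n marginal-gain evaluations to the sample size, which is where the practical speedup comes from.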
1606.00399
2413939744
We propose a new random pruning method (called "submodular sparsification (SS)") to reduce the cost of submodular maximization. The pruning is applied via a "submodularity graph" over the @math ground elements, where each directed edge is associated with a pairwise dependency defined by the submodular function. In each step, SS prunes a @math (for @math ) fraction of the nodes using weights on edges computed based on only a small number ( @math ) of randomly sampled nodes. The algorithm requires @math steps with a small and highly parallelizable per-step computation. An accuracy-speed tradeoff parameter @math , set as @math , leads to a fast shrink rate @math and small iteration complexity @math . Analysis shows that w.h.p., the greedy algorithm on the pruned set of size @math can achieve a guarantee similar to that of processing the original dataset. In news and video summarization tasks, SS is able to substantially reduce both computational costs and memory usage, while maintaining (or even slightly exceeding) the quality of the original (and much more costly) greedy algorithm.
Another class of methods @cite_5 @cite_11 accelerates the greedy algorithm by maximizing a surrogate function whose evaluation is faster and cheaper than that of the original objective. The surrogate can be either a tight modular lower bound or a simpler submodular function. It can also be adaptively changed in each step to better approach the original objective. In @cite_15 , a simple pruning method is used to reduce @math by exploiting @math , a lower bound of @math for @math . E.g., element @math whose singleton gain @math is less than the @math largest @math over all @math can be safely removed. Besides exploiting the global redundancy of @math via @math , the weight @math used in SS further takes the pairwise relationship @math into account. This can result in a further reduction of the ground set.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "2951977866", "52746146", "" ], "abstract": [ "We present a practical and powerful new framework for both unconstrained and constrained submodular function optimization based on discrete semidifferentials (sub- and super-differentials). The resulting algorithms, which repeatedly compute and then efficiently optimize submodular semigradients, offer new and generalize many old methods for submodular optimization. Our approach, moreover, takes steps towards providing a unifying paradigm applicable to both submodular min- imization and maximization, problems that historically have been treated quite distinctly. The practicality of our algorithms is important since interest in submodularity, owing to its natural and wide applicability, has recently been in ascendance within machine learning. We analyze theoretical properties of our algorithms for minimization and maximization, and show that many state-of-the-art maximization algorithms are special cases. Lastly, we complement our theoretical analyses with supporting empirical experiments.", "Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MULTGREED), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target sub-modular function but require less memory and are easy to evaluate. We theoretically analyze the performance guarantee of the multi-stage framework and give examples on how to design instances of MULTGREED for a broad range of natural submodular functions. We show that MULTGREED performs very closely to the standard greedy algorithm given appropriate surrogate functions and argue how our framework can easily be integrated with distributive algorithms for further optimization. 
We complement our theory by empirically evaluating on several real-world problems, including data subset selection on millions of speech samples where MULTGREED yields at least a thousand times speedup and superior results over the state-of-the-art selection methods.", "" ] }
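The STOCHASTIC-GREEDY procedure summarized in the abstract above can be sketched in Python. This is a minimal illustration under stated assumptions — the toy set-coverage objective and all names are hypothetical; only the subsample size s = (n/k)·log(1/ε) and the sample-then-pick-best-marginal-gain step follow the abstract:

```python
import math
import random

def stochastic_greedy(ground_set, f, k, eps=0.1):
    """Pick k elements maximizing a monotone submodular f: at each step,
    sample ~(n/k)*log(1/eps) remaining elements and take the one with
    the largest marginal gain within the sample."""
    n = len(ground_set)
    s = max(1, math.ceil((n / k) * math.log(1.0 / eps)))
    selected, remaining = [], list(ground_set)
    for _ in range(k):
        if not remaining:
            break
        sample = random.sample(remaining, min(s, len(remaining)))
        best = max(sample, key=lambda e: f(selected + [e]) - f(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy monotone submodular objective: set coverage.
universe_sets = {1: {1, 2}, 2: {2, 3}, 3: {4}, 4: {1, 4, 5}}

def coverage(S):
    covered = set()
    for e in S:
        covered |= universe_sets[e]
    return len(covered)

picked = stochastic_greedy(list(universe_sets), coverage, k=2, eps=0.05)
```

With a small ε the sample covers most of the remaining ground set, so the behaviour approaches classic greedy; a larger ε trades approximation quality for fewer function evaluations.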
1605.09526
2753672424
In this paper, we address the new problem of the prediction of human intents. There is neuro-psychological evidence that actions performed by humans are anticipated by peculiar motor acts which are discriminant of the type of action going to be performed afterwards. In other words, an actual intent can be forecast by looking at the kinematics of the immediately preceding movement. To prove it in a computational and quantitative manner, we devise a new experimental setup where, without using contextual information, we predict human intents all originating from the same motor act. We posit the problem as a classification task and we introduce a new multi-modal dataset consisting of a set of motion capture marker 3D data and 2D video sequences, where, by only analysing very similar movements in both training and test phases, we are able to predict the underlying intent, i.e., the future, never-observed action. We also present an extensive experimental evaluation as a baseline, customizing state-of-the-art techniques for both 3D and 2D data analysis. Realizing that video processing methods lead to inferior performance but show complementary information with respect to 3D data sequences, we developed a 2D+3D fusion analysis where we achieve better classification accuracies, attesting to the superiority of the multimodal approach for the context-free prediction of human intents.
One common aspect of both (early) activity recognition and action prediction is that contextual information is frequently used to perform the classification. Indeed, once the objects present in a scene are detected, the object-object or object-person relationships can be modelled by several probabilistic architectures (e.g., graphical models @cite_72 @cite_57 @cite_47 or topic models @cite_64 @cite_43 ). Among the works which directly model the context inside the algorithms, some deal with the prediction of future trajectories of moving objects (vehicles or pedestrians) @cite_38 @cite_23 @cite_54 @cite_17 by estimating the spatial areas over which such objects will most likely pass, as opposed to those excluded from such passage (e.g., cars circulating over sidewalks @cite_23 ).
{ "cite_N": [ "@cite_38", "@cite_64", "@cite_54", "@cite_57", "@cite_43", "@cite_72", "@cite_23", "@cite_47", "@cite_17" ], "mid": [ "", "1483019628", "2146183743", "1982185844", "2045792079", "", "2056120433", "2093655440", "" ], "abstract": [ "", "An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities will a human do next (and how) can enable an assistive robot to plan ahead for reactive responses. Furthermore, anticipation can even improve the detection accuracy of past activities. The challenge, however, is two-fold: We need to capture the rich context for modeling the activities and object affordances, and we need to anticipate the distribution over a large space of future human activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles. In extensive evaluation on CAD-120 human activity RGB-D dataset, we first show that anticipation improves the state-of-the-art detection results. We then show that for new subjects (not seen in the training set), we obtain an activity anticipation accuracy (defined as whether one of top three predictions actually happened) of 84.1, 74.4 and 62.2 percent for an anticipation time of 1, 3 and 10 seconds respectively. Finally, we also show a robot using our algorithm for performing a few reactive responses.", "We propose an agent-based behavioral model of pedestrians to improve tracking performance in realistic scenarios. In this model, we view pedestrians as decision-making agents who consider a plethora of personal, social, and environmental factors to decide where to go next. 
We formulate prediction of pedestrian behavior as an energy minimization on this model. Two of our main contributions are simple, yet effective estimates of pedestrian destination and social relationships (groups). Our final contribution is to incorporate these hidden properties into an energy formulation that results in accurate behavioral prediction. We evaluate both our estimates of destination and grouping, as well as our accuracy at prediction and tracking against state of the art behavioral model and show improvements, especially in the challenging observational situation of infrequent appearance observations–something that might occur in thousands of webcams available on the Internet.", "Given a static scene, a human can trivially enumerate the myriad of things that can happen next and characterize the relative likelihood of each. In the process, we make use of enormous amounts of commonsense knowledge about how the world works. In this paper, we investigate learning this commonsense knowledge from data. To overcome a lack of densely annotated spatiotemporal data, we learn from sequences of abstract images gathered using crowdsourcing. The abstract scenes provide both object location and attribute information. We demonstrate qualitatively and quantitatively that our models produce plausible scene predictions on both the abstract images, as well as natural images taken from the Internet.", "In this paper, we present an event parsing algorithm based on Stochastic Context Sensitive Grammar (SCSG) for understanding events, inferring the goal of agents, and predicting their plausible intended actions. The SCSG represents the hierarchical compositions of events and the temporal relations between the sub-events. The alphabets of the SCSG are atomic actions which are defined by the poses of agents and their interactions with objects in the scene. 
The temporal relations are used to distinguish events with similar structures, interpolate missing portions of events, and are learned from the training data. In comparison with existing methods, our paper makes the following contributions. i) We define atomic actions by a set of relations based on the fluents of agents and their interactions with objects in the scene. ii) Our algorithm handles events insertion and multi-agent events, keeps all possible interpretations of the video to preserve the ambiguities, and achieves the globally optimal parsing solution in a Bayesian framework; iii) The algorithm infers the goal of the agents and predicts their intents by a top-down process; iv) The algorithm improves the detection of atomic actions by event contexts. We show satisfactory results of event recognition and atomic action detection on the data set we captured which contains 12 event categories in both indoor and outdoor videos.", "", "In this paper we present a conceptually simple but surprisingly powerful method for visual prediction which combines the effectiveness of mid-level visual elements with temporal modeling. Our framework can be learned in a completely unsupervised manner from a large collection of videos. However, more importantly, because our approach models the prediction framework on these mid-level elements, we can not only predict the possible motion in the scene but also predict visual appearances--how are appearances going to change with time. This yields a visual \"hallucination\" of probable events on top of the scene. We show that our method is able to accurately predict and visualize simple future events, we also show that our approach is comparable to supervised methods for event prediction.", "Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. 
To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection on short-duration simple actions, we propose a novel framework for long -duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large and small order Markov dependencies between action units are captured; (3) the context-cue, especially interactive objects information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrence are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.", "" ] }
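The 2D+3D fusion mentioned in the abstract above can be illustrated with a score-level (late) fusion sketch. This is a hypothetical illustration only — the intent class names, the weight, and the weighted-average rule are assumptions, not the paper's actual fusion method:

```python
def fuse_scores(scores_2d, scores_3d, w=0.5):
    # Late fusion: weighted average of per-class confidence scores
    # produced independently by the 2D and 3D pipelines.
    return {c: w * scores_2d[c] + (1 - w) * scores_3d[c]
            for c in scores_2d}

def predict(scores):
    # Return the class with the highest fused confidence.
    return max(scores, key=scores.get)

# Hypothetical per-modality scores for three candidate intents.
s2d = {"pour": 0.2, "drink": 0.5, "pass": 0.3}
s3d = {"pour": 0.6, "drink": 0.3, "pass": 0.1}
fused = fuse_scores(s2d, s3d, w=0.4)
```

Here the 3D modality is weighted more heavily (0.6), reflecting the observation that video-only methods underperform but still contribute complementary evidence.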
1605.09338
2412165873
This paper explores the social quality (goodness) of community structures formed across Twitter users, where social links within the structures are estimated based upon semantic properties of user-generated content (corpus). We examined the overlap of the community structures of the constructed graphs, and followership-based social communities, to find the social goodness of the links constructed. Unigram, bigram and LDA content models were empirically investigated for evaluation of effectiveness, as approximators of underlying social graphs, such that they maintain the community social property. The impact of content at varying granularities, for the purpose of predicting links while retaining the social community structures, was investigated. 100 discussion topics, spanning over 10 Twitter events, were used for experiments. The unigram language model performed the best, indicating strong similarity of word usage within deeply connected social communities. This observation agrees with the phenomenon of evolution of word usage behavior, whereby individuals belonging to the same community tend to choose the same words, observed by (2013), and raises a question about the literature that uses, without validation, LDA for content-based social link prediction over other content models. Also, semantically finer-grained content was observed to be more effective compared to coarser-grained content.
Link prediction on social networks has been an area of long-standing research. @cite_2 carried out a comprehensive study of different link prediction techniques in a social network setting, including methods such as graph distance, the Adamic-Adar method @cite_15 , Jaccard's coefficient @cite_5 , rooted PageRank @cite_2 , Katz @cite_11 and SimRank @cite_17 . However, this study, and the subsequent ones in this school of research such as @cite_21 and @cite_31 , focus on graph structure and properties, and do not consider content semantics. Subsequently, @cite_28 attempted to study the impact of communication semantics in predicting social links, and used Twitter as their platform for the study. That study uses LDA @cite_7 for predicting pairwise links; however, it neither investigates the relative impact of different language models in such prediction, nor does it delve deeper to investigate the social properties of the predicted links, which is essential if one were indeed aiming to predict a social network.
{ "cite_N": [ "@cite_31", "@cite_7", "@cite_28", "@cite_21", "@cite_17", "@cite_2", "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "2167467982", "1880262756", "1940500808", "1975138928", "2117831564", "2148847267", "1956559956", "2154454189", "2026417691" ], "abstract": [ "Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. 
In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.", "Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation.", "With hundreds of millions of participants, social media services have become commonplace. Unlike a traditional social network service, a microblogging network like Twitter is a hybrid network, combining aspects of both social networks and information networks. Understanding the structure of such hybrid networks and predicting new links are important for many tasks such as friend recommendation, community detection, and modeling network growth. We note that the link prediction problem in a hybrid network is different from previously studied networks. Unlike the information networks and traditional online social networks, the structures in a hybrid network are more complicated and informative. We compare most popular and recent methods and principles for link prediction and recommendation. 
Finally we propose a novel structure-based personalized link prediction model and compare its predictive performance against many fundamental and popular link prediction methods on real-world data from the Twitter microblogging network. Our experiments on both static and dynamic data sets show that our methods noticeably outperform the state-of-the-art.", "The problem of measuring \"similarity\" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says \"two objects are similar if they are related to similar objects:\" This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc.", "", "Abstract The Internet has become a rich and large repository of information about us as individuals. 
Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities.", "For the purpose of evaluating status in a manner free from the deficiencies of popularity contest procedures, this paper presents a new method of computation which takes into accountwho chooses as well ashow many choose. It is necessary to introduce, in this connection, the concept of attenuation in influence transmitted through intermediaries." ] }
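Two of the neighbourhood-based link-prediction scores listed above — Jaccard's coefficient and the Adamic-Adar measure — can be sketched in a few lines; the toy adjacency sets below are an assumption for illustration:

```python
import math

def jaccard(neighbors, u, v):
    # Fraction of the combined neighbourhood that is shared.
    nu, nv = neighbors[u], neighbors[v]
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def adamic_adar(neighbors, u, v):
    # Common neighbours weighted by 1/log(degree): rarer shared
    # neighbours contribute more evidence for a future link.
    return sum(1.0 / math.log(len(neighbors[w]))
               for w in neighbors[u] & neighbors[v]
               if len(neighbors[w]) > 1)

# Tiny undirected toy graph as adjacency sets.
neighbors = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
jac = jaccard(neighbors, "b", "d")
aa = adamic_adar(neighbors, "b", "d")
```

Both scores rank candidate non-edges; here "b" and "d" share their entire neighbourhood, so Jaccard is maximal, while Adamic-Adar additionally discounts the common neighbours by their (log) degree.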
1605.09338
2412165873
This paper explores the social quality (goodness) of community structures formed across Twitter users, where social links within the structures are estimated based upon semantic properties of user-generated content (corpus). We examined the overlap of the community structures of the constructed graphs, and followership-based social communities, to find the social goodness of the links constructed. Unigram, bigram and LDA content models were empirically investigated for evaluation of effectiveness, as approximators of underlying social graphs, such that they maintain the community social property. The impact of content at varying granularities, for the purpose of predicting links while retaining the social community structures, was investigated. 100 discussion topics, spanning over 10 Twitter events, were used for experiments. The unigram language model performed the best, indicating strong similarity of word usage within deeply connected social communities. This observation agrees with the phenomenon of evolution of word usage behavior, whereby individuals belonging to the same community tend to choose the same words, observed by (2013), and raises a question about the literature that uses, without validation, LDA for content-based social link prediction over other content models. Also, semantically finer-grained content was observed to be more effective compared to coarser-grained content.
Researchers have attempted to investigate the flow of information cascades on Twitter, as well as the propagation of influence along the underlying social connection graph. @cite_19 predicts the spread of user-generated ideas on Twitter. @cite_3 proposes a multi-class classification model to identify popular messages on Twitter by predicting retweet quantities, from TF-IDF (term frequency and inverse document frequency) and LDA features, along with social properties of users. @cite_25 models the flow of influence along social connections on Twitter, and makes the surprising observation that, in spite of URLs rated as interesting and content posted by influential users spreading more than average, predictions of which particular user or URL will generate large cascades are relatively unreliable. Other studies, such as @cite_23 , @cite_12 , @cite_24 , @cite_20 , @cite_9 and @cite_14 , provide significant insights into the flow of information and influence along social edges over Twitter user interactions.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_20", "@cite_25", "@cite_12" ], "mid": [ "2140540364", "1499517307", "2127267264", "12448093", "2114544578", "2027135291", "2145446394", "1967579779", "2109603058" ], "abstract": [ "Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information.", "We present results of network analyses of information diffusion on Twitter, via users’ ongoing social interactions as denoted by “@username” mentions. 
Incorporating survival analysis, we constructed a novel model to capture the three major properties of information diffusion: speed, scale, and range. On the whole, we find that some properties of the tweets themselves predict greater information propagation but that properties of the users, the rate with which a user is mentioned historically in particular, are equal or stronger predictors. Implications for end users and system designers are discussed.", "Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance.", "Social networks have emerged as hubs of user generated content. Online social conversations can be used to retrieve users interests towards given topics and trends. Microblogging platforms like Twitter are primary examples of social networks with significant volumes of topical message exchanges between users. However, unlike traditional online discussion forums, blogs and social networking sites, explicit discussion threads are absent from microblogging networks like Twitter. 
This inherent absence of any conversation framework makes it challenging to distinguish conversations from mere topical interests. In this work, we explore semantic, social and temporal relationships of topical clusters formed in Twitter to identify conversations. We devise an algorithm comprising of a sequence of steps such as text clustering, topical similarity detection using TF-IDF and Wordnet, and intersecting social, semantic and temporal graphs to discover social conversations around topics. We further qualitatively show the presence of social localization of discussion threads. Our results suggest that discussion threads evolve significantly over social networks on Twitter. Our algorithm to find social discussion threads can be used for settings such as social information spreading applications and information diffusion analyses on microblog networks.", "Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error. Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features.", "Online social networking technologies enable individuals to simultaneously share information with any number of peers. 
Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.", "There is a widespread intuitive sense that different kinds of information spread differently on-line, but it has been difficult to evaluate this question quantitatively since it requires a setting where many different kinds of information spread in a shared environment. Here we study this issue on Twitter, analyzing the ways in which tokens known as hashtags spread on a network defined by the interactions among Twitter users. We find significant variation in the ways that widely-used hashtags on different topics spread. Our results show that this variation is not attributable simply to differences in \"stickiness,\" the probability of adoption based on one or more exposures, but also to a quantity that could be viewed as a kind of \"persistence\" - the relative extent to which repeated exposures to a hashtag continue to have significant marginal effects. 
We find that hashtags on politically controversial topics are particularly persistent, with repeated exposures continuing to have unusually large marginal effects on adoption; this provides, to our knowledge, the first large-scale validation of the \"complex contagion\" principle from sociology, which posits that repeated exposures to an idea are particularly crucial when the idea is in some way controversial or contentious. Among other findings, we discover that hashtags representing the natural analogues of Twitter idioms and neologisms are particularly non-persistent, with the effect of multiple exposures decaying rapidly relative to the first exposure. We also study the subgraph structure of the initial adopters for different widely-adopted hashtags, again finding structural differences across topics. We develop simulation-based and generative models to analyze how the adoption dynamics interact with the network structure of the early adopters on which a hashtag spreads.", "In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. 
Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential \"influencers.\" We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using \"ordinary influencers\"---individuals who exert average or even less-than-average influence.", "Social networks play a fundamental role in the diffusion of information. However, there are two different ways in which information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence of external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one-month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to \"jump\" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network."
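The last abstract above separates in-network diffusion from external, out-of-network influence. The toy simulation below is only an illustrative sketch of that decomposition, not the paper's fitted model; the function name `simulate_cascade` and all parameter values are hypothetical.

```python
import random

def simulate_cascade(adjacency, seeds, p_internal, p_external, steps, rng):
    """Toy diffusion model: at each step, active nodes expose their
    neighbors (adoption with probability p_internal), and any still-inactive
    node may also adopt via an out-of-network source with probability
    p_external. Returns the active set and per-channel adoption counts."""
    active = set(seeds)
    internal_adoptions = external_adoptions = 0
    for _ in range(steps):
        newly = set()
        # in-network spread along edges
        for node in list(active):
            for nbr in adjacency.get(node, ()):
                if nbr not in active and nbr not in newly and rng.random() < p_internal:
                    newly.add(nbr)
        internal_adoptions += len(newly)
        # external, out-of-network influence
        for node in adjacency:
            if node not in active and node not in newly and rng.random() < p_external:
                newly.add(node)
                external_adoptions += 1
        active |= newly
    return active, internal_adoptions, external_adoptions
```

Comparing `internal_adoptions` against `external_adoptions` over many runs mimics, in spirit, the paper's attribution of information volume to network diffusion versus external factors.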
1605.09338
2412165873
This paper explores the social quality (goodness) of community structures formed across Twitter users, where social links within the structures are estimated based upon semantic properties of user-generated content (corpus). We examined the overlap of the community structures of the constructed graphs, and followership-based social communities, to find the social goodness of the links constructed. Unigram, bigram and LDA content models were empirically investigated for evaluation of effectiveness, as approximators of underlying social graphs, such that they maintain the community social property. The impact of content at varying granularities, for the purpose of predicting links while retaining the social community structures, was investigated. 100 discussion topics, spanning over 10 Twitter events, were used for experiments. The unigram language model performed the best, indicating strong similarity of word usage within deeply connected social communities. This observation agrees with the phenomenon of evolution of word usage behavior, whereby individuals belonging to the same community tend to choose the same words, observed by (2013), and raises a question about the literature that uses, without validation, LDA for content-based social link prediction over other content models. Also, semantically finer-grained content was observed to be more effective compared to coarser-grained content.
The combined use of social links and user-generated content has been explored in the literature from different angles. @cite_0 attempt to combine the strength of links with the similarity of content between each pair of graph vertices (social network users), to augment baseline social-link-based graphs, and discover communities on such graphs. @cite_1 combine the topological structure of a network with the content information present in the network, and thereby model the community structure as a consequence of interaction amongst the participating nodes (social network users). @cite_29 also combine the graph node attributes with the graph edge structures for community discovery. @cite_10 additionally consider the overlaps between communities, using the concept of the intersection graph, for community discovery. @cite_4 observe that joint modeling of links and content significantly improves link prediction performance on Twitter subgraphs.
{ "cite_N": [ "@cite_4", "@cite_29", "@cite_1", "@cite_0", "@cite_10" ], "mid": [ "", "2012921801", "2249925396", "2137962371", "2019112118" ], "abstract": [ "", "Community detection algorithms are fundamental tools that allow us to uncover organizational principles in networks. When detecting communities, there are two possible sources of information one can use: the network structure, and the features and attributes of nodes. Even though communities form around nodes that have common edges and common attributes, typically, algorithms have only focused on one of these two data modalities: community detection algorithms traditionally focus only on the network structure, while clustering algorithms mostly consider only node attributes. In this paper, we develop Communities from Edge Structure and Node Attributes (CESNA), an accurate and scalable algorithm for detecting overlapping communities in networks with node attributes. CESNA statistically models the interaction between the network structure and the node attributes, which leads to more accurate community detection as well as improved robustness in the presence of noise in the network structure. CESNA has a linear runtime in the network size and is able to process networks an order of magnitude larger than comparable approaches. Last, CESNA also helps with the interpretation of detected communities by finding relevant node attributes for each community.", "With the recent advances in information networks, the problem of identifying group structure or communities has received a significant amount of attention. Most of the existing principles of community detection or clustering mainly focus on either the topological structure of a network or the node attributes separately, while both of the two aspects provide valuable information to characterize the nature of communities. 
In this paper we combine the topological structure of a network as well as the content information of nodes in the task of detecting communities in information networks. Specifically, we treat a network as a dynamic system and consider its community structure as a consequence of interactions among nodes. To model the interactions we introduce the principle of content propagation and integrate the aspects of structure and content in a network naturally. We further describe the interactions among nodes in two different ways, including a linear model to approximate influence propagation, and modeling the interactions directly with a random walk. Based on interaction modeling, the nature of communities is described by analyzing the stable status of the dynamic system. Extensive experimental results on benchmark datasets demonstrate the superiority of the proposed framework over the state of the art.", "In this paper we discuss a very simple approach to combining content and link information in graph structures for the purpose of community discovery, a fundamental task in network analysis. Our approach hinges on the basic intuition that many networks contain noise in the link structure and that content information can help strengthen the community signal. This enables one to eliminate the impact of noise (false positives and false negatives), which is particularly prevalent in online social networks and Web-scale information networks. Specifically we introduce a measure of signal strength between two nodes in the network by fusing their link strength with content similarity. Link strength is estimated based on whether the link is likely (with high probability) to reside within a community. Content similarity is estimated through cosine similarity or the Jaccard coefficient. We discuss a simple mechanism for fusing content and link similarity. We then present a biased edge sampling procedure which retains edges that are locally relevant for each graph node. 
The resulting backbone graph can be clustered using standard community discovery algorithms such as Metis and Markov clustering. Through extensive experiments on multiple real-world datasets (Flickr, Wikipedia and CiteSeer) with varying sizes and characteristics, we demonstrate the effectiveness and efficiency of our methods over state-of-the-art learning and mining approaches, several of which also attempt to combine link and content analysis for the purposes of community discovery. Specifically we always find a qualitative benefit when combining content with link analysis. Additionally our biased graph sampling approach realizes a quantitative benefit in that it is typically several orders of magnitude faster than competing approaches.", "Many researchers have studied complex networks such as the World Wide Web, social networks, and the protein interaction network. They have found scale-free characteristics, the small-world effect, the property of high-clustering coefficient, and so on. One hot topic in this area is community detection. For example, a community may correspond to a set of web pages about a certain topic in the WWW. The community structure is unquestionably a key characteristic of complex networks. In this paper, we propose a new method for finding communities in complex networks. Our proposed method considers the overlaps between communities using the concept of the intersection graph. Additionally, we address the problem of edge inhomogeneity by weighting edges using the degree of overlaps and the similarity of content information between sets. Finally, we conduct clustering based on modularity, and evaluate our method on a real SNS network." ] }
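@cite_0's idea of fusing link strength with content similarity before community discovery can be sketched minimally as below. This is an illustrative reconstruction, not the authors' code; `backbone` and the threshold value are invented for the example, and content similarity uses plain cosine over term counts (the paper also mentions the Jaccard coefficient).

```python
import math

def cosine(u, v):
    """Cosine similarity between two term-frequency dicts."""
    dot = sum(c * v.get(t, 0) for t, c in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def backbone(edges, content, threshold):
    """Fuse link strength with content similarity and keep only edges
    whose fused weight clears the threshold (biased edge sampling).
    edges: (node_a, node_b, link_weight) triples; content: node -> term dict."""
    kept = []
    for a, b, w in edges:
        fused = w * cosine(content[a], content[b])
        if fused >= threshold:
            kept.append((a, b, fused))
    return kept
```

The pruned edge list can then be handed to any standard community discovery algorithm, as in the backbone-graph pipeline described above.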
1605.09336
2327935541
Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
There are several themes in the literature that share common elements with the @math method. At the most general level, multidimensional kernel regression and non-parametric learning methods were proposed for image processing and reconstruction @cite_7 @cite_6 . That work generalizes several image processing methods, including bilateral filtering and denoising algorithms, under a kernel regression schema. The general principles of kernel regression - classifying local data and interpolating measurements - can be applied to a range of imaging problems, such as learning super-resolution kernels @cite_34 .
{ "cite_N": [ "@cite_34", "@cite_6", "@cite_7" ], "mid": [ "2011952414", "", "2006262236" ], "abstract": [ "Example learning-based superresolution (SR) algorithms show promise for restoring a high-resolution (HR) image from a single low-resolution (LR) input. The most popular approaches, however, are either time- or space-intensive, which limits their practical applications in many resource-limited settings. In this paper, we propose a novel computationally efficient single image SR method that learns multiple linear mappings (MLM) to directly transform LR feature subspaces into HR subspaces. In particular, we first partition the large nonlinear feature space of LR images into a cluster of linear subspaces. Multiple LR subdictionaries are then learned, followed by inferring the corresponding HR subdictionaries based on the assumption that the LR–HR features share the same representation coefficients. We establish MLM from the input LR features to the desired HR outputs in order to achieve fast yet stable SR recovery. Furthermore, in order to suppress displeasing artifacts generated by the MLM-based method, we apply a fast nonlocal means algorithm to construct a simple yet effective similarity-based regularization term for SR enhancement. Experimental results indicate that our approach is both quantitatively and qualitatively superior to other application-oriented SR methods, while maintaining relatively low time and space complexity.", "", "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. 
Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples" ] }
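The kernel regression framework referenced above (@cite_7) reduces, in its zeroth-order form, to a Nadaraya-Watson weighted average. The 1-D sketch below illustrates only that special case, under the assumption of a Gaussian kernel; richer, data-adaptive kernels recover bilateral filtering and the other denoisers the paper discusses.

```python
import math

def kernel_regress(xs, ys, x, h):
    """Zeroth-order (Nadaraya-Watson) kernel regression estimate at x:
    a Gaussian-weighted average of the samples (xs, ys), bandwidth h."""
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * h * h)) for xi in xs]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, ys)) / total
```

Evaluating the estimator on a grid of off-sample positions performs interpolation; shrinking `h` makes the estimate track nearby samples more closely.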
1605.09336
2327935541
Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
The proposal that is closest to our work comes from @cite_35 . Similar to our work, they describe nonparametric regression tree models, together with Gaussian conditional random fields, to demosaic raw sensor responses. They classify the data near each pixel into one of a large number of classes; the class is based on the color filter type of the pixel and a measurement of the local edge direction in the @math neighborhood of the pixel. For each class they use example camera data to find a quadratic transform that maps pixel data to the rendered value.
{ "cite_N": [ "@cite_35" ], "mid": [ "2069051299" ], "abstract": [ "We introduce a machine learning approach to demosaicing, the reconstruction of color images from incomplete color filter array samples. There are two challenges to overcome by a demosaicing method: 1) it needs to model and respect the statistics of natural images in order to reconstruct natural looking images and 2) it should be able to perform well in the presence of noise. To facilitate an objective assessment of current methods, we introduce a public ground truth data set of natural images suitable for research in image demosaicing and denoising. We then use this large data set to develop a machine learning approach to demosaicing. Our proposed method addresses both demosaicing challenges by learning a statistical model of images and noise from hundreds of natural images. The resulting model performs simultaneous demosaicing and denoising. We show that the machine learning approach has a number of benefits: 1) the model is trained to directly optimize a user-specified performance measure such as peak signal-to-noise ratio (PSNR) or structural similarity; 2) we can handle novel color filter array layouts by retraining the model on such layouts; and 3) it outperforms the previous state-of-the-art, in some setups by 0.7-dB PSNR, faithfully reconstructing edges, textures, and smooth areas. Our results demonstrate that in demosaicing and related imaging applications, discriminatively trained machine learning models have the potential for peak performance at comparatively low engineering effort." ] }
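The classify-then-transform idea above (a pixel class from CFA color and local edge direction, then a learned per-class transform) can be sketched as follows. This is a schematic, not the @cite_35 implementation: the classifier is a crude gradient comparison, the transform is affine rather than quadratic, and `render_pixel` plus the hand-built transform table are hypothetical.

```python
def classify(patch, cfa_color):
    """Assign a pixel class from its CFA color and the dominant edge
    direction in a flattened 3x3 neighborhood (row-major, center at 4)."""
    horiz = abs(patch[3] - patch[5])
    vert = abs(patch[1] - patch[7])
    return (cfa_color, "h" if horiz > vert else "v")

def render_pixel(patch, cfa_color, transforms):
    """Look up the transform (W, b) stored for this pixel class and
    apply it to the local sensor values: out = W @ patch + b."""
    W, b = transforms[classify(patch, cfa_color)]
    return [sum(w * p for w, p in zip(row, patch)) + offset
            for row, offset in zip(W, b)]
```

In a learned system, each class's `(W, b)` would be fit from example camera data rather than written by hand as in the test below.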
1605.09336
2327935541
Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
Another related set of ideas concerns the development of image processing pipelines that are based on joint optimization across optics, sensor, and display. An example is from Stork and Robinson @cite_9 who offered a theoretical foundation for jointly optimizing the design and analysis of the optics, detector, and digital image processing for imaging systems. They optimized the image processing pipeline for different lenses, assuming a monochrome sensor. The @math method incorporates lens properties into the simulation, so that the table of transforms accounts for the specific lens properties. Different tables are generated as the lens properties (e.g., aperture, f #) are varied. Hence, the @math method is also a co-design approach in the sense that the learned rendering parameters depend on the whole system, including the optics and sensor.
{ "cite_N": [ "@cite_9" ], "mid": [ "2071398741" ], "abstract": [ "We describe the mathematical and conceptual foundations for a novel methodology for jointly optimizing the design and analysis of the optics, detector, and digital image processing for imaging systems. Our methodology is based on the end-to-end merit function of predicted average pixel sum-squared error to find the optical and image processing parameters that minimize this merit function. Our approach offers several advantages over the traditional principles of optical design, such as improved imaging performance, expanded operating capabilities, and improved as-built performance." ] }
1605.09336
2327935541
Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
The authors of @cite_38 also conceive of the image processing pipeline as a single, integrated computation. They suggest a framework (FlexISP) in which they model the relationship between the sensor data and a latent image that represents the fully sampled sensor data prior to optical defocus. They propose to estimate the latent image from the sensor data by solving an optimization problem; the optimization accounts for both the data and a set of natural image priors. Hence, a key difference is that they calculate a solution separately for each sensor acquisition, while @math pre-computes a fixed table of transforms and applies this table to all images. Another difference is that they begin their calculations with the sensor data. In contrast, @math simulates a camera system beginning with scene radiance, accounting for properties of the optics, pixel, and sensor. By working from scene spectral radiance data, @math can be used to create pipelines at the earliest stages of the design process, when no hardware implementation yet exists. The simulations also make it possible to optimize @math parameters for different types of scenes, some of which may be difficult to create in a laboratory environment.
{ "cite_N": [ "@cite_38" ], "mid": [ "2021347102" ], "abstract": [ "Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible." ] }
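FlexISP's per-acquisition optimization (a data term plus image priors) can be caricatured in one dimension as below. This is only a conceptual sketch under strong assumptions: a quadratic smoothness prior stands in for the paper's natural-image priors, plain gradient descent stands in for its solver, and `flexisp_like_estimate` is an invented name.

```python
def flexisp_like_estimate(y, mask, lam, iters, step):
    """Estimate a 1-D latent signal x by gradient descent on
    sum_i mask[i]*(x[i]-y[i])^2 + lam * sum_i (x[i]-x[i-1])^2,
    i.e. a data-fidelity term on observed samples plus a smoothness prior."""
    n = len(y)
    x = list(y)
    for _ in range(iters):
        # gradient of the data term (only where samples were observed)
        g = [2.0 * mask[i] * (x[i] - y[i]) for i in range(n)]
        # gradient of the smoothness prior
        for i in range(n):
            if i > 0:
                g[i] += 2.0 * lam * (x[i] - x[i - 1])
            if i < n - 1:
                g[i] += 2.0 * lam * (x[i] - x[i + 1])
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

The contrast drawn in the paragraph above is visible even in this toy: the loop must run per input signal, whereas a table of pre-computed transforms is applied with no per-image optimization.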
1605.09336
2327935541
Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
Convolutional sparse coding (CSC) methods share some features with the @math method. CSC representations begin with a full image representation and decompose the image into a linear sum of component images @cite_26 . Each component is the convolution of a single, usually small, kernel with a sparse feature map (most entries are zero). CSC learns local features from the input training images, and the core calculations are linear. However, the CSC learning methods and target applications differ significantly from those of @math . First, CSC learns kernels and feature maps that decompose an image into separate components. @math performs the reverse computation; it starts with partial sensor data and creates a complete image. Second, the learning methods are different. The CSC kernels are learned through advanced bi-convex optimization methods that require substantial computational power. The affine (or simple polynomial) transforms learned by @math use prior knowledge of the camera design and training data, but very simple optimization methods. In summary, @math is an architecture for designing new image processing pipelines and efficient rendering; CSC is a technique for learning image features, with applications in machine vision and computational photography, such as inpainting.
{ "cite_N": [ "@cite_26" ], "mid": [ "1946953458" ], "abstract": [ "Convolutional sparse coding (CSC) has become an increasingly important tool in machine learning and computer vision. Image features can be learned and subsequently used for classification and reconstruction tasks. As opposed to patch-based methods, convolutional sparse coding operates on whole images, thereby seamlessly capturing the correlation between local neighborhoods. In this paper, we propose a new approach to solving CSC problems and show that our method converges significantly faster and also finds better solutions than the state of the art. In addition, the proposed method is the first efficient approach to allow for proper boundary conditions to be imposed and it also supports feature learning from incomplete data as well as general reconstruction problems." ] }
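The CSC forward model named above (an image synthesized as a sum of kernels convolved with sparse feature maps) is easy to state in code. The 1-D toy below illustrates only that synthesis step, not the bi-convex learning of the kernels and maps; `csc_reconstruct` is an invented name.

```python
def csc_reconstruct(kernels, feature_maps, n):
    """Synthesize a length-n signal as sum_k (d_k * z_k), where each d_k
    is a small kernel and each z_k a sparse feature map (convolution
    support truncated to n samples)."""
    out = [0.0] * n
    for d, z in zip(kernels, feature_maps):
        for i, zi in enumerate(z):
            if zi == 0.0:
                continue  # sparsity: skip inactive positions
            for j, dj in enumerate(d):
                if i + j < n:
                    out[i + j] += zi * dj
    return out
```

Learning in CSC runs this model in reverse: given training images, it alternates between solving for sparse maps and for kernels, which is the expensive bi-convex step the paragraph contrasts with @math's simple fits.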
1605.09530
2415465551
For current High Performance Computing systems to scale towards the holy grail of ExaFLOP performance, their power consumption has to be reduced by at least one order of magnitude. This goal can be achieved only through a combination of hardware and software advances. Being able to model and accurately predict the power consumption of large computational systems is necessary for software-level innovations such as proactive and power-aware scheduling, resource allocation and fault tolerance techniques. In this paper we present a 2-layer model of power consumption for a hybrid supercomputer (which held the top spot of the Green500 list on July 2013) that combines CPU, GPU and MIC technologies to achieve higher energy efficiency. Our model takes as input workload information - the number and location of resources that are used by each job at a certain time - and calculates the resulting system-level power consumption. When jobs are submitted to the system, the workload configuration can be foreseen based on the scheduler policies, and our model can then be applied to predict the ensuing system-level power consumption. Additionally, alternative workload configurations can be evaluated from a power perspective and more efficient ones can be selected. Applications of the model include not only power-aware scheduling but also prediction of anomalous behavior.
With energy needs becoming a major concern for large computational infrastructures, numerous recent research efforts have focused on reducing power usage. A large body of work addresses modeling power for various types of computing units, starting from load, frequency and other hardware counters. For instance, single- and dual-core CPU power is modeled in @cite_16 by considering the relation between the probability distribution functions of load and power, while servers with up to 8 cores are studied in @cite_10 @cite_18 . GPU power is estimated from load measures in @cite_2 . These methods do not allow for advance prediction in real-life scenarios, since load and hardware counters cannot be known in advance, unless they can be predicted through other methods. Our method is significantly different in that we model total system power starting exclusively from workload measures, without the need to monitor the individual components, enabling advance prediction.
{ "cite_N": [ "@cite_18", "@cite_16", "@cite_10", "@cite_2" ], "mid": [ "2108987574", "1974464843", "2039052145", "2254254903" ], "abstract": [ "Balancing the performance and the energy consumption of the servers is one of the important issues in large-scale computing infrastructure such as data centers. Measuring or accurately estimating power consumption of a server is one of the most fundamental and enabling technologies for enhancing energy efficiency of a server, because how the server consumes the supplied power is essential for constructing a power management policy. For this purpose, power models for server systems have been extensively studied. However, most existing works are too complex to be used in real time, because gathering the data for estimating the power consumption causes much overhead. In this paper, we propose a simple power model for a multicore server. Our model is simple enough to gather only four parameters: operating frequency, the number of active cores, the number of cache accesses and the number of last-level cache misses. We show our model is simple but relatively accurate by experiments that show the model has over 90% accuracy.", "Quantitatively estimating the relationship between the workload and the corresponding power consumption of a multicore processor is an essential step towards achieving energy proportional computing. Most existing and proposed approaches use Performance Monitoring Counters (Hardware Monitoring Counters) for this task. In this paper we propose a complementary approach that employs the statistics of CPU utilization (workload) only. Hence, we model the workload and the power consumption of a multicore processor as random variables and exploit the monotonicity property of their distribution functions to establish a quantitative relationship between the random variables. 
We will show that for a single-core processor the relationship is best approximated by a quadratic function whereas for a dual-core processor, the relationship is best approximated by a linear function. We will demonstrate the plausibility of our approach by estimating the power consumption of both custom-made and standard benchmarks (namely, the SPEC power benchmark and the Apache benchmarking tool) for Intel and AMD processors.", "Power management is one of the biggest challenges facing current data centers. As processors consume the dominant amount of power in computer systems, power management of multicore processors is extremely significant. An efficient power model that accurately predicts the power consumption of a processor is required to develop effective power management techniques. However, this challenge arises with the use of virtualization and the increasing number of cores in processors. In this paper, we analyze the power consumption of a multicore processor, and we develop three statistical CPU power models based on the number of active cores and the average running frequency, using multiple linear regression. Our models are built upon a virtualized server. The models are validated statistically and experimentally. Statistically, our models cover 97% of system variations. Furthermore, we test our models with different workloads and three benchmarks. The results show that our models achieve better performance compared to the recently proposed model for power management in virtualized environments. Our models provide highly accurate predictions for un-sampled combinations of frequency and cores; 95% of the predicted values have less than 7% error. 
Thus, we can integrate these models into power management mechanisms for a dynamic configuration of a virtual machine in terms of the number of its virtual CPUs and the frequency of physical cores, to satisfy both performance and power constraints.", "In recent years, more and more transistors have been integrated within the GPU, which has resulted in steadily rising power consumption requirements. In this paper we present a preliminary scheme to statistically analyze and model the power consumption of a mainstream GPU (NVidia GeForce 8800gt) by exploiting the innate coupling among power consumption characteristics, runtime performance, and dynamic workloads. Based on the recorded runtime GPU workload signals, our trained statistical model is capable of robustly and accurately predicting power consumption of the target GPU. To the best of our knowledge, this study is the first work that applies statistical analysis to model the power consumption of a mainstream GPU, and its results provide useful insights for future endeavors of building energy-efficient GPU computing paradigms." ] }
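One of the abstracts above fits CPU power as a multiple linear regression on the number of active cores and the running frequency. A minimal sketch of that kind of fit follows (ordinary least squares via the normal equations; `fit_linear` and the feature layout `[1, cores, freq]` are assumptions made for the example, not the paper's exact formulation).

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) beta = X^T y,
    solved with Gaussian elimination and partial pivoting.
    X rows are feature vectors, e.g. [1, n_cores, freq]; y are power readings."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        s = sum(A[r][c] * beta[c] for c in range(r + 1, k))
        beta[r] = (b[r] - s) / A[r][r]
    return beta
```

With the fitted coefficients, predicted power for a planned workload configuration is just the dot product of `beta` with `[1, cores, freq]`, which is the kind of advance prediction the surrounding related-work paragraph motivates.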
1605.09530
2415465551
For current High Performance Computing systems to scale towards the holy grail of ExaFLOP performance, their power consumption has to be reduced by at least one order of magnitude. This goal can be achieved only through a combination of hardware and software advances. Being able to model and accurately predict the power consumption of large computational systems is necessary for software-level innovations such as proactive and power-aware scheduling, resource allocation and fault tolerance techniques. In this paper we present a 2-layer model of power consumption for a hybrid supercomputer (which held the top spot of the Green500 list on July 2013) that combines CPU, GPU and MIC technologies to achieve higher energy efficiency. Our model takes as input workload information - the number and location of resources that are used by each job at a certain time - and calculates the resulting system-level power consumption. When jobs are submitted to the system, the workload configuration can be foreseen based on the scheduler policies, and our model can then be applied to predict the ensuing system-level power consumption. Additionally, alternative workload configurations can be evaluated from a power perspective and more efficient ones can be selected. Applications of the model include not only power-aware scheduling but also prediction of anomalous behavior.
The power consumption of HPC applications has also been analyzed in recent years. For instance, the US Department of Defense is using application signatures to predict power consumption across different architectures @cite_19 . Performance counters are used to model application power on three small-scale HPC platforms by @cite_8 . GPU CUDA kernels are analyzed in @cite_17 , again based on job performance counters. Recently, we have introduced a method @cite_13 based on Support Vector Regression (SVR), which builds one power model per user, to predict job power consumption based on workload in Eurora. This method has an advantage over others in that it does not require instrumenting the applications to extract signatures and performance counters, but only needs the number of resources required, making it much more straightforward to apply. In this work, this SVR method will be employed to predict the power of computing components, from which we will then obtain system-level power.
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_17", "@cite_8" ], "mid": [ "2100322710", "2269820795", "1978969737", "2087217490" ], "abstract": [ "Energy costs comprise a significant fraction of the total cost of ownership of a large supercomputer. As with performance, energy-efficiency is not an attribute of a compute resource alone; it is a function of a resource-workload combination. The operation mix and locality characteristics of the applications in the workload affect the energy consumption of the resource. Our experiments confirm that data locality is the primary source of variation in energy requirements. The major contributions of this work include a method for performing fine-grained power measurements on high performance computing (HPC) resources, a benchmark infrastructure that exercises specific portions of the node in order to characterize operation energy costs, and a method of combining application information with independent energy measurements in order to estimate the energy requirements for specific application-resource pairings. A verification study using the NAS parallel benchmarks and S3D shows that our model has an average prediction error of 7.4%.", "Power consumption is a major obstacle for High Performance Computing (HPC) systems in their quest towards the holy grail of ExaFLOP performance. Significant advances in power efficiency have to be made before this goal can be attained and accurate modeling is an essential step towards power efficiency by optimizing system operating parameters to match dynamic energy needs. In this paper we present a study of power consumption by jobs in Eurora, a hybrid CPU-GPU-MIC system installed at the largest Italian data center. Using data from a dedicated monitoring framework, we build a data-driven model of power consumption for each user in the system and use it to predict the power requirements of future jobs. We are able to achieve good prediction results for over 80% of the users in the system. 
For the remaining users, we identify possible reasons why prediction performance is not as good. Possible applications for our predictive modeling results include scheduling optimization, power-aware billing and system-scale power modeling. All the scripts used for the study have been made available on GitHub.", "We present a statistical approach for estimating power consumption of GPU kernels. We use the GPU performance counters that are exposed for CUDA applications, and train a linear regression model where performance counters are used as independent variables and power consumption is the dependent variable. For model training and evaluation, we use publicly available CUDA applications, consisting of 49 kernels in the CUDA SDK and the Rodinia benchmark suite. Our regression model achieves highly accurate estimates for many of the tested kernels, where the average error ratio is 4.7%. However, we also find that it fails to yield accurate estimates for kernels with texture reads because of the lack of performance counters for monitoring texture accesses, resulting in significant underestimation for such kernels.", "Due to high energy costs, fine-grained power consumption accounting and capability of making users of High Performance Computing (HPC) clusters aware of the cost of their computation is becoming more and more important. Hardware power measurement solutions can be very expensive, hence the appeal of software-based estimation methods. In this paper we present a practical approach to power consumption estimation of both individual application executions and whole computing nodes. We compare it to existing state-of-the-art solutions, provide accuracy figures, and discuss possible deployment scenarios. Highlights: We explore existing methods for measuring HPC machine power use. We propose a general method for software estimation of power consumption. We discuss both application- and machine-level power estimation. 
We provide practical application scenarios for both methods." ] }
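The per-user power-prediction idea above (the cited method uses Support Vector Regression) can be sketched with a deterministic least-squares stand-in: predict a job's power draw purely from its resource request, with no application instrumentation. All feature names and wattages below are illustrative, not taken from the Eurora data.

```python
# Minimal sketch of workload-to-power prediction as in the cited per-user
# models. The real method uses SVR; ordinary least squares is a simpler
# stand-in here. Features and wattages are made up for illustration.
import numpy as np

# Each row: (CPU cores, GPUs, MICs) requested by one of a user's past jobs.
X = np.array([
    [16, 0, 0],
    [32, 0, 0],
    [16, 2, 0],
    [32, 2, 0],
    [16, 0, 2],
], dtype=float)
y = np.array([180.0, 320.0, 420.0, 560.0, 390.0])  # measured watts

# Fit y ~ w0 + w1*cores + w2*gpus + w3*mics by least squares.
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_watts(cores, gpus, mics):
    """Predict a new job's power draw from its resource request alone."""
    return float(w @ [1.0, cores, gpus, mics])

print(predict_watts(32, 1, 0))  # 440 W for this synthetic data
```

Summing such per-job predictions over the jobs a scheduler plans to co-locate gives the system-level power estimate the text describes.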
1605.09350
2412257014
The past century of telecommunications has shown that failures in networks are prevalent. Although much has been done to prevent failures, network nodes and links are bound to fail eventually. Failure recovery processes are therefore needed. Failure recovery is mainly influenced by (1) detection of the failure, and (2) circumvention of the detected failure. However, especially in SDNs where controllers recompute network state reactively, this leads to high delays. Hence, next to primary rules, backup rules should be installed in the switches to quickly detour traffic once a failure occurs. In this work, we propose algorithms for computing an all-to-all primary and backup network forwarding configuration that is capable of circumventing link and node failures. Omitting the high delay invoked by controller recomputation through preconfiguration, our proposal's recovery delay is close to the detection time which is significantly below the 50 ms rule of thumb. After initial recovery, we recompute network configuration to guarantee protection from future failures. Our algorithms use packet-labeling to guarantee correct and shortest detour forwarding. The algorithms and labeling technique allow packets to return to the primary path and are able to discriminate between link and node failures. The computational complexity of our solution is comparable to that of all-to-all-shortest paths computations. Our experimental evaluation on both real and generated networks shows that network configuration complexity highly decreases compared to classic disjoint paths computations. Finally, we provide a proof-of-concept OpenFlow controller in which our proposed configuration is implemented, demonstrating that it readily can be applied in production networks.
Suurballe @cite_0 proposes an iterative scheme for finding @math one-to-one disjoint paths. At each iteration, the network is (temporarily) transformed into an equivalent one with non-negative link weights and zero-weight links along the shortest-paths tree rooted at the source node. Dijkstra's algorithm can then be applied to find the @math -th disjoint path from the knowledge of the earlier @math disjoint paths. Bhandari @cite_9 later proposes a simplification of Suurballe's algorithm: an iterative scheme that derives the @math -th one-to-one disjoint path from the optimal solution of the @math disjoint paths. At each iteration, the direction and algebraic sign of the link weight are reversed for each link of the @math disjoint paths, so the network may contain negative link weights. A modified Dijkstra's algorithm @cite_9 or the Bellman-Ford algorithm @cite_18 @cite_26 , both usable in networks with negative link weights, can then be applied to find the @math -th disjoint path.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_26", "@cite_18" ], "mid": [ "2154434560", "1958473923", "1964119857", "2227557434" ], "abstract": [ "Routes between two given nodes of a network are called diversified if they are node-disjoint, except at the terminals. Diversified routes are required for reliability in communication, and an additional criterion is that their total cost, assumed to be the sum of individual arc lengths or costs, is minimum. An algorithm and related theory is described for a general number K of node-disjoint paths with minimum total length. The algorithm applies shortest path labeling algorithms familiar in the literature. K node-disjoint paths are found in K iterations of a single shortest path algorithm.", "One way to improve the reliability of a network is through physical diversity, i.e., via routing of traffic between a given pair of nodes in the network over two or more physically-disjoint paths such that if a node or a physical link fails on one of the disjoint paths, not all of the traffic is lost. Alternatively, enough spare capacity may be allocated on the individual paths such that the lost traffic due to a node or physical link failure can be routed immediately over the predetermined paths. In this paper, we present optimal algorithms for K-disjoint paths (K spl ges 2) in a graph of vertices (or nodes) and edges (or links). These algorithms are simpler than those given in the past. We discuss how such algorithms can be used in the design of survivable mesh networks based on the digital cross-connect systems (DCS). We also discuss the generation of optimal network topologies which permit K>2 disjoint paths and upon which survivable networks may be modeled. The study is of particular relevance to fiber networks.", "In this classic book, first published in 1962, L. R. Ford, Jr., and D. R. Fulkerson set the foundation for the study of network flow problems. 
The models and algorithms introduced in Flows in Networks are used widely today in the fields of transportation systems, manufacturing, inventory planning, image processing, and Internet traffic. The techniques presented by Ford and Fulkerson spurred the development of powerful computational tools for solving and analyzing network flow models, and also furthered the understanding of linear programming. In addition, the book helped illuminate and unify results in combinatorial mathematics while emphasizing proofs based on computationally efficient construction. Flows in Networks is rich with insights that remain relevant to current research in engineering, management, and other sciences. This landmark work belongs on the bookshelf of every researcher working with networks.", "Abstract : Given a set of N cities, with every two linked by a road, and the times required to traverse these roads, we wish to determine the path from one given city to another given city which minimizes the travel time. The times are not directly proportional to the distances due to varying quality of roads, and varying quantities of traffic. The functional equation technique of dynamic programming, combined with approximation in policy space, yields an iterative algorithm which converges after at most (N-1) iterations." ] }
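The Suurballe/Bhandari scheme described in this record can be sketched compactly: find a shortest path, reverse and negate its edges, find a second shortest path with Bellman-Ford (which tolerates the resulting negative weights), then cancel edges traversed in opposite directions. The directed example graph below is illustrative; it is a case where greedily removing the first path's edges would leave the destination unreachable, while the transformation recovers a disjoint pair.

```python
# Sketch of Bhandari's algorithm for two link-disjoint min-sum paths:
# (1) shortest path P1; (2) reverse P1's edges and negate their weights;
# (3) shortest path P2 via Bellman-Ford (handles negative weights);
# (4) cancel opposite traversals and rebuild the two disjoint paths.
from collections import defaultdict

def bellman_ford(edges, nodes, src, dst):
    """Shortest path allowing negative edge weights; returns a node list."""
    dist = {n: float("inf") for n in nodes}
    pred = {n: None for n in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):
        for (u, v), w in edges.items():
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
    path, n = [], dst
    while n is not None:
        path.append(n)
        n = pred[n]
    return path[::-1]

def two_disjoint_paths(edges, nodes, src, dst):
    p1 = bellman_ford(edges, nodes, src, dst)
    p1_edges = list(zip(p1, p1[1:]))
    # Reverse the direction and negate the weight of each edge on P1.
    residual = dict(edges)
    for u, v in p1_edges:
        del residual[(u, v)]
        residual[(v, u)] = -edges[(u, v)]
    p2 = bellman_ford(residual, nodes, src, dst)
    # Union of both edge sets, cancelling opposite traversals.
    combined = set(p1_edges)
    for u, v in zip(p2, p2[1:]):
        if (v, u) in combined:
            combined.discard((v, u))
        else:
            combined.add((u, v))
    # Each surviving node has one outgoing edge; walk from src twice.
    nxt = defaultdict(list)
    for u, v in combined:
        nxt[u].append(v)
    paths = []
    for _ in range(2):
        node, path = src, [src]
        while node != dst:
            node = nxt[node].pop()
            path.append(node)
        paths.append(path)
    return paths

nodes = ["S", "1", "2", "3", "4", "T"]
edges = {("S", "1"): 1, ("1", "2"): 1, ("2", "T"): 1,
         ("S", "3"): 2, ("3", "2"): 1, ("1", "4"): 1, ("4", "T"): 2}
paths = two_disjoint_paths(edges, nodes, "S", "T")
```

Here P1 = S-1-2-T; the second search uses the reversed edge 2→1 (weight -1), and cancellation yields the disjoint pair S-1-4-T and S-3-2-T.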
1605.09350
2412257014
The past century of telecommunications has shown that failures in networks are prevalent. Although much has been done to prevent failures, network nodes and links are bound to fail eventually. Failure recovery processes are therefore needed. Failure recovery is mainly influenced by (1) detection of the failure, and (2) circumvention of the detected failure. However, especially in SDNs where controllers recompute network state reactively, this leads to high delays. Hence, next to primary rules, backup rules should be installed in the switches to quickly detour traffic once a failure occurs. In this work, we propose algorithms for computing an all-to-all primary and backup network forwarding configuration that is capable of circumventing link and node failures. Omitting the high delay invoked by controller recomputation through preconfiguration, our proposal's recovery delay is close to the detection time which is significantly below the 50 ms rule of thumb. After initial recovery, we recompute network configuration to guarantee protection from future failures. Our algorithms use packet-labeling to guarantee correct and shortest detour forwarding. The algorithms and labeling technique allow packets to return to the primary path and are able to discriminate between link and node failures. The computational complexity of our solution is comparable to that of all-to-all-shortest paths computations. Our experimental evaluation on both real and generated networks shows that network configuration complexity highly decreases compared to classic disjoint paths computations. Finally, we provide a proof-of-concept OpenFlow controller in which our proposed configuration is implemented, demonstrating that it readily can be applied in production networks.
There are also algorithms that propose protection schemes based on (un)directed disjoint trees, e.g., the Roskind-Tarjan algorithm @cite_4 or the Medard-Finn-Barry-Gallager algorithm @cite_6 . The Roskind-Tarjan algorithm finds @math all-to-all undirected min-sum disjoint trees, while the Medard-Finn-Barry-Gallager algorithm finds a pair of one-to-all directed min-sum disjoint trees that can share links in the reverse direction. In contrast to our work, their resulting end-to-end paths can often be unnecessarily long, which may lead to higher failure probabilities and higher network operation costs. A more extensive overview of disjoint paths algorithms is presented in @cite_30 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_6" ], "mid": [ "2126487423", "", "2098862160" ], "abstract": [ "Network survivability--the ability to maintain operation when one or a few network components fail--is indispensable for present-day networks. In this paper, we characterize three main components in establishing network survivability for an existing network, namely, (1) determining network connectivity, (2) augmenting the network, and (3) finding disjoint paths. We present a concise overview of network survivability algorithms, where we focus on presenting a few polynomial-time algorithms that could be implemented by practitioners and give references to more involved algorithms.", "", "We present a new algorithm which creates redundant trees on arbitrary node-redundant or link-redundant networks. These trees are such that any node is connected to the common root of the trees by at least one of the trees in case of node or link failure. Our scheme provides rapid preplanned recovery of communications with great flexibility in the topology design. Unlike previous algorithms, our algorithm can establish two redundant trees in the case of a node failing in the network. In the case of failure of a communications link, our algorithm provides a superset of the previously known trees." ] }
1605.09350
2412257014
The past century of telecommunications has shown that failures in networks are prevalent. Although much has been done to prevent failures, network nodes and links are bound to fail eventually. Failure recovery processes are therefore needed. Failure recovery is mainly influenced by (1) detection of the failure, and (2) circumvention of the detected failure. However, especially in SDNs where controllers recompute network state reactively, this leads to high delays. Hence, next to primary rules, backup rules should be installed in the switches to quickly detour traffic once a failure occurs. In this work, we propose algorithms for computing an all-to-all primary and backup network forwarding configuration that is capable of circumventing link and node failures. Omitting the high delay invoked by controller recomputation through preconfiguration, our proposal's recovery delay is close to the detection time which is significantly below the 50 ms rule of thumb. After initial recovery, we recompute network configuration to guarantee protection from future failures. Our algorithms use packet-labeling to guarantee correct and shortest detour forwarding. The algorithms and labeling technique allow packets to return to the primary path and are able to discriminate between link and node failures. The computational complexity of our solution is comparable to that of all-to-all-shortest paths computations. Our experimental evaluation on both real and generated networks shows that network configuration complexity highly decreases compared to classic disjoint paths computations. Finally, we provide a proof-of-concept OpenFlow controller in which our proposed configuration is implemented, demonstrating that it readily can be applied in production networks.
In work on Software-Defined Networks, @cite_19 derive an MILP formulation for preplanning recovery paths that incorporates QoS metrics. Their approach relies heavily on crankback routing, which results in long backup paths and redundant usage of links compared to our approach. Their follow-up work SPIDER @cite_33 implements the respective failure rerouting mechanism using MPLS tags. The system depends on OpenState @cite_8 to perform customized failure detection and data plane switching, making it incompatible with existing networks and available hardware switches. Furthermore, the system does not distinguish between link and node failures as our approach does.
{ "cite_N": [ "@cite_19", "@cite_33", "@cite_8" ], "mid": [ "1561042325", "2949817135", "" ], "abstract": [ "A reliable and scalable mechanism to provide protection against a link or node failure has additional requirements in the context of SDN and OpenFlow. Not only it has to minimize the load on the controller, but it must be able to react even when the controller is unreachable. In this paper we present a protection scheme based on precomputed backup paths and inspired by MPLS “crankback” routing, that guarantees instantaneous recovery times and aims at zero packet-loss after failure detection, regardless of controller reachability, even when OpenFlow's “fast-failover” feature cannot be used. The proposed mechanism is based on OpenState, an OpenFlow extension that allows a programmer to specify how forwarding rules should autonomously adapt in a stateful fashion, reducing the need to rely on remote controllers. We present the scheme as well as two different formulations for the computation of backup paths.", "When dealing with node or link failures in Software Defined Networking (SDN), the network capability to establish an alternative path depends on controller reachability and on the round trip times (RTTs) between controller and involved switches. Moreover, current SDN data plane abstractions for failure detection (e.g. OpenFlow \"Fast-failover\") do not allow programmers to tweak switches' detection mechanism, thus leaving SDN operators still relying on proprietary management interfaces (when available) to achieve guaranteed detection and recovery delays. We propose SPIDER, an OpenFlow-like pipeline design that provides i) a detection mechanism based on switches' periodic link probing and ii) fast reroute of traffic flows even in case of distant failures, regardless of controller availability. SPIDER can be implemented using stateful data plane abstractions such as OpenState or Open vSwitch, and it offers guaranteed short (i.e. 
ms) failure detection and recovery delays, with a configurable trade off between overhead and failover responsiveness. We present here the SPIDER pipeline design, behavioral model, and analysis on flow tables' memory impact. We also implemented and experimentally validated SPIDER using OpenState (an OpenFlow 1.3 extension for stateful packet processing), showing numerical results on its performance in terms of recovery latency and packet losses.", "" ] }
1605.09350
2412257014
The past century of telecommunications has shown that failures in networks are prevalent. Although much has been done to prevent failures, network nodes and links are bound to fail eventually. Failure recovery processes are therefore needed. Failure recovery is mainly influenced by (1) detection of the failure, and (2) circumvention of the detected failure. However, especially in SDNs where controllers recompute network state reactively, this leads to high delays. Hence, next to primary rules, backup rules should be installed in the switches to quickly detour traffic once a failure occurs. In this work, we propose algorithms for computing an all-to-all primary and backup network forwarding configuration that is capable of circumventing link and node failures. Omitting the high delay invoked by controller recomputation through preconfiguration, our proposal's recovery delay is close to the detection time which is significantly below the 50 ms rule of thumb. After initial recovery, we recompute network configuration to guarantee protection from future failures. Our algorithms use packet-labeling to guarantee correct and shortest detour forwarding. The algorithms and labeling technique allow packets to return to the primary path and are able to discriminate between link and node failures. The computational complexity of our solution is comparable to that of all-to-all-shortest paths computations. Our experimental evaluation on both real and generated networks shows that network configuration complexity highly decreases compared to classic disjoint paths computations. Finally, we provide a proof-of-concept OpenFlow controller in which our proposed configuration is implemented, demonstrating that it readily can be applied in production networks.
IBSDN @cite_7 achieves robustness by running a centralized controller in parallel with a distributed routing protocol. Initially, all traffic is forwarded according to the controller's configuration; switches revert to the path determined by the traditional routing protocol once a link is detected to be down. The authors eliminate crankback paths through crankback detection performed by a custom local monitoring agent. The proposed system is both elegant and simple, though it does require customized hardware, since switches need to connect to a central controller, run a routing protocol, and implement a local agent to perform crankback detection. Moreover, the time the routing protocol takes to converge to the post-failure situation may be long and cannot outpace a preconfigured backup plan.
{ "cite_N": [ "@cite_7" ], "mid": [ "2009508457" ], "abstract": [ "One of the main concerns on SDN is relative to its ability to quickly react to network failures, while limiting both the control-plane overhead and the additional forwarding state kept by data-plane devices. Despite its practical importance, this concern is often overlooked in OpenFlow-based proposals. In this paper, we propose a new architecture, called IBSDN, in which a distributed routing protocol flanks OpenFlow to improve network robustness, reaction to failures, and controller scalability. In deeply exploring this idea, we complement our architecture with data-plane triggered mechanisms that improve its efficiency. We prove that the resulting solution ensures robustness for any combination of topological failures, and quickly reduces the path stretch. Finally, experimenting with a prototype implementation, we show that our approach is practical and overcomes the main limitations of previous work." ] }
1605.09350
2412257014
The past century of telecommunications has shown that failures in networks are prevalent. Although much has been done to prevent failures, network nodes and links are bound to fail eventually. Failure recovery processes are therefore needed. Failure recovery is mainly influenced by (1) detection of the failure, and (2) circumvention of the detected failure. However, especially in SDNs where controllers recompute network state reactively, this leads to high delays. Hence, next to primary rules, backup rules should be installed in the switches to quickly detour traffic once a failure occurs. In this work, we propose algorithms for computing an all-to-all primary and backup network forwarding configuration that is capable of circumventing link and node failures. Omitting the high delay invoked by controller recomputation through preconfiguration, our proposal's recovery delay is close to the detection time which is significantly below the 50 ms rule of thumb. After initial recovery, we recompute network configuration to guarantee protection from future failures. Our algorithms use packet-labeling to guarantee correct and shortest detour forwarding. The algorithms and labeling technique allow packets to return to the primary path and are able to discriminate between link and node failures. The computational complexity of our solution is comparable to that of all-to-all-shortest paths computations. Our experimental evaluation on both real and generated networks shows that network configuration complexity highly decreases compared to classic disjoint paths computations. Finally, we provide a proof-of-concept OpenFlow controller in which our proposed configuration is implemented, demonstrating that it readily can be applied in production networks.
@cite_31 apply the concept of Loop-Free Alternates (LFA) from IP networks to SDNs, preprogramming nodes with single-link backup rules provided these do not create loops. By applying an alternative loop-detection method, they find more backup paths than traditional LFA, although full protection still requires topological adaptations.
{ "cite_N": [ "@cite_31" ], "mid": [ "2332445018" ], "abstract": [ "Loop-Free Alternates (LFAs) are a local fast-reroute mechanism defined for IP networks. They are simple but suffer from two drawbacks. Firstly, some flows cannot be protected due to missing LFAs, i.e., this concept does not provide full protection coverage, which depends on network topology. Secondly, some LFAs cause loops in case of node or multiple failures. Avoiding those LFAs decreases the protection coverage even further. In this work, we propose to apply LFAs to OpenFlow-based networks. We suggest a method for loop detection so that loops can be avoided without decreasing protection coverage. We propose an implementation with OpenFlow that requires only a single additional flow rule per switch. We further investigate the percentage of flows that can be protected, not protected, or even create loops in different types of failure scenarios. We consider realistic ring and mesh networks as well as typical topologies for data center networks. None of them can be fully protected with LFAs. Therefore, we suggest an augmented fat-tree topology which allows LFAs to protect against all single link and node failures and against most double failures." ] }
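The LFA condition underlying the cited work can be checked with nothing more than all-pairs shortest-path distances: a neighbor N of node S is a loop-free alternate toward destination D if dist(N, D) < dist(N, S) + dist(S, D), so traffic handed to N cannot loop back through S. The small topology below is illustrative, not taken from the paper.

```python
# Checking the standard Loop-Free Alternate (LFA) inequality on a small
# undirected weighted topology (illustrative example):
#   N is an LFA of S toward D  iff  dist(N, D) < dist(N, S) + dist(S, D)

def all_pairs(nodes, links):
    """Floyd-Warshall all-pairs shortest distances, undirected links."""
    INF = float("inf")
    d = {(a, b): (0 if a == b else INF) for a in nodes for b in nodes}
    for (a, b), w in links.items():
        d[(a, b)] = d[(b, a)] = w
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[(i, k)] + d[(k, j)] < d[(i, j)]:
                    d[(i, j)] = d[(i, k)] + d[(k, j)]
    return d

def loop_free_alternates(nodes, links, s, dst):
    d = all_pairs(nodes, links)
    nbrs = {b for (a, b) in links if a == s} | {a for (a, b) in links if b == s}
    # Primary next hop: the neighbor on the shortest path from s to dst.
    cost = lambda n: links.get((s, n), links.get((n, s))) + d[(n, dst)]
    primary = min(nbrs, key=cost)
    # Remaining neighbors satisfying the loop-free inequality are LFAs.
    return sorted(n for n in nbrs - {primary}
                  if d[(n, dst)] < d[(n, s)] + d[(s, dst)])

nodes = ["S", "A", "B", "D"]
links = {("S", "A"): 1, ("A", "D"): 1, ("S", "B"): 1, ("B", "D"): 2}
lfas = loop_free_alternates(nodes, links, "S", "D")  # B: 2 < 1 + 2
```

With these weights the primary next hop from S to D is A, and B qualifies as an LFA because its own distance to D (2) is strictly less than a detour back through S (1 + 2).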
1605.09197
2416706156
We give a new proof of the existence of Klyachko models for unitary representations of @math over a non-archimedean local field @math . Our methods are purely local and are based on studying distinction within the class of ladder representations introduced by Lapid and Minguez. We classify those ladder representations that are distinguished with respect to Klyachko models. We prove the hereditary property of these models for induced representations from arbitrary finite length representations. Finally, in the other direction and in the context of admissible representations induced from ladder, we study the relation between distinction of the parabolic induction with respect to the symplectic groups and distinction of the inducing data.
The present work began as an attempt to extend the distinction results of @cite_1 and @cite_11 from the unitary dual of @math to the admissible dual of @math .
{ "cite_N": [ "@cite_1", "@cite_11" ], "mid": [ "2141469696", "2963688096" ], "abstract": [ "We provide a family of representations of GLn over a p-adic field that admit a non-vanishing linear functional invariant under the symplectic group (i.e. representations that are Sp(n)-distinguished). This is a generalization of a result of Heumos–Rallis. Our proof uses global methods. The results of [Omer Offen, Eitan Sayag, Global mixed periods and local Klyachko models for the general linear group, submitted for publication] imply that the family at hand contains all irreducible, unitary representations that are distinguished by the symplectic group.", "Through radio frequency glow discharge the molecular structure of a fluoropolymer is permanently modified by the substitution of hydrogen and oxygen or oxygen-containing radicals for fluorine to a depth of up to 100 ANGSTROM . The surface morphological properties on a molecular level (e.g., pore structure) of the modified fluoropolymer remain substantially uncharged from those of the starting polymer as well as the bulk structure below the modified surface region while wettability with respect to low surface tension liquids and surface free energy as determined through critical surface tension are increased and new chemically reactive sites are created for further wet chemical modification or attachment of various functionality for development of a bioprobe device." ] }
1605.09197
2416706156
We give a new proof of the existence of Klyachko models for unitary representations of @math over a non-archimedean local field @math . Our methods are purely local and are based on studying distinction within the class of ladder representations introduced by Lapid and Minguez. We classify those ladder representations that are distinguished with respect to Klyachko models. We prove the hereditary property of these models for induced representations from arbitrary finite length representations. Finally, in the other direction and in the context of admissible representations induced from ladder, we study the relation between distinction of the parabolic induction with respect to the symplectic groups and distinction of the inducing data.
Distinction problems within the class of ladder representations were first studied in @cite_13 and later in @cite_12 , and the Theorem of the introduction can be considered an analogue of their results. A study of the distinction problem for standard modules, parallel to ours, can be found in these two references.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "1966456487", "1811619627" ], "abstract": [ "Let E F be a quadratic extension of number fields. We study periods and regularized periods of cusp forms and Eisenstein series on ( GL _ n ( A _ E ) ) over a unitary group of a Hermitian form with respect to E F. We provide factorization for these periods into locally defined functionals, express these factors in terms of suitably defined local periods and characterize global distinction. We also study in detail the analogous local question and analyze the space of invariant linear forms under a unitary group.", "Let E F be a quadratic extension of p-adic fields. We prove that every smooth irreducible ladder representation of the group (GL_n(E) ) which is contragredient to its own Galois conjugate, possesses the expected distinction properties relative to the subgroup (GL_n(F) ). This affirms a conjecture attributed to Jacquet for a large class of representations. Along the way, we prove a reformulation of the conjecture which concerns standard modules in place of irreducible representations." ] }
1605.08900
2412751481
We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degrees and text representations are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparably to a state-of-the-art feature-based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers can improve the performance. Moreover, our approach is also fast: the deep memory network with 9 layers is 15 times faster than LSTM with a CPU implementation.
Aspect level sentiment classification is a fine-grained classification task in sentiment analysis, which aims at identifying the sentiment polarity of a sentence expressed towards an aspect @cite_7 . Most existing works use machine learning algorithms and build sentiment classifiers from sentences with manually annotated polarity labels. One of the most successful approaches in the literature is feature-based SVM, for which experts design effective feature templates and make use of external resources such as parsers and sentiment lexicons @cite_28 @cite_10 . In recent years, neural network approaches @cite_22 @cite_0 @cite_32 @cite_4 have attracted growing attention for their capacity to learn powerful text representations from data. However, these neural models (e.g., LSTM) are computationally expensive and cannot explicitly reveal the importance of context evidence with regard to an aspect. Instead, we develop a simple and fast approach that explicitly encodes the importance of the context towards a given aspect. It is worth noting that the task we focus on differs from fine-grained opinion extraction, which assigns each word a tag (e.g., B, I, O) to indicate whether it is an aspect or sentiment word @cite_27 @cite_18 @cite_34 . In this work, the aspect word is given as part of the input.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_7", "@cite_28", "@cite_32", "@cite_0", "@cite_27", "@cite_34", "@cite_10" ], "mid": [ "2144012961", "", "", "2251648804", "", "2251792193", "", "2162860143", "2252024663", "" ], "abstract": [ "Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, “depth in space” — as an orthogonal notion to “depth in time” — in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, as well as without the standard layer-by-layer pre-training typically required of RNN architectures.", "", "", "Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, irrespective of the entities mentioned (e.g., laptops) and their aspects (e.g., battery, screen). SemEval2014 Task 4 aimed to foster research in the field of aspect-based sentiment analysis, where the goal is to identify the aspects of given target entities and the sentiment expressed for each aspect. The task provided datasets containing manually annotated reviews of restaurants and laptops, as well as a common evaluation procedure. 
It attracted 163 submissions from 32 teams.", "", "This paper presents a new method to identify sentiment of an aspect of an entity. It is an extension of RNN (Recursive Neural Network) that takes both dependency and constituent trees of a sentence into account. Results of an experiment show that our method significantly outperforms previous methods.", "", "Automatic opinion recognition involves a number of related tasks, such as identifying the boundaries of opinion expression, determining their polarity, and determining their intensity. Although much progress has been made in this area, existing research typically treats each of the above tasks in isolation. In this paper, we apply a hierarchical parameter sharing technique using Conditional Random Fields for fine-grained opinion analysis, jointly detecting the boundaries of opinion expressions as well as determining two of their key attributes --- polarity and intensity. Our experimental results show that our proposed approach improves the performance over a baseline that does not exploit hierarchical structure among the classes. In addition, we find that the joint approach outperforms a baseline that is based on cascading two separate components.", "The tasks in fine-grained opinion mining can be regarded as either a token-level sequence labeling problem or as a semantic compositional task. We propose a general class of discriminative models based on recurrent neural networks (RNNs) and word embeddings that can be successfully applied to such tasks without any taskspecific feature engineering effort. Our experimental results on the task of opinion target identification show that RNNs, without using any hand-crafted features, outperform feature-rich CRF-based models. Our framework is flexible, allows us to incorporate other linguistic features, and achieves results that rival the top performing systems in SemEval-2014.", "" ] }
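The record above centers on a neural attention layer applied repeatedly over a memory of context-word vectors. As an illustration only — the variable names, toy dimensionality, and single linear scoring layer are my own simplifications, not the paper's exact formulation — one attention hop can be sketched in plain Python:

```python
import math

def attention_hop(memory, aspect, w, b=0.0):
    """One attention hop: score each context-word vector against the
    aspect vector, softmax the scores into importance weights, and
    return the weighted memory read combined with the aspect vector."""
    # score m_i with a linear layer over the concatenation [m_i; aspect]
    scores = [sum(wi * xi for wi, xi in zip(w, m + aspect)) + b
              for m in memory]
    # softmax -> importance of each context word w.r.t. the aspect
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # attention-weighted sum of the memory, plus the aspect vector
    dim = len(aspect)
    out = [sum(a * m[i] for a, m in zip(alphas, memory)) + aspect[i]
           for i in range(dim)]
    return out, alphas

# toy 2-d example: two context words, one aspect
memory = [[1.0, 0.0], [0.0, 1.0]]
aspect = [1.0, 0.0]
out, alphas = attention_hop(memory, aspect, w=[1.0, 0.0, 0.0, 0.0])
```

Stacking several such hops, each feeding its output in as the next hop's aspect representation, mirrors the multiple computational layers the abstract refers to.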
1605.08900
2412751481
We introduce a deep memory network for aspect level sentiment classification. Unlike feature-based SVM and sequential neural models such as LSTM, this approach explicitly captures the importance of each context word when inferring the sentiment polarity of an aspect. Such importance degrees and text representations are calculated with multiple computational layers, each of which is a neural attention model over an external memory. Experiments on laptop and restaurant datasets demonstrate that our approach performs comparably to a state-of-the-art feature-based SVM system, and substantially better than LSTM and attention-based LSTM architectures. On both datasets we show that multiple computational layers could improve the performance. Moreover, our approach is also fast: the deep memory network with 9 layers is 15 times faster than an LSTM with a CPU implementation.
In the NLP community, compositionality means that the meaning of a composed expression (e.g., a phrase, sentence, or document) comes from the meanings of its constituents @cite_11 . One line of work exploits a variety of addition and multiplication functions to calculate phrase vectors; another uses matrix multiplication as the compositional function to compute vectors for longer phrases. To compute sentence representations, researchers have developed denoising autoencoders @cite_14 , convolutional neural networks @cite_26 @cite_37 @cite_30 , sequence-based recurrent neural models @cite_15 @cite_20 @cite_5 and tree-structured neural networks @cite_1 @cite_3 @cite_12 . Several recent studies compute continuous representations for documents with neural networks @cite_38 @cite_33 @cite_16 @cite_13 @cite_29 .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_14", "@cite_26", "@cite_33", "@cite_29", "@cite_1", "@cite_3", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "", "2949547296", "", "22861983", "2120615054", "2115242108", "", "2251939518", "2104246439", "2953391617", "2949888546", "2950752421", "", "2111369166", "", "2287077086" ], "abstract": [ "", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks.", "", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. 
We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25 error reduction in the last task with respect to the strongest baseline.", "Discourse structure is the hidden link between surface features and document-level properties, such as sentiment polarity. We show that the discourse analyses produced by Rhetorical Structure Theory (RST) parsers can improve document-level sentiment analysis, via composition of local information up the discourse tree. First, we show that reweighting discourse units according to their position in a dependency representation of the rhetorical structure can yield substantial improvements on lexicon-based sentiment analysis. 
Next, we present a recursive neural network over the RST structure, which offers significant improvements over classificationbased methods.", "", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. 
Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "Recursive neural models, which use syntactic parse trees to recursively generate representations bottom-up, are a popular architecture. But there have not been rigorous evaluations showing for exactly which tasks this syntax-based method is appropriate. In this paper we benchmark recursive neural models against sequential recurrent neural models (simple recurrent and LSTM models), enforcing apples-to-apples comparison as much as possible. We investigate 4 tasks: (1) sentiment classification at the sentence level and phrase level; (2) matching questions to answer-phrases; (3) discourse parsing; (4) semantic relation extraction (e.g., component-whole between nouns). Our goal is to understand better when, and why, recursive models can outperform simpler models. We find that recursive models help mainly on tasks (like semantic relation extraction) that require associating headwords across a long distance, particularly on very long sequences. We then introduce a method for allowing recurrent models to achieve similar performance: breaking long sentences into clause-like units at punctuation and processing them separately before combining. Our results thus help understand the limitations of both classes of models, and suggest directions for improving recurrent models.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. 
Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent networks models. In this paper, we explore an important step toward this generation task: training an LSTM (Long-short term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserve syntactic, semantic, and discourse coherence. 
While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization Code for the three models described in this paper can be found at www.stanford.edu jiweil .", "", "The chain-structured long short-term memory (LSTM) has showed to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the models for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model by replacing its composition layers with the S-LSTM memory blocks. We also show that utilizing the given structures is helpful in achieving a performance better than that without considering the structures.", "", "" ] }
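The additive and multiplicative composition functions mentioned at the start of this related-work paragraph are simple enough to state directly. A minimal sketch with toy vectors and no learned weights (the example vectors are my own, purely illustrative):

```python
def compose_add(vectors):
    """Additive composition: the phrase vector is the element-wise
    sum of its constituent word vectors."""
    return [sum(components) for components in zip(*vectors)]

def compose_mul(vectors):
    """Multiplicative composition: the phrase vector is the
    element-wise product of its constituent word vectors."""
    phrase = list(vectors[0])
    for v in vectors[1:]:
        phrase = [a * b for a, b in zip(phrase, v)]
    return phrase

# toy word vectors for a two-word phrase
very, good = [1.0, 2.0], [3.0, 0.5]
print(compose_add([very, good]))  # -> [4.0, 2.5]
print(compose_mul([very, good]))  # -> [3.0, 1.0]
```

Both functions are order-insensitive, which is exactly the limitation that motivates the sequence- and tree-structured models cited afterwards.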
1605.08714
2408337087
Whether as telecommunications or power systems, networks are very important in everyday life. Maintaining these networks properly functional and connected, even under attacks or failures, is of special concern. This topic has previously been studied from a whole-network robustness perspective, modeling networks as undirected graphs (such as roads or simply cables). That perspective measures the average behavior of the network after its last node has failed. In this article we propose two alternatives to well-known studies about the robustness of the backbone Internet: to use a supply network model and metrics for its representation (we call it the Go-Index), and to use robustness metrics that can be calculated as disconnections appear. Our research question is: if a smart adversary has a limited number of strikes to attack the Internet, how much damage will each strike cause in terms of network disconnection? Our findings suggest that in order to design robust networks it might be better to have a complete view of the robustness evolution of the network, from both the infrastructure and the users' perspectives.
Targeted attacks have been thoroughly studied to analyze network robustness. Holme et al. tested node-degree and betweenness-centrality strategies using simultaneous and sequential attacks. In @cite_21 , simultaneous attacks based on network centrality measures, as well as random attacks, were studied.
{ "cite_N": [ "@cite_21" ], "mid": [ "2926299067" ], "abstract": [ "With increasingly ambitious initiatives such as GENI and FIND that seek to design future internets, it becomes imperative to define the characteristics of robust topologies, and build future networks optimised for robustness. This paper investigates the characteristics of network topologies that maintain a high level of throughput in spite of multiple attacks. To this end, we select network topologies belonging to the main network models and some real world networks. We consider three types of attacks: removal of random nodes, high degree nodes, and high betweenness nodes. We use elasticity as our robustness measure and, through our analysis, illustrate different topologies that can have different degrees of robustness. In particular, elasticity can fall as low as 0.8 of the upper bound based on the attack employed. This result substantiates the need for optimised network topology design. Furthermore, we implement a trade-off function that combines elasticity under the three attack strategies and considers the cost of the network. Our extensive simulations show that, for a given network density, regular and semi-regular topologies can have higher degrees of robustness than heterogeneous topologies, and that link redundancy is a sufficient but not necessary condition for robustness." ] }
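The simultaneous/sequential distinction in these attack studies is easy to make concrete: a simultaneous attack fixes the removal order from the initial centrality ranking, while a sequential attack recomputes the ranking after every removal. A minimal pure-Python sketch using degree as the centrality (the adjacency-dict representation and tie-breaking are my own choices, not taken from the cited works):

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component among surviving nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

def degree_attack(adj, strikes, sequential=True):
    """Remove `strikes` nodes by highest degree; if `sequential`,
    degrees are recomputed among survivors after each removal."""
    removed = set()
    static_order = sorted(adj, key=lambda v: -len(adj[v]))
    for _ in range(strikes):
        if sequential:
            target = max((v for v in adj if v not in removed),
                         key=lambda v: sum(1 for w in adj[v]
                                           if w not in removed))
        else:
            target = next(v for v in static_order if v not in removed)
        removed.add(target)
    return largest_component(adj, removed)

# two stars joined by a bridge: nodes 0 and 5 are the hubs
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0],
       5: [0, 6, 7, 8], 6: [5], 7: [5], 8: [5]}
```

On this toy topology, two well-aimed strikes (both hubs) shatter the network down to isolated nodes, which is the qualitative behavior the cited robustness studies measure at scale.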
1605.08714
2408337087
Whether as telecommunications or power systems, networks are very important in everyday life. Maintaining these networks properly functional and connected, even under attacks or failures, is of special concern. This topic has previously been studied from a whole-network robustness perspective, modeling networks as undirected graphs (such as roads or simply cables). That perspective measures the average behavior of the network after its last node has failed. In this article we propose two alternatives to well-known studies about the robustness of the backbone Internet: to use a supply network model and metrics for its representation (we call it the Go-Index), and to use robustness metrics that can be calculated as disconnections appear. Our research question is: if a smart adversary has a limited number of strikes to attack the Internet, how much damage will each strike cause in terms of network disconnection? Our findings suggest that in order to design robust networks it might be better to have a complete view of the robustness evolution of the network, from both the infrastructure and the users' perspectives.
The stability of scale-free networks under degree-based attacks was studied in @cite_11 . Experimental results are shown in @cite_8 , which also considers sequential and simultaneous attacks as well as centrality-measure strategies. To get closer to a real-world attack scenario, @cite_3 @cite_26 studied the resilience of scale-free networks to a variety of attacks with different amounts of information about the network available to the attacker.
{ "cite_N": [ "@cite_8", "@cite_26", "@cite_3", "@cite_11" ], "mid": [ "2024982571", "2029226568", "2107538138", "2327083405" ], "abstract": [ "Many complex systems can be described by networks, in which the constituent components are represented by vertices and the connections between the components are represented by edges between the corresponding vertices. A fundamental issue concerning complex networked systems is the robustness of the overall system to the failure of its constituent parts. Since the degree to which a networked system continues to function, as its component parts are degraded, typically depends on the integrity of the underlying network, the question of system robustness can be addressed by analyzing how the network structure changes as vertices are removed. Previous work has considered how the structure of complex networks change as vertices are removed uniformly at random, in decreasing order of their degree, or in decreasing order of their betweenness centrality. Here we extend these studies by investigating the effect on network structure of targeting vertices for removal based on a wider range of non-local measures of potential importance than simply degree or betweenness. We consider the effect of such targeted vertex removal on model networks with different degree distributions, clustering coefficients and assortativity coefficients, and for a variety of empirical networks.", "We study the vulnerability of complex networks under intentional attack with incomplete information, which means that one can only preferentially attack the most important nodes among a local region of a network. The known random failure and the intentional attack are two extreme cases of our study. Using the generating function method, we derive the exact value of the critical removal fraction fc of nodes for the disintegration of networks and the size of the giant component. 
To validate our model and method, we perform simulations of intentional attack with incomplete information in scale-free networks. We show that the attack information has an important effect on the vulnerability of scale-free networks. We also demonstrate that hiding a fraction of the nodes information is a cost-efficient strategy for enhancing the robustness of complex networks.", "In this work, we estimate the resilience of scale-free networks on a number of different attack methods. We study a number of different cases, where we assume that a small amount of knowledge on the network structure is available, or can be approximately estimated. We also present a class of real-life networks that prove to be very resilient on intentional attacks, or equivalently much more difficult to immunize completely than most model scale-free networks.", "We study the stability of random scale-free networks to degree-dependent attacks. We present analytical and numerical results to compute the critical fraction @math of nodes that need to be removed for destroying the network under this attack for different attack parameters. We study the effect of different defense strategies, based on the addition of a constant number of links on network robustness. We test defense strategies based on adding links to either low degree, middegree or high degree nodes. We find using analytical results and simulations that the middegree nodes defense strategy leads to the largest improvement to the network robustness against degree-based attacks. We also test these defense strategies on an internet autonomous systems map and obtain similar results." ] }
1605.08714
2408337087
Whether as telecommunications or power systems, networks are very important in everyday life. Maintaining these networks properly functional and connected, even under attacks or failures, is of special concern. This topic has previously been studied from a whole-network robustness perspective, modeling networks as undirected graphs (such as roads or simply cables). That perspective measures the average behavior of the network after its last node has failed. In this article we propose two alternatives to well-known studies about the robustness of the backbone Internet: to use a supply network model and metrics for its representation (we call it the Go-Index), and to use robustness metrics that can be calculated as disconnections appear. Our research question is: if a smart adversary has a limited number of strikes to attack the Internet, how much damage will each strike cause in terms of network disconnection? Our findings suggest that in order to design robust networks it might be better to have a complete view of the robustness evolution of the network, from both the infrastructure and the users' perspectives.
In @cite_2 , the impact of observation error on the effectiveness of an attack was studied. More recently, @cite_6 studied sequential multi-strategy attacks using multiple robustness measures, including the @math -index @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_6", "@cite_2" ], "mid": [ "1989400610", "2153120494", "299234761" ], "abstract": [ "Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How and at which cost can one restructure the network such that it will become more robust against a malicious attack? We introduce a new measure for robustness and use it to devise a method to mitigate economically and efficiently this risk. We demonstrate its efficiency on the European electricity system and on the Internet as well as on complex networks models. We show that with small changes in the network structure (low cost) the robustness of diverse networks can be improved dramatically whereas their functionality remains unchanged. Our results are useful not only for improving significantly with low cost the robustness of existing infrastructures but also for designing economically robust network systems.", "We examine the robustness of networks under attack when the attacker sequentially selects from a number of different attack strategies, each of which removes one node from the network. Network robustness refers to the ability of a network to maintain functionality under attack, and the problem-dependent context implies a number of robustness measures exist. Thus, we analyze four measures: (1) entropy, (2) efficiency, (3) size of largest network component, and suggest to also utilize (4) pairwise connectivity. Six network centrality measures form the set of strategies at the disposal of the attacker. Our study examines the utility of greedy strategy selection versus random strategy selection for each attack, whereas previous studies focused on greedy selection but limited to only one attack strategy. 
Using a set of common complex network benchmark networks, in addition to real-world networks, we find that randomly selecting an attack strategy often performs well when the attack strategies are of high quality. We also examine defense against the attacks by adding k edges after each node attack and find that the greedy strategy is most useful in this context. We also observed that a betweenness-based attack often outperforms both random and greedy strategy selection, the latter often becoming trapped in local optima.", "Abstract : Identifying the key nodes to target in a social network is an important problem in several application areas, including the disruption of terrorist networks and the crafting of effective immunization strategies. One important issue that has received limited attention is how such targeting strategies are affected by erroneous data about network structure. This paper describes simulation experiments that investigate this issue." ] }
1605.08714
2408337087
Whether as telecommunications or power systems, networks are very important in everyday life. Maintaining these networks properly functional and connected, even under attacks or failures, is of special concern. This topic has previously been studied from a whole-network robustness perspective, modeling networks as undirected graphs (such as roads or simply cables). That perspective measures the average behavior of the network after its last node has failed. In this article we propose two alternatives to well-known studies about the robustness of the backbone Internet: to use a supply network model and metrics for its representation (we call it the Go-Index), and to use robustness metrics that can be calculated as disconnections appear. Our research question is: if a smart adversary has a limited number of strikes to attack the Internet, how much damage will each strike cause in terms of network disconnection? Our findings suggest that in order to design robust networks it might be better to have a complete view of the robustness evolution of the network, from both the infrastructure and the users' perspectives.
In @cite_4 it was found that targeted attacks can be more effective when they are directed at bottlenecks rather than hubs. In @cite_18 , the authors present partial values of the @math -index while nodes are being disconnected, showing the importance of a well-chosen robustness metric for performing the attacks.
{ "cite_N": [ "@cite_18", "@cite_4" ], "mid": [ "2291969311", "2067269323" ], "abstract": [ "In this article we present Miuz, a robustness index for complex networks. Miuz measures the impact of disconnecting a node from the network while comparing the sizes of the remaining connected components. Strictly speaking, Miuz for a node is defined as the inverse of the size of the largest connected component divided by the sum of the sizes of the remaining ones. We tested our index in attack strategies where the nodes are disconnected in decreasing order of a specified metric. We considered Miuz and other well-known centrality measures such as betweenness, degree, and harmonic centrality. All of these metrics were compared regarding the behavior of the robustness (R- index) during the attacks. In an attempt to simulate the internet backbone, the attacks were performed in complex networks with power-law degree distributions (scale-free networks). Preliminary results show that attacks based on disconnecting a few number of nodes Miuz are more dangerous (decreasing the robustness) than the same attacks based on other centrality measures. We believe that Miuz, as well as other measures based on the size of the largest connected component, provides a good addition to other robustness metrics for complex networks.", "We study the property of certain complex networks of being both sparse and highly connected, which is known as “good expansion” (GE). A network has GE properties if every subset S of nodes (up to 50 of the nodes) has a neighborhood that is larger than some “expansion factor” φ multiplied by the number of nodes in S. Using a graph spectral method we introduce here a new parameter measuring the good expansion character of a network. By means of this parameter we are able to classify 51 real-world complex networks — technological, biological, informational, biological and social — as GENs or non-GENs. 
Combining GE properties and node degree distribution (DD) we classify these complex networks in four different groups, which have different resilience to intentional attacks against their nodes. The simultaneous existence of GE properties and uniform degree distribution contribute significantly to the robustness in complex networks. These features appear solely in 14 of the 51 real-world networks studied here. At the other extreme we find that ∼40 of all networks are very vulnerable to targeted attacks. They lack GE properties, display skewed DD — exponential or power-law — and their topologies are changed more dramatically by targeted attacks directed at bottlenecks than by the removal of network hubs." ] }
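The Miuz index defined in the abstract above (the inverse of the largest connected component's size divided by the sum of the sizes of the remaining components, after a node is disconnected) has a direct computational reading. A minimal sketch on a toy undirected graph; the graph, adjacency encoding, and helper names are illustrative assumptions, not the cited implementation:

```python
from collections import deque

def component_sizes(adj):
    """Connected-component sizes of an undirected graph via BFS."""
    seen, sizes = set(), []
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        queue, n = deque([s]), 0
        while queue:
            v = queue.popleft()
            n += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        sizes.append(n)
    return sizes

def miuz(adj, node):
    """Miuz of a node: disconnect it, then divide the total size of the
    smaller components by the size of the largest one (the inverse of
    largest-over-rest, per the cited definition)."""
    rest = {v: nbrs - {node} for v, nbrs in adj.items() if v != node}
    sizes = sorted(component_sizes(rest), reverse=True)
    return sum(sizes[1:]) / sizes[0]

# Two triangles joined through a single bridge node 'g'.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'g'},
       'g': {'c', 'd'},
       'd': {'g', 'e', 'f'}, 'e': {'d', 'f'}, 'f': {'d', 'e'}}
print(miuz(adj, 'g'))  # 1.0: removal splits the graph into two equal halves
print(miuz(adj, 'a'))  # 0.0: removal leaves a single connected component
```

A high Miuz value thus flags exactly the bottleneck-like nodes whose removal fragments the network, consistent with the attack strategies discussed above.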
1605.08714
2408337087
Whether as telecommunications or power systems, networks are very important in everyday life. Maintaining these networks properly functional and connected, even under attacks or failures, is of special concern. This topic has been previously studied with a whole network robustness perspective, modeling networks as undirected graphs (such as roads or simply cables). This perspective measures the average behavior of the network after its last node has failed. In this article we propose two alternatives to well-known studies about the robustness of the backbone Internet: to use a supply network model and metrics for its representation (we called it the Go-Index), and to use robustness metrics that can be calculated while disconnections appear. Our research question is: if a smart adversary has a limited number of strikes to attack the Internet, how much will the damage be after each one in terms of network disconnection? Our findings suggest that in order to design robust networks it might be better to have a complete view of the robustness evolution of the network, from both the infrastructure and the users perspective.
The idea of planning a ``network attack'' using centrality measures has captured the attention of researchers and practitioners. For instance, @cite_13 used betweenness centrality for planning a network attack: the value is calculated for all nodes, the nodes are ordered from higher to lower, and they are then attacked (disconnected) in that order. They have shown that by disconnecting only two of the top-ranked nodes, the packet-delivery ratio is reduced to @math 10 @math 100+ @math 0 . In the study of resilience after edge removal, Rosen- @cite_19 study backup communication paths for network services, defining that a network is '' given that ''.
{ "cite_N": [ "@cite_19", "@cite_13" ], "mid": [ "2115113435", "2014770087" ], "abstract": [ "We develop a graph-theoretic model for service-oriented networks and propose metrics that quantify the resilience of such networks under node and edge failures. These metrics are based on the topological structure of the network and the manner in which services are distributed over the network. We present efficient algorithms to determine the maximum number of node and edge failures that can be tolerated by a given service-oriented network. These algorithms rely on known algorithms for computing minimum cuts in graphs. We also present efficient algorithms for optimally allocating services over a given network so that the resulting service-oriented network can tolerate single node or edge failures. These algorithms are derived through a careful analysis of the decomposition of the underlying network into appropriate types of connected components.", "As the Internet becomes increasingly important to all aspects of society, the consequences of disruption become increasingly severe. Thus it is critical to increase the resilience and survivability of the future network. We define resilience as the ability of the network to provide desired service even when challenged by attacks, large-scale disasters, and other failures. This paper describes a comprehensive methodology to evaluate network resilience using a combination of analytical and simulation techniques with the goal of improving the resilience and survivability of the Future Internet." ] }
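The attack procedure described above (compute betweenness once, rank nodes in descending order, then remove them while tracking the largest remaining component) can be sketched with Brandes' algorithm for unweighted betweenness. The toy graph and function names are illustrative assumptions, not the setup of @cite_13 :

```python
from collections import deque

def betweenness(adj):
    """Unnormalised betweenness centrality (Brandes' algorithm,
    unweighted graph, both directions counted)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                      # BFS: count shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def betweenness_attack(adj, strikes):
    """Remove the `strikes` highest-betweenness nodes (ranked once on the
    intact graph); report the largest component size after each strike."""
    bc = betweenness(adj)
    ranked = sorted(bc, key=bc.get, reverse=True)
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    sizes = []
    for node in ranked[:strikes]:
        for nbr in adj.pop(node):
            adj[nbr].discard(node)
        comps, seen = [], set()
        for s in adj:                     # component sizes via BFS
            if s in seen:
                continue
            seen.add(s)
            queue, n = deque([s]), 0
            while queue:
                v = queue.popleft()
                n += 1
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        queue.append(w)
            comps.append(n)
        sizes.append(max(comps, default=0))
    return sizes

# Two triangles joined through a bridge node 'g', which tops the ranking.
adj = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'g'},
       'g': {'c', 'd'},
       'd': {'g', 'e', 'f'}, 'e': {'d', 'f'}, 'f': {'d', 'e'}}
print(betweenness_attack(adj, 2))  # [3, 3]
```

Tracking the largest-component size after every strike is exactly the "robustness while disconnections appear" view argued for in the abstract above, rather than only the state after the last failure.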
1605.09016
2494196141
In some object recognition problems, labeled data may not be available for all categories. Zero-shot learning utilizes auxiliary information (also called signatures) describing each category in order to find a classifier that can recognize samples from categories with no labeled instance. In this paper, we propose a novel semi-supervised zero-shot learning method that works on an embedding space corresponding to abstract deep visual features. We seek a linear transformation on signatures to map them onto the visual features, such that the mapped signatures of the seen classes are close to labeled samples of the corresponding classes and unlabeled data are also close to the mapped signatures of one of the unseen classes. We use the idea that the rich deep visual features provide a representation space in which samples of each class are usually condensed in a cluster. The effectiveness of the proposed method is demonstrated through extensive experiments on four public benchmarks improving the state-of-the-art prediction accuracy on three of them.
There are major differences between our work and @cite_25 , which uses a dictionary learning scheme in which coding coefficients are considered to be label embeddings in a semantic space and a sparse coding objective is used to map images into this representation space. Most importantly, in our method the labels of unseen instances are jointly learned with the mapping of the signatures to the semantic space in our objective function, while in @cite_25 label prediction is accomplished using nearest neighbor search or label propagation on the embeddings of images. Also, unlike @cite_25 , we do not need to learn an embedding of test instances in the semantic space; instead, we learn just the representation of class signatures in the visual domain.
{ "cite_N": [ "@cite_25" ], "mid": [ "2209594346" ], "abstract": [ "Zero-shot learning (ZSL) can be considered as a special case of transfer learning where the source and target domains have different tasks label spaces and the target domain is unlabelled, providing little guidance for the knowledge transfer. A ZSL method typically assumes that the two domains share a common semantic representation space, where a visual feature vector extracted from an image video can be projected embedded using a projection function. Existing approaches learn the projection function from the source domain and apply it without adaptation to the target domain. They are thus based on naive knowledge transfer and the learned projections are prone to the domain shift problem. In this paper a novel ZSL method is proposed based on unsupervised domain adaptation. Specifically, we formulate a novel regularised sparse coding framework which uses the target domain class labels' projections in the semantic space to regularise the learned target domain projection thus effectively overcoming the projection domain shift problem. Extensive experiments on four object and action recognition benchmark datasets show that the proposed ZSL method significantly outperforms the state-of-the-arts." ] }
1605.08912
2413390986
Topological data analysis is becoming a popular way to study high dimensional feature spaces without any contextual clues or assumptions. This paper concerns itself with one popular topological feature, which is the number of @math dimensional holes in the dataset, also known as the Betti @math number. The persistence of the Betti numbers over various scales is encoded into a persistence diagram (PD), which indicates the birth and death times of these holes as scale varies. A common way to compare PDs is by a point-to-point matching, which is given by the @math -Wasserstein metric. However, a big drawback of this approach is the need to solve correspondence between points before computing the distance; for @math points, the complexity grows according to @math . Instead, we propose to use an entirely new framework built on Riemannian geometry that models PDs as 2D probability density functions represented in the square-root framework on a Hilbert Sphere. The resulting space is much more intuitive with closed form expressions for common operations. The distance metric is 1) correspondence-free and also 2) independent of the number of points in the dataset. The complexity of computing distance between PDs now grows according to @math , for a @math discretization of @math . This also enables the use of existing machinery in differential geometry towards statistical analysis of PDs such as computing the mean, geodesics, classification, etc. We report competitive results with the Wasserstein metric, at a much lower computational load, indicating the favorable properties of the proposed approach.
The square-root representation for 1D probability density functions (pdfs) was proposed in @cite_4 and was used for shape classification. It has since been used in several applications, including activity recognition @cite_1 , where the square-root velocity representation is used to model the space of time warping functions. This representation extends readily to arbitrary high-dimensional pdfs as well.
{ "cite_N": [ "@cite_1", "@cite_4" ], "mid": [ "2160303236", "2169678505" ], "abstract": [ "Pattern recognition in video is a challenging task because of the multitude of spatio-temporal variations that occur in different videos capturing the exact same event. While traditional pattern-theoretic approaches account for the spatial changes that occur due to lighting and pose, very little has been done to address the effect of temporal rate changes in the executions of an event. In this paper, we provide a systematic model-based approach to learn the nature of such temporal variations (time warps) while simultaneously allowing for the spatial variations in the descriptors. We illustrate our approach for the problem of action recognition and provide experimental justification for the importance of accounting for rate variations in action recognition. The model is composed of a nominal activity trajectory and a function space capturing the probability distribution of activity-specific time warping transformations. We use the square-root parameterization of time warps to derive geodesics, distance measures, and probability distributions on the space of time warping functions. We then design a Bayesian algorithm which treats the execution rate function as a nuisance variable and integrates it out using Monte Carlo sampling, to generate estimates of class posteriors. This approach allows us to learn the space of time warps for each activity while simultaneously capturing other intra- and interclass variations. Next, we discuss a special case of this approach which assumes a uniform distribution on the space of time warping functions and show how computationally efficient inference algorithms may be derived for this special case. 
We discuss the relative advantages and disadvantages of both approaches and show their efficacy using experiments on gait-based person identification and activity recognition.", "Applications in computer vision involve statistically analyzing an important class of constrained, non-negative functions, including probability density functions (in texture analysis), dynamic time-warping functions (in activity analysis), and re-parametrization or non-rigid registration functions (in shape analysis of curves). For this one needs to impose a Riemannian structure on the spaces formed by these functions. We propose a \"spherical\" version of the Fisher-Rao metric that provides closed-form expressions for geodesies and distances, and allows fast computation of sample statistics. To demonstrate this approach, we present an application in planar shape classification." ] }
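The appeal of the square-root representation above is its closed-form geometry: a pdf p maps to sqrt(p), a point on the unit Hilbert sphere (its squared L2 norm integrates to 1), and the geodesic distance between two pdfs is simply the arccos of the inner product of their square roots, with no correspondence problem to solve. A small numerical sketch; the grid, the Gaussian test pdfs, and the tolerances are illustrative assumptions:

```python
import numpy as np

def geodesic_distance(p, q, dx):
    """Closed-form geodesic distance between two discretised pdfs under
    the square-root representation: arccos of <sqrt(p), sqrt(q)>."""
    inner = np.sum(np.sqrt(p) * np.sqrt(q)) * dx
    return np.arccos(np.clip(inner, -1.0, 1.0))

# Unit-variance Gaussians on a grid; for means mu1, mu2 the inner
# product is exp(-(mu1 - mu2)**2 / 8), so the distance has a known value.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
g = lambda mu: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)
print(geodesic_distance(g(0), g(0), dx))  # ~0 for identical pdfs
print(geodesic_distance(g(0), g(1), dx))  # ~arccos(exp(-1/8)) ≈ 0.489
```

The cost is one inner product over the discretisation grid, independent of how many points generated each pdf, which is the complexity advantage claimed in the abstract above.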
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
In its simplest form, the view-based approach aims to add viewpoint tolerance to a 2D image of an object, such as with viewpoint-invariant local descriptors @cite_36 @cite_2 or deformation-tolerant global descriptors @cite_19 . Given training images across multiple viewpoints, a more stable set of features can be found by tracking those which are shared across multiple views and clustering images accordingly @cite_41 , or by learning their relative 2D displacements as the viewpoint changes, both with hard constraints for rigid bodies @cite_22 @cite_5 and flexible constraints for deformable bodies @cite_3 @cite_15 . To add further fidelity to the true underlying object geometry, these 2D image elements can also be embedded within an implicit 3D model @cite_8 @cite_25 @cite_29 . If multiple views are available at testing, images can be combined and treated as a single, larger image @cite_23 , an approach which can also be addressed in two stages, by processing the individual images first to reduce the search space @cite_17 .
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_36", "@cite_41", "@cite_29", "@cite_3", "@cite_19", "@cite_23", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2064859577", "2131225894", "2177274842", "2162601563", "", "2154422044", "2161969291", "2118341165", "", "", "", "", "1993251515" ], "abstract": [ "The recognition of a place depicted in an image typically adopts methods from image retrieval in large-scale databases. First, a query image is described as a “bag-of-features” and compared to every image in the database. Second, the most similar images are passed to a geometric verification stage. However, this is an inefficient approach when considering that some database images may be almost identical, and many image features may not repeatedly occur. We address this issue by clustering similar database images to represent distinct scenes, and tracking local features that are consistently detected to form a set of real-world landmarks. Query images are then matched to landmarks rather than features, and a probabilistic model of landmark properties is learned from the cluster to appropriately verify or reject putative feature matches. We present novelties in both a bag-of-features retrieval and geometric verification stage based on this concept. Results on a database of 200K images of popular tourist destinations show improvements in both recognition performance and efficiency compared to traditional image retrieval methods.", "We present a novel system for generic object class detection. In contrast to most existing systems which focus on a single viewpoint or aspect, our approach can detect object instances from arbitrary viewpoints. 
This is achieved by combining the Implicit Shape Model for object class detection proposed by Leibe and Schiele with the multi-view specific object recognition system of After learning single-view codebooks, these are interconnected by so-called activation links, obtained through multi-view region tracks across different training views of individual object instances. During recognition, these integrated codebooks work together to determine the location and pose of the object. Experimental results demonstrate the viability of the approach and compare it to a bank of independent single-view detectors", "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al, April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al, 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al, 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. 
Moments and steerable filters show the best performance among the low dimensional descriptors.", "There have been important recent advances in object recognition through the matching of invariant local image features. However, the existing approaches are based on matching to individual training images. This paper presents a method for combining multiple images of a 3D object into a single model representation. This provides for recognition of 3D objects from any viewpoint, the generalization of models to non-rigid changes, and improved robustness through the combination of features acquired under a range of imaging conditions. The decision of whether to cluster a training image into an existing view representation or to treat it as a new view is based on the geometric accuracy of the match to previous model views. A new probabilistic model is developed to reduce the false positive matches that would otherwise arise due to loosened geometric constraints on matching 3D and non-rigid models. A system has been developed based on these approaches that is able to robustly recognize 3D objects in cluttered natural images in sub-second times.", "", "We present a method to learn and recognize object class models from unlabeled and unsegmented cluttered scenes in a scale invariant manner. Objects are modeled as flexible constellations of parts. A probabilistic representation is used for all aspects of the object: shape, appearance, occlusion and relative scale. An entropy-based feature detector is used to select regions and their scale within the image. In learning the parameters of the scale-invariant object model are estimated. This is done using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used in a Bayesian manner to classify images. The flexible nature of the model is demonstrated by excellent results over a range of datasets including geometrically constrained classes (e.g. 
faces, cars) and flexible objects (such as animals).", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "We illustrate how to consider a network of cameras as a single generalized camera in a framework proposed by Nayar (2001). We derive the discrete structure from motion equations for generalized cameras, and illustrate the corollaries to epipolar geometry. This formal mechanism allows one to use a network of cameras as if they were a single imaging device, even when they do not share a common center of projection. Furthermore, an analysis of structure from motion algorithms for this imaging model gives constraints on the optimal design of panoramic imaging systems constructed from multiple cameras.", "", "", "", "", "We present an approach for efficiently recognizing all objects in a scene and estimating their full pose from multiple views. Our approach builds upon a state of the art single-view algorithm which recognizes and registers learned metric 3D models using local descriptors. 
We extend to multiple views using a novel multi-step optimization that processes each view individually and feeds consistent hypotheses back to the algorithm for global refinement. We demonstrate that our method produces results comparable to the theoretical optimum, a full multi-view generalized camera approach, while avoiding its combinatorial time complexity. We provide experimental results demonstrating pose accuracy, speed, and robustness to model error using a three-camera rig, as well as a physical implementation of the pose output being used by an autonomous robot executing grasps in highly cluttered scenes." ] }
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
Recently, CNN architectures have been extended to allow for recognition from image sequences using a single network, by max pooling across all viewpoints @cite_28 , or by unwrapping an object shape into a panorama and max pooling across each row @cite_26 . However, both these methods assume that a fixed-length image sequence is provided during both training and testing, and hence are unsuitable for generalised multi-view recognition.
{ "cite_N": [ "@cite_28", "@cite_26" ], "mid": [ "2952789225", "1629010235" ], "abstract": [ "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.", "This letter introduces a robust representation of 3-D shapes, named DeepPano, learned with deep convolutional neural networks (CNN). Firstly, each 3-D shape is converted into a panoramic view, namely a cylinder projection around its principle axis. Then, a variant of CNN is specifically designed for learning the deep representations directly from such views. Different from typical CNN, a row-wise max-pooling layer is inserted between the convolution and fully-connected layers, making the learned representations invariant to the rotation around a principle axis. 
Our approach achieves state-of-the-art retrieval classification results on two large-scale 3-D model datasets (ModelNet-10 and ModelNet-40), outperforming typical methods by a large margin." ] }
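At its core, the pooling-across-viewpoints idea in @cite_28 reduces to an element-wise maximum over per-view descriptors. A toy sketch with plain arrays standing in for CNN feature vectors; the descriptor values are made up for illustration:

```python
import numpy as np

def view_pool(view_features):
    """Element-wise max over per-view descriptors (one array per view),
    yielding a single fixed-size descriptor for the whole sequence."""
    return np.max(np.stack(view_features), axis=0)

# Three hypothetical 4-D descriptors of one object from different views.
views = [np.array([0.1, 0.9, 0.0, 0.2]),
         np.array([0.4, 0.1, 0.3, 0.2]),
         np.array([0.2, 0.5, 0.8, 0.1])]
print(view_pool(views))  # [0.4 0.9 0.8 0.2]
```

Note that the pooling operation itself is order- and length-agnostic; the fixed-length assumption criticised above arises from how such networks are trained end-to-end on a predetermined set of views.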
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
Rather than modelling an object as a set of views with 2D features, an explicit 3D shape can be learned from reconstruction @cite_44 or provided by CAD models @cite_37 , and subsequently matched against depth images @cite_24 , 3D reconstructions @cite_18 , or partial reconstructions with shape completion @cite_40 @cite_14 . Shape descriptors include distributions of local surface properties @cite_9 @cite_4 , spherical harmonic functions over voxel grids @cite_12 , and 3D local invariant features @cite_6 . Recently, CNNs have been applied to 3D shapes by representing them as 3D occupancy grids, and building generative @cite_37 or discriminative @cite_32 networks.
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_14", "@cite_4", "@cite_9", "@cite_32", "@cite_6", "@cite_44", "@cite_24", "@cite_40", "@cite_12" ], "mid": [ "2951755740", "2952195138", "", "", "2053939793", "2211722331", "", "1977792424", "2952771913", "2444097022", "1561952261" ], "abstract": [ "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.", "Projective analysis is an important solution for 3D shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different view points. Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, thus cannot satisfy the basic requirement of scalability for search engines. 
In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: (1) efficient projection and view feature extraction using GPU acceleration; (2) the first inverted file, referred as F-IF, is utilized to speed up the procedure of multi-view matching; (3) the second inverted file (S-IF), which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and Inverted File (Twice), as GIFT. Besides its high efficiency, GIFT also outperforms the state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks and competitions.", "", "", "This is a primer on extended Gaussian images. Extended Gaussian images are useful for representing the shapes of surfaces. They can be computed easily from: 1. needle maps obtained using photometric stereo; or 2. depth maps generated by ranging devices or binocular stereo. Importantly, they can also be determined simply from geometric models of the objects. Extended Gaussian images can be of use in at least two of the tasks facing a machine vision system: 1. recognition, and 2. determining the attitude in space of an object. Here, the extended Gaussian image is defined and some of its properties discussed. An elaboration for nonconvex objects is presented and several examples are shown.", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. 
However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.", "", "We address the problem of populating object category detection datasets with dense, per-object 3D reconstructions, bootstrapped from class labels, ground truth figure-ground segmentations and a small set of keypoint annotations. Our proposed algorithm first estimates camera viewpoint using rigid structure-from-motion, then reconstructs object shapes by optimizing over visual hull proposals guided by loose within-class shape similarity assumptions. The visual hull sampling process attempts to intersect an object's projection cone with the cones of minimal subsets of other similar objects among those pictured from certain vantage points. We show that our method is able to produce convincing per-object 3D reconstructions on one of the most challenging existing object-category detection datasets, PASCAL VOC. Our results may re-stimulate once popular geometry-oriented model-based recognition approaches.", "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. 
Our final object detection system achieves an average precision of 37.3%, which is a 56% relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24% relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To gain a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes.", "One of the challenges in 3D shape matching arises from the fact that in many applications, models should be considered to be the same if they differ by a rotation. 
Consequently, when comparing two models, a similarity metric implicitly provides the measure of similarity at the optimal alignment. Explicitly solving for the optimal alignment is usually impractical. So, two general methods have been proposed for addressing this issue: (1) Every model is represented using rotation invariant descriptors. (2) Every model is described by a rotation dependent descriptor that is aligned into a canonical coordinate system defined by the model. In this paper, we describe the limitations of canonical alignment and discuss an alternate method, based on spherical harmonics, for obtaining rotation invariant representations. We describe the properties of this tool and show how it can be applied to a number of existing, orientation dependent descriptors to improve their matching performance. The advantages of this tool are two-fold: First, it improves the matching performance of many descriptors. Second, it reduces the dimensionality of the descriptor, providing a more compact representation, which in turn makes comparing two models more efficient." ] }
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
As of now, however, CNNs with 2D view-based methods @cite_28 have outperformed their 3D voxel-based counterparts @cite_37 @cite_0 , and we therefore adopt the 2D approach in our work. It is not yet clear, though, whether this superior performance arises from the greater abundance of 2D image data available for pre-training deep networks, or from the naturally more efficient representation of 2D data, compared to 3D, in standard CNN architectures.
{ "cite_N": [ "@cite_28", "@cite_37", "@cite_0" ], "mid": [ "2952789225", "2951755740", "" ], "abstract": [ "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.", "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. 
Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.", "" ] }
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
Methods for active recognition typically learn a generative model of the object, predict the object appearance from unvisited viewpoints, and select views based on a measure of entropy reduction. @cite_42 modelled objects as a 3D cloud of SIFT features, moving the camera to the view which would reveal the greatest number of features which have not yet been observed. A similar method was proposed in @cite_33 for guided mapping and robot navigation. The incorporation of active recognition into a Random Forests framework was presented in @cite_31 , whereby each decision tree encodes both object classification and viewpoint selection. Recently, the ShapeNets framework of @cite_37 proposed to model objects as a voxel grid, and learn a generative model based on Convolutional Deep Belief Networks to allow for view synthesis from unseen viewpoints.
{ "cite_N": [ "@cite_37", "@cite_31", "@cite_42", "@cite_33" ], "mid": [ "2951755740", "566166310", "1588884749", "2283534560" ], "abstract": [ "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.", "We present Active Random Forests, a novel framework to address active vision problems. State of the art focuses on best viewing parameters selection based on single view classifiers. We propose a multi-view classifier where the decision mechanism of optimally changing viewing parameters is inherent to the classification process. 
This has many advantages: a) the classifier exploits the entire set of captured images and does not simply aggregate probabilistically per view hypotheses; b) actions are based on learnt disambiguating features from all views and are optimally selected using the powerful voting scheme of Random Forests and c) the classifier can take into account the costs of actions. The proposed framework is applied to the task of autonomously unfolding clothes by a robot, addressing the problem of best viewpoint selection in classification, grasp point and pose estimation of garments. We show great performance improvement compared to state of the art methods.", "This paper presents an active object recognition and pose estimation system for household objects in a highly cluttered environment. A sparse feature model, augmented with the characteristics of features when observed from different viewpoints is used for recognition and pose estimation while a dense point cloud model is used for storing geometry. This strategy makes it possible to accurately predict the expected information available during the Next-Best-View planning process as both the visibility as well as the likelihood of feature matching can be considered simultaneously. Experimental evaluations of the active object recognition and pose estimation with an RGB-D sensor mounted on a Turtlebot are presented.", "We propose an information-theoretic planning approach that enables mobile robots to autonomously construct dense 3D maps in a computationally efficient manner. Inspired by prior work, we accomplish this task by formulating an information-theoretic objective function based on CauchySchwarz quadratic mutual information (CSQMI) that guides robots to obtain measurements in uncertain regions of the map. We then contribute a two stage approach for active mapping. First, we generate a candidate set of trajectories using a combination of global planning and generation of local motion primitives. 
From this set, we choose a trajectory that maximizes the information-theoretic objective. Second, we employ a gradient-based trajectory optimization technique to locally refine the chosen trajectory such that the CSQMI objective is maximized while satisfying the robot’s motion constraints. We evaluated our approach through a series of simulations and experiments on a ground robot and an aerial robot mapping unknown 3D environments. Real-world experiments suggest our approach reduces the time to explore an environment by 70% compared to a closest frontier exploration strategy and 57% compared to an information-based strategy that uses global planning, while simulations demonstrate the approach extends to aerial robots with higher-dimensional state." ] }
1605.08359
2400418317
A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both.
However, these methods do not take into account the images acquired along a sequence towards the chosen next view. In @cite_20 , this was incorporated during active object reconstruction by visiting a sequence of actively-selected views, but reconstructing the object based on the entire image sequence that is observed between the views. For recognition, Partially Observable Markov Decision Processes (POMDPs) @cite_34 have seen success in optimising a trajectory for a particular task, although these require generative modelling rather than direct discriminative learning as we propose in our method. Finally, recurrent CNNs have recently been shown to be effective for active recognition from image sequences @cite_27 , and we believe that this approach has exciting future potential.
{ "cite_N": [ "@cite_27", "@cite_34", "@cite_20" ], "mid": [ "1484210532", "2010220123", "2101024363" ], "abstract": [ "We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.", "This paper presents an approach to probabilistic active perception planning for scene modeling in cluttered and realistic environments. When dealing with complex, multi-object scenes with arbitrary object positions, the estimation of 6D poses including their expected uncertainties is essential. The scene model keeps track of the probabilistic object hypotheses over several sequencing sensing actions to represent the real object constellation.", "Recognizing and manipulating objects is an important task for mobile robots performing useful services in everyday environments. In this paper, we develop a system that enables a robot to grasp an object and to move it in front of its depth camera so as to build a 3D surface model of the object. We derive an information gain based variant of the next best view algorithm in order to determine how the manipulator should move the object in front of the camera. By considering occlusions caused by the robot manipulator, our technique also determines when and how the robot should re-grasp the object in order to build a complete model." ] }
1605.08456
2550846962
Abstract We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon–polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions; and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment such as an absorbing perfectly matched layer, as well as local refinement and a posteriori error control are discussed.
In the last few decades, several groups have been studying implications of surface plasmonics; for a (definitely non-exhaustive) sample of related works, see @cite_56 @cite_27 @cite_6 @cite_53 @cite_57 @cite_29 @cite_23 @cite_0 @cite_45 . For instance, in @cite_56 the authors review macroscopic properties of the electric conductivity of graphene, derive dispersion relations for electromagnetic plane waves in inhomogeneous structures, and discuss methods for exciting SPPs; see also the integral-equation approach in @cite_45 . On the other hand, the problem of a radiating dipole source near a graphene sheet is semi-analytically addressed in @cite_27 @cite_6 . In the same vein, in @cite_29 the authors numerically study the field produced by dipoles near a graphene sheet, recognizing a region where the scattered field may be significant. Most recently, two of us derived closed-form analytical expressions for the electromagnetic field when the dipole source and observation point lie on the sheet @cite_57 .
{ "cite_N": [ "@cite_53", "@cite_29", "@cite_6", "@cite_56", "@cite_57", "@cite_0", "@cite_27", "@cite_45", "@cite_23" ], "mid": [ "1995379306", "1967240329", "2052134356", "2142885828", "2267358883", "4296862", "1971219410", "2082650330", "1572432597" ], "abstract": [ "Plasmonics is considered as one of the most promising candidates for implementing the next generation of ultrafast and ultracompact photonic circuits. Considerable effort has been made to scale down individual plasmonic components into the nanometer regime. However, a compact plasmonic source that can efficiently generate surface plasmon polaritons (SPPs) and deliver SPPs to the region of interest is yet to be realized. Here, bridging the optical antenna theory and the recently developed concept of metamaterials, we demonstrate a subwavelength, highly efficient plasmonic source for directional generation of SPPs. The designed device consists of two nanomagnetic resonators with detuned resonant frequencies. At the operating wavelength, incident photons can be efficiently channeled into SPP waves modulated by the electric field polarization. By tailoring the relative phase at resonance and the separation between the two nanoresonators, SPPs can be steered to predominantly propagate along one specific direct...", "The authors acknowledge support from the Spanish Ministry of Science and Innovation under Grants No. MAT2009-06609-C02 and No. CSD2007-046-NanoLight.es. A.Y.N. acknowledges Juan de la Cierva Grant No. JCI-2008-3123", "Excitation of the discrete (surface-wave plasmon propagation mode) and continuous (radiation modes) spectrum by a point current source in the vicinity of graphene is examined. The graphene is represented by an infinitesimally thin, local, and isotropic two-sided conductivity surface. The dynamic electric field due to the point source is obtained by complex-plane analysis of Sommerfeld integrals, and is decomposed into physically relevant contributions. 
Frequencies considered are in the GHz through mid-THz range. As expected, the TM discrete surface wave (surface plasmon) can dominate the response along the graphene layer, although this depends on the source and observation point location and frequency. In particular, the TM discrete mode can provide the strongest contribution to the total electric field in the upper GHz and low THz range, where the surface conductivity is dominated by its imaginary part and the graphene acts as a reactive (inductive) sheet. © 2011 American Institute of Physics. [doi:10.1063/1.3662883]", "We discuss the properties of surface plasmons-polaritons in graphene and describe four possible ways of coupling electromagnetic radiation in the terahertz (THz) spectral range to this type of surface waves: (i) the attenuated total reflection (ATR) method using a prism in the Otto configuration, (ii) graphene micro-ribbon arrays or monolayers with modulated conductivity, (iii) a metal stripe on top of the graphene layer, and (iv) graphene-based gratings. The text provides a number of original results along with their detailed derivation and discussion.", "We derive and interpret solutions of time-harmonic Maxwell’s equations with a vertical and a horizontal electric dipole near a planar, thin conducting film, e.g., graphene sheet, lying between two unbounded isotropic and non-magnetic media. Exact expressions for all field components are extracted in terms of rapidly convergent series of known transcendental functions when the ambient media have equal permittivities and both the dipole and observation point lie on the plane of the film. These solutions are simplified for all distances from the source when the film surface resistivity is large in magnitude compared to the intrinsic impedance of the ambient space. The formulas reveal the analytical structure of two types of waves that can possibly be excited by the dipoles and propagate on the film. 
One of these waves is intimately related to the surface plasmon-polariton of transverse-magnetic polarization of plane waves.", "Fundamentals of Plasmonics.- Electromagnetics of Metals.- Surface Plasmon Polaritons at Metal Insulator Interfaces.- Excitation of Surface Plasmon Polaritons at Planar Interfaces.- Imaging Surface Plasmon Polariton Propagation.- Localized Surface Plasmons.- Electromagnetic Surface Modes at Low Frequencies.- Applications.- Plasmon Waveguides.- Transmission of Radiation Through Apertures and Films.- Enhancement of Emissive Processes and Nonlinearities.- Spectroscopy and Sensing.- Metamaterials and Imaging with Surface Plasmon Polaritons.- Concluding Remarks.", "", "We consider the scattering of an incident electromagnetic radiation on a two-dimensional (2D) electron layer with an imbedded metallic contact. We show that the incident wave excites in the system the 2D plasmon polaritons running along the 2D layer and localized near it, and electromagnetic waves reflected back from the system. The ratio of the energy transformed to the 2D plasmon polaritons and reflected back from the layer depends on the frequency and the value of a retardation parameter, which characterizes the importance of retardation effects. When the retardation parameter is large, the energy of the incident radiation is mainly reflected from the 2D electron system and the excitation of the 2D plasmon polaritons is less effective. The results obtained are discussed in connection with recent experiments on the microwave response of the 2D electron systems.", "Surface plasmons on smooth surfaces.- Surface plasmons on surfaces of small roughness.- Surfaces of enhanced roughness.- Light scattering at rough surfaces without an ATR device.- Surface plasmons on gratings.- Conclusions." ] }
1605.08325
2397612811
We develop a scalable and extendable training framework that can utilize GPUs across nodes in a cluster and accelerate the training of deep learning models based on data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze the convergence and capability of the framework to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open-source for further research on distributed deep learning
The idea of exploiting data parallelism in machine learning has been widely explored in recent years in both asynchronous and synchronous ways. To accelerate the training of a speech recognition model on distributed CPU cores, DownPour, an asynchronous parameter exchanging method @cite_10 , was proposed. It was the largest-scale method to-date for distributed training of neural networks. It was later found that controlling the maximum staleness of parameter updates received by the server leads to faster training convergence @cite_7 on problems like topic modeling, matrix factorization and lasso regression compared to a purely asynchronous approach. For accelerating image classification on the CIFAR and ImageNet datasets, an elastic averaging strategy between asynchronous workers and the server was later proposed @cite_17 . This algorithm allows more exploration of local optima than DownPour and alleviates the need for frequent communication between workers and the server.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_17" ], "mid": [ "2168231600", "", "2963804082" ], "abstract": [ "Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.", "", "We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. 
A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers), is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to the improved performance. We propose synchronous and asynchronous variants of the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. Asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient." ] }