Columns: aid (string, 9–15 chars) · mid (string, 7–10 chars) · abstract (string, 78–2.56k chars) · related_work (string, 92–1.77k chars) · ref_abstract (dict)
1811.02676
2899983078
In this paper we look at the classical set-maxima problem. We give a new result using a sparse lattice of subsets. We then extend this to a geometric setting. Let @math be a set of @math points in a plane and @math be a set of convex regions. Let a key value be assigned to each point. The problem is to determine, for each convex region of @math , the point having the largest key. We give a parameterized result for the comparison complexity.
@cite_2 gave the first general algorithm. Their "rank-sequence algorithm" determines a rank sequence @math according to the application domain. Specifically, a rank sequence is an ordered sequence of @math ranks @math . The corresponding partition of @math is computed, and each @math is reduced to just those elements in one block of the partition. When the elements are points and the sets are hyperplanes that form a projective space, the maxima can be computed with a linear number of comparisons, for a suitable rank sequence. In the worst case, however, the rank-sequence algorithm is no better than the trivial algorithm above. It was shown by @cite_1 that for some collections of subsets there is no rank sequence for which the number of comparisons made by the algorithm is linear.
{ "cite_N": [ "@cite_1", "@cite_2" ], "mid": [ "2413157087", "783504" ], "abstract": [ "Sorting problems have long been one of the foundations of theoretical computer science. Sorting problems attempt to learn properties of an unknown total order of a known set. We test the order by comparing pairs of elements, and through repeated tests deduce some order structure on the set. The set-maxima problem is: given a family S of subsets of a set X, produce the maximal element of each element of S. Local sorting is a sub-problem of set-maxima, when S is a subset of (X (choose) 2), i.e. there exists a graph G with vertex set X and edge set S. We compare algorithms by estimating the number of comparisons needed, as a function of n = |X|, and m = |S|. In this paper, we review the information-theory lower bounds for the set-maxima and local-sorting problems. We review deterministic algorithms which have optimally solved the set-maxima problem, as a function of m,n, in settings where extra assumptions about S have been made. Also, we review randomized algorithms for local sorting and set-maxima which achieve an optimal expected number of comparisons.", "The set maxima problem is as follows: given a family of subsets @math of a totally ordered set @math , find the maximum in each subset. The computational model is the comparison tree. One possible solution is to sort the set X, which requires @math comparisons. The open question is whether set maxima is easier than sorting.Here, a solution is presented that requires a linear number of comparisons for the following two cases: • The sets are hyperplanes in a d-dimensional projective geometry @math . In particular, the interesting case is @math , when the intersection of any two subsets is exactly one. • The sets are chosen randomly with probability @math for each element to be in a set. The random choices are mutually independent and the number of comparisons needed is linear with probability approaching 1 asymptotically." ] }
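The "trivial algorithm" that the rank-sequence approach is compared against can be made concrete. A minimal sketch (illustrative only, not any of the cited algorithms): scan each subset independently for its maximum, which costs the sum of |S_i| − 1 key comparisons — the baseline that set-maxima algorithms try to beat.

```python
# Illustrative baseline for set-maxima: one independent linear scan per
# subset, counting key comparisons. Cited algorithms improve on this count
# by exploiting overlap between subsets.
def set_maxima_trivial(keys, subsets):
    """Return the max-key element of each subset and the comparison count."""
    comparisons = 0
    maxima = []
    for subset in subsets:
        it = iter(subset)
        best = next(it)
        for x in it:
            comparisons += 1  # one key comparison per remaining element
            if keys[x] > keys[best]:
                best = x
        maxima.append(best)
    return maxima, comparisons
```

For two subsets of sizes 2 and 3 this performs (2 − 1) + (3 − 1) = 3 comparisons, regardless of how much the subsets overlap — exactly the redundancy the rank-sequence idea removes.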
1811.02676
2899983078
In this paper we look at the classical set-maxima problem. We give a new result using a sparse lattice of subsets. We then extend this to a geometric setting. Let @math be a set of @math points in a plane and @math be a set of convex regions. Let a key value be assigned to each point. The problem is to determine, for each convex region of @math , the point having the largest key. We give a parameterized result for the comparison complexity.
@cite_4 showed that this can be generalized using weighted matroids. One of the canonical examples of matroids is the graphic matroid. Generalized to binary matroids (since graphic matroids are also regular), this has been termed by Liberatore the fundamental path maxima problem over such matroids. A cographic matroid is the dual of a graphic matroid; for a cographic matroid the problem can be solved in @math ( @cite_3 ) comparisons. Liberatore generalized these results to a restricted class of matroids that can be constructed via direct-sums and 2-sums and gave a @math -comparison algorithm ( @cite_4 ). @cite_0 proposed an algorithm that chooses a rank sequence randomly. They show that the expected number of comparisons in their algorithm is @math , which is optimal in the comparison-tree model. The randomized algorithm can solve the more general problem of computing the largest @math elements for each subset @math .
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_3" ], "mid": [ "2030153860", "2059771614", "2048500523" ], "abstract": [ "Randomized algorithms for two sorting problems are presented. In the local sorting problem, a graph is given in which each vertex is assigned an element of a total order, and the task is to determine the relative order of every pair of adjacent vertices. In the set-maxima problem, a collection of sets whose elements are drawn from a total order is given, and the task is to determine the maximum element in each set. Lower bounds for the problems in the comparison model are described and it is shown that the algorithms are optimal within a constant factor.", "", "A device for sharpening axe heads comprising a generally U-shaped frame placed over the head of the axe with opposite legs of the frame straddling opposite side faces of the axe head. Securing the frame to the head is a thumb screw threaded through one of the frame legs to engage one side face of the axe head. The other leg of the frame has an elongated aperture therein receiving a threaded bolt which in turn receives an elongated file holder arm to mount the same relative to the main frame for pivotal movement about the bolt. The inner end of the bolt has an enlarged head engaging the adjacent side face of the axe head while the other end of the bolt has a wing nut threaded thereon. A coil compression spring between the leg and the file holder arm yieldably urges the latter against the wing nut which acts as an adjustable stop. The file is mounted to the holder arm as an extention thereof by a U-shaped clamp secured across the file by wing nuts received on threaded studs of the clamp. In use the file may be grasped and oscillated across the cutting edge of the axe head as the file and its associated holder arm pivot about the bolt." ] }
1811.02783
2899665041
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
Our method is closely connected to TREPAN @cite_31 , which extracts rules from NNs with a special kind of decision trees. Like this method, we also use trees to interpret the decision-making procedure. Unlike it, we use the internal structure of a NN to gain more knowledge.
{ "cite_N": [ "@cite_31" ], "mid": [ "2113882472" ], "abstract": [ "A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, TREPAN, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that TREPAN is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces." ] }
1811.02783
2899665041
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
@cite_18 proposed the LORE method for local rule extraction. LORE uses a special procedure to sample more data near the object of interest, which is similar to our adaptive explanation extension.
{ "cite_N": [ "@cite_18" ], "mid": [ "2803532212" ], "abstract": [ "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveals the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box." ] }
1811.02783
2899665041
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
Like @cite_12 @cite_15 @cite_36 , we distill a NN with a complex, unexplainable model. Unlike those works, however, we interpret the extracted partitioning instead of examining the student model.
{ "cite_N": [ "@cite_36", "@cite_15", "@cite_12" ], "mid": [ "2621053657", "2793566095", "2785017485" ], "abstract": [ "", "Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose Distill-and-Compare, an approach to audit such models without probing the black-box model API or pre-defining features to audit. To gain insight into black-box models, we treat them as teachers, training transparent student models to mimic the risk scores assigned by the black-box models. We compare the mimic model trained with distillation to a second, un-distilled transparent model trained on ground truth outcomes, and use differences between the two models to gain insight into the black-box model. We demonstrate the approach on four data sets: COMPAS, Stop-and-Frisk, Chicago Police, and Lending Club. We also propose a statistical test to determine if a data set is missing key features used to train the black-box model. Our test finds that the ProPublica data is likely missing key feature(s) used in COMPAS.", "Model distillation was originally designed to distill knowledge from a large, complex teacher model to a faster, simpler student model without significant loss in prediction accuracy. We investigate model distillation for another goal -- transparency -- investigating if fully-connected neural networks can be distilled into models that are transparent or interpretable in some sense. Our teacher models are multilayer perceptrons, and we try two types of student models: (1) tree-based generalized additive models (GA2Ms), a type of boosted, short tree (2) gradient boosted trees (GBTs). More transparent student models are forthcoming. Our results are not yet conclusive. GA2Ms show some promise for distilling binary classification teachers, but not yet regression. GBTs are not \"directly\" interpretable but may be promising for regression teachers. 
GA2M models may provide a computationally viable alternative to additive decomposition methods for global function approximation." ] }
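The distillation-for-transparency setup described in these abstracts — a black-box teacher scores inputs, and a transparent student is fit to mimic those scores — reduces to a tiny example. This is a hedged sketch under invented assumptions: the "teacher" is a stand-in threshold function, and the student is a single decision stump rather than the GA2M or GBT students the cited work uses.

```python
# Minimal distillation sketch: fit a transparent student (a decision stump)
# to mimic a black-box teacher's scores. The teacher here is a stand-in;
# in the cited work it would be a trained neural network or risk model.
def teacher(x):
    # stand-in black box with a hidden decision boundary at 0.37
    return 1.0 if x > 0.37 else 0.0

def fit_stump(xs, ys):
    """Pick the stump threshold minimising squared mimic error on teacher scores."""
    best_t, best_err = None, float("inf")
    for t in sorted(xs):
        err = sum((y - (1.0 if x > t else 0.0)) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

xs = [i / 10 for i in range(11)]          # probe inputs
ys = [teacher(x) for x in xs]             # teacher's scores (distillation targets)
threshold = fit_stump(xs, ys)             # transparent approximation of the boundary
```

The student's learned threshold exposes (approximately) where the teacher's hidden boundary lies — the kind of insight Distill-and-Compare extracts at scale, and the distillation error quantifies how faithful that exposure is.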
1811.02783
2899665041
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
Like a prototype classifier network @cite_20 , we partition the input space with respect to the intrinsic decision-making process. However, in general, YASENN does not return a prototype for each discretized stream; one can produce prototypes in whatever way is appropriate for the application area. Also, we do not restrict the NN to a special architecture.
{ "cite_N": [ "@cite_20" ], "mid": [ "2765813195" ], "abstract": [ "Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability -- they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as \"black box\" models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network architecture for deep learning that naturally explains its own reasoning for each prediction. This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input. The encoder of the autoencoder allows us to do comparisons within the latent space, while the decoder allows us to visualize the learned prototypes. The training objective has four terms: an accuracy term, a term that encourages every prototype to be similar to at least one encoded input, a term that encourages every encoded input to be close to at least one prototype, and a term that encourages faithful reconstruction by the autoencoder. The distances computed in the prototype layer are used as part of the classification process. Since the prototypes are learned during training, the learned network naturally comes with explanations for each prediction, and the explanations are loyal to what the network actually computes." ] }
1811.02783
2899665041
We introduce a novel approach to feed-forward neural network interpretation based on partitioning the space of sequences of neuron activations. In line with this approach, we propose a model-specific interpretation method, called YASENN. Our method inherits many advantages of model-agnostic distillation, such as an ability to focus on the particular input region and to express an explanation in terms of features different from those observed by a neural network. Moreover, examination of distillation error makes the method applicable to the problems with low tolerance to interpretation mistakes. Technically, YASENN distills the network with an ensemble of layer-wise gradient boosting decision trees and encodes the sequences of neuron activations with leaf indices. The finite number of unique codes induces a partitioning of the input space. Each partition may be described in a variety of ways, including examination of an interpretable model (e.g. a logistic regression or a decision tree) trained to discriminate between objects of those partitions. Our experiments provide an intuition behind the method and demonstrate revealed artifacts in neural network decision making.
Unlike NeuroRule @cite_4 , we work with activations in a more general way: we take into account the whole layer rather than processing neurons one by one. In addition, YASENN was designed for deep NNs and benefits from their depth.
{ "cite_N": [ "@cite_4" ], "mid": [ "1536737053" ], "abstract": [ "Classification, which involves finding rules that partition a given data set into disjoint groups, is one class of data mining problems. Approaches proposed so far for mining classification rules for large databases are mainly decision tree based symbolic learning methods. The connectionist approach based on neural networks has been thought not well suited for data mining. One of the major reasons cited is that knowledge generated by neural networks is not explicitly represented in the form of rules suitable for verification or interpretation by humans. This paper examines this issue. With our newly developed algorithms, rules which are similar to, or more concise than those generated by the symbolic methods can be extracted from the neural networks. The data mining process using neural networks with the emphasis on rule extraction is described. Experimental results and comparison with previously published works are presented." ] }
1811.02848
2891126844
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of datasets, to add missing knowledge and to identify and remove errors. Several approaches – mostly relying on machine learning and NLP techniques – have been proposed to address this refinement goal; they usually need a partial gold standard, i.e. some “ground truth” to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions.
Data linking is rooted in the record linkage problem studied in the database community since the 1960s @cite_26 ; for this reason, in the Semantic Web community, the term is often used to name the problem of finding equivalent resources on the Web of linked data @cite_8 . In this meaning, data linking is the process of creating links that connect subject- and object-resources from two different datasets through a property that indicates a correspondence or an equivalence (e.g. owl:sameAs ).
{ "cite_N": [ "@cite_26", "@cite_8" ], "mid": [ "2073471108", "2056099064" ], "abstract": [ "Abstract A mathematical model is developed to provide a theoretical framework for a computer-oriented solution to the problem of recognizing those records in two files which represent identical persons, objects or events (said to be matched). A comparison is to be made between the recorded characteristics and values in two records (one from each file) and a decision made as to whether or not the members of the comparison-pair represent the same person or event, or whether there is insufficient evidence to justify either of these decisions at stipulated levels of error. These three decisions are referred to as link (A 1), a non-link (A 3), and a possible link (A 2). The first two decisions are called positive dispositions. The two types of error are defined as the error of the decision A 1 when the members of the comparison pair are in fact unmatched, and the error of the decision A 3 when the members of the comparison pair are, in fact matched. The probabilities of these errors are defined as and respecti...", "By specifying that published datasets must link to other existing datasets, the 4th linked data principle ensures a Web of data and not just a set of unconnected data islands. The authors propose in this paper the term data linking to name the problem of finding equivalent resources on the Web of linked data. In order to perform data linking, many techniques were developed, finding their roots in statistics, database, natural language processing and graph theory. The authors begin this paper by providing background information and terminological clarifications related to data linking. Then a comprehensive survey over the various techniques available for data linking is provided. These techniques are classified along the three criteria of granularity, type of evidence, and source of the evidence. 
Finally, the authors survey eleven recent tools performing data linking and we classify them according to the surveyed techniques." ] }
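The classical record-linkage model summarised in the first abstract above maps each compared pair to one of three decisions — link (A1), non-link (A3), or possible link (A2) — via two thresholds on a comparison score. A minimal sketch of that decision rule, with purely illustrative threshold values:

```python
# Three-decision rule from classical record linkage: an agreement score is
# mapped to link (A1), possible link (A2), or non-link (A3) by two
# thresholds. The threshold values below are illustrative, not calibrated.
UPPER, LOWER = 0.8, 0.3

def classify_pair(score):
    """Map an agreement score in [0, 1] to a linkage decision."""
    if score >= UPPER:
        return "link"          # A1: treat the records as the same entity
    if score <= LOWER:
        return "non-link"      # A3: treat the records as different entities
    return "possible link"     # A2: defer to clerical review
```

In the original formulation the thresholds are chosen so that the error probabilities of wrongly linking unmatched pairs and wrongly rejecting matched pairs stay below stipulated levels.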
1811.02848
2891126844
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of datasets, to add missing knowledge and to identify and remove errors. Several approaches – mostly relying on machine learning and NLP techniques – have been proposed to address this refinement goal; they usually need a partial gold standard, i.e. some “ground truth” to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions.
We prefer to generalize the concept, extending it to the task of creating links in the form of triples, without limitation to specific types of resources or predicates, and without necessarily referring to linking across two different datasets or knowledge graphs (data linking can also happen within a single dataset or knowledge graph). In this sense, data linking can be interpreted as a solution to a refinement issue, i.e. the process to create, update or correct links in a dataset. As defined in @cite_6 with respect to knowledge graphs, with data linking we do not consider the case of constructing a dataset or graph from scratch; rather, we assume an existing input dataset which needs to be improved by adding missing knowledge or by identifying and correcting mistakes.
{ "cite_N": [ "@cite_6" ], "mid": [ "2300469216" ], "abstract": [ "In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term \"Knowledge Graph\" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used." ] }
1811.02848
2891126844
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of datasets, to add missing knowledge and to identify and remove errors. Several approaches – mostly relying on machine learning and NLP techniques – have been proposed to address this refinement goal; they usually need a partial gold standard, i.e. some “ground truth” to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions.
The Semantic Web community has long investigated methods to address the data linking problem, by identifying quality-assessment methodologies for linked datasets @cite_0 and by proposing manual, semi-automatic or automatic tools to implement refinement operations @cite_21 @cite_3 . The large majority of refinement approaches, especially on knowledge graphs where scalable solutions are needed, are based on different statistical and machine learning techniques @cite_17 @cite_13 @cite_15 @cite_19 .
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_0", "@cite_19", "@cite_15", "@cite_13", "@cite_17" ], "mid": [ "1913451680", "1496949948", "1870959433", "2396241715", "2161098784", "2016753842", "2090634677" ], "abstract": [ "The quality of data is a key factor that determines the performance of information systems, in particular with regard (1) to the amount of exceptions in the execution of business processes and (2) to the quality of decisions based on the output of the respective information system. Recently, the Semantic Web and Linked Data activities have started to provide substantial data resources that may be used for real business operations. Hence, it will soon be critical to manage the quality of such data. Unfortunately, we can observe a wide range of data quality problems in Semantic Web data. In this paper, we (1) evaluate how the state of the art in data quality research fits the characteristics of the Web of Data, (2) describe how the SPARQL query language and the SPARQL Inferencing Notation (SPIN) can be utilized to identify data quality problems in Semantic Web data automatically and this within the Semantic Web technology stack, and (3) evaluate our approach.", "Linked Data is at its core about the setting of links between resources. Links provide enriched semantics, pointers to extra information and enable the merging of data sets. However, as the amount of Linked Data has grown, there has been the need to automate the creation of links and such automated approaches can create low-quality links or unsuitable network structures. In particular, it is difficult to know whether the links introduced improve or diminish the quality of Linked Data. In this paper, we present LINK-QA, an extensible framework that allows for the assessment of Linked Data mappings using network metrics. 
We test five metrics using this framework on a set of known good and bad links generated by a common mapping system, and show the behaviour of those metrics.", "The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality. In this article, we present the results of a systematic review of approaches for assessing the quality of LD. We gather existing approaches and analyze them qualitatively. In particular, we unify and formalize commonly used terminologies across papers related to data quality and provide a comprehensive list of 18 quality dimensions and 69 metrics. Additionally, we qualitatively analyze the 30 core approaches and 12 tools using a set of attributes. The aim of this article is to provide researchers and data curators a comprehensive understanding of existing work, thereby encouraging further experimentation and development of new approaches focused towards data quality, specifically for LD.", "Topic models are widely used to thematically describe a collection of text documents and have become an important technique for systems that measure document similarity for classification, clustering, segmentation, entity linking and more. While they have been applied to some non-text domains, their use for semi-structured graph data, such as RDF, has been less explored. We present a framework for applying topic modeling to RDF graph data and describe how it can be used in a number of linked data tasks. Since topic modeling builds abstract topics using the co-occurrence of document terms, sparse documents can be problematic, presenting challenges for RDF data. We outline techniques to overcome this problem and the results of experiments in using them. 
Finally, we show preliminary results of using Latent Dirichlet Allocation generative topic modeling for several linked data use cases.", "We describe an approach for performing entity type recognition in heterogeneous semantic graphs in order to reduce the computational cost of performing coreferenceresolution. Our research specifically addresses the problem of working with semi-structured text that uses ontologies that are not informative or not known. This problem is similar to co reference resolution in unstructured text, where entities and their types are identified using contextual information and linguistic-based analysis. Semantic graphs are semi-structured with very little contextual information and trivial grammars that do not convey additional information. In the absence of known ontologies, performing co reference resolution can be challenging. Our work uses a supervised machine learning algorithm and entity type dictionaries to map attributes to a common attribute space. We evaluated the approach in experiments using data from Wikipedia, Freebase and Arnetminer.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. 
The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate as well as scalable. Both algorithms have been used for building the DBpedia 3.9 release: With SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base." ] }
1811.02848
2891126844
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of datasets, to add missing knowledge and to identify and remove errors. Several approaches – mostly relying on machine learning and NLP techniques – have been proposed to address this refinement goal; they usually need a partial gold standard, i.e. some “ground truth” to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions.
Machine learning methods, however, need a partial gold standard to train automated models; those training sets are usually created manually by experts: while this usually leads to higher-quality trained models, it is also an expensive process, so those "ground truth" datasets are usually small. Involving humans at scale in an effective way is, on the other hand, the goal of crowdsourcing @cite_7 and human computation @cite_2 . Indeed, microtask workers have been employed as a means to perform manual quality assessment of linked data @cite_27 @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_7", "@cite_2" ], "mid": [ "187967972", "31535402", "", "2170989440" ], "abstract": [ "Many aspects of Linked Data management - including exposing legacy data and applications to semantic formats, designing vocabularies to describe RDF data, identifying links between entities, query processing, and data curation - are necessarily tackled through the combination of human effort with algorithmic techniques. In the literature on traditional data management the theoretical and technical groundwork to realize and manage such combinations is being established. In this paper we build upon and extend these ideas to propose a framework by which human and computational intelligence can co-exist by augmenting existing Linked Data and Linked Service technology with crowdsourcing functionality. Starting from a motivational scenario we introduce a set of generic tasks which may feasibly be approached using crowdsourcing platforms such as Amazon's Mechanical Turk, explain how these tasks can be decomposed and translated into MTurk projects, and roadmap the extensions to SPARQL, D2RQ R2R and Linked Data browsing that are required to achieve this vision.", "In this paper we look into the use of crowdsourcing as a means to handle Linked Data quality problems that are challenging to be solved automatically. We analyzed the most common errors encountered in Linked Data sources and classified them according to the extent to which they are likely to be amenable to a specific form of crowdsourcing. Based on this analysis, we implemented a quality assessment methodology for Linked Data that leverages the wisdom of the crowds in different ways: (i) a contest targeting an expert crowd of researchers and Linked Data enthusiasts; complemented by (ii) paid microtasks published on Amazon Mechanical Turk.We empirically evaluated how this methodology could efficiently spot quality issues in DBpedia. 
We also investigated how the contributions of the two types of crowds could be optimally integrated into Linked Data curation processes. The results show that the two styles of crowdsourcing are complementary and that crowdsourcing-enabled quality assessment is a promising and affordable way to enhance the quality of Linked Data.", "", "The rapid growth of human computation within research and industry has produced many novel ideas aimed at organizing web users to do great things. However, the growth is not adequately supported by a framework with which to understand each new system in the context of the old. We classify human computation systems to help identify parallels between different systems and reveal \"holes\" in the existing work as opportunities for new research. Since human computation is often confused with \"crowdsourcing\" and other terms, we explore the position of human computation with respect to these related topics." ] }
1811.02848
2891126844
With the rise of linked data and knowledge graphs, the need becomes compelling to find suitable solutions to increase the coverage and correctness of datasets, to add missing knowledge and to identify and remove errors. Several approaches – mostly relying on machine learning and NLP techniques – have been proposed to address this refinement goal; they usually need a partial gold standard, i.e. some “ground truth” to train automatic models. Gold standards are manually constructed, either by involving domain experts or by adopting crowdsourcing and human computation solutions.
Among the different human computation approaches, games with a purpose @cite_18 have experienced a wide success, because of their ability to engage users through the incentive of fun. A GWAP is a gaming application that exploits players' actions in the game to solve some (hidden) tasks; users play the game for fun, but the "collateral effect" of their playing is that the comparison and aggregation of players' contributions are used to solve some problems, usually labelling, classification, ranking, clustering, etc. Also in the Semantic Web community, GWAPs have been adopted to solve a number of linked data management issues @cite_12 @cite_11 @cite_4 @cite_29 @cite_20 @cite_16 @cite_1 @cite_5 @cite_28 , from multimedia labelling to ontology alignment, from error detection to link ranking.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_29", "@cite_1", "@cite_16", "@cite_5", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "2035683813", "1561665100", "", "2295344557", "2056583767", "21240985", "1722298535", "97696126", "2158187763", "2056141229" ], "abstract": [ "Data generated as a side effect of game play also solves computational problems and trains AI algorithms.", "Annotated corpora of the size needed for modern computational linguistics research cannot be created by small groups of hand annotators. One solution is to exploit collaborative work on the Web and one way to do this is through games like the ESP game. Applying this methodology however requires developing methods for teaching subjects the rules of the game and evaluating their contribution while maintaining the game entertainment. In addition, applying this method to linguistic annotation tasks like anaphoric annotation requires developing methods for presenting text and identifying the components of the text that need to be annotated. In this paper we present the first version of Phrase Detectives (http: www.phrasedetectives.org), to our knowledge the first game designed for collaborative linguistic annotation on the Web.", "", "The interoperability of data depends on the availability of alignments among different ontologies. Various approaches to match, merge and integrate ontologies and, more recently, to interlink RDF data sets were developed over. Even though the research area has matured, the full automation of the ontology alignment process is not feasible and the human user is indispensable. Such tasks involve mainly bootstrapping the underlying methods and for validating and enhancing their results. The question of acquiring such input still remains to be solved, in particular when it comes to the motivators and incentives that are likely to make people dedicate labor to ontology alignment tasks. 
In this paper we build on previous work of ours on using casual games to tackle this problem. We present SpotTheLink, the latest release of the OntoGame framework, which allows for the definition of mappings between Semantic Web ontologies as part of a collaborative game experience.We illustrate the idea of SpotTheLink in an instance of the game aiming to align DBpedia and PROTON, and explain the game background mechanics by which players’ inputs are translated into SKOS-based ontology mappings. A summary of findings of SpotTheLink user evaluation and the experiences we gained throughout the entire life span of OntoGame allow us to derive a number of best practices and guidelines for the design of incentives-minded semantic-content-authoring technology, in which human and computational intelligence are smoothly interwoven.", "Checking-in various venues in our surrounding environment via location-based apps like foursquare is becoming more and more popular, this behaviour makes people share some \"bits\" of their location with their friends. Exploiting this trend in a Human Computation fashion to collect information about urban environments is the aim of the Urbanopoly Android app -- a social, mobile and location-based Game with a Purpose designed around the idea of the \"monopoly\" board game. In this paper, we illustrate the main design choices for Urbanopoly -- including the use of social media like Facebook in the context of a Human Computation approach -- and we explain the game play.", "To realize the Smart Cities vision, applications can leverage the large availability of open datasets related to urban environments. Those datasets need to be integrated, but it is often hard to automatically achieve a high-quality interlinkage. Human Computation approaches can be employed to solve such a task where machines are ineffective. 
We argue that in this case not only people's background knowledge is useful to solve the task, but also people's physical presence and direct experience can be successfully exploited. In this paper we present UrbanMatch, a Game with a Purpose for players in mobility aimed at validating links between points of interest and their photos; we discuss the design choices and we show the high throughput and accuracy achieved in the interlinking task.", "This article describes how a semantic search engine has been built from, and still is continuously improved by, a semantic analysis of the “footprints” left by players on the gaming Web platform ARTigo. The Web platform offers several Games With a Purpose (GWAPs) some of which have been specifically designed to collect the data needed for building the artwork search engine. ARTigo is a “tagging ecosystem” of games that cooperate so as to gather a wide range of information on artworks. The ARTigo ecosystem generates a folksonomy saved as a 3rd-order tensor, that is a generalization of a matrix, the three orders or dimensions of which represent (1) who (2) tagged (3) an artwork. The semantic search engine is built using a non-trivial generalization of the well-known, matrix-based, Latent Semantic Analysis (LSA) methods and algorithms. ARTigo has been in service for five years and is subject to active research constantly resulting in new developments, some of which are reported about for the first time in this article.", "While associations between concepts in our memory have different strengths, explicit strengths of links (edge weights) are missing in Linked Data. In order to build a collection of such edge weights, we created a web-game prototype that ranks triples by importance. 
In this paper we briefly describe the game, Linked Data preprocessing aspects, and the promising results of an evaluation of the game.", "Weaving the semantic Web requires that humans contribute their labor and judgment for creating, extending, and updating formal knowledge structures. Hiding such tasks behind online multiplayer games presents the tasks as fun and intellectually challenging entertainment. The games we've presented are the first prototypes of the OntoGame series. We're extending and improving the scenarios in several directions.", "Purpose – Linking Open Data (LOD) provides a vast amount of well structured semantic information, but many inconsistencies may occur, especially if the data are generated with the help of automated methods. Data cleansing approaches enable detection of inconsistencies and overhauling of affected data sets, but they are difficult to apply automatically. The purpose of this paper is to present WhoKnows?, an online quiz that generates different kinds of questionnaires from DBpedia data sets.Design methodology approach – Besides its playfulness, WhoKnows? has been developed for the evaluation of property relevance ranking heuristics on DBpedia data, with the convenient side effect of detecting inconsistencies and doubtful facts.Findings – The original purpose for developing WhoKnows? was to evaluate heuristics to rank LOD properties and thus, obtain a semantic relatedness between entities according to the properties by which they are linked. The presented approach is an efficient method to detect popular prop..." ] }
1811.02644
2900149870
Dynamic high resolution data on human population distribution is of great importance for a wide spectrum of activities and real-life applications, but is too difficult and expensive to obtain directly. Therefore, generating fine-scaled population distributions from coarse population data is of great significance. However, there are three major challenges: 1) the complexity in spatial relations between high and low resolution population; 2) the dependence of population distributions on other external information; 3) the difficulty in retrieving temporal distribution patterns. In this paper, we first propose the idea to generate dynamic population distributions in full-time series, then we design dynamic population mapping via deep neural network (DeepDPM), a model that describes both spatial and temporal patterns using coarse data and point of interest information. In DeepDPM, we utilize a super-resolution convolutional neural network (SRCNN) based model to directly map coarse data into higher resolution data, and a time-embedded long short-term memory model to effectively capture the periodicity nature to smooth the finer-scaled results from the previous static SRCNN model. We perform extensive experiments on a real-life mobile dataset collected from Shanghai. Our results demonstrate that DeepDPM outperforms previous state-of-the-art methods and a suite of frequent data-mining approaches. Moreover, DeepDPM breaks through the limitation from previous works in time dimension so that dynamic predictions in all-day time slots can be obtained.
Early studies used filtering approaches such as linear, bicubic, or Lanczos @cite_0 filtering. freeman2002example and freeman2000learning first sought to construct a mapping between training patches and their known high-resolution counterparts. In recent years, convolutional neural network (CNN) based SR algorithms have shown excellent performance @cite_1 @cite_17 @cite_8 @cite_19 , where SRCNN @cite_17 is one of the state-of-the-art methods for the problem. Vandal2017DeepSD successfully used the SRCNN-based DeepSD structure in climate prediction. Our study also draws on the strengths of SRCNN to construct the static part of the DeepDPM structure, and furthermore implements the dynamic part to learn the temporal pattern.
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_0", "@cite_19", "@cite_17" ], "mid": [ "2505593925", "2950872229", "", "2949079773", "1885185971" ], "abstract": [ "One impressive advantage of convolutional neural networks (CNNs) is their ability to automatically learn feature representation from raw pixels, eliminating the need for hand-designed procedures. However, recent methods for single image super-resolution (SR) fail to maintain this advantage. They utilize CNNs in two decoupled steps, i.e., first upsampling the low resolution (LR) image to the high resolution (HR) size with hand-designed techniques (e.g., bicubic interpolation), and then applying CNNs on the upsampled LR image to reconstruct HR results. In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN. As opposed to existing approaches, the proposed method conducts upsampling in the latent feature space with filters that are optimized for the task of image SR. In addition, the HR reconstruction is performed in a multi-scale manner to simultaneously incorporate both short- and long-range contextual information, ensuring more accurate restoration of HR images. To facilitate network training, a new training approach is designed, which jointly trains the proposed deep network with a relatively shallow network, leading to faster convergence and more superior performance. The proposed method is extensively evaluated on widely adopted data sets and improves the performance of state-of-the-art methods with a considerable margin. Moreover, in-depth ablation studies are conducted to verify the contribution of different network designs to image SR, providing additional insights for future research.", "Deep learning techniques have been successfully applied in many areas of computer vision, including low-level image restoration problems. 
For image super-resolution, several models based on deep neural networks have been recently proposed and attained superior performance that overshadows all previous handcrafted models. The question then arises whether large-capacity and data-driven models have become the dominant solution to the ill-posed super-resolution problem. In this paper, we argue that domain expertise represented by the conventional sparse coding model is still valuable, and it can be combined with the key ingredients of deep learning to achieve further improved results. We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end. The interpretation of the network based on sparse coding leads to much more efficient and effective training, as well as a reduced model size. Our model is evaluated on a wide range of images, and shows clear advantage over existing state-of-the-art methods in terms of both restoration accuracy and human subjective quality.", "", "We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. 
We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality." ] }
1811.02427
2899929919
In this paper we propose a unified two-phase scheme to accelerate any high-order regularized tensor approximation approach on the smooth part of a composite convex optimization model. The proposed scheme has the advantage of not needing to assume any prior knowledge of the Lipschitz constants for the gradient, the Hessian and or high-order derivatives. This is achieved by tuning the parameters used in the algorithm in its process of progression, which can be viewed as a relaxation over the existing algorithms in the literature. Under the assumption that the sub-problems can be solved approximately, we establish the overall iteration complexity bounds for three specific algorithms to obtain an @math -optimal solution. In general, we show that the adaptive high-order method has an iteration bound of @math if the first @math -th order derivative information is used in the approximation, which has the same iteration complexity as in (Nesterov, 2018) where the Lipschitz constants are assumed to be known and subproblems are assumed to be solved exactly. Thus, our results answer an open problem raised by Nesterov on adaptive strategies for high-order accelerated methods. Specifically, we show that the gradient method achieves an iteration complexity in the order of @math , which is known to be best possible, while the adaptive cubic regularization methods with the exact inexact Hessian matrix both achieve an iteration complexity in the order of @math , which matches that of the original accelerated cubic regularization method presented in assuming the availability of the exact Hessian information and the Lipschitz constants, and the global solution of the sub-problems.
It is worth noting that the standard cubic regularized Newton's method and its variants are tailored for smooth unconstrained convex optimization, and thus are unsuited to solving the composite problem directly. In the literature, there are second-order methods which are efficient in this setting, and they are referred to as proximal (quasi-)Newton methods. The global convergence and the local superlinear rate of convergence of those methods have been shown in @cite_27 and more recently in @cite_28 . The global sublinear rate of @math for proximal quasi-Newton methods is established in @cite_20 and was later accelerated to @math in @cite_48 . @cite_6 proposed a highly efficient evaluation of the proximal mapping within the quasi-Newton framework. Very recently, Grapiglia and Nesterov @cite_21 extended accelerated regularized Newton methods to this composite problem, where @math is twice differentiable with a Hölder continuous Hessian, and they showed that the iteration bound depends on the Hölder parameter.
{ "cite_N": [ "@cite_28", "@cite_48", "@cite_21", "@cite_6", "@cite_27", "@cite_20" ], "mid": [ "", "2963573197", "2792215433", "2784930045", "2963173886", "1550741530" ], "abstract": [ "", "A general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed by Scheinberg and Tang (Math Program 160:495–529, 2016) and a sublinear global convergence rate has been established. In this paper, we analyze the global convergence rate of this method, in the both exact and inexact settings, in the case when the objective function is strongly convex. We also investigate a practical variant of this method by establishing a simple stopping criterion for the subproblem optimization. Furthermore, we consider an accelerated variant, based on FISTA of Beck and Teboulle (SIAM 2:183–202, 2009), to the proximal quasi-Newton algorithm. (SIAM 22:1042–1064, 2012) considered a similar accelerated method, where the convergence rate analysis relies on very strong impractical assumptions on Hessian estimates. We present a modified analysis while relaxing these assumptions and perform a numerical comparison of the accelerated proximal quasi-Newton algorithm and the regular one. Our analysis and computational results show that acceleration may not bring any benefit in the quasi-Newton setting.", "In this paper, we study accelerated regularized Newton methods for minimizing objectives formed as a sum of two functions: one is convex and twice differentiable with Holder-continuous Hessian, and...", "We introduce a framework for quasi-Newton forward--backward splitting algorithms (proximal quasi-Newton methods) with a metric induced by diagonal @math rank- @math symmetric positive definite matrices. This special type of metric allows for a highly efficient evaluation of the proximal mapping. The key to this efficiency is a general proximal calculus in the new metric. 
By using duality, formulas are derived that relate the proximal mapping in a rank- @math modified metric to the original metric. We also describe efficient implementations of the proximity calculation for a large class of functions; the implementations exploit the piece-wise linear nature of the dual problem. Then, we apply these results to acceleration of composite convex minimization problems, which leads to elegant quasi-Newton methods for which we prove convergence. The algorithm is tested on several numerical examples and compared to a comprehensive list of alternatives in the literature. Our quasi-Newton splitting algorithm with the prescribed metric compares favorably against state-of-the-art. The algorithm has extensive applications including signal processing, sparse recovery, machine learning and classification to name a few.", "We generalize Newton-type methods for minimizing smooth functions to handle a sum of two convex functions: a smooth function and a nonsmooth function with a simple proximal mapping. We show that the resulting proximal Newton-type methods inherit the desirable convergence behavior of Newton-type methods for minimizing smooth functions, even when search directions are computed inexactly. Many popular methods tailored to problems arising in bioinformatics, signal processing, and statistical learning are special cases of proximal Newton-type methods, and our analysis yields new convergence results for some of these methods.", "Recently several methods were proposed for sparse optimization which make careful use of second-order information [10, 28, 16, 3] to improve local convergence rates. These methods construct a composite quadratic approximation using Hessian information, optimize this approximation using a first-order method, such as coordinate descent and employ a line search to ensure sufficient descent. 
Here we propose a general framework, which includes slightly modified versions of existing algorithms and also a new algorithm, which uses limited memory BFGS Hessian approximations, and provide a novel global convergence rate analysis, which covers methods that solve subproblems via coordinate descent." ] }
1811.02319
2899960587
The performance of deep neural networks crucially depends on good hyperparameter configurations. Bayesian optimization is a powerful framework for optimizing the hyperparameters of DNNs. These methods need sufficient evaluation data to approximate and minimize the validation error function of hyperparameters. However, the expensive evaluation cost of DNNs leads to very few evaluation data within a limited time, which greatly reduces the efficiency of Bayesian optimization. Besides, the previous researches focus on using the complete evaluation data to conduct Bayesian optimization, and ignore the intermediate evaluation data generated by early stopping methods. To alleviate the insufficient evaluation data problem, we propose a fast hyperparameter optimization method, HOIST, that utilizes both the complete and intermediate evaluation data to accelerate the hyperparameter optimization of DNNs. Specifically, we train multiple basic surrogates to gather information from the mixed evaluation data, and then combine all basic surrogates using weighted bagging to provide an accurate ensemble surrogate. Our empirical studies show that HOIST outperforms the state-of-the-art approaches on a wide range of DNNs, including feed forward neural networks, convolutional neural networks, recurrent neural networks, and variational autoencoder.
Human experts can identify and terminate bad evaluations after a short run. Several methods mimic this early termination of bad evaluations to save evaluation overhead. A probabilistic model @cite_13 is used to predict the overall performance from the already observed part of the learning curve, enabling bad evaluations to be terminated earlier. Based on this, the LCNet with a learning curve layer @cite_3 was developed to improve the prediction of learning curves. Besides, Hyperband @cite_7 is a bandit-based early stopping method. It dynamically allocates resources to randomly sampled configurations, and uses the successive halving algorithm @cite_25 to drop poorly performing configurations. We describe this in detail in Section 3. Despite its simplicity, Hyperband outperforms the state-of-the-art BO methods within a limited time. However, due to its random sampling of configurations, Hyperband achieves worse performance than BO methods when given sufficient time.
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_3", "@cite_25" ], "mid": [ "2266822037", "2963815651", "2751836095", "" ], "abstract": [ "Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.", "Performance of machine learning algorithms depends critically on identifying a good set of hyperparameters. While recent approaches use Bayesian optimization to adaptively select configurations, we focus on speeding up random search through adaptive resource allocation and early-stopping. We formulate hyperparameter optimization as a pure-exploration nonstochastic infinite-armed bandit problem where a predefined resource like iterations, data samples, or features is allocated to randomly sampled configurations. We introduce a novel algorithm, Hyperband, for this framework and analyze its theoretical properties, providing several desirable guarantees. 
Furthermore, we compare Hyperband with popular Bayesian optimization methods on a suite of hyperparameter optimization problems. We observe that Hyperband can provide over an order-of-magnitude speedup over our competitor set on a variety of deep-learning and kernel-based learning problems.", "Different neural network architectures, hyperparameters and training protocols lead to different performances as a function of time. Human experts routinely inspect the resulting learning curves to quickly terminate runs with poor hyperparameter settings and thereby considerably speed up manual hyperparameter optimization. Exploiting the same information in automatic Bayesian hyperparameter optimization requires a probabilistic model of learning curves across hyperparameter settings. Here, we study the use of Bayesian neural networks for this purpose and improve their performance by a specialized learning curve layer.", "" ] }
1811.02319
2899960587
The performance of deep neural networks crucially depends on good hyperparameter configurations. Bayesian optimization is a powerful framework for optimizing the hyperparameters of DNNs. These methods need sufficient evaluation data to approximate and minimize the validation error function of hyperparameters. However, the expensive evaluation cost of DNNs leads to very few evaluation data within a limited time, which greatly reduces the efficiency of Bayesian optimization. Besides, previous research focuses on using the complete evaluation data to conduct Bayesian optimization and ignores the intermediate evaluation data generated by early stopping methods. To alleviate the insufficient evaluation data problem, we propose a fast hyperparameter optimization method, HOIST, that utilizes both the complete and intermediate evaluation data to accelerate the hyperparameter optimization of DNNs. Specifically, we train multiple basic surrogates to gather information from the mixed evaluation data, and then combine all basic surrogates using weighted bagging to provide an accurate ensemble surrogate. Our empirical studies show that HOIST outperforms the state-of-the-art approaches on a wide range of DNNs, including feed-forward neural networks, convolutional neural networks, recurrent neural networks, and variational autoencoders.
To accelerate hyperparameter optimization, several methods combine BO methods with early stopping methods. The probabilistic learning curve model @cite_13 is used to terminate badly-performing evaluations in the setting of BO methods. Another method @cite_3 proposes a model-based Hyperband: instead of random sampling, it samples configurations based on the LCNet. Besides, BOHB @cite_8 is also a model-based Hyperband, which combines the benefits of both Hyperband and BO methods by replacing the random sampling of Hyperband with a TPE-based sampling. However, these methods do not exploit the intermediate evaluation data generated by early stopping methods. Therefore, the current methods do not reach the full potential of this framework --- combining BO with early stopping methods.
{ "cite_N": [ "@cite_13", "@cite_3", "@cite_8" ], "mid": [ "2266822037", "2751836095", "2911097174" ], "abstract": [ "Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts.", "Different neural network architectures, hyperparameters and training protocols lead to different performances as a function of time. Human experts routinely inspect the resulting learning curves to quickly terminate runs with poor hyperparameter settings and thereby considerably speed up manual hyperparameter optimization. Exploiting the same information in automatic Bayesian hyperparameter optimization requires a probabilistic model of learning curves across hyperparameter settings. Here, we study the use of Bayesian neural networks for this purpose and improve their performance by a specialized learning curve layer.", "" ] }
1811.02628
2900207395
Suppressing bones on chest X-rays, such as the ribs and clavicle, is often expected to improve pathology classification. These bones can interfere with a broad range of diagnostic tasks on pulmonary disease except for the musculoskeletal system. The current conventional method for acquiring bone-suppressed X-rays is dual-energy imaging, which captures two radiographs at a very short interval with different energy levels; however, the patient is exposed to radiation twice, and artifacts arise due to heartbeats between the two shots. In this paper, we introduce a deep generative model trained to predict bone-suppressed images on single-energy chest X-rays by analyzing a finite set of previously acquired dual-energy chest X-rays. Since only a relatively small amount of data is available, such an approach relies on a methodology that maximizes data utilization. Here we integrate the following two approaches. First, we use a conditional generative adversarial network that complements the traditional regression method minimizing the pairwise image difference. Second, we use Haar 2D wavelet decomposition to offer a perceptual guideline in frequency details that allows the model to converge quickly and efficiently. As a result, we achieve state-of-the-art performance on bone suppression as compared to the existing approaches with dual-energy chest X-rays.
Bone suppression was first introduced by @cite_26 , removing the dominant effects of the bony structure within the X-ray projection and reconstructing the residual soft-tissue components. Most general studies related to bone suppression received relatively little attention and were conducted for very specific purposes until the actual clinical effect of bone suppression was verified. However, @cite_32 proved that current learned diagnosis suffers from lung cancer lesions obscured by anatomical structures such as ribs, and @cite_22 showed that the superposition of ribs highly affects the performance of automatic lung cancer detection. Both studies re-examined the invisibility of abnormalities caused by the superposition of bones and the improvement of automatic or human-level pathologic classification through the detection of these abnormalities.
{ "cite_N": [ "@cite_22", "@cite_26", "@cite_32" ], "mid": [ "2018417093", "2145910025", "2567101557" ], "abstract": [ "A novel framework for image filtering based on regression is presented. Regression is a supervised technique from pattern recognition theory in which a mapping from a number of input variables (features) to a continuous output variable is learned from a set of examples from which both input and output are known. We apply regression on a pixel level. A new, substantially different, image is estimated from an input image by computing a number of filtered input images (feature images) and mapping these to the desired output for every pixel in the image. The essential difference between conventional image filters and the proposed regression filter is that the latter filter is learned from training data. The total scheme consists of preprocessing, feature computation, feature extraction by a novel dimensionality reduction scheme designed specifically for regression, regression by k-nearest neighbor averaging, and (optionally) iterative application of the algorithm. The framework is applied to estimate the bone and soft-tissue components from standard frontal chest radiographs. As training material, radiographs with known soft-tissue and bone components, obtained by dual energy imaging, are used. The results show that good correlation with the true soft-tissue images can be obtained and that the scheme can be applied to images from a different source with good results. We show that bone structures are effectively enhanced and suppressed and that in most soft-tissue images local contrast of ribs decreases more than contrast between pulmonary nodules and their surrounding, making them relatively more pronounced.", "Wavelets are a mathematical tool for hierarchically decomposing functions. They allow a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. 
Regardless of whether the function of interest is an image, a curve, or a surface, wavelets offer an elegant technique for representing the levels of detail present. The article is intended to provide people working in computer graphics with some intuition for what wavelets are, as well as to present the mathematical foundations necessary for studying and using them. We discuss the simple case of Haar wavelets in one and two dimensions, and show how they can be used for image compression.", "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data." ] }
1811.02628
2900207395
Suppressing bones on chest X-rays, such as the ribs and clavicle, is often expected to improve pathology classification. These bones can interfere with a broad range of diagnostic tasks on pulmonary disease except for the musculoskeletal system. The current conventional method for acquiring bone-suppressed X-rays is dual-energy imaging, which captures two radiographs at a very short interval with different energy levels; however, the patient is exposed to radiation twice, and artifacts arise due to heartbeats between the two shots. In this paper, we introduce a deep generative model trained to predict bone-suppressed images on single-energy chest X-rays by analyzing a finite set of previously acquired dual-energy chest X-rays. Since only a relatively small amount of data is available, such an approach relies on a methodology that maximizes data utilization. Here we integrate the following two approaches. First, we use a conditional generative adversarial network that complements the traditional regression method minimizing the pairwise image difference. Second, we use Haar 2D wavelet decomposition to offer a perceptual guideline in frequency details that allows the model to converge quickly and efficiently. As a result, we achieve state-of-the-art performance on bone suppression as compared to the existing approaches with dual-energy chest X-rays.
As deep learning algorithms are further developed, current related studies focus more on deep-learning-based models for bone suppression. @cite_7 used a massive artificial neural network, in which sub-regions of the input pass through linear dense layers with a single output, to obtain the bone image from a single-energy chest X-ray. They then subtract the bone image from the original image to yield a virtual dual-energy image, similar to a soft-tissue image. @cite_2 , the extension model of @cite_7 , additionally employed a total variation-minimization smoothing method and multiple anatomically specific networks to improve the previously achieved performance. A new approach combining deep learning with dual-energy X-ray data has become common recently; @cite_23 trained on 404 dual-energy chest X-rays with a multi-scale approach, and also subtracted the bone image from the original image to obtain a virtual soft-tissue image using its vertical gradient, as previously introduced. @cite_1 proposed two end-to-end architectures, a convolutional auto-encoder network and a non-down-sampling convolutional network, that directly output bone-suppressed images based on a DXR training set. They combined the mean squared error (MSE) with the structural similarity index (SSIM), which addresses the sensitivity of the human visual system to changes in local structure @cite_0 .
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_0", "@cite_23", "@cite_2" ], "mid": [ "2567101557", "2761814415", "2133665775", "", "98520088" ], "abstract": [ "With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.", "Bone suppression in lung radiographs is an important task, as it improves the results on other related tasks, such as nodule detection or pathologies classification. In this paper, we propose two architectures that suppress bones in radiographs by treating them as noise. In the proposed methods, we create end-to-end learning frameworks that minimize noise in the images while maintaining sharpness and detail in them. 
Our results show that our proposed noise-cancellation scheme is robust and does not introduce artifacts into the images.", "Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http: www.cns.nyu.edu spl sim lcv ssim .", "", "Most lung nodules that are missed by radiologists as well as computer-aided detection (CADe) schemes overlap with ribs or clavicles in chest radiographs (CXRs). The purpose of this study was to separate bony structures such as ribs and clavicles from soft tissue in CXRs. To achieve this, we developed anatomically (location-) specific multiple massive-training artificial neural networks (MTANNs) which is a class of pixel-based machine learning combined with total variation (TV) minimization smoothing and a histogram-matching-based consistency processing technique. Multi-resolution MTANNs have previously been developed for rib suppression by use of input CXRs and the corresponding “teaching” images for training. 
Although they were able to suppress ribs, they did not suppress rib edges, ribs close to the lung wall, and the clavicles very well because the orientation, width, contrast, and density of bones are different from location to location and the capability of a single set of multi-resolution MTANNs is limited. To address this issue, the anatomically specific multiple MTANNs developed in this work were designed to separate bones from soft tissue in different anatomic segments of the lungs. Each of multiple anatomically specific MTANNs was trained with the corresponding anatomic segment in the teaching bone images. The output segmental images from the multiple MTANNs were merged to produce a whole bone image. Total variation minimization smoothing was applied to the bone image for reduction of noise while edges were preserved. This bone image was then subtracted from the original CXR to produce a soft-tissue image where the bones were separated out. In order to ensure the contrast and density in different segments were consistent, a histogram-matching technique was applied to the input segmental images. This new method was compared with the conventional MTANNs by using a database of 110 CXRs with pulmonary nodules. Our new anatomically (location-) specific MTANNs separated rib edges, ribs close to the lung wall, and the clavicles from soft tissue in CXRs to a substantially higher level than the conventional MTANNs did, while the visibility of lung nodules and vessels was maintained. Thus, our image-processing technique for bone-soft-tissue separation by means of our new anatomically specific multiple MTANNs would be potentially useful for radiologists as well as for CAD schemes in detection of lung nodules on CXRs." ] }
1811.02628
2900207395
Suppressing bones on chest X-rays, such as the ribs and clavicle, is often expected to improve pathology classification. These bones can interfere with a broad range of diagnostic tasks on pulmonary disease except for the musculoskeletal system. The current conventional method for acquiring bone-suppressed X-rays is dual-energy imaging, which captures two radiographs at a very short interval with different energy levels; however, the patient is exposed to radiation twice, and artifacts arise due to heartbeats between the two shots. In this paper, we introduce a deep generative model trained to predict bone-suppressed images on single-energy chest X-rays by analyzing a finite set of previously acquired dual-energy chest X-rays. Since only a relatively small amount of data is available, such an approach relies on a methodology that maximizes data utilization. Here we integrate the following two approaches. First, we use a conditional generative adversarial network that complements the traditional regression method minimizing the pairwise image difference. Second, we use Haar 2D wavelet decomposition to offer a perceptual guideline in frequency details that allows the model to converge quickly and efficiently. As a result, we achieve state-of-the-art performance on bone suppression as compared to the existing approaches with dual-energy chest X-rays.
Such a naive adoption of convolutional auto-encoder families often fails to capture sharpness, since the network misses the high-frequency details, the main cause of blurry images, in its encoding and decoding system. @cite_14 overcame this limitation and achieved high performance on the segmentation task with skip connections in the auto-encoding process. The segmentation task can be addressed by creating a mask with pixel-wise probabilities; in the bone suppression task, however, an intensity profile can potentially act as a bias. @cite_12 employed a very heuristic loss function using a conditional GAN framework for image translation, similarly to neural style transfer. The success of such approaches motivates us to research a more effective and easier method that not only converges on learning bone suppression from a finite set of DXRs, but also eliminates bias in the suppressed region. We combine the approach of suppressing noisy bones with image-to-image translation and purposely re-design the existing conditional adversarial network, in both its input system and improved techniques in the training process.
{ "cite_N": [ "@cite_14", "@cite_12" ], "mid": [ "2952232639", "2552465644" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. 
As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either." ] }
1811.02486
2900436521
Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and the capacity for language, require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given a few demonstration events, our method uses an inference-time optimization procedure to generate events involving similar concepts or identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, and temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting, and the resulting learned concepts can be reused across environments. Example videos of our results are available at sites.google.com/site/energyconceptmodels
The problem of learning and reasoning over concepts or other abstract representations has long been of interest in machine learning (see @cite_1 @cite_28 for a review). Approaches based on Bayesian reasoning have notably been applied to numerical concepts @cite_32 . The recent framework of @cite_31 focuses on visual concepts such as color and shape, where concepts are defined in terms of distributions over latent variables produced by a variational autoencoder. Instead of focusing solely on visual concepts from pixel input, our work explores the learning of concepts that involve complex interactions of multiple entities.
{ "cite_N": [ "@cite_28", "@cite_31", "@cite_1", "@cite_32" ], "mid": [ "", "2786897815", "2885825670", "2138138984" ], "abstract": [ "", "The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts.", "Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. 
Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn and how they learn it. Specifically, we argue that these machines should (1) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (2) ground learning in intuitive theories of physics and psychology to support and enrich the knowledge that is learned; and (3) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes toward these goals that can combine the strengths of recent neural network advances with more structured cognitive models.", "I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task oflearning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning." ] }
1811.02486
2900436521
Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and the capacity for language, require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given a few demonstration events, our method uses an inference-time optimization procedure to generate events involving similar concepts or identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, and temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting, and the resulting learned concepts can be reused across environments. Example videos of our results are available at sites.google.com/site/energyconceptmodels
Our method aims to learn concepts from demonstration events in the environment. A similar problem is tackled by inverse reinforcement learning (IRL) approaches, which aim to infer an underlying cost function that gives rise to the demonstrated events. Our method's concept energy functions are analogous to the cost functions, or negatives of the value functions, recovered by IRL approaches. Under this view, multiple concepts can easily be composed simply by summing their energy functions. Concepts are then enacted in our approach via energy minimization, mirroring the application of a forward reinforcement learning step in IRL methods. Maximum entropy IRL @cite_2 is a common formulation, and our method closely resembles recent instantiations of it @cite_10 @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_2" ], "mid": [ "1559990548", "2290104316", "2098774185" ], "abstract": [ "We present new algorithms for inverse optimal control (or inverse reinforcement learning, IRL) within the framework of linearly-solvable MDPs (LMDPs). Unlike most prior IRL algorithms which recover only the control policy of the expert, we recover the policy, the value function and the cost function. This is possible because here the cost and value functions are uniquely defined given the policy. Despite these special properties, we can handle a wide variety of problems such as the grid worlds popular in RL and most of the nonlinear problems arising in robotics and control engineering. Direct comparisons to prior IRL algorithms show that our new algorithms provide more information and are orders of magnitude faster. Indeed our fastest algorithm is the first inverse algorithm which does not require solving the forward problem; instead it performs unconstrained optimization of a convex and easy-to-compute log-likelihood. Our work also sheds light on the recent Maximum Entropy (MaxEntIRL) algorithm, which was defined in terms of density estimation and the corresponding forward problem was left unspecified. We show that MaxEntIRL is inverting an LMDP, using the less efficient of the algorithms derived here. Unlike all prior IRL algorithms which assume pre-existing features, we study feature adaptation and show that such adaptation is essential in continuous state spaces.", "Reinforcement learning can acquire complex behaviors from high-level specifications. However, defining a cost function that can be optimized effectively and encodes the correct task is challenging in practice. We explore how inverse optimal control (IOC) can be used to learn behaviors from demonstrations, with applications to torque control of high-dimensional robotic systems. Our method addresses two key challenges in inverse optimal control: first, the need for informative features and effective regularization to impose structure on the cost, and second, the difficulty of learning the cost function under unknown dynamics for high-dimensional continuous systems. To address the former challenge, we present an algorithm capable of learning arbitrary nonlinear cost functions, such as neural networks, without meticulous feature engineering. To address the latter challenge, we formulate an efficient sample-based approximation for MaxEnt IOC. We evaluate our method on a series of simulated tasks and real-world robotic manipulation problems, demonstrating substantial improvement over prior methods both in terms of task complexity and sample efficiency.", "Recent research has shown the benefit of framing problems of imitation learning as solutions to Markov Decision Problems. This approach reduces learning to the problem of recovering a utility function that makes the behavior induced by a near-optimal policy closely mimic demonstrated behavior. In this work, we develop a probabilistic approach based on the principle of maximum entropy. Our approach provides a well-defined, globally normalized distribution over decision sequences, while providing the same performance guarantees as existing methods. We develop our technique in the context of modeling real-world navigation and driving behaviors where collected data is inherently noisy and imperfect. Our probabilistic approach enables modeling of route preferences as well as a powerful new approach to inferring destinations and routes based on partial trajectories." ] }
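The composition-by-summation and enactment-by-minimization described in the related-work paragraph above can be illustrated with a toy sketch. The two energy functions, their 2-D event space, and all names below are our own illustrative choices, not the paper's:

```python
import numpy as np

def energy_near_origin(x):
    return np.sum(x ** 2)               # low energy near (0, 0)

def energy_on_line(x):
    return (x[0] - x[1]) ** 2           # low energy on the line x0 = x1

def compose(*energies):
    # composed concept = sum of the individual energy functions
    return lambda x: sum(e(x) for e in energies)

def minimize(energy, x0, lr=0.1, steps=200, eps=1e-5):
    # enact the concept(s) by gradient descent on the composed energy,
    # using a central-difference gradient to stay library-free
    x = x0.astype(float)
    for _ in range(steps):
        g = np.array([(energy(x + eps * d) - energy(x - eps * d)) / (2 * eps)
                      for d in np.eye(x.size)])
        x = x - lr * g
    return x

x = minimize(compose(energy_near_origin, energy_on_line), np.array([3.0, -1.0]))
```

Descending the gradient of the summed energies drives the event toward a configuration that satisfies both concepts at once, here the origin.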
1811.02486
2900436521
Many hallmarks of human intelligence, such as generalizing from limited experience, abstract reasoning and planning, analogical reasoning, creative problem solving, and capacity for language require the ability to consolidate experience into concepts, which act as basic building blocks of understanding and reasoning. We present a framework that defines a concept by an energy function over events in the environment, as well as an attention mask over entities participating in the event. Given few demonstration events, our method uses inference-time optimization procedure to generate events involving similar concepts or identify entities involved in the concept. We evaluate our framework on learning visual, quantitative, relational, temporal concepts from demonstration events in an unsupervised manner. Our approach is able to successfully generate and identify concepts in a few-shot setting and resulting learned concepts can be reused across environments. Example videos of our results are available at sites.google.com site energyconceptmodels
Our method relies on performing inference-time optimization processes for concept generation and identification, as well as for determining which concepts are involved in an event. Training is performed by taking the behavior of these inner optimization processes into account, similar to the meta-learning formulation of @cite_30 . Relatedly, iterative inner processes have been explored in the context of control @cite_35 @cite_25 @cite_20 @cite_3 @cite_4 and image generation @cite_22 .
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_4", "@cite_22", "@cite_3", "@cite_25", "@cite_20" ], "mid": [ "2951775809", "2258731934", "2766447205", "1850742715", "2613603362", "2567374473", "2738669288" ], "abstract": [ "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.", "We introduce the value iteration network (VIN): a fully differentiable neural network with a planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate VIN based policies on discrete and continuous path-planning domains, and on a natural-language based search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.", "Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.", "This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.", "Many machine learning systems are built to solve the hardest examples of a particular task, which often makes them large and expensive to run---especially with respect to the easier examples, which might require much less computation. For an agent with a limited computational budget, this \"one-size-fits-all\" approach may result in the agent wasting valuable computation on easy examples, while not spending enough on hard examples. Rather than learning a single, fixed policy for solving all instances of a task, we introduce a metacontroller which learns to optimize a sequence of \"imagined\" internal simulations over predictive models of the world in order to construct a more informed, and more economical, solution. The metacontroller component is a model-free reinforcement learning agent, which decides both how many iterations of the optimization procedure to run, as well as which model to consult on each iteration. The models (which we call \"experts\") can be state transition models, action-value functions, or any other mechanism that provides information useful for solving the task, and can be learned on-policy or off-policy in parallel with the metacontroller. When the metacontroller, controller, and experts were trained with \"interaction networks\" (, 2016) as expert models, our approach was able to solve a challenging decision-making problem under complex non-linear dynamics. The metacontroller learned to adapt the amount of computation it performed to the difficulty of the task, and learned how to choose which experts to consult by factoring in both their reliability and individual computational resource costs. This allowed the metacontroller to achieve a lower overall cost (task loss plus computational cost) than more traditional fixed policy approaches. These results demonstrate that our approach is a powerful framework for using...", "One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple \"imagined\" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.", "We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines." ] }
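Training "through" an inner optimization process, as in the meta-learning formulation of @cite_30 , can be sketched in a simplified first-order form. The toy task family (fitting a task-specific slope), the single inner step, and all hyperparameters are illustrative assumptions, not the cited method verbatim:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, a, x):
    # gradient in w of the mean squared error between w*x and a*x
    return np.mean(2.0 * (w - a) * x * x)

w, inner_lr, outer_lr = 0.0, 0.05, 0.1
for _ in range(500):
    a = rng.uniform(-2, 2)                          # sample a task (true slope)
    x = rng.normal(size=20)
    w_adapted = w - inner_lr * loss_grad(w, a, x)   # inner adaptation step
    # first-order meta-update: the outer gradient is evaluated at the
    # adapted parameters (a cheap approximation to full second-order MAML)
    w = w - outer_lr * loss_grad(w_adapted, a, x)
```

The meta-parameter `w` is updated so that a single inner gradient step makes it perform well on each sampled task, which is the same "optimize through the inner process" pattern used here for concept inference.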
1811.02563
2963301765
This paper introduces a method of structure inspection using mixed-reality headsets to reduce the human effort in reporting accurate inspection information such as fault locations in 3D coordinates. Prior to every inspection, the headset needs to be localized. While external pose estimation and fiducial marker based localization would require setup, maintenance, and manual calibration; marker-free self-localization can be achieved using the onboard depth sensor and camera. However, due to limited depth sensor range of portable mixed-reality headsets like Microsoft HoloLens, localization based on simple point cloud registration (sPCR) would require extensive mapping of the environment. Also, localization based on camera image would face same issues as stereo ambiguities and hence depends on viewpoint. We thus introduce a novel approach to Joint Point Cloud and Image-based Localization (JPIL) for mixed-reality headsets that uses visual cues and headset orientation to register small, partially overlapped point clouds and save significant manual labor and time in environment mapping. Our empirical results compared to sPCR show average 10 fold reduction of required overlap surface area that could potentially save on average 20 minutes per inspection. JPIL is not only restricted to inspection tasks but also can be essential in enabling intuitive human-robot interaction for spatial mapping and scene understanding in conjunction with other agents like autonomous robotic systems that are increasingly being deployed in outdoor environments for applications like structural inspection.
While we have covered the related works in the above text, here we emphasize a few other point cloud registration methods that use visual cues. Dold @cite_6 uses planar patches from image data to refine a point cloud registration, whereas Men et al. @cite_11 use hue as a fourth dimension (x, y, z, hue) and search for correspondences in 4D space. Similarly, the authors of @cite_2 @cite_3 use 2D image features in a tightly coupled framework to direct point cloud registration. These require accurate calibration between the Lidar scans and camera images, and work well for dominantly planar structures without stereo ambiguities. When accounting for errors in sensor calibration, stereo ambiguities, and complex 3D environments, local image features tend to fail, decreasing robustness due to their tightly coupled nature.
{ "cite_N": [ "@cite_3", "@cite_2", "@cite_6", "@cite_11" ], "mid": [ "2142008986", "2145720667", "2922018968", "2159029220" ], "abstract": [ "Building 3D models using terrestrial laser scanner (TLS) data is currently an active area of research, especially in the fields of heritage recording and site documentation. Multiple TLS scans are often required to generate an occlusion-free 3D model in situations where the object to be recorded has a complex geometry. The first task associated with building 3D models from laser scanner data in such cases is to transform the data from the scanner’s local coordinate system into a uniform Cartesian reference datum, which requires sufficient overlap between the scans. Many TLS systems are now supplied with an SLR-type digital camera, such that the scene to be scanned can also be photographed. The provision of overlapping imagery offers an alternative, photogrammetric means to achieve point cloud registration between adjacent scans. The images from the digital camera mounted on top of the laser scanner are used to first relatively orient the network of images, and then to transfer this orientation to the TLS stations to provide exterior orientation. The proposed approach, called the IBR method for Image-Based Registration, offers a one-step registration of the point clouds from each scanner position. In the case of multiple scans, exterior orientation is simultaneously determined for all TLS stations by bundle adjustment. This paper outlines the IBR method and discusses test results obtained with the approach. It will be shown that the photogrammetric orientation process for TLS point cloud registration is efficient and accurate, and offers a viable alternative to other approaches, such as the well-known iterative closest point algorithm.", "In this letter, a novel approach that utilizes the spectrum information (i.e., images) provided in a modern light detection and ranging (LiDAR) sensor is proposed for the registration of multistation LiDAR data sets. First, the conjugate points in the images collected at varied LiDAR stations are extracted through the speedup robust feature technique. Then, by applying the image-object space mapping technique, the 3-D coordinates of the conjugate image points can be obtained. Those identified 3-D conjugate points are then fed into a registration model so that the transformation parameters can be immediately solved using the efficient noniterative solution to linear transformations technique. Based on numerical results from a case study, it has been demonstrated that, by implementing the proposed approach, a fully automatic and reliable registration of multistation LiDAR point clouds can be achieved without the need for any human intervention.", "", "This paper presents methodologies to accelerate the registration of 3D point cloud segments by using hue data from the associated imagery. The proposed variant of the Iterative Closest Point (ICP) algorithm combines both normalized point range data and weighted hue value calculated from RGB data of an image registered 3D point cloud. A k-d tree based nearest neighbor search is used to associated common points in x, y, z, hue 4D space. The unknown rigid translation and rotation matrix required for registration is iteratively solved using Singular Value Decomposition (SVD) method. A mobile robot mounted scanner was used to generate color point cloud segments over a large area. The 4D ICP registration has been compared with typical 3D ICP and numerical results on the generated map segments shows that the 4D method resolves ambiguity in registration and converges faster than the 3D ICP" ] }
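A minimal sketch of the hue-augmented registration idea of @cite_11 : correspondences are searched in a 4-D (x, y, z, weighted hue) space, and the rigid transform is then solved in closed form with SVD (Kabsch). The brute-force neighbour search and the hue weight below are illustrative simplifications; the cited work uses a k-d tree and iterates this step:

```python
import numpy as np

def rigid_from_correspondences(src, dst):
    # Kabsch: closed-form least-squares R, t such that dst ~= src @ R.T + t
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp_step(src_xyz, src_hue, dst_xyz, dst_hue, hue_weight=0.5):
    # append a weighted hue channel and match nearest neighbours in 4-D
    src4 = np.hstack([src_xyz, hue_weight * src_hue[:, None]])
    dst4 = np.hstack([dst_xyz, hue_weight * dst_hue[:, None]])
    nn = np.argmin(((src4[:, None, :] - dst4[None, :, :]) ** 2).sum(-1), axis=1)
    return rigid_from_correspondences(src_xyz, dst_xyz[nn])
```

With distinctive hues, the fourth coordinate disambiguates correspondences that would be ambiguous on geometry alone, which is the source of the faster convergence reported for the 4D variant.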
1811.02385
2900106541
The ability to correctly classify and retrieve apparel images has a variety of applications important to e-commerce, online advertising and internet search. In this work, we propose a robust framework for fine-grained apparel classification, in-shop and cross-domain retrieval which eliminates the requirement of rich annotations like bounding boxes and human-joints or clothing landmarks, and training of bounding box key-landmark detector for the same. Factors such as subtle appearance differences, variations in human poses, different shooting angles, apparel deformations, and self-occlusion add to the challenges in classification and retrieval of apparel items. Cross-domain retrieval is even harder due to the presence of large variation between online shopping images, usually taken in ideal lighting, pose, positive angle and clean background as compared with street photos captured by users in complicated conditions with poor lighting and cluttered scenes. Our framework uses compact bilinear CNN with tensor sketch algorithm to generate embeddings that capture local pairwise feature interactions in a translationally invariant manner. For apparel classification, we pass the feature embeddings through a softmax classifier, while, the in-shop and cross-domain retrieval pipelines use a triplet-loss based optimization approach, such that squared Euclidean distance between embeddings measures the dissimilarity between the images. Unlike previous works that relied on bounding box, key clothing landmarks or human joint detectors to assist the final deep classifier, proposed framework can be trained directly on the provided category labels or generated triplets for triplet loss optimization. Lastly, Experimental results on the DeepFashion fine-grained categorization, and in-shop and consumer-to-shop retrieval datasets provide a comparative analysis with previous work performed in the domain.
@cite_19 proposed FashionNet, which tries to simultaneously model local attribute-level, general category-level, and clothing image similarity-level representations, with a dependence on clothing attributes and landmarks. Apart from this, FashionNet also requires bounding box annotations, around the clothing item or around the human model wearing the apparel, for learning classifiers. Obtaining these massive attribute annotations along with clothing landmarks for apparel items is a tedious and costly task. It is not always possible for online marketplaces, which maintain huge catalogues of clothing items, to create such hand-crafted annotated datasets.
{ "cite_N": [ "@cite_19" ], "mid": [ "2471768434" ], "abstract": [ "Recent advances in clothes recognition have been driven by the construction of clothes datasets. Existing datasets are limited in the amount of annotations and are difficult to cope with the various challenges in real-world applications. In this work, we introduce DeepFashion1, a large-scale clothes dataset with comprehensive annotations. It contains over 800,000 images, which are richly annotated with massive attributes, clothing landmarks, and correspondence of images taken under different scenarios including store, street snapshot, and consumer. Such rich annotations enable the development of powerful algorithms in clothes recognition and facilitating future researches. To demonstrate the advantages of DeepFashion, we propose a new deep model, namely FashionNet, which learns clothing features by jointly predicting clothing attributes and landmarks. The estimated landmarks are then employed to pool or gate the learned features. It is optimized in an iterative manner. Extensive experiments demonstrate the effectiveness of FashionNet and the usefulness of DeepFashion." ] }
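The compact bilinear embedding with the Tensor Sketch algorithm mentioned in the abstract can be sketched as follows: the second-order statistic x ⊗ x is approximated by the circular convolution of two independent count sketches of x, computed in the frequency domain. The feature and sketch dimensions and the random hashes below are illustrative:

```python
import numpy as np

def count_sketch(x, h, s, dim):
    # hash each coordinate of x into one of `dim` buckets with a random sign
    out = np.zeros(dim)
    np.add.at(out, h, s * x)
    return out

def tensor_sketch(x, dim, rng):
    h1, h2 = rng.integers(0, dim, size=(2, x.size))
    s1, s2 = rng.choice([-1.0, 1.0], size=(2, x.size))
    # circular convolution of the two count sketches, done via the FFT
    f1 = np.fft.rfft(count_sketch(x, h1, s1, dim))
    f2 = np.fft.rfft(count_sketch(x, h2, s2, dim))
    return np.fft.irfft(f1 * f2, n=dim)

rng = np.random.default_rng(0)
z = tensor_sketch(rng.normal(size=512), 128, rng)   # 512-d feature -> 128-d sketch
```

In the pipeline described above, this sketch replaces the full (and very high-dimensional) bilinear pooling of CNN features while approximately preserving the pairwise feature interactions; in practice the hash functions are fixed once and shared across all images.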
1811.02385
2900106541
The ability to correctly classify and retrieve apparel images has a variety of applications important to e-commerce, online advertising and internet search. In this work, we propose a robust framework for fine-grained apparel classification, in-shop and cross-domain retrieval which eliminates the requirement of rich annotations like bounding boxes and human-joints or clothing landmarks, and training of bounding box key-landmark detector for the same. Factors such as subtle appearance differences, variations in human poses, different shooting angles, apparel deformations, and self-occlusion add to the challenges in classification and retrieval of apparel items. Cross-domain retrieval is even harder due to the presence of large variation between online shopping images, usually taken in ideal lighting, pose, positive angle and clean background as compared with street photos captured by users in complicated conditions with poor lighting and cluttered scenes. Our framework uses compact bilinear CNN with tensor sketch algorithm to generate embeddings that capture local pairwise feature interactions in a translationally invariant manner. For apparel classification, we pass the feature embeddings through a softmax classifier, while, the in-shop and cross-domain retrieval pipelines use a triplet-loss based optimization approach, such that squared Euclidean distance between embeddings measures the dissimilarity between the images. Unlike previous works that relied on bounding box, key clothing landmarks or human joint detectors to assist the final deep classifier, proposed framework can be trained directly on the provided category labels or generated triplets for triplet loss optimization. Lastly, Experimental results on the DeepFashion fine-grained categorization, and in-shop and consumer-to-shop retrieval datasets provide a comparative analysis with previous work performed in the domain.
@cite_1 construct a deep model capable of recognizing fine-grained clothing attributes on images in the wild using multi-task curriculum transfer learning. They collected a large clothing dataset, with meta-labels serving as attributes, from different online shopping websites. They learned a pre-processor to detect clothing images using Faster R-CNN, employing an object detector that was trained on PASCAL VOC2007 and then fine-tuned on an assembled bounding-box-annotated clothing dataset consisting of 8,000 street and shop photos. Their model was then trained using the obtained bounding boxes and rich annotations.
{ "cite_N": [ "@cite_1" ], "mid": [ "2535561795" ], "abstract": [ "Recognising detailed clothing characteristics (finegrained attributes) in unconstrained images of people inthe-wild is a challenging task for computer vision, especially when there is only limited training data from the wild whilst most data available for model learning are captured in well-controlled environments using fashion models (well lit, no background clutter, frontal view, high-resolution). In this work, we develop a deep learning framework capable of model transfer learning from well-controlled shop clothing images collected from web retailers to in-the-wild images from the street. Specifically, we formulate a novel Multi-Task Curriculum Transfer (MTCT) deep learning method to explore multiple sources of different types of web annotations with multi-labelled fine-grained attributes. Our multi-task loss function is designed to extract more discriminative representations in training by jointly learning all attributes, and our curriculum strategy exploits the staged easy-to-hard transfer learning motivated by cognitive studies. We demonstrate the advantages of the MTCT model over the state-of-the-art methods on the X-Domain benchmark, a large scale clothing attribute dataset. Moreover, we show that the MTCT model has a notable advantage over contemporary models when the training data size is small." ] }
1811.02385
2900106541
The ability to correctly classify and retrieve apparel images has a variety of applications important to e-commerce, online advertising and internet search. In this work, we propose a robust framework for fine-grained apparel classification, in-shop and cross-domain retrieval which eliminates the requirement of rich annotations like bounding boxes and human-joints or clothing landmarks, and training of bounding box key-landmark detector for the same. Factors such as subtle appearance differences, variations in human poses, different shooting angles, apparel deformations, and self-occlusion add to the challenges in classification and retrieval of apparel items. Cross-domain retrieval is even harder due to the presence of large variation between online shopping images, usually taken in ideal lighting, pose, positive angle and clean background as compared with street photos captured by users in complicated conditions with poor lighting and cluttered scenes. Our framework uses compact bilinear CNN with tensor sketch algorithm to generate embeddings that capture local pairwise feature interactions in a translationally invariant manner. For apparel classification, we pass the feature embeddings through a softmax classifier, while, the in-shop and cross-domain retrieval pipelines use a triplet-loss based optimization approach, such that squared Euclidean distance between embeddings measures the dissimilarity between the images. Unlike previous works that relied on bounding box, key clothing landmarks or human joint detectors to assist the final deep classifier, proposed framework can be trained directly on the provided category labels or generated triplets for triplet loss optimization. Lastly, Experimental results on the DeepFashion fine-grained categorization, and in-shop and consumer-to-shop retrieval datasets provide a comparative analysis with previous work performed in the domain.
@cite_18 utilize an AlexNet @cite_0 with activations from the fully connected layer FC6 to identify exact matching clothes from the street to the shop domain. They collected street and shop photos, and obtained bounding box annotations using the Mechanical Turk service. @cite_7 proposed a dual attribute-aware ranking network, which jointly optimizes an attribute classification loss and an image triplet quantization loss for cross-domain image retrieval. They utilize a two-stream CNN to handle in-shop and street images respectively, with a dependence on bounding boxes generated using Faster R-CNN @cite_6 . They curated 381,975 online-offline image pairs of different categories from customer review pages, then manually pruned noisy labels, merged similar labels based on human perception using crowd-sourced annotators, and obtained fine-grained clothing attributes using image descriptors.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_6", "@cite_7" ], "mid": [ "", "2200092826", "2511502099", "2950940417" ], "abstract": [ "", "In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations.", "Visual fashion analysis has attracted many attentions in the recent years. Previous work represented clothing regions by either bounding boxes or human joints. This work presents fashion landmark detection or fashion alignment, which is to predict the positions of functional key points defined on the fashion items, such as the corners of neckline, hemline, and cuff. To encourage future studies, we introduce a fashion landmark dataset (The dataset is available at http: mmlab.ie.cuhk.edu.hk projects DeepFashion LandmarkDetection.html.) with over 120K images, where each image is labeled with eight landmarks. With this dataset, we study fashion alignment by cascading multiple convolutional neural networks in three stages. These stages gradually improve the accuracies of landmark predictions. Extensive experiments demonstrate the effectiveness of the proposed method, as well as its generalization ability to pose estimation. Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative representation to understand fashion images.", "We address the problem of cross-domain image retrieval, considering the following practical application: given a user photo depicting a clothing image, our goal is to retrieve the same or attribute-similar clothing items from online shopping stores. This is a challenging problem due to the large discrepancy between online shopping images, usually taken in ideal lighting pose background conditions, and user photos captured in uncontrolled conditions. To address this problem, we propose a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning. More specifically, DARN consists of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning. We show that this attribute-guided learning is a key factor for retrieval accuracy improvement. In addition, to further align with the nature of the retrieval problem, we impose a triplet visual similarity constraint for learning to rank across the two sub-networks. Another contribution of our work is a large-scale dataset which makes the network learning feasible. We exploit customer review websites to crawl a large set of online shopping images and corresponding offline user photos with fine-grained clothing attributes, i.e., around 450,000 online shopping images and about 90,000 exact offline counterpart images of those online ones. All these images are collected from real-world consumer websites reflecting the diversity of the data modality, which makes this dataset unique and rare in the academic community. We extensively evaluate the retrieval performance of networks in different configurations. The top-20 retrieval accuracy is doubled when using the proposed DARN other than the current popular solution using pre-trained CNN features only (0.570 vs. 0.268)." ] }
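The triplet objective with squared Euclidean distances used in the retrieval pipelines above can be written compactly. The margin value and the toy embeddings are illustrative stand-ins for the CNN features:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # squared Euclidean distances between embedding rows
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    # hinge: positives should sit closer than negatives by at least `margin`
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

a = np.array([[0.0, 0.0]])
p = np.array([[0.0, 0.1]])
n = np.array([[1.0, 0.0]])
loss = triplet_loss(a, p, n)   # negative is far enough: hinge is inactive
```

Minimizing this loss pulls same-item pairs (e.g., a street photo and its shop counterpart) together while pushing non-matching items apart by at least the margin.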
1811.02447
2953202221
In the context of deep learning, this article presents an original deep network, namely CentralNet, for the fusion of information coming from different sensors. This approach is designed to efficiently and automatically balance the trade-off between early and late fusion (i.e. between the fusion of low-level vs high-level information). More specifically, at each level of abstraction-the different levels of deep networks-uni-modal representations of the data are fed to a central neural network which combines them into a common embedding. In addition, a multi-objective regularization is also introduced, helping to both optimize the central network and the unimodal networks. Experiments on four multimodal datasets not only show state-of-the-art performance, but also demonstrate that CentralNet can actually choose the best possible fusion strategy for a given problem.
The proposed method borrows from both visions: it is a multilayer framework and uses a constrained multi-objective loss. It builds on existing deep convolutional neural networks designed to process each modality independently. We suggest connecting these networks, at different levels, using an additional central network dedicated to projecting the features coming from the different modalities into a common space. In addition, the global loss allows back-propagating global constraints onto each modality, coordinating their representations. The proposed approach aims at automatically identifying an optimal fusion strategy for a given task. It is multi-objective in the sense that it simultaneously tries to minimize the per-modality losses as well as the global loss defined on the joint space. This article is an extension of @cite_18 .
{ "cite_N": [ "@cite_18" ], "mid": [ "2953039737" ], "abstract": [ "This paper proposes a novel multimodal fusion approach, aiming to produce best possible decisions by integrating information coming from multiple media. While most of the past multimodal approaches either work by projecting the features of different modalities into the same space, or by coordinating the representations of each modality through the use of constraints, our approach borrows from both visions. More specifically, assuming each modality can be processed by a separated deep convolutional network, allowing to take decisions independently from each modality, we introduce a central network linking the modality specific networks. This central network not only provides a common feature embedding but also regularizes the modality specific networks through the use of multi-task learning. The proposed approach is validated on 4 different computer vision tasks on which it consistently improves the accuracy of existing multimodal fusion approaches." ] }
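A toy forward pass of the central-network idea described above: at each level, a central stream combines its own previous state with the unimodal hidden states through fusion weights, and the training objective sums the per-modality losses with the central loss. The shapes, random weights, and two-layer depth are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                                     # toy embedding size
W_a = [rng.normal(size=(d, d)) * 0.1 for _ in range(2)]   # modality-A layers
W_b = [rng.normal(size=(d, d)) * 0.1 for _ in range(2)]   # modality-B layers
alpha = [np.ones(3) / 3.0 for _ in range(2)]              # learnable fusion weights

def forward(x_a, x_b):
    h_a, h_b, h_c = x_a, x_b, np.zeros(d)
    for l in range(2):
        h_a = np.tanh(W_a[l] @ h_a)
        h_b = np.tanh(W_b[l] @ h_b)
        # central stream: weighted sum of its own state and both modalities
        h_c = alpha[l][0] * h_c + alpha[l][1] * h_a + alpha[l][2] * h_b
    return h_a, h_b, h_c

def multi_objective_loss(h_a, h_b, h_c, y):
    # per-modality losses regularize the unimodal networks;
    # the central loss drives the fused representation
    per_modality = sum(np.sum((h - y) ** 2) for h in (h_a, h_b))
    return per_modality + np.sum((h_c - y) ** 2)
```

Because the fusion weights are trained, the central stream can learn to lean on early layers (early fusion), late layers (late fusion), or any mixture, which is the trade-off the paragraph above describes.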
1811.02394
2900428386
We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization. Given any document-summary pair, we estimate a salience score, which is modeled using an attention-based deep neural network, to represent the salience degree of the summary for yielding the document. We devise a contrastive training strategy to learn the salience estimation network, and then use the learned salience score as a guide and iteratively extract the most salient sentences from the document as our generated summary. In experiments, our model not only achieves state-of-the-art ROUGE scores on CNN Daily Mail dataset, but also shows strong robustness in the out-of-domain test on DUC2007 test set. Moreover, our model reaches a ROUGE-1 F-1 score of 39.41 on CNN Daily Mail test set with merely @math training set, demonstrating a tremendous data efficiency.
Traditional summarization methods usually depend on manual rules and expert knowledge, such as the expanding rules of noisy-channel models @cite_14 @cite_28 , objectives and constraints of Integer Linear Programming (ILP) models @cite_5 @cite_2 @cite_19 , human-engineered features of some sequence classification methods @cite_13 , and so on.
{ "cite_N": [ "@cite_14", "@cite_28", "@cite_19", "@cite_2", "@cite_5", "@cite_13" ], "mid": [ "2950781913", "2133182690", "2161611336", "2250968833", "1959120443", "" ], "abstract": [ "We present a document compression system that uses a hierarchical noisy-channel model of text production. Our compression system first automatically derives the syntactic structure of each sentence and the overall discourse structure of the text given as input. The system then uses a statistical hierarchical model of text production in order to drop non-important syntactic and discourse constituents so as to generate coherent, grammatical document compressions of arbitrary length. The system outperforms both a baseline and a sentence-based compression system that operates by simplifying sequentially all sentences in a text. Our results support the claim that discourse knowledge plays an important role in document summarization.", "When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline.", "We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun verb phrases. Different from existing abstraction-based approaches, our method first constructs a pool of concepts and facts represented by phrases from the input documents. Then new sentences are generated by selecting and merging informative phrases to maximize the salience of phrases and meanwhile satisfy the sentence construction constraints. We employ integer linear optimization for conducting phrase selection and merging simultaneously in order to achieve the global optimal solution for a summary. Experimental results on the benchmark data set TAC 2011 show that our framework outperforms the state-of-the-art models under automated pyramid evaluation metric, and achieves reasonably well results on manual linguistic quality evaluation.", "We present an approach for extractive single-document summarization. Our approach is based on a weighted graphical representation of documents obtained by topic modeling. We optimize importance, coherence and non-redundancy simultaneously using ILP. We compare ROUGE scores of our system with state-of-the-art results on scientific articles from PLOS Medicine and on DUC 2002 data. Human judges evaluate the coherence of summaries generated by our system in comparision to two baselines. Our approach obtains competitive performance.", "Multi-document summarization involves many aspects of content selection and surface realization. The summaries must be informative, succinct, grammatical, and obey stylistic writing conventions. We present a method where such individual aspects are learned separately from data (without any hand-engineering) but optimized jointly using an integer linear programme. The ILP framework allows us to combine the decisions of the expert learners and to select and rewrite source content through a mixture of objective setting, soft and hard constraints. Experimental results on the TAC-08 data set show that our model achieves state-of-the-art performance using ROUGE and significantly improves the informativeness of the summaries.", "" ] }
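The ILP-based extractors cited above share a common objective: cover the most salient content under a length budget. The following greedy sketch approximates that coverage objective; it is a hypothetical illustration (the function name, concept weights, and sentence data are invented), not the exact formulation of any cited system.

```python
# Hypothetical greedy approximation of the coverage objective used by
# ILP-style extractive summarizers: repeatedly pick the sentence with the
# largest weight of newly covered concepts that still fits the length budget.

def greedy_extract(sentences, concept_weight, budget):
    """sentences: list of (length, set_of_concepts) tuples."""
    chosen, covered, used = [], set(), 0
    while True:
        best, best_gain = None, 0.0
        for i, (length, concepts) in enumerate(sentences):
            if i in chosen or used + length > budget:
                continue
            # Marginal gain: weight of concepts not yet covered.
            gain = sum(concept_weight[c] for c in concepts - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:          # no sentence adds value or fits
            return chosen
        chosen.append(best)
        used += sentences[best][0]
        covered |= sentences[best][1]

# Toy data: three sentences with lengths and the concepts they express.
weights = {"a": 3.0, "b": 2.0, "c": 1.0}
sents = [(5, {"a"}), (4, {"b", "c"}), (6, {"a", "b"})]
print(greedy_extract(sents, weights, budget=9))  # [2]
```

Sentence 2 alone covers the two heaviest concepts within the budget, after which no remaining sentence both fits and adds new weight.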
1811.02394
2900428386
We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization. Given any document-summary pair, we estimate a salience score, which is modeled using an attention-based deep neural network, to represent the salience degree of the summary for yielding the document. We devise a contrastive training strategy to learn the salience estimation network, and then use the learned salience score as a guide and iteratively extract the most salient sentences from the document as our generated summary. In experiments, our model not only achieves state-of-the-art ROUGE scores on CNN Daily Mail dataset, but also shows strong robustness in the out-of-domain test on DUC2007 test set. Moreover, our model reaches a ROUGE-1 F-1 score of 39.41 on CNN Daily Mail test set with merely @math training set, demonstrating a tremendous data efficiency.
A vast majority of abstractive summarizers are built on the encoder-decoder structure. @cite_3 incorporates a pointing mechanism into the encoder-decoder, such that the model can directly copy words from the source text while decoding summaries. @cite_18 combines the standard cross-entropy loss with RL objectives to maximize the ROUGE metric alongside sequence-prediction training. @cite_12 proposes a fast summarization model that first selects salient sentences and then rewrites them abstractively to generate a concise overall summary. This hybrid approach jointly learns an extractor and a rewriter, making it capable of both extractive and abstractive summarization. @cite_6 also combines extraction and abstraction, but implements it by unifying sentence-level and word-level attention and guiding the two with an inconsistency loss.
{ "cite_N": [ "@cite_18", "@cite_12", "@cite_3", "@cite_6" ], "mid": [ "2612675303", "2897139265", "2606974598", "2896255318" ], "abstract": [ "Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit \"exposure bias\" - they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable. We evaluate this model on the CNN Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.", "Inspired by how humans summarize long documents, we propose an accurate and fast summarization model that first selects salient sentences and then rewrites them abstractively (i.e., compresses and paraphrases) to generate a concise overall summary. We use a novel sentence-level policy gradient method to bridge the non-differentiable computation between these two neural networks in a hierarchical way, while maintaining language fluency. Empirically, we achieve the new state-of-the-art on all metrics (including human evaluation) on the CNN Daily Mail dataset, as well as significantly higher abstractiveness scores. Moreover, by first operating at the sentence-level and then the word-level, we enable parallel decoding of our neural generative model that results in substantially faster (10-20x) inference speed as well as 4x faster training convergence than previous long-paragraph encoder-decoder models. We also demonstrate the generalization of our model on the test-only DUC-2002 dataset, where we achieve higher scores than a state-of-the-art model.", "", "We propose a unified model combining the strength of extractive and abstractive summarization. On the one hand, a simple extractive model can obtain sentence-level attention with high ROUGE scores but less readable. On the other hand, a more complicated abstractive model can obtain word-level dynamic attention to generate a more readable paragraph. In our model, sentence-level attention is used to modulate the word-level attention such that words in less attended sentences are less likely to be generated. Moreover, a novel inconsistency loss function is introduced to penalize the inconsistency between two levels of attentions. By end-to-end training our model with the inconsistency loss and original losses of extractive and abstractive models, we achieve state-of-the-art ROUGE scores while being the most informative and readable summarization on the CNN Daily Mail dataset in a solid human evaluation." ] }
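The pointing mechanism mentioned above mixes a vocabulary (generation) distribution with a copy distribution over source tokens via a generation gate. Below is a toy sketch of that mixing step, with invented probabilities and a hypothetical out-of-vocabulary token; real pointer-style decoders compute the gate and attention from hidden states.

```python
# Toy sketch of a pointer-style copy mechanism: the final word distribution
# is p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on source copies
# of w. All probabilities below are illustrative, not model outputs.

def mix_distributions(p_gen, gen_probs, attn, src_tokens, vocab):
    """Combine generation and copy distributions into one distribution."""
    final = {w: p_gen * p for w, p in zip(vocab, gen_probs)}
    for a, tok in zip(attn, src_tokens):
        # Copy path: attention mass flows to the source token, even OOV ones.
        final[tok] = final.get(tok, 0.0) + (1 - p_gen) * a
    return final

vocab = ["the", "cat", "sat"]
final = mix_distributions(
    p_gen=0.7,
    gen_probs=[0.5, 0.3, 0.2],      # decoder softmax over the fixed vocab
    attn=[0.9, 0.1],                # attention over the two source tokens
    src_tokens=["cat", "obama"],    # "obama" is out-of-vocabulary
    vocab=vocab,
)
print(round(final["cat"], 2), round(final["obama"], 2))  # 0.48 0.03
```

Note how the out-of-vocabulary source token receives probability mass only through the copy path, which is what lets such models reproduce rare words from the source text.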
1811.02134
2899947105
This work explores better adaptation methods to low-resource languages using an external language model (LM) under the framework of transfer learning. We first build a language-independent ASR system in a unified sequence-to-sequence (S2S) architecture with a shared vocabulary among all languages. During adaptation, we perform LM fusion transfer, where an external LM is integrated into the decoder network of the attention-based S2S model in the whole adaptation stage, to effectively incorporate linguistic context of the target language. We also investigate various seed models for transfer learning. Experimental evaluations using the IARPA BABEL data set show that LM fusion transfer improves performances on all target five languages compared with simple transfer learning when the external text data is available. Our final system drastically reduces the performance gap from the hybrid systems.
The traditional usage of unpaired text data in the S2S framework falls into four approaches: LM integration, pre-training, multi-task learning (MTL), and data augmentation. In the LM integration approach, there are three methods: , , and , as described in Section . They differ in when the external LM is integrated and in whether additional gating parameters are introduced. We depict these fusion methods in Fig. . In @cite_9 , these fusion methods are compared on middle-sized English conversational speech ( @math 300h) and large-scale Google voice search data. However, none of the previous works investigated their effect on other languages, especially low-resource languages, which is the focus of this paper. In @cite_36 , the authors show the effectiveness of cold fusion in a cross-domain scenario. Since the external LM is more likely to change frequently than the acoustic model, it is time-consuming to train a new S2S model with LM integration from scratch every time the external LM is updated. In this work, we instead conduct LM fusion during adaptation, which we regard as a more realistic scenario.
{ "cite_N": [ "@cite_36", "@cite_9" ], "mid": [ "2748679025", "2883416004" ], "abstract": [ "Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks which involve generating natural language sentences such as machine translation, image captioning and speech recognition. Performance has further been improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10 of the labeled training data.", "Attention-based recurrent neural encoder-decoder models present an elegant solution to the automatic speech recognition problem. This approach folds the acoustic model, pronunciation model, and language model into a single network and requires only a parallel corpus of speech and text for training. However, unlike in conventional approaches that combine separate acoustic and language models, it is not clear how to use additional (unpaired) text. While there has been previous work on methods addressing this problem, a thorough comparison among methods is still lacking. In this paper, we compare a suite of past methods and some of our own proposed methods for using unpaired text data to improve encoder-decoder models. For evaluation, we use the medium-sized Switchboard data set and the large-scale Google voice search and dictation data sets. Our results confirm the benefits of using unpaired text across a range of methods and data sets. Surprisingly, for first-pass decoding, the rather simple approach of shallow fusion performs best across data sets. However, for Google data sets we find that cold fusion has a lower oracle error rate and outperforms other approaches after second-pass rescoring on the Google voice search data set." ] }
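Among the fusion methods compared in @cite_9 , shallow fusion is the simplest: at decoding time the S2S score is interpolated log-linearly with the external LM score. A minimal sketch with toy probabilities and an illustrative interpolation weight (real decoders apply this per beam hypothesis over subword units):

```python
import math

# Shallow fusion: score(y) = log p_s2s(y|x) + lam * log p_lm(y).
# The candidate words, probabilities, and weight below are all toy values.

def shallow_fusion_score(p_s2s, p_lm, lam):
    return {y: math.log(p_s2s[y]) + lam * math.log(p_lm[y]) for y in p_s2s}

p_s2s = {"hello": 0.6, "hallo": 0.4}   # S2S model slightly prefers "hello"
p_lm  = {"hello": 0.9, "hallo": 0.1}   # target-language LM strongly agrees
scores = shallow_fusion_score(p_s2s, p_lm, lam=0.3)
best = max(scores, key=scores.get)
print(best)  # hello
```

Because fusion happens only at scoring time, the external LM can be swapped without retraining the S2S model, which is exactly the property that deep and cold fusion trade away for tighter integration.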
1811.02134
2899947105
This work explores better adaptation methods to low-resource languages using an external language model (LM) under the framework of transfer learning. We first build a language-independent ASR system in a unified sequence-to-sequence (S2S) architecture with a shared vocabulary among all languages. During adaptation, we perform LM fusion transfer, where an external LM is integrated into the decoder network of the attention-based S2S model in the whole adaptation stage, to effectively incorporate linguistic context of the target language. We also investigate various seed models for transfer learning. Experimental evaluations using the IARPA BABEL data set show that LM fusion transfer improves performances on all target five languages compared with simple transfer learning when the external text data is available. Our final system drastically reduces the performance gap from the hybrid systems.
Another usage of the external LM is to initialize the lower layers of the decoder network with a pre-trained LM @cite_9 @cite_20 . However, we transfer almost all parameters of a multilingual S2S model (including the decoder network), and thus we do not explore this direction. Apart from the external LM, an MTL approach with an LM objective is investigated in @cite_9 @cite_27 . Although the MTL approach does not require any additional parameters, it yields only minor gains compared to LM fusion methods @cite_9 .
{ "cite_N": [ "@cite_27", "@cite_9", "@cite_20" ], "mid": [ "2758310181", "2883416004", "2951991713" ], "abstract": [ "", "Attention-based recurrent neural encoder-decoder models present an elegant solution to the automatic speech recognition problem. This approach folds the acoustic model, pronunciation model, and language model into a single network and requires only a parallel corpus of speech and text for training. However, unlike in conventional approaches that combine separate acoustic and language models, it is not clear how to use additional (unpaired) text. While there has been previous work on methods addressing this problem, a thorough comparison among methods is still lacking. In this paper, we compare a suite of past methods and some of our own proposed methods for using unpaired text data to improve encoder-decoder models. For evaluation, we use the medium-sized Switchboard data set and the large-scale Google voice search and dictation data sets. Our results confirm the benefits of using unpaired text across a range of methods and data sets. Surprisingly, for first-pass decoding, the rather simple approach of shallow fusion performs best across data sets. However, for Google data sets we find that cold fusion has a lower oracle error rate and outperforms other approaches after second-pass rescoring on the Google voice search data set.", "This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models. In our method, the weights of the encoder and decoder of a seq2seq model are initialized with the pretrained weights of two language models and then fine-tuned with labeled data. We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models. Our main result is that pretraining improves the generalization of seq2seq models. We achieve state-of-the art results on the WMT English @math German task, surpassing a range of methods using both phrase-based machine translation and neural machine translation. Our method achieves a significant improvement of 1.3 BLEU from the previous best models on both WMT'14 and WMT'15 English @math German. We also conduct human evaluations on abstractive summarization and find that our method outperforms a purely supervised learning baseline in a statistically significant manner." ] }
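The transfer step described above ("we transfer almost all parameters ... including the decoder network") can be sketched as copying every seed-model parameter except the vocabulary-dependent ones, which stay freshly initialized for the target language. The parameter names, prefixes, and values below are hypothetical, not the paper's actual layer names:

```python
# Sketch of parameter transfer for cross-lingual adaptation: copy all seed
# parameters except those tied to the vocabulary (embedding and output
# projection), which keep the target model's fresh initialization.

def transfer_parameters(seed_params, target_params,
                        vocab_tied=("embed", "output")):
    transferred = []
    for name, value in seed_params.items():
        if any(name.startswith(prefix) for prefix in vocab_tied):
            continue  # vocabulary-dependent: do not overwrite
        target_params[name] = value
        transferred.append(name)
    return transferred

# Toy parameter dictionaries standing in for real model state dicts.
seed = {"encoder.w": [1, 2], "decoder.w": [3], "output.w": [9], "embed.w": [7]}
target = {"encoder.w": [0, 0], "decoder.w": [0], "output.w": [0], "embed.w": [0]}
moved = transfer_parameters(seed, target)
print(sorted(moved))  # ['decoder.w', 'encoder.w']
```

Fine-tuning (and, in this work, LM fusion) would then proceed from the partially transferred `target` parameters.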
1811.02044
2963033617
We present an evaluation of several representative sampling-based and optimization-based motion planners, and then introduce an integrated motion planning system which incorporates recent advances in trajectory optimization into a sparse roadmap framework. Through experiments in 4 common application scenarios with 5000 test cases each, we show that optimization-based or sampling-based planners alone are not effective for realistic problems where fast planning times are required. To the best of our knowledge, this is the first work that presents such a systematic and comprehensive evaluation of state-of-the-art motion planners, which are based on a significant amount of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the combination of our sparse roadmap and trajectory optimization provides superior performance over other standard sampling-based planners' combinations. By using a multi-query roadmap instead of generating completely new trajectories for each planning problem, our approach allows for extensions such as persistent control policy information associated with a trajectory across planning problems. Also, the sub-optimality resulting from the sparsity of roadmap, as well as the unexpected disturbances from the environment, can both be overcome by the real-time trajectory optimization process.
Optimization-based robotic motion planners are attracting more and more attention with the increasing complexity of robots and environments. Covariant Hamiltonian Optimization for Motion Planning (CHOMP) @cite_6 , Stochastic Trajectory Optimization for Motion Planning (STOMP) @cite_4 , Incremental Trajectory Optimization for Real-time Replanning (ITOMP) @cite_13 and TrajOpt @cite_3 are several state-of-the-art optimization-based planners. In this work, we focus on the TrajOpt planner for three reasons. First, the convex-convex collision checking method used in TrajOpt can take accurate object geometry into consideration, shaping the objective to enhance the ability of getting trajectories out of collision. In contrast, the distance-field method used in CHOMP and STOMP considers the collision cost for each exterior point on a robot @cite_5 , which means two points might drive the objective in opposite directions. Second, the sequential quadratic programming method used in TrajOpt can better handle deeply infeasible initial trajectories than the commonly used gradient descent method @cite_3 . Third, customized differential constraints, such as velocity and torque constraints, can be incorporated in TrajOpt. This is an important consideration for Chekhov, which aims at building a motion execution system that incorporates system dynamics models and control policies while respecting additional temporal constraints.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_6", "@cite_5", "@cite_13" ], "mid": [ "2019965290", "2293883387", "2099893201", "2161819990", "61873113" ], "abstract": [ "We present a new approach to motion planning using a stochastic trajectory optimization framework. The approach relies on generating noisy trajectories to explore the space around an initial (possibly infeasible) trajectory, which are then combined to produced an updated trajectory with lower cost. A cost function based on a combination of obstacle and smoothness cost is optimized in each iteration. No gradient information is required for the particular optimization algorithm that we use and so general costs for which derivatives may not be available (e.g. costs corresponding to constraints and motor torques) can be included in the cost function. We demonstrate the approach both in simulation and on a mobile manipulation system for unconstrained and constrained tasks. We experimentally show that the stochastic nature of STOMP allows it to overcome local minima that gradient-based methods like CHOMP can get stuck in.", "We present a novel approach for incorporating collision avoidance into trajectory optimization as a method of solving robotic motion planning problems. At the core of our approach are (i) A sequential convex optimization procedure, which penalizes collisions with a hinge loss and increases the penalty coefficients in an outer loop as necessary. (ii) An efficient formulation of the no-collisions constraint that directly considers continuous-time safety and enables the algorithm to reliably solve motion planning problems, including problems involving thin and complex obstacles. We benchmarked our algorithm against several other motion planning algorithms, solving a suite of 7-degree-of-freedom (DOF) arm-planning problems and 18-DOF full-body planning problems. We compared against sampling-based planners from OMPL, and we also compared to CHOMP, a leading approach for trajectory optimization. Our algorithm was faster than the alternatives, solved more problems, and yielded higher quality paths. Experimental evaluation on the following additional problem types also confirmed the speed and effectiveness of our approach: (i) Planning foot placements with 34 degrees of freedom (28 joints + 6 DOF pose) of the Atlas humanoid robot as it maintains static stability and has to negotiate environmental constraints. (ii) Industrial box picking. (iii) Real-world motion planning for the PR2 that requires considering all degrees of freedom at the same time.", "Existing high-dimensional motion planning algorithms are simultaneously overpowered and underpowered. In domains sparsely populated by obstacles, the heuristics used by sampling-based planners to navigate “narrow passages” can be needlessly complex; furthermore, additional post-processing is required to remove the jerky or extraneous motions from the paths that such planners generate. In this paper, we present CHOMP, a novel method for continuous path refinement that uses covariant gradient techniques to improve the quality of sampled trajectories. Our optimization technique both optimizes higher-order dynamics and is able to converge over a wider range of input paths relative to previous path optimization strategies. In particular, we relax the collision-free feasibility prerequisite on input paths required by those strategies. As a result, CHOMP can be used as a standalone motion planner in many real-world planning queries. We demonstrate the effectiveness of our proposed method in manipulation planning for a 6-DOF robotic arm as well as in trajectory generation for a walking quadruped robot.", "In this paper, we present CHOMP (covariant Hamiltonian optimization for motion planning), a method for trajectory optimization invariant to reparametrization. CHOMP uses functional gradient techniques to iteratively improve the quality of an initial trajectory, optimizing a functional that trades off between a smoothness and an obstacle avoidance component. CHOMP can be used to locally optimize feasible trajectories, as well as to solve motion planning queries, converging to low-cost trajectories even when initialized with infeasible ones. It uses Hamiltonian Monte Carlo to alleviate the problem of convergence to high-cost local minima (and for probabilistic completeness), and is capable of respecting hard constraints along the trajectory. We present extensive experiments with CHOMP on manipulation and locomotion tasks, using seven-degree-of-freedom manipulators and a rough-terrain quadruped robot.", "We present a novel optimization-based algorithm for motion planning in dynamic environments. Our approach uses a stochastic trajectory optimization framework to avoid collisions and satisfy smoothness and dynamics constraints. Our algorithm does not require a priori knowledge about global motion or trajectories of dynamic obstacles. Rather, we compute a conservative local bound on the position or trajectory of each obstacle over a short time and use the bound to compute a collision-free trajectory for the robot in an incremental manner. Moreover, we interleave planning and execution of the robot in an adaptive manner to balance between the planning horizon and responsiveness to obstacle. We highlight the performance of our planner in a simulated dynamic environment with the 7-DOF PR2 robot arm and dynamic obstacles." ] }
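The cost structure shared by these optimizers, a smoothness term plus a collision penalty (e.g. TrajOpt's hinge loss), can be illustrated with a toy 2-D example: gradient descent pushes interior waypoints of a straight-line initialization around a circular obstacle. All geometry, weights, and step sizes below are illustrative; real planners use far richer collision models and solvers (SQP in TrajOpt, covariant gradients in CHOMP).

```python
import math

# Toy trajectory optimization: minimize squared segment lengths (smoothness)
# plus a hinge penalty w_obs * (radius - d) for waypoints inside a circular
# obstacle. Endpoints are fixed; all parameters are illustrative.

def optimize(traj, center=(2.0, -0.1), radius=1.0,
             iters=2000, step=0.01, w_obs=5.0):
    traj = [list(p) for p in traj]
    cx, cy = center
    for _ in range(iters):
        for i in range(1, len(traj) - 1):      # endpoints stay fixed
            # Smoothness gradient from the two adjacent segments.
            gx = 2 * (2 * traj[i][0] - traj[i - 1][0] - traj[i + 1][0])
            gy = 2 * (2 * traj[i][1] - traj[i - 1][1] - traj[i + 1][1])
            dx, dy = traj[i][0] - cx, traj[i][1] - cy
            d = math.hypot(dx, dy)
            if d < radius:                     # hinge cost: push outward
                gx -= w_obs * dx / (d + 1e-9)
                gy -= w_obs * dy / (d + 1e-9)
            traj[i][0] -= step * gx
            traj[i][1] -= step * gy
    return traj

# Straight line from (0,0) to (4,0) passes through the obstacle near (2,0).
init = [[x, 0.0] for x in (0.0, 1.0, 2.0, 3.0, 4.0)]
out = optimize(init)
print(round(out[2][1], 2))   # middle waypoint is pushed off the axis
```

The obstacle center is deliberately offset slightly from the initial line; with a perfectly symmetric setup the gradient would vanish and the trajectory would stay stuck, a small instance of the local-minima problem the cited papers discuss.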
1811.02044
2963033617
We present an evaluation of several representative sampling-based and optimization-based motion planners, and then introduce an integrated motion planning system which incorporates recent advances in trajectory optimization into a sparse roadmap framework. Through experiments in 4 common application scenarios with 5000 test cases each, we show that optimization-based or sampling-based planners alone are not effective for realistic problems where fast planning times are required. To the best of our knowledge, this is the first work that presents such a systematic and comprehensive evaluation of state-of-the-art motion planners, which are based on a significant amount of experiments. We then combine different stand-alone planners with trajectory optimization. The results show that the combination of our sparse roadmap and trajectory optimization provides superior performance over other standard sampling-based planners' combinations. By using a multi-query roadmap instead of generating completely new trajectories for each planning problem, our approach allows for extensions such as persistent control policy information associated with a trajectory across planning problems. Also, the sub-optimality resulting from the sparsity of roadmap, as well as the unexpected disturbances from the environment, can both be overcome by the real-time trajectory optimization process.
Despite the advantages of optimization-based planners, they are not stand-alone planners, and their performance is very sensitive to the quality of the initialization. Numerical trajectory optimization also often gets stuck in high-cost local optima. Therefore, a natural way to improve the performance of optimization-based planners is to combine them with global planners. Some existing work, for example and , has proposed online path-shortening methods for sampling-based planners. The effect of optimization in those approaches is mostly limited to trajectory smoothing and shortening, and cannot account for real-time obstacle avoidance and dynamics constraints. Therefore, those modified sampling-based planners still share the typically slow planning times of other common sampling-based planners. Other research @cite_14 has presented a combined roadmap and trajectory optimization planning algorithm. However, they additionally focused on avoiding singularities in redundant manipulators and meeting Cartesian constraints, resulting in relatively long planning times. In comparison, our approach aims at fast, reactive real-time planning in practical planning scenarios, and the extensive experimental results in Section show that our approach reaches this goal.
{ "cite_N": [ "@cite_14" ], "mid": [ "2217544563" ], "abstract": [ "We present a parallel Cartesian planning algorithm for redundant robot arms and manipulators. We pre-compute a roadmap, that takes into account static obstacles in the environment as well as singular configurations. At runtime, multiple paths in this roadmap are computed as initial trajectories for an optimization-based planner that tends to satisfy various constraints corresponding to demands on the trajectory, including end-effector constraints, collision-free, and non-singular. We highlight and compare the performance of our parallel planner using 7-DOF arms with other planning algorithms. To the best of our knowledge, this is the first approach that can compute smooth and collision-free trajectories in complex environments with dynamic obstacles." ] }
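The combination advocated above, a multi-query roadmap for a coarse path plus trajectory optimization for refinement, can be sketched in two stages. The first stage, shown below, is a plain Dijkstra query on a toy roadmap whose result would seed the optimizer; the graph, node names, and edge costs are invented for illustration.

```python
import heapq

# Stage 1 of a roadmap-plus-optimization pipeline: query the sparse roadmap
# (here with Dijkstra) for a coarse shortest path. That path would then be
# densified and handed to a local trajectory optimizer as initialization.

def roadmap_path(graph, start, goal):
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], goal
    while node != start:          # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy roadmap: adjacency lists of (neighbor, edge cost).
graph = {
    "s": [("a", 1.0), ("b", 4.0)],
    "a": [("b", 1.0), ("g", 5.0)],
    "b": [("g", 1.0)],
}
init_traj = roadmap_path(graph, "s", "g")
print(init_traj)  # ['s', 'a', 'b', 'g']
```

Because the roadmap is multi-query, this lookup is cheap at run time; the slight sub-optimality of the sparse path is then exactly what the downstream trajectory optimizer is meant to polish away.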
1811.02074
2900376878
Person re-identification (re-ID) is a challenging problem especially when no labels are available for training. Although recent deep re-ID methods have achieved great improvement, it is still difficult to optimize deep re-ID model without annotations in training data. To address this problem, this study introduces a novel approach for unsupervised person re-ID by leveraging virtual and real data. Our approach includes two components: virtual person generation and training of deep re-ID model. For virtual person generation, we learn a person generation model and a camera style transfer model using unlabeled real data to generate virtual persons with different poses and camera styles. The virtual data is formed as labeled training data, enabling subsequently training deep re-ID model in supervision. For training of deep re-ID model, we divide it into three steps: 1) pre-training a coarse re-ID model by using virtual data; 2) collaborative filtering based positive pair mining from the real data; and 3) fine-tuning of the coarse re-ID model by leveraging the mined positive pairs and virtual data. The final re-ID model is achieved by iterating between step 2 and step 3 until convergence. Experimental results on two large-scale datasets, Market-1501 and DukeMTMC-reID, demonstrate the effectiveness of our approach and shows that the state of the art is achieved in unsupervised person re-ID.
Style transfer is a sub-domain of image-to-image translation. Recent works built on GANs @cite_13 have achieved impressive results on image-to-image translation. Pix2pix approaches this goal by optimizing both the adversarial and L1 losses of a cGAN @cite_9 . However, paired samples are required during training, which limits the application of pix2pix in practice. To alleviate this problem, Cycle-GAN @cite_14 introduces a cycle-consistent loss to preserve key attributes of both the source and target domains. These two models can only transfer images from one domain to another and may not be flexible enough for multi-domain translation. To overcome this, Star-GAN @cite_18 combines a classification loss and an adversarial loss during training to translate images into different styles with a single model.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_13", "@cite_18" ], "mid": [ "2125389028", "2962793481", "2099471712", "" ], "abstract": [ "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. 
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "" ] }
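The cycle-consistency idea described above can be sketched in a few lines: translating from the source domain to the target domain and back should recover the input, so the round-trip reconstruction error is usable as a training loss even without paired samples. The toy linear maps below stand in for the two generator networks and are purely illustrative assumptions, not the cited architecture.

```python
def cycle_consistency_loss(G, F, xs):
    """Mean L1 distance between each x and its round-trip F(G(x))."""
    total = 0.0
    for x in xs:
        total += abs(F(G(x)) - x)
    return total / len(xs)

# Toy stand-ins for the two generators: G maps X -> Y, F maps Y -> X.
G = lambda x: 2.0 * x + 1.0
F_perfect = lambda y: (y - 1.0) / 2.0   # exact inverse: zero cycle loss
F_bad = lambda y: y / 2.0               # imperfect inverse: nonzero loss

xs = [0.0, 1.0, 2.0, 3.0]
print(cycle_consistency_loss(G, F_perfect, xs))  # 0.0
print(cycle_consistency_loss(G, F_bad, xs))      # 0.5
```

In Cycle-GAN the same loss is applied in both directions (X→Y→X and Y→X→Y) alongside the adversarial losses, which is what keeps the highly under-constrained mapping anchored to its input.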
1811.02293
2900149775
3GPP Release 15, the first 5G standard, includes protection of user identity privacy against IMSI catchers. These protection mechanisms are based on public key encryption. Despite this protection, IMSI catching is still possible in LTE networks which opens the possibility of a downgrade attack on user identity privacy, where a fake LTE base station obtains the identity of a 5G user equipment. We propose (i) to use an existing pseudonym-based solution to protect user identity privacy of 5G user equipment against IMSI catchers in LTE and (ii) to include a mechanism for updating LTE pseudonyms in the public key encryption based 5G identity privacy procedure. The latter helps to recover from a loss of synchronization of LTE pseudonyms. Using this mechanism, pseudonyms in the user equipment and home network are automatically synchronized when the user equipment connects to 5G. Our mechanisms utilize existing LTE and 3GPP Release 15 messages and require modifications only in the user equipment and home network in order to provide identity privacy. Additionally, lawful interception requires minor patching in the serving network.
In @cite_7 , one-time pseudonyms are created by probabilistic encryption of the user's identity using a key that is known only to the HN. The HN sends a set of pseudonyms to the UE when the UE and the HN have a secure communication channel.
{ "cite_N": [ "@cite_7" ], "mid": [ "2542575239" ], "abstract": [ "User mobility is rapidly becoming an important and popular feature in today's networks. This is especially evident in wireless cellular environments. While useful and desirable, user mobility raises a number of important security-related issues and concerns. One of them is the issue of tracking mobile user's movements and current whereabouts. Ideally, no entity other than the user himself and a responsible authority in the user's home domain should know either the real identity or the current location of the mobile user. At present, environments supporting user mobility either do not address the problem at all or base their solutions on the specific hardware capabilities of the user's personal device, e.g., a cellular telephone. This paper discusses a wide range of issues related to anonymity in mobile envlronments, reviews current state-of-the-art approaches and proposes several potential solutions. Solutions vary in complexity, degree of protection and assumptions about the underlying environment." ] }
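As a rough illustration of the mechanism above (not the cited construction, and not production-grade cryptography), a pseudonym can be formed by probabilistically encrypting the identity under a key known only to the HN: fresh randomness makes two pseudonyms of the same identity unlinkable to an observer, while the HN can still resolve either one. The toy XOR-stream cipher, nonce length, and sample IMSI are assumptions for illustration only.

```python
import hashlib
import secrets

NONCE_LEN = 8  # bytes of fresh randomness per pseudonym (illustrative)

def make_pseudonym(imsi: str, hn_key: bytes) -> bytes:
    """Probabilistic 'encryption' of the identity (toy XOR stream, NOT secure)."""
    nonce = secrets.token_bytes(NONCE_LEN)
    stream = hashlib.sha256(hn_key + nonce).digest()
    ct = bytes(a ^ b for a, b in zip(imsi.encode(), stream))
    return nonce + ct

def resolve_pseudonym(pseudonym: bytes, hn_key: bytes) -> str:
    """Only the HN, holding hn_key, can map a pseudonym back to the identity."""
    nonce, ct = pseudonym[:NONCE_LEN], pseudonym[NONCE_LEN:]
    stream = hashlib.sha256(hn_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(ct, stream)).decode()

key = b"hn-secret"
p1 = make_pseudonym("001010123456789", key)
p2 = make_pseudonym("001010123456789", key)
print(p1 != p2)                    # True: two pseudonyms of one IMSI differ
print(resolve_pseudonym(p1, key))  # 001010123456789
```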
1811.02293
2900149775
3GPP Release 15, the first 5G standard, includes protection of user identity privacy against IMSI catchers. These protection mechanisms are based on public key encryption. Despite this protection, IMSI catching is still possible in LTE networks which opens the possibility of a downgrade attack on user identity privacy, where a fake LTE base station obtains the identity of a 5G user equipment. We propose (i) to use an existing pseudonym-based solution to protect user identity privacy of 5G user equipment against IMSI catchers in LTE and (ii) to include a mechanism for updating LTE pseudonyms in the public key encryption based 5G identity privacy procedure. The latter helps to recover from a loss of synchronization of LTE pseudonyms. Using this mechanism, pseudonyms in the user equipment and home network are automatically synchronized when the user equipment connects to 5G. Our mechanisms utilize existing LTE and 3GPP Release 15 messages and require modifications only in the user equipment and home network in order to provide identity privacy. Additionally, lawful interception requires minor patching in the serving network.
Asokan @cite_9 described how public key encryption can be used to achieve identity privacy in mobile environments. In this solution, only the HN has a public/private key pair, and the UE is provisioned with the public key of the HN. The UE encrypts identity information with this public key before sending it to the HN. Køien @cite_26 suggests using identity-based encryption (IBE) to defeat IMSI catchers in LTE. Khan and Niemi @cite_15 propose the use of IBE to defeat IMSI catchers in 5G networks.
{ "cite_N": [ "@cite_9", "@cite_26", "@cite_15" ], "mid": [ "2163576253", "2001135236", "" ], "abstract": [ "In a mobile computing environment, it is desirable to protect information about the movements and activities of mobile entities from onlookers. Solutions to this problem of providing anonymity have to be developed with the constraints of mobile computing environments in mind. In this paper, it is argued that this issue merits investigation. A brief survey of the nature of anonymity provided in proposed or existing mobile computing environments is presented. It is argued further that achieving limited but practical anonymity using standard cryptographic techniques is feasible. Example solutions are presented.", "In this paper we propose a way to enhance the identity privacy in LTE LTE-Advanced systems. This is achieved while minimizing the impact on the existing E-UTRAN system. This is important since proposals to modify a widely deployed infrastructure must be cost effective, both in terms of design changes and in terms of deployment cost. In our proposal, the user equipment (UE) identifies itself with a dummy identity, consisting only of the mobile nation code and the mobile network code. We use the existing signalling mechanisms in a novel way to request a special encrypted identity information element. This element is protected using identity-based encryption (IBE), with the home network (HPLMN) as the private key generator (PKG) and the visited network (VPLMN) and the private key owner. This allows the UE to protect the identity (IMSI) from external parties. To avoid tracking the “encrypted IMSI” also include a random element. We use this as an opportunity to let the UE include as subscriber-side random challenge to the network. The challenge will be bounded to the EPS authentication vector (EPS AV) and will allow use to construct an online 3-way security context. 
To complete our proposal we also strengthen the requirements on the use of the temporary identifier (M-TMSI).", "" ] }
1811.02166
2948654710
Pattern-based labeling methods have achieved promising results in alleviating the inevitable labeling noise of distantly supervised neural relation extraction. However, these methods require significant expert labor to write relation-specific patterns, which makes them too sophisticated to generalize. To ease the labor-intensive workload of pattern writing and enable quick generalization to new relation types, we propose a neural pattern diagnosis framework, DIAG-NRE, that can automatically summarize and refine high-quality relational patterns from noisy data with human experts in the loop. To demonstrate the effectiveness of DIAG-NRE, we apply it to two real-world datasets and present both significant and interpretable improvements over state-of-the-art methods.
We also note that the relational-pattern mining task has been extensively studied @cite_24 @cite_36 @cite_26 @cite_25 . Different from those studies, our pattern generation process is based purely on RL, does not rely on NLP annotation tools (part-of-speech tagging, dependency parsing, etc.), and establishes a link with the predictions of NRE models. Furthermore, the extracted patterns serve the weak-label-fusion model rather than performing pattern-based relation prediction as in traditional studies. Another relevant method is @cite_40 , which infers negative patterns from example-pattern-relation co-occurrences and removes the wrong labels accordingly. In contrast, our framework is based on modern NRE models and not only reduces negative patterns but also reinforces positive patterns.
{ "cite_N": [ "@cite_26", "@cite_36", "@cite_24", "@cite_40", "@cite_25" ], "mid": [ "1483236033", "1512387364", "2143349571", "2149713870", "2595918108" ], "abstract": [ "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7 . PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75 . The PATTY resource is freely available for interactive access and download.", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74 after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent.", "Information extraction is a form of shallow text processing that locates a specified set of relevant items in a natural-language document. Systems for this task require significant domain-specific knowledge and are time-consuming and difficult to build by hand, making them a good application for machine learning. 
We present a system, RAPIER, that uses pairs of sample documents and filled templates to induce pattern-match rules that directly extract fillers for the slots in the template. RAPIER employs a bottom-up learning algorithm which incorporates techniques from several inductive logic programming systems and acquires unbounded patterns that include constraints on the words, part-of-speech tags, and semantic classes present in the filler and the surrounding text. We present encouraging experimental results on two domains.", "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.", "Mining textual patterns in news, tweets, papers, and many other kinds of text corpora has been an active theme in text mining and NLP research. Previous studies adopt a dependency parsing-based pattern discovery approach. However, the parsing results lose rich context around entities in the patterns, and the process is costly for a corpus of large scale. 
In this study, we propose a novel typed textual pattern structure, called meta pattern, which is extended to a frequent, informative, and precise subsequence pattern in certain context. We propose an efficient framework, called MetaPAD, which discovers meta patterns from massive corpora with three techniques: (1) it develops a context-aware segmentation method to carefully determine the boundaries of patterns with a learnt pattern quality assessment function, which avoids costly dependency parsing and generates high-quality patterns; (2) it identifies and groups synonymous meta patterns from multiple facets---their types, contexts, and extractions; and (3) it examines type distributions of entities in the instances extracted by each group of patterns, and looks for appropriate type levels to make discovered patterns precise. Experiments demonstrate that our proposed framework discovers high-quality typed textual patterns efficiently from different genres of massive corpora and facilitates information extraction." ] }
1811.02288
2900282949
Clustering, a fundamental task in data science and machine learning, groups a set of objects in such a way that objects in the same cluster are closer to each other than to those in other clusters. In this paper, we consider a well-known structure, so-called @math -nets, which rigorously captures the properties of clustering. We devise algorithms that improve the run-time of approximating @math -nets in high-dimensional spaces with @math and @math metrics from @math to @math , where @math . These algorithms are also used to improve a framework that provides approximate solutions to other high dimensional distance problems. Using this framework, several important related problems can also be solved efficiently, e.g., @math -approximate @math th-nearest neighbor distance, @math -approximate Min-Max clustering, @math -approximate @math -center clustering. In addition, we build an algorithm that @math -approximates greedy permutations in time @math where @math is the spread of the input. This algorithm is used to @math -approximate @math -center with the same time complexity.
An approach to hierarchical clustering was presented by @cite_6 . They construct an algorithm based on farthest-first traversal, which essentially builds a greedy permutation and then traverses the permutation in order.
{ "cite_N": [ "@cite_6" ], "mid": [ "2769245605" ], "abstract": [ "We show that for any data set in any metric space, it is possible to construct a hierarchical clustering with the guarantee that for every k, the induced k-clustering has cost at most eight times that of the optimal k-clustering. Here the cost of a clustering is taken to be the maximum radius of its clusters. Our algorithm is similar in simplicity and efficiency to popular agglomerative heuristics for hierarchical clustering, and we show that these heuristics have unbounded approximation factors." ] }
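The farthest-first traversal underlying this construction is short enough to sketch: starting from an arbitrary point, repeatedly add the point whose distance to the points chosen so far is largest, maintaining each point's distance to the current prefix. The Euclidean metric and sample points below are illustrative assumptions.

```python
import math

def farthest_first(points):
    """Greedy permutation: indices of `points` in farthest-first order."""
    n = len(points)
    order = [0]  # start from an arbitrary point
    # dist[i] = distance from point i to the nearest point chosen so far
    dist = [math.dist(points[0], p) for p in points]
    for _ in range(n - 1):
        nxt = max(range(n), key=dist.__getitem__)
        order.append(nxt)
        for i in range(n):
            dist[i] = min(dist[i], math.dist(points[nxt], points[i]))
    return order

pts = [(0, 0), (10, 0), (0, 10), (1, 1)]
print(farthest_first(pts))  # [0, 1, 2, 3]
```

Cutting this permutation after its first k points yields the k-clustering whose maximum radius is within a constant factor of optimal, which is the guarantee the cited work builds on.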
1811.02132
2900122995
Generative Adversarial Networks (GANs) have a great performance in image generation, but they need a large scale of data to train the entire framework, and often produce nonsensical results. We propose a new method, based on conditional GAN, which equips the latent noise with a mixture of Student's t-distributions combined with an attention mechanism, in addition to class information. The Student's t-distribution has long tails that can provide more diversity to the latent noise. Meanwhile, the discriminator in our model performs two tasks simultaneously: judging whether the images come from the true data distribution, and identifying the class of each generated image. The parameters of the mixture model can be learned along with those of the GAN. Moreover, we mathematically prove that any multivariate Student's t-distribution can be obtained by a linear transformation of a normal multivariate Student's t-distribution. Experiments comparing the proposed method with a typical GAN, DeliGAN and DCGAN indicate that our method performs well in generating diverse and legible objects with limited data.
The attention mechanism is commonly used in neural machine translation tasks. Its principle originates from an important property of human perception: one does not tend to take in a whole visual scene at once, but focuses attention selectively on parts of the scene to acquire what is needed. Motivated by @cite_4 @cite_8 @cite_9 , we utilize the attention mechanism by learning the @math coefficients over the t-distributions to constrain the weight placed on each t-distribution; in this way, a representation of the mixture of t-distributions can be learned. With the attention mechanism, we assign a different weight to each t-distribution, increasing the sensitivity to the individual t-distributions, so that the noise can emphasize useful components and suppress less useful ones.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_8" ], "mid": [ "", "2133564696", "2951527505" ], "abstract": [ "", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. 
While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
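A minimal sketch of the attention-weighted mixture idea described above, under the assumption that softmax-normalized attention scores act as the mixture coefficients and with fixed (rather than learned) degrees of freedom: a Student's t component is selected according to the attention weights and the latent noise is drawn from it. All scores and degrees of freedom below are illustrative.

```python
import math
import random

def softmax(scores):
    """Normalize attention scores into mixture weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def sample_student_t(df):
    """Standard Student's t via normal / sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = random.gammavariate(df / 2.0, 2.0)  # chi-square with df dof
    return z / math.sqrt(chi2 / df)

def mixture_noise(scores, dfs):
    weights = softmax(scores)  # attention coefficients over components
    k = random.choices(range(len(dfs)), weights=weights)[0]
    return sample_student_t(dfs[k])

random.seed(0)
print(softmax([1.0, 2.0, 0.5]))                  # mixture weights
print(mixture_noise([1.0, 2.0, 0.5], [3.0, 5.0, 10.0]))
```

Smaller degrees of freedom give heavier tails, which is the property the paper exploits to make the latent noise more diverse than a Gaussian.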
1811.02132
2900122995
Generative Adversarial Networks (GANs) have a great performance in image generation, but they need a large scale of data to train the entire framework, and often produce nonsensical results. We propose a new method, based on conditional GAN, which equips the latent noise with a mixture of Student's t-distributions combined with an attention mechanism, in addition to class information. The Student's t-distribution has long tails that can provide more diversity to the latent noise. Meanwhile, the discriminator in our model performs two tasks simultaneously: judging whether the images come from the true data distribution, and identifying the class of each generated image. The parameters of the mixture model can be learned along with those of the GAN. Moreover, we mathematically prove that any multivariate Student's t-distribution can be obtained by a linear transformation of a normal multivariate Student's t-distribution. Experiments comparing the proposed method with a typical GAN, DeliGAN and DCGAN indicate that our method performs well in generating diverse and legible objects with limited data.
GANs have been successfully applied in many fields, for example image generation @cite_23 , video prediction @cite_19 , 3D model generation @cite_10 , real-image generation from sketches @cite_16 , image restoration @cite_12 , super-resolution image generation @cite_24 , image-to-image translation @cite_20 , and text-to-image synthesis @cite_6 .
{ "cite_N": [ "@cite_20", "@cite_6", "@cite_24", "@cite_19", "@cite_23", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "", "2577946330", "2523714292", "", "2173520492", "2607448608", "2949551726", "2604433135" ], "abstract": [ "", "This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. 
The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.", "This paper describes an intuitive generalization to the Generative Adversarial Networks (GANs) to generate samples while capturing diverse modes of the true data distribution. Firstly, we propose a very simple and intuitive multi-agent GAN architecture that incorporates multiple generators capable of generating samples from high probability modes. 
Secondly, in order to enforce different generators to generate samples from diverse modes, we propose two extensions to the standard GAN objective function. (1) We augment the generator specific GAN objective function with a diversity enforcing term that encourage different generators to generate diverse samples using a user-defined similarity based function. (2) We modify the discriminator objective function where along with finding the real and fake samples, the discriminator has to predict the generator which generated the given fake sample. Intuitively, in order to succeed in this task, the discriminator must learn to push different generators towards different identifiable modes. Our framework is generalizable in the sense that it can be easily combined with other existing variants of GANs to produce diverse samples. Experimentally we show that our framework is able to produce high quality diverse samples for the challenging tasks such as image face generation and image-to-image translation. We also show that it is capable of learning a better feature representation in an unsupervised setting.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. 
The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. 
We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models." ] }
1811.01690
2899439440
This paper presents a method to train end-to-end automatic speech recognition (ASR) models using unpaired data. Although the end-to-end approach can eliminate the need for expert knowledge such as pronunciation dictionaries to build ASR systems, it still requires a large amount of paired data, i.e., speech utterances and their transcriptions. Cycle-consistency losses have recently been proposed as a way to mitigate the problem of limited paired data. These approaches compose a reverse operation with a given transformation, e.g., text-to-speech (TTS) with ASR, to build a loss that only requires unsupervised data, speech in this example. Applying cycle consistency to ASR models is not trivial since fundamental information, such as speaker traits, is lost in the intermediate text bottleneck. To solve this problem, this work presents a loss that is based on the speech encoder state sequence instead of the raw speech signal. This is achieved by training a Text-To-Encoder model and defining a loss based on the encoder reconstruction error. Experimental results on the LibriSpeech corpus show that the proposed cycle-consistency training reduced the word error rate by 14.7% from an initial model trained with 100-hour paired data, using an additional 360 hours of audio data without transcriptions. We also investigate the use of text-only data, mainly for language modeling, to further improve the performance in the unpaired data training scenario.
There is some prior work on unpaired data training for end-to-end ASR. Our prior work on back-translation-style data augmentation @cite_11 focused on the use of text-only data. We introduced a TTE model and used the synthesized encoder state sequences to train the ASR decoder without audio information. However, in this paper, we focus on the use of audio-only data and take a different approach.
{ "cite_N": [ "@cite_11" ], "mid": [ "2952010730" ], "abstract": [ "In this paper we propose a novel data augmentation method for attention-based end-to-end automatic speech recognition (E2E-ASR), utilizing a large amount of text which is not paired with speech signals. Inspired by the back-translation technique proposed in the field of machine translation, we build a neural text-to-encoder model which predicts a sequence of hidden states extracted by a pre-trained E2E-ASR encoder from a sequence of characters. By using hidden states as a target instead of acoustic features, it is possible to achieve faster attention learning and reduce computational cost, thanks to sub-sampling in E2E-ASR encoder, also the use of the hidden states can avoid to model speaker dependencies unlike acoustic features. After training, the text-to-encoder model generates the hidden states from a large amount of unpaired text, then E2E-ASR decoder is retrained using the generated hidden states as additional training data. Experimental evaluation using LibriSpeech dataset demonstrates that our proposed method achieves improvement of ASR performance and reduces the number of unknown words without the need for paired data." ] }
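The TTE idea above (a loss on predicted encoder states rather than raw speech) can be illustrated with a minimal NumPy sketch. This is an illustrative assumption, not the paper's implementation: `encoder_reconstruction_loss` and the toy state sequences are hypothetical stand-ins for the TTE output and the pre-trained ASR encoder states.

```python
import numpy as np

def encoder_reconstruction_loss(pred_states, true_states):
    """Mean squared error between TTE-predicted and ASR encoder
    state sequences, both shaped [time, hidden_dim]."""
    assert pred_states.shape == true_states.shape
    return float(np.mean((pred_states - true_states) ** 2))

# Hypothetical 4-frame, 3-dimensional encoder state sequences.
true = np.arange(12, dtype=float).reshape(4, 3)
pred = true + 0.5  # a uniformly shifted "prediction"
print(encoder_reconstruction_loss(pred, true))  # 0.25
```

Training on the sub-sampled encoder states instead of acoustic frames keeps the target sequence short and largely speaker-independent, which is the point the paragraph above makes.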
1811.01690
2899439440
This paper presents a method to train end-to-end automatic speech recognition (ASR) models using unpaired data. Although the end-to-end approach can eliminate the need for expert knowledge such as pronunciation dictionaries to build ASR systems, it still requires a large amount of paired data, i.e., speech utterances and their transcriptions. Cycle-consistency losses have been recently proposed as a way to mitigate the problem of limited paired data. These approaches compose a reverse operation with a given transformation, e.g., text-to-speech (TTS) with ASR, to build a loss that only requires unsupervised data, speech in this example. Applying cycle consistency to ASR models is not trivial since fundamental information, such as speaker traits, are lost in the intermediate text bottleneck. To solve this problem, this work presents a loss that is based on the speech encoder state sequence instead of the raw speech signal. This is achieved by training a Text-To-Encoder model and defining a loss based on the encoder reconstruction error. Experimental results on the LibriSpeech corpus show that the proposed cycle-consistency training reduced the word error rate by 14.7 from an initial model trained with 100-hour paired data, using an additional 360 hours of audio data without transcriptions. We also investigate the use of text-only data mainly for language modeling to further improve the performance in the unpaired data training scenario.
The speech chain model @cite_1 is the architecture most similar to ours. As described in Section , the ASR model is trained with synthesized speech and the TTS model is trained with ASR hypotheses for unpaired data. Therefore, the models are not tightly connected with each other; i.e., one model cannot be updated directly with the help of the other to reduce recognition or synthesis errors. Our approach can instead use the other model in the loss function to reduce these errors. We also employ a TTE model, which benefits from reduced speaker variation in the loss function and lower computational complexity.
{ "cite_N": [ "@cite_1" ], "mid": [ "2962699523" ], "abstract": [ "Despite the close relationship between speech perception and production, research in automatic speech recognition (ASR) and text-to-speech synthesis (TTS) has progressed more or less independently without exerting much mutual influence on each other. In human communication, on the other hand, a closed-loop speech chain mechanism with auditory feedback from the speaker's mouth to her ear is crucial. In this paper, we take a step further and develop a closed-loop speech chain model based on deep learning. The sequence-to-sequence model in close-loop architecture allows us to train our model on the concatenation of both labeled and unlabeled data. While ASR transcribes the unlabeled speech features, TTS attempts to reconstruct the original speech waveform based on the text from ASR. In the opposite direction, ASR also attempts to reconstruct the original text transcription given the synthesized speech. To the best of our knowledge, this is the first deep learning model that integrates human speech perception and production behaviors. Our experimental results show that the proposed approach significantly improved the performance more than separate systems that were only trained with labeled data." ] }
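The cycle-consistency idea behind the speech chain can be sketched with toy stand-ins (these functions are illustrative assumptions, not the actual ASR/TTS models): unlabeled "speech" is transcribed, re-synthesized, and penalized by its reconstruction error.

```python
def cycle_consistency_loss(speech, asr, tts, distance):
    """Speech-chain-style loss for unpaired speech: transcribe with
    ASR, re-synthesize with TTS, penalize the reconstruction gap."""
    hypothesis = asr(speech)
    reconstruction = tts(hypothesis)
    return distance(speech, reconstruction)

# Toy stand-ins: "speech" is a list of floats, "ASR" rounds to ints
# (a lossy text bottleneck), "TTS" maps the ints back to floats.
asr = lambda s: [round(x) for x in s]
tts = lambda t: [float(x) for x in t]
l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

print(cycle_consistency_loss([1.2, 2.0, 2.9], asr, tts, l1))
```

The toy bottleneck makes the paragraph's point concrete: whatever the "ASR" discards (here, the fractional part) can never be reconstructed, which is why the paper moves the loss from the raw signal to the encoder state sequence.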
1811.01833
2898837525
There is an increasing need for more automated system-log analysis tools that can handle large-scale online systems in a timely manner. However, the conventional approach of monitoring and classifying log output against a keyword list does not scale well for complex systems whose code is contributed by a large group of developers, with diverse ways of encoding error messages and often misleading pre-set labels. In this paper, we propose that the design of large-scale online log analysis should follow the "Least Prior Knowledge Principle": an unsupervised or semi-supervised solution that directly encodes only minimal prior knowledge of the log. Accordingly, we report our experience in designing a two-stage machine-learning-based method in which the system logs are regarded as the output of a quasi-natural language, pre-filtered by a perplexity score threshold, and then subjected to a fine-grained classification procedure. Tests on empirical data show that our method has a clear advantage in both processing speed and classification accuracy.
Research on automatic system-log analysis has evolved along with the target systems themselves, including their types, volume, etc. For instance, @cite_9 studies query log analysis at web scale, analyzing an AltaVista search engine query log to examine the interaction of terms within queries and to find correlations between search terms. Splunk https: www.splunk.com en homepage.html is one of the most widely used tools for log analysis. It provides several out-of-the-box services with built-in machine learning algorithms that are key to system administrators, and it has been integrated into Google's cloud platform. However, Splunk only provides a command-line wrapper around a few general-purpose machine learning algorithms that have not yet been tailored to log analysis tasks.
{ "cite_N": [ "@cite_9" ], "mid": [ "1982889956" ], "abstract": [ "In this paper we present an analysis of an AltaVista Search Engine query log consisting of approximately 1 billion entries for search requests over a period of six weeks. This represents almost 285 million user sessions, each an attempt to fill a single information need. We present an analysis of individual queries, query duplication, and query sessions. We also present results of a correlation analysis of the log entries, studying the interaction of terms within queries. Our data supports the conjecture that web users differ significantly from the user assumed in the standard information retrieval literature. Specifically, we show that web users type in short queries, mostly look at the first 10 results only, and seldom modify the query. This suggests that traditional information retrieval techniques may not work well for answering web search requests. The correlation analysis showed that the most highly correlated items are constituents of phrases. This result indicates it may be useful for search engines to consider search terms as parts of phrases even if the user did not explicitly specify them as such." ] }
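The perplexity pre-filtering stage mentioned in the abstract above can be sketched with a deliberately simple add-one-smoothed unigram model; the actual system presumably uses a stronger language model, and `unigram_perplexity` plus the toy log lines are illustrative assumptions.

```python
import math
from collections import Counter

def unigram_perplexity(line, counts, total, vocab_size):
    """Per-token perplexity of a whitespace-tokenized log line under
    an add-one-smoothed unigram model."""
    tokens = line.split()
    log_prob = sum(math.log((counts[t] + 1) / (total + vocab_size))
                   for t in tokens)
    return math.exp(-log_prob / len(tokens))

# Toy "training corpus" of routine log tokens.
train = "disk ok disk ok net ok disk ok".split()
counts, total = Counter(train), len(train)
vocab = len(counts)

normal = unigram_perplexity("disk ok", counts, total, vocab)
odd = unigram_perplexity("kernel panic", counts, total, vocab)
print(normal < odd)  # True: familiar lines score lower perplexity
```

Lines whose perplexity exceeds a chosen threshold would be passed on to the fine-grained classification stage; everything below it is treated as routine output.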
1811.01833
2898837525
There is an increasing need for more automated system-log analysis tools for large scale online system in a timely manner. However, conventional way to monitor and classify the log output based on keyword list does not scale well for complex system in which codes contributed by a large group of developers, with diverse ways of encoding the error messages, often with misleading pre-set labels. In this paper, we propose that the design of a large scale online log analysis should follow the "Least Prior Knowledge Principle", in which unsupervised or semi-supervised solution with the minimal prior knowledge of the log should be encoded directly. Thereby, we report our experience in designing a two-stage machine learning based method, in which the system logs are regarded as the output of a quasi-natural language, pre-filtered by a perplexity score threshold, and then undergo a fine-grained classification procedure. Tests on empirical data show that our method has obvious advantage regarding to the processing speed and classification accuracy.
The runtime monitoring system proposed by @cite_1 is another representative example of applying machine learning techniques to log data mining; it involves four sub-modules: log parsing , feature creation , anomaly detection and visualization . The Sher-Log tool proposed by @cite_11 shares a similar design principle with our method. It analyzes source code by leveraging information provided by run-time logs, and predicts the events that would happen in a production run without re-executing the program or requiring prior knowledge of the log's semantics. The Beehive project by @cite_6 targets network security and aims to extract useful knowledge by mining dirty log data. Their experiments are conducted on network packets, using the meta-information of different layers of the network protocols as the main features for classification. @cite_5 suggest that processing log data can be regarded as a special type of natural language processing problem, such that many NLP methods can be used for analyzing the data and extracting a repository of templates.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_6", "@cite_11" ], "mid": [ "2000339673", "2039157918", "1990089904", "" ], "abstract": [ "System log from network equipment is one of the most important information for network management. Sophisticated log message mining could help in investigating a huge number of log messages for trouble shooting, especially in recent complicated network structure (e.g., virtualized networks). However, generating log templates (i.e., meta format) from real log messages (instances) is still difficult problem in terms of accuracy. In this paper we propose a Natural Language Processing (NLP) approach to generate log templates from log messages produced by network equipment in order to overcome this problem. The key idea of the work is to leverage the use of Conditional Random Fields (CRF), a well-studied supervised natural language processing technique. As preliminarily evaluation, with one month network equipment logs in a Japanese academic network, we show that our CRF based algorithm improves the accuracy of generated log templates in reasonable processing time, compared with a traditional method.", "Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. 
We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.", "As more and more Internet-based attacks arise, organizations are responding by deploying an assortment of security products that generate situational intelligence in the form of logs. These logs often contain high volumes of interesting and useful information about activities in the network, and are among the first data sources that information security specialists consult when they suspect that an attack has taken place. However, security products often come from a patchwork of vendors, and are inconsistently installed and administered. They generate logs whose formats differ widely and that are often incomplete, mutually contradictory, and very large in volume. Hence, although this collected information is useful, it is often dirty. We present a novel system, Beehive, that attacks the problem of automatically mining and extracting knowledge from the dirty log data produced by a wide variety of security products in a large enterprise. We improve on signature-based approaches to detecting security incidents and instead identify suspicious host behaviors that Beehive reports as potential security incidents. These incidents can then be further analyzed by incident response teams to determine whether a policy violation or attack has occurred. We have evaluated Beehive on the log data collected in a large enterprise, EMC, over a period of two weeks. 
We compare the incidents identified by Beehive against enterprise Security Operations Center reports, antivirus software alerts, and feedback from enterprise security specialists. We show that Beehive is able to identify malicious events and policy violations which would otherwise go undetected.", "" ] }
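The template-repository idea that @cite_5 pursues (recovering the fixed part of each print statement from raw lines) is often approximated by masking variable fields. A hypothetical regex-based sketch, not the CRF approach of the cited work:

```python
import re

def log_template(line):
    """Collapse variable fields (IPs, hex ids, numbers) so that log
    lines produced by the same print statement share one template."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<IP>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<HEX>", line)
    line = re.sub(r"\b\d+\b", "<NUM>", line)
    return line

a = log_template("conn from 10.0.0.1 port 443 failed")
b = log_template("conn from 192.168.1.9 port 80 failed")
print(a)       # conn from <IP> port <NUM> failed
print(a == b)  # True: both lines map to the same template
```

The substitution order matters: IPs and hex literals are masked before bare numbers so their digits are not partially rewritten.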
1811.01833
2898837525
There is an increasing need for more automated system-log analysis tools that can handle large-scale online systems in a timely manner. However, the conventional approach of monitoring and classifying log output against a keyword list does not scale well for complex systems whose code is contributed by a large group of developers, with diverse ways of encoding error messages and often misleading pre-set labels. In this paper, we propose that the design of large-scale online log analysis should follow the "Least Prior Knowledge Principle": an unsupervised or semi-supervised solution that directly encodes only minimal prior knowledge of the log. Accordingly, we report our experience in designing a two-stage machine-learning-based method in which the system logs are regarded as the output of a quasi-natural language, pre-filtered by a perplexity score threshold, and then subjected to a fine-grained classification procedure. Tests on empirical data show that our method has a clear advantage in both processing speed and classification accuracy.
@cite_8 are among the first to apply deep learning methods to log data analysis, proposing a system called DeepLog , which also regards log data as a sequence of natural language output and relies on an LSTM-based network architecture to find new patterns in the logs. Their LSTM-based approach is likewise motivated by language modeling, similar to part of our work, as the optimization objective is to predict the next token based on the previously seen context. However, their work focuses on a different set of tasks, namely anomaly detection and root cause analysis.
{ "cite_N": [ "@cite_8" ], "mid": [ "2767094836" ], "abstract": [ "Anomaly detection is a critical step towards building a secure and trustworthy system. The primary purpose of a system log is to record system states and significant events at various critical points to help debug system failures and perform root cause analysis. Such log data is universally available in nearly all computer systems. Log data is an important and valuable resource for understanding system status and performance issues; therefore, the various system logs are naturally excellent source of information for online monitoring and anomaly detection. We propose DeepLog, a deep neural network model utilizing Long Short-Term Memory (LSTM), to model a system log as a natural language sequence. This allows DeepLog to automatically learn log patterns from normal execution, and detect anomalies when log patterns deviate from the model trained from log data under normal execution. In addition, we demonstrate how to incrementally update the DeepLog model in an online fashion so that it can adapt to new log patterns over time. Furthermore, DeepLog constructs workflows from the underlying system log so that once an anomaly is detected, users can diagnose the detected anomaly and perform root cause analysis effectively. Extensive experimental evaluations over large log data have shown that DeepLog has outperformed other existing log-based anomaly detection methods based on traditional data mining methodologies." ] }
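DeepLog's next-key prediction can be mimicked, far more crudely, with windowed frequency counts standing in for the LSTM; everything here (`train_next_key`, the toy sequences, top-k flagging) is an illustrative assumption rather than the cited system.

```python
from collections import defaultdict, Counter

def train_next_key(sequences, window=2):
    """Count which log key follows each window of keys — a crude
    stand-in for DeepLog's learned next-key distribution."""
    nxt = defaultdict(Counter)
    for seq in sequences:
        for i in range(len(seq) - window):
            nxt[tuple(seq[i:i + window])][seq[i + window]] += 1
    return nxt

def is_anomalous(history, key, nxt, top_k=1):
    """Flag `key` if it is not among the top-k predicted successors
    of `history` under normal execution."""
    candidates = [k for k, _ in nxt[tuple(history)].most_common(top_k)]
    return key not in candidates

nxt = train_next_key([["open", "read", "close"]] * 5)
print(is_anomalous(["open", "read"], "close", nxt))   # False
print(is_anomalous(["open", "read"], "delete", nxt))  # True
```

As in DeepLog, the model is trained only on normal executions, and a deviation from the predicted next key is what raises the alarm.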
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is the task of tagging each given text with the most relevant subset of labels from an extremely large label set. This task arises in many applications, such as product categorization, web page tagging, and news annotation. Many methods have been proposed for XMTC, but most existing methods use a traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses a convolutional neural network (CNN) with dynamic pooling to process the text, going beyond BOW-based approaches but failing to capture 1) long-distance dependencies among words and 2) the different levels of importance a word has for each label. We propose a new deep learning-based method, AttentionXML, which uses a bidirectional long short-term memory (LSTM) network and a multi-label attention mechanism to solve the first and second problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except in a couple of cases. In addition, a consensus ensemble of AttentionXML with the second-best method, Parabel, further improved performance over all five benchmark datasets.
Recently, to reduce this complexity, PD-Sparse @cite_8 and PPDSparse @cite_15 have been proposed based on the idea of sparse learning. PD-Sparse trains a classifier for each label with a margin-maximizing loss function and an @math penalty, obtaining a solution that is extremely sparse in both the primal and the dual without sacrificing the expressive power of the predictor. It proposes a Fully-Corrective Block-Coordinate Frank-Wolfe (FC-BCFW) algorithm to achieve complexity sublinear in the number of primal and dual variables, along with a bi-stochastic search method to improve efficiency. PPDSparse @cite_15 extends PD-Sparse through efficient parallelization in large-scale distributed settings (e.g., 100 cores), while keeping performance similar to PD-Sparse.
{ "cite_N": [ "@cite_15", "@cite_8" ], "mid": [ "2743021690", "2461743311" ], "abstract": [ "Extreme Classification comprises multi-class or multi-label prediction where there is a large number of classes, and is increasingly relevant to many real-world applications such as text and image tagging. In this setting, standard classification methods, with complexity linear in the number of classes, become intractable, while enforcing structural constraints among classes (such as low-rank or tree-structure) to reduce complexity often sacrifices accuracy for efficiency. The recent PD-Sparse method addresses this via an algorithm that is sub-linear in the number of variables, by exploiting primal-dual sparsity inherent in a specific loss function, namely the max-margin loss. In this work, we extend PD-Sparse to be efficiently parallelized in large-scale distributed settings. By introducing separable loss functions, we can scale out the training, with network communication and space efficiency comparable to those in one-versus-all approaches while maintaining an overall complexity sub-linear in the number of classes. On several large-scale benchmarks our proposed method achieves accuracy competitive to the state-of-the-art while reducing the training time from days to tens of minutes compared with existing parallel or sparse methods on a cluster of 100 cores.", "We consider Multiclass and Multilabel classification with extremely large number of classes, of which only few are labeled to each instance. In such setting, standard methods that have training, prediction cost linear to the number of classes become intractable. State-of-the-art methods thus aim to reduce the complexity by exploiting correlation between labels under assumption that the similarity between labels can be captured by structures such as low-rank matrix or balanced tree. 
However, as the diversity of labels increases in the feature space, structural assumption can be easily violated, which leads to degrade in the testing performance. In this work, we show that a margin-maximizing loss with l1 penalty, in case of Extreme Classification, yields extremely sparse solution both in primal and in dual without sacrificing the expressive power of predictor. We thus propose a Fully-Corrective Block-Coordinate Frank-Wolfe (FC-BCFW) algorithm that exploits both primal and dual sparsity to achieve a complexity sublinear to the number of primal and dual variables. A bi-stochastic search method is proposed to further improve the efficiency. In our experiments on both Multiclass and Multilabel problems, the proposed method achieves significant higher accuracy than existing approaches of Extreme Classification with very competitive training and prediction time." ] }
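The l1-induced sparsity that PD-Sparse relies on can be illustrated by the classic soft-thresholding (proximal) step for the l1 penalty; this is a generic sketch of why the penalty zeroes out weights, not the FC-BCFW algorithm itself.

```python
def soft_threshold(w, lam):
    """Proximal step for the l1 penalty: shrink each weight toward
    zero and drop those within lam of it, yielding a sparse model."""
    return [0.0 if abs(x) <= lam else x - lam if x > 0 else x + lam
            for x in w]

dense = [1.5, -0.2, 0.1, -2.5, 0.0]
print(soft_threshold(dense, 0.5))  # [1.0, 0.0, 0.0, -2.0, 0.0]
```

Every coefficient whose magnitude falls below the regularization strength is set exactly to zero, which is what makes a per-label classifier cheap to store and evaluate at extreme label counts.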
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is the task of tagging each given text with the most relevant subset of labels from an extremely large label set. This task arises in many applications, such as product categorization, web page tagging, and news annotation. Many methods have been proposed for XMTC, but most existing methods use a traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses a convolutional neural network (CNN) with dynamic pooling to process the text, going beyond BOW-based approaches but failing to capture 1) long-distance dependencies among words and 2) the different levels of importance a word has for each label. We propose a new deep learning-based method, AttentionXML, which uses a bidirectional long short-term memory (LSTM) network and a multi-label attention mechanism to solve the first and second problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except in a couple of cases. In addition, a consensus ensemble of AttentionXML with the second-best method, Parabel, further improved performance over all five benchmark datasets.
DiSMEC @cite_17 is another state-of-the-art 1-vs-All method, based on distributed computing, which learns a linear classifier for each label. DiSMEC uses a double layer of parallelization to fully exploit computing resources (400 cores), achieving a significant speed-up in training and prediction. DiSMEC also prunes spurious weight coefficients (those close to zero), which makes the model thousands of times smaller and reduces the required computational resources well below those of other state-of-the-art methods.
{ "cite_N": [ "@cite_17" ], "mid": [ "2520348554" ], "abstract": [ "Extreme multi-label classification refers to supervised multi-label learning involving hundreds of thousands or even millions of labels. Datasets in extreme classification exhibit fit to power-law distribution, i.e. a large fraction of labels have very few positive instances in the data distribution. Most state-of-the-art approaches for extreme multi-label classification attempt to capture correlation among labels by embedding the label matrix to a low-dimensional linear sub-space. However, in the presence of power-law distributed extremely large and diverse label spaces, structural assumptions such as low rank can be easily violated. In this work, we present DiSMEC, which is a large-scale distributed framework for learning one-versus-rest linear classifiers coupled with explicit capacity control to control model size. Unlike most state-of-the-art methods, DiSMEC does not make any low rank assumptions on the label matrix. Using double layer of parallelization, DiSMEC can learn classifiers for datasets consisting hundreds of thousands labels within few hours. The explicit capacity control mechanism filters out spurious parameters which keep the model compact in size, without losing prediction accuracy. We conduct extensive empirical evaluation on publicly available real-world datasets consisting upto 670,000 labels. We compare DiSMEC with recent state-of-the-art approaches, including - SLEEC which is a leading approach for learning sparse local embeddings, and FastXML which is a tree-based approach optimizing ranking based loss function. On some of the datasets, DiSMEC can significantly boost prediction accuracies - 10 better compared to SLECC and 15 better compared to FastXML, in absolute terms." ] }
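DiSMEC's pruning of near-zero coefficients can be sketched as follows; the threshold and the sparse `{feature_index: weight}` storage format are illustrative assumptions, not the paper's exact scheme.

```python
def prune_weights(weight_rows, eps=1e-2):
    """Drop coefficients with magnitude below eps and store each
    label's linear classifier as a sparse dict of surviving weights."""
    return [{j: w for j, w in enumerate(row) if abs(w) >= eps}
            for row in weight_rows]

# Two toy per-label weight vectors over 4 features.
rows = [[0.4, 0.001, 0.0, -0.3], [0.0002, 0.0, 0.7, -0.004]]
sparse = prune_weights(rows)
print(sparse)                          # [{0: 0.4, 3: -0.3}, {2: 0.7}]
print(sum(map(len, sparse)), "of 8 weights kept")
```

Because each label's classifier keeps only a handful of non-negligible weights, the full one-vs-all model shrinks dramatically without changing which labels score highest.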
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is the task of tagging each given text with the most relevant subset of labels from an extremely large label set. This task arises in many applications, such as product categorization, web page tagging, and news annotation. Many methods have been proposed for XMTC, but most existing methods use a traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses a convolutional neural network (CNN) with dynamic pooling to process the text, going beyond BOW-based approaches but failing to capture 1) long-distance dependencies among words and 2) the different levels of importance a word has for each label. We propose a new deep learning-based method, AttentionXML, which uses a bidirectional long short-term memory (LSTM) network and a multi-label attention mechanism to solve the first and second problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except in a couple of cases. In addition, a consensus ensemble of AttentionXML with the second-best method, Parabel, further improved performance over all five benchmark datasets.
The main difference among embedding-based methods is the design of the compression function @math and decompression function @math . For example, the most representative method, SLEEC @cite_18 , learns embedding vectors @math that capture non-linear label correlations by preserving the pairwise distances between label vectors @math and @math , i.e., @math if @math is among the @math nearest neighbors of @math . Regressors @math are then trained to predict the embedded labels @math , and a @math -nearest neighbor (KNN) classifier is used for prediction. Because KNN has high computational complexity, SLEEC clusters the training instances in the embedding space; given a test instance, only the cluster into which the instance falls is used for prediction.
{ "cite_N": [ "@cite_18" ], "mid": [ "2437817353" ], "abstract": [ "We consider the problem of (macro) F-measure maximization in the context of extreme multilabel classification (XMLC), i.e., multi-label classification with extremely large label spaces. We investigate several approaches based on recent results on the maximization of complex performance measures in binary classification. According to these results, the F-measure can be maximized by properly thresholding conditional class probability estimates. We show that a naive adaptation of this approach can be very costly for XMLC and propose to solve the problem by classifiers that efficiently deliver sparse probability estimates (SPEs), that is, probability estimates restricted to the most probable labels. Empirical results provide evidence for the strong practical performance of this approach." ] }
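SLEEC's prediction step (k nearest neighbors in the learned embedding space, label scores aggregated from the neighbors' label vectors) can be sketched in NumPy; the embeddings and label vectors below are toy values, and the learned regressors/clustering are omitted.

```python
import numpy as np

def knn_predict(z, train_embeddings, train_labels, k=2):
    """SLEEC-style prediction sketch: find the k nearest training
    points in the embedding space and sum their label vectors."""
    dists = np.linalg.norm(train_embeddings - z, axis=1)
    nearest = np.argsort(dists)[:k]
    return train_labels[nearest].sum(axis=0)

emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])   # 3 training points
labels = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1]])   # 3 labels each
scores = knn_predict(np.array([0.05, 0.0]), emb, labels)
print(scores)  # [2 1 0] -> label 0 ranked first
```

The clustering trick described above would simply restrict `train_embeddings` to the cluster containing the test point before this KNN step.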
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is the task of tagging each given text with the most relevant subset of labels from an extremely large label set. This task arises in many applications, such as product categorization, web page tagging, and news annotation. Many methods have been proposed for XMTC, but most existing methods use a traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses a convolutional neural network (CNN) with dynamic pooling to process the text, going beyond BOW-based approaches but failing to capture 1) long-distance dependencies among words and 2) the different levels of importance a word has for each label. We propose a new deep learning-based method, AttentionXML, which uses a bidirectional long short-term memory (LSTM) network and a multi-label attention mechanism to solve the first and second problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except in a couple of cases. In addition, a consensus ensemble of AttentionXML with the second-best method, Parabel, further improved performance over all five benchmark datasets.
AnnexML @cite_19 is an extension of SLEEC that addresses three of its problems: 1) clustering without using label information; 2) ignoring distance values at prediction time (since plain KNN is used); and 3) slow prediction. AnnexML builds a KNN graph (KNNG) of the label vectors in the embedding space to address these problems, improving both accuracy and efficiency.
{ "cite_N": [ "@cite_19" ], "mid": [ "2744136723" ], "abstract": [ "Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called \"AnnexML\". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have larger a label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, which is a state-of-the-art embedding-based method." ] }
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is the task of tagging each given text with the most relevant subset of labels from an extremely large label set. This task arises in many applications, such as product categorization, web page tagging, and news annotation. Many methods have been proposed for XMTC, but most existing methods use a traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses a convolutional neural network (CNN) with dynamic pooling to process the text, going beyond BOW-based approaches but failing to capture 1) long-distance dependencies among words and 2) the different levels of importance a word has for each label. We propose a new deep learning-based method, AttentionXML, which uses a bidirectional long short-term memory (LSTM) network and a multi-label attention mechanism to solve the first and second problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except in a couple of cases. In addition, a consensus ensemble of AttentionXML with the second-best method, Parabel, further improved performance over all five benchmark datasets.
The most representative tree-based method, FastXML @cite_21 , learns a hyperplane to split instances rather than selecting a single feature; more specifically, FastXML optimizes an nDCG-based ranking loss function at each node. PfastreXML @cite_12 extends FastXML, keeping the same architecture but replacing the nDCG objective with a propensity-scored one. Thanks to this objective function, PfastreXML makes more accurate tail-label predictions than FastXML.
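The nDCG objective at the heart of FastXML, and PfastreXML's propensity-scored variant of it, can be sketched as follows. This is a minimal illustration with binary gains; the function and variable names are ours, and the propensity values would in practice be estimated from label frequencies as described in the PfastreXML paper.

```python
import math

def dcg_at_k(ranking, relevant, k):
    """DCG@k with binary gains: a relevant label at rank i contributes 1/log2(i+2)."""
    return sum(1.0 / math.log2(i + 2)
               for i, lab in enumerate(ranking[:k]) if lab in relevant)

def ndcg_at_k(ranking, relevant, k):
    """nDCG@k: DCG normalised by the ideal DCG over min(k, |relevant|) hits."""
    ideal = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(relevant))))
    return dcg_at_k(ranking, relevant, k) / ideal if ideal > 0 else 0.0

def ps_dcg_at_k(ranking, relevant, propensity, k):
    """Propensity-scored DCG (the PfastreXML idea): hits on rare tail labels,
    whose propensity of being observed is small, are up-weighted by 1/propensity."""
    return sum(1.0 / (propensity[lab] * math.log2(i + 2))
               for i, lab in enumerate(ranking[:k]) if lab in relevant)
```

FastXML optimizes a loss of this nDCG form over the label rankings induced at each tree node, while PfastreXML swaps in the propensity-scored version.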
{ "cite_N": [ "@cite_21", "@cite_12" ], "mid": [ "2068074736", "2362855512" ], "abstract": [ "The objective in extreme multi-label classification is to learn a classifier that can automatically tag a data point with the most relevant subset of labels from a large label set. Extreme multi-label classification is an important research problem since not only does it enable the tackling of applications with many labels but it also allows the reformulation of ranking problems with certain advantages over existing formulations. Our objective, in this paper, is to develop an extreme multi-label classifier that is faster to train and more accurate at prediction than the state-of-the-art Multi-label Random Forest (MLRF) algorithm [2] and the Label Partitioning for Sub-linear Ranking (LPSR) algorithm [35]. MLRF and LPSR learn a hierarchy to deal with the large number of labels but optimize task independent measures, such as the Gini index or clustering error, in order to learn the hierarchy. Our proposed FastXML algorithm achieves significantly higher accuracies by directly optimizing an nDCG based ranking loss function. We also develop an alternating minimization algorithm for efficiently optimizing the proposed formulation. Experiments reveal that FastXML can be trained on problems with more than a million labels on a standard desktop in eight hours using a single core and in an hour using multiple cores.", "The choice of the loss function is critical in extreme multi-label learning where the objective is to annotate each data point with the most relevant subset of labels from an extremely large label set. Unfortunately, existing loss functions, such as the Hamming loss, are unsuitable for learning, model selection, hyperparameter tuning and performance evaluation. 
This paper addresses the issue by developing propensity scored losses which: (a) prioritize predicting the few relevant labels over the large number of irrelevant ones; (b) do not erroneously treat missing labels as irrelevant but instead provide unbiased estimates of the true loss function even when ground truth labels go missing under arbitrary probabilistic label noise models; and (c) promote the accurate prediction of infrequently occurring, hard to predict, but rewarding tail labels. Another contribution is the development of algorithms which efficiently scale to extremely large datasets with up to 9 million labels, 70 million points and 2 million dimensions and which give significant improvements over the state-of-the-art. This paper's results also apply to tagging, recommendation and ranking which are the motivating applications for extreme multi-label learning. They generalize previous attempts at deriving unbiased losses under the restrictive assumption that labels go missing uniformly at random from the ground truth. Furthermore, they provide a sound theoretical justification for popular label weighting heuristics used to recommend rare items. Finally, they demonstrate that the proposed contributions align with real world applications by achieving superior clickthrough rates on sponsored search advertising in Bing." ] }
1811.01727
2899313146
Extreme multi-label text classification (XMTC) is a task for tagging each given text with the most relevant multiple labels from an extremely large-scale label set. This task can be found in many applications, such as product categorization, web page tagging, news annotation and so on. Many methods have been proposed so far for solving XMTC, while most of the existing methods use traditional bag-of-words (BOW) representation, ignoring word context as well as deep semantic information. XML-CNN, a state-of-the-art deep learning-based method, uses convolutional neural network (CNN) with dynamic pooling to process the text, going beyond the BOW-based approaches but failing to capture 1) the long-distance dependency among words and 2) different levels of importance of a word for each label. We propose a new deep learning-based method, AttentionXML, which uses bidirectional long short-term memory (LSTM) and a multi-label attention mechanism for solving the above 1st and 2nd problems, respectively. We empirically compared AttentionXML with six other state-of-the-art methods over five benchmark datasets. AttentionXML outperformed all competing methods under all experimental settings except only a couple of cases. In addition, a consensus ensemble of AttentionXML with the second best method, Parabel, could further improve the performance over all five benchmark datasets.
Another state-of-the-art tree-based method is Parabel @cite_10 , which learns balanced trees that partition labels rather than instances. It also incorporates the 1-vs-All idea, taking advantage of both tree-based and 1-vs-All methods. Parabel represents each label by the average feature vector of all its positive instances and recursively builds a balanced binary tree: all labels in an internal node are separated into two subsets of almost equal size. For each label in a leaf node, Parabel trains a binary classifier using only the positive instances of the labels in that leaf, which significantly reduces the size of the training data. During prediction, for a given instance, Parabel computes the probability that the relevant labels lie in the left or right subtree and descends the tree with a beam search. Parabel achieves almost the same prediction performance as the two state-of-the-art 1-vs-All methods, DiSMEC and PPDSparse, while remaining efficient even on a single-core machine.
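The label partitioning described above can be sketched roughly as follows. This is a simplification under stated assumptions: Parabel uses a balanced spherical 2-means to split labels and trains linear classifiers at internal and leaf nodes, whereas here a median split on a random projection stands in for the balanced clustering and the classifiers are omitted; all names are illustrative.

```python
import numpy as np

def label_reps(X, Y):
    """Each label is represented by the mean feature vector of its positive
    instances (rows of X whose column in the 0/1 label matrix Y is 1)."""
    reps = np.zeros((Y.shape[1], X.shape[1]))
    for l in range(Y.shape[1]):
        pos = Y[:, l] > 0
        if pos.any():
            reps[l] = X[pos].mean(axis=0)
    return reps

def build_tree(labels, reps, max_leaf=2, rng=None):
    """Recursively split the label set into two halves of (almost) equal size.
    A median split on a random projection keeps the tree balanced; Parabel
    itself uses a balanced spherical 2-means at this step."""
    rng = np.random.default_rng(0) if rng is None else rng
    if len(labels) <= max_leaf:
        return {"leaf": list(labels)}
    w = rng.normal(size=reps.shape[1])
    order = np.argsort(reps[labels] @ w)
    half = len(labels) // 2
    return {"left": build_tree([labels[i] for i in order[:half]], reps, max_leaf, rng),
            "right": build_tree([labels[i] for i in order[half:]], reps, max_leaf, rng)}

def leaves(tree):
    """All labels stored below a node, in left-to-right order."""
    if "leaf" in tree:
        return tree["leaf"]
    return leaves(tree["left"]) + leaves(tree["right"])
```

At prediction time Parabel would descend such a tree with a beam search, scoring each branch with the node's learned classifier and finally applying the per-label classifiers in the reached leaves.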
{ "cite_N": [ "@cite_10" ], "mid": [ "2788125153" ], "abstract": [ "This paper develops the Parabel algorithm for extreme multi-label learning where the objective is to learn classifiers that can annotate a data point with the most relevant subset of labels from an extremely large label set. The state-of-the-art DiSMEC and PPDSparse algorithms are the most accurate but take weeks for training and prediction as they learn and apply an independent linear classifier per label. Consequently, they do not scale to large datasets with millions of labels. Parabel addresses these limitations by: (a) cutting down the training time to a few hours on a single core of a standard desktop by learning a hierarchy of coarse to fine label classifiers, each trained on a small subset of datapoints and (b) by cutting down the prediction time to a few milliseconds per test point by leveraging the classifier hierarchy for logarithmic time prediction in number of labels. This allows Parabel to scale to tasks considered infeasible for DiSMEC and PPDSparse such as predicting the subset of search engine queries that might lead to a click on a given ad-landing page for dynamic search advertising. Experimental results demonstrated that Parabel could train in 80 hours on a proprietary dataset with 7 million labels which is beyond the scale of both DiSMEC and PPDSparse. Results on some of the largest publically available datasets revealed that Parabel could be 1,000x faster at training than both DiSMEC and PPDSparse, as well as 10,000x and 40x faster at prediction than DiSMEC and PPDSparse without any significant loss in prediction accuracy. Moreover, Parabel was also found to be much more accurate than other tree based extreme classifiers and could be more than 10x faster at training with a 10x smaller model. Finally, Parabel was demonstrated to significantly improve dynamic search advertising on Bing by more than doubling the ad coverage, as well as improving the click-through rate by 20 ." ] }
1811.01734
2951525874
For many text classification tasks, there is a major problem posed by the lack of labeled data in a target domain. Although classifiers for a target domain can be trained on labeled text data from a related source domain, the accuracy of such classifiers is usually lower in the cross-domain setting. Recently, string kernels have obtained state-of-the-art results in various text classification tasks such as native language identification or automatic essay scoring. Moreover, classifiers based on string kernels have been found to be robust to the distribution gap between different domains. In this paper, we formally describe an algorithm composed of two simple yet effective transductive learning approaches to further improve the results of string kernels in cross-domain settings. By adapting string kernels to the test set without using the ground-truth test labels, we report significantly better accuracy rates in cross-domain English polarity classification.
Transfer learning (or domain adaptation) aims at building effective classifiers for a target domain when the only available labeled training data belongs to a different (source) domain. Domain adaptation techniques can be roughly divided into graph-based methods @cite_2 @cite_17 @cite_3 @cite_6 , probabilistic models @cite_14 @cite_10 , knowledge-based models @cite_40 @cite_20 @cite_39 and joint optimization frameworks @cite_32 . The transfer learning methods from the literature show promising results in a variety of real-world applications, such as image classification @cite_32 , text classification @cite_36 @cite_34 @cite_14 , polarity classification @cite_2 @cite_17 @cite_10 @cite_8 @cite_6 and others @cite_15 .
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_36", "@cite_32", "@cite_6", "@cite_3", "@cite_39", "@cite_40", "@cite_2", "@cite_15", "@cite_34", "@cite_10", "@cite_20", "@cite_17" ], "mid": [ "", "2287612586", "2107008379", "2100664256", "", "2604699435", "", "", "2153353890", "2120354757", "1980862579", "", "", "2250889137" ], "abstract": [ "", "Domain Adaptation (DA) techniques aim at enabling machine learning methods learn effective classifiers for a \"target\" domain when the only available training data belongs to a different \"source\" domain. In this paper we present the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. DCI derives term representations in a vector space common to both domains where each dimension re ects its distributional correspondence to a pivot, i.e., to a highly predictive term that behaves similarly across domains. Term correspondence is quantified by means of a distributional correspondence function (DCF). We propose a number of efficient DCFs that are motivated by the distributional hypothesis, i.e., the hypothesis according to which terms with similar meaning tend to have similar distributions in text. Experiments show that DCI obtains better performance than current state-of-the-art techniques for cross-lingual and cross-domain sentiment classification. DCI also brings about a significantly reduced computational cost, and requires a smaller amount of human intervention. As a final contribution, we discuss a more challenging formulation of the domain adaptation problem, in which both the cross-domain and cross-lingual dimensions are tackled simultaneously.", "", "Domain transfer learning, which learns a target classifier using labeled data from a different distribution, has shown promising value in knowledge discovery yet still been a challenging problem. 
Most previous works designed adaptive classifiers by exploring two learning strategies independently: distribution adaptation and label propagation. In this paper, we propose a novel transfer learning framework, referred to as Adaptation Regularization based Transfer Learning (ARTL), to model them in a unified way based on the structural risk minimization principle and the regularization theory. Specifically, ARTL learns the adaptive classifier by simultaneously optimizing the structural risk functional, the joint distribution matching between domains, and the manifold consistency underlying marginal distribution. Based on the framework, we propose two novel methods using Regularized Least Squares (RLS) and Support Vector Machines (SVMs), respectively, and use the Representer theorem in reproducing kernel Hilbert space to derive corresponding solutions. Comprehensive experiments verify that ARTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.", "", "", "", "", "Sentiment classification aims to automatically predict sentiment polarity (e.g., positive or negative) of users publishing sentiment data (e.g., reviews, blogs). Although traditional classification algorithms can be used to train sentiment classifiers from manually labeled text data, the labeling work can be time-consuming and expensive. Meanwhile, users often use some different words when they express sentiment in different domains. If we directly apply a classifier trained in one domain to other domains, the performance will be very low due to the differences between these domains. In this work, we develop a general solution to sentiment classification when we do not have any labels in a target domain but have some labeled data in a different domain, regarded as source domain. 
In this cross-domain sentiment classification setting, to bridge the gap between the domains, we propose a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, with the help of domain-independent words as a bridge. In this way, the clusters can be used to reduce the gap between domain-specific words of the two domains, which can be used to train sentiment classifiers in the target domain accurately. Compared to previous approaches, SFA can discover a robust representation for cross-domain data by fully exploiting the relationship between the domain-specific and domain-independent words via simultaneously co-clustering them in a common latent space. We perform extensive experiments on two real world datasets, and demonstrate that SFA significantly outperforms previous approaches to cross-domain sentiment classification.", "We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough “target” data to do slightly better than just using only “source” data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms stateof-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multidomain adaptation problem, where one has data from a variety of different domains.", "In cross-lingual text classification problems, it is costly and time-consuming to annotate documents for each individual language. To avoid the expensive re-labeling process, domain adaptation techniques can be applied to adapt a learning system trained in one language domain to another language domain. In this paper we develop a transductive subspace representation learning method to address domain adaptation for cross-lingual text classifications. The proposed approach is formulated as a nonnegative matrix factorization problem and solved using an iterative optimization procedure. 
Our empirical study on cross-lingual text classification tasks shows the proposed approach consistently outperforms a number of comparison methods.", "", "", "The lack of labeled data always poses challenges for tasks where machine learning is involved. Semi-supervised and cross-domain approaches represent the most common ways to overcome this difficulty. Graph-based algorithms have been widely studied during the last decade and have proved to be very effective at solving the data limitation problem. This paper explores one of the most popular stateof-the-art graph-based algorithms - label propagation, together with its modifications previously applied to sentiment classification. We study the impact of modified graph structure and parameter variations and compare the performance of graph-based algorithms in cross-domain and semi-supervised settings. The results provide a strategy for selecting the most favourable algorithm and learning paradigm on the basis of the available labeled and unlabeled data." ] }
1811.01918
2898668372
As collaborative coding environments make it easier to contribute to software projects, the number of developers involved in these projects keeps increasing. Consequently, it becomes more difficult for code reviewers to deal with harmful contributions. Collaborative environments like GitHub provide a rich source of data on developers' contribution, which can be used to extract information on technical (e.g., developers' experience) and social (e.g., interactions among developers) factors related to developers. Recent studies analyzed the influence of these factors on different activities of the software development. However, there is still little knowledge on the relation between these factors and the introduction of bugs. We present a broader study relating five technical and two social factors to the introduction of bugs. The results indicate that both technical and social factors present statistically significant relations with the introduction of bugs. Particularly, the developers' habit of not following technical contribution norms is associated with an increase in commit bugginess. Unexpectedly, the presence of tests in commits is related to an increase in commit bugginess. Finally, the developer's experience presents a contradictory relation with the introduction of bugs. However, the analysis of both code complexity and developer's experience may explain this contradiction.
Rahman & Devanbu analyzed four open-source projects and found that high levels of ownership are associated with a lower bug-introduction rate. Moreover, the authors found that specialized experience is consistently associated with buggy code, while general experience is not. Similar findings on the ownership factor were presented in the work of Bird @cite_21 . Thongtanunam @cite_7 showed that there is a relationship between ownership and code review, and that the proportion of reviewers without expertise shares a strong relation with commit bugginess. Tufano @cite_28 presented an empirical study on developer-related factors, whose results show that commit coherence, developer experience, and past interfering changes are associated with commit bugginess.
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_7" ], "mid": [ "", "2145574830", "2374812233" ], "abstract": [ "", "Ownership is a key aspect of large-scale software development. We examine the relationship between different ownership measures and software failures in two large software projects: Windows Vista and Windows 7. We find that in all cases, measures of ownership such as the number of low-expertise developers, and the proportion of ownership for the top owner have a relationship with both pre-release faults and post-release failures. We also empirically identify reasons that low-expertise developers make changes to components and show that the removal of low-expertise contributions dramatically decreases the performance of contribution based defect prediction. Finally we provide recommendations for source code change policies and utilization of resources such as code inspections based on our results.", "Code ownership establishes a chain of responsibility for modules in large software systems. Although prior work uncovers a link between code ownership heuristics and software quality, these heuristics rely solely on the authorship of code changes. In addition to authoring code changes, developers also make important contributions to a module by reviewing code changes. Indeed, recent work shows that reviewers are highly active in modern code review processes, often suggesting alternative solutions or providing updates to the code changes. In this paper, we complement traditional code ownership heuristics using code review activity. 
Through a case study of six releases of the large Qt and OpenStack systems, we find that: (1) 67 --86 of developers did not author any code changes for a module, but still actively contributed by reviewing 21 --39 of the code changes, (2) code ownership heuristics that are aware of reviewing activity share a relationship with software quality, and (3) the proportion of reviewers without expertise shares a strong, increasing relationship with the likelihood of having post-release defects. Our results suggest that reviewing activity captures an important aspect of code ownership, and should be included in approximations of it in future studies." ] }
1811.01918
2898668372
As collaborative coding environments make it easier to contribute to software projects, the number of developers involved in these projects keeps increasing. Consequently, it becomes more difficult for code reviewers to deal with harmful contributions. Collaborative environments like GitHub provide a rich source of data on developers' contribution, which can be used to extract information on technical (e.g., developers' experience) and social (e.g., interactions among developers) factors related to developers. Recent studies analyzed the influence of these factors on different activities of the software development. However, there is still little knowledge on the relation between these factors and the introduction of bugs. We present a broader study relating five technical and two social factors to the introduction of bugs. The results indicate that both technical and social factors present statistically significant relations with the introduction of bugs. Particularly, the developers' habit of not following technical contribution norms is associated with an increase in commit bugginess. Unexpectedly, the presence of tests in commits is related to an increase in commit bugginess. Finally, the developer's experience presents a contradictory relation with the introduction of bugs. However, the analysis of both code complexity and developer's experience may explain this contradiction.
Mockus @cite_19 investigated organizational factors (e.g., size of the organization, time between releases) relating to the presence of defects in the software, finding that recent departures from an organization and distributed development are associated with commit bugginess. Bernardi @cite_8 studied the influence of developer communication on commit bugginess, finding that developers who introduce bugs have a higher social importance and communicate less among themselves.
{ "cite_N": [ "@cite_19", "@cite_8" ], "mid": [ "2093125667", "2150800057" ], "abstract": [ "The key premise of an organization is to allow more efficient production, including production of high quality software. To achieve that, an organization defines roles and reporting relationships. Therefore, changes in organization's structure are likely to affect product's quality. We propose and investigate a relationship between developer-centric measures of organizational change and the probability of customer-reported defects in the context of a large software project. We find that the proximity to an organizational change is significantly associated with reductions in software quality. We also replicate results of several prior studies of software quality supporting findings that code, change, and developer characteristics affect fault-proneness. In contrast to prior studies we find that distributed development decreases quality. Furthermore, recent departures from an organization were associated with increased probability of customer-reported defects, thus demonstrating that in the observed context the organizational change reduces product quality.", "Developers working on related artifacts often communicate each other to coordinate their changes and to make others aware of their changes. When such a communication does not occur, this could create misunderstanding and cause the introduction of bugs. This paper investigates how the level of communication between committers relates to their proneness to introduce faults. This is done by identifying committers likely responsible of bug-introducing changes, and comparing-through social network measures-characteristics of their communication with the characteristics of other committers. 
We report results from a study conducted on bugs from Eclipse and Mozilla, indicating that bug-introducing committers have a higher social importance than other committers, although the communication between themselves is significantly lower than for others." ] }
1811.01918
2898668372
As collaborative coding environments make it easier to contribute to software projects, the number of developers involved in these projects keeps increasing. Consequently, it becomes more difficult for code reviewers to deal with harmful contributions. Collaborative environments like GitHub provide a rich source of data on developers' contribution, which can be used to extract information on technical (e.g., developers' experience) and social (e.g., interactions among developers) factors related to developers. Recent studies analyzed the influence of these factors on different activities of the software development. However, there is still little knowledge on the relation between these factors and the introduction of bugs. We present a broader study relating five technical and two social factors to the introduction of bugs. The results indicate that both technical and social factors present statistically significant relations with the introduction of bugs. Particularly, the developers' habit of not following technical contribution norms is associated with an increase in commit bugginess. Unexpectedly, the presence of tests in commits is related to an increase in commit bugginess. Finally, the developer's experience presents a contradictory relation with the introduction of bugs. However, the analysis of both code complexity and developer's experience may explain this contradiction.
Those studies @cite_19 @cite_6 @cite_8 @cite_21 @cite_7 @cite_28 @cite_5 evaluated the relation between the factors discussed above and commit bugginess in a very limited way, considering only proprietary projects @cite_19 @cite_21 , projects that do not adopt modern code review practices @cite_28 @cite_35 , or a reduced number of factors and of characteristics to represent them @cite_21 @cite_7 @cite_6 . Our study differs from prior work by providing a more extensive and complete analysis of the relation between technical and social factors and the introduction of bugs.
{ "cite_N": [ "@cite_35", "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_6", "@cite_19", "@cite_5" ], "mid": [ "2147386665", "2374812233", "2150800057", "", "2145574830", "1984769753", "2093125667", "" ], "abstract": [ "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.", "Code ownership establishes a chain of responsibility for modules in large software systems. 
Although prior work uncovers a link between code ownership heuristics and software quality, these heuristics rely solely on the authorship of code changes. In addition to authoring code changes, developers also make important contributions to a module by reviewing code changes. Indeed, recent work shows that reviewers are highly active in modern code review processes, often suggesting alternative solutions or providing updates to the code changes. In this paper, we complement traditional code ownership heuristics using code review activity. Through a case study of six releases of the large Qt and OpenStack systems, we find that: (1) 67 --86 of developers did not author any code changes for a module, but still actively contributed by reviewing 21 --39 of the code changes, (2) code ownership heuristics that are aware of reviewing activity share a relationship with software quality, and (3) the proportion of reviewers without expertise shares a strong, increasing relationship with the likelihood of having post-release defects. Our results suggest that reviewing activity captures an important aspect of code ownership, and should be included in approximations of it in future studies.", "Developers working on related artifacts often communicate each other to coordinate their changes and to make others aware of their changes. When such a communication does not occur, this could create misunderstanding and cause the introduction of bugs. This paper investigates how the level of communication between committers relates to their proneness to introduce faults. This is done by identifying committers likely responsible of bug-introducing changes, and comparing-through social network measures-characteristics of their communication with the characteristics of other committers. 
We report results from a study conducted on bugs from Eclipse and Mozilla, indicating that bug-introducing committers have a higher social importance than other committers, although the communication between themselves is significantly lower than for others.", "", "Ownership is a key aspect of large-scale software development. We examine the relationship between different ownership measures and software failures in two large software projects: Windows Vista and Windows 7. We find that in all cases, measures of ownership such as the number of low-expertise developers, and the proportion of ownership for the top owner have a relationship with both pre-release faults and post-release failures. We also empirically identify reasons that low-expertise developers make changes to components and show that the removal of low-expertise contributions dramatically decreases the performance of contribution based defect prediction. Finally we provide recommendations for source code change policies and utilization of resources such as code inspections based on our results.", "Modern software is often developed over many years with hundreds of thousands of commits. Commit metadata is a rich source of social characteristics, including the commit's time of day and the experience and commit frequency of its author. The \"bugginess\" of a commit is also a critical property of that commit. In this paper, we investigate the correlation between a commit's social characteristics and its \"bugginess\"; such results can be very useful for software developers and software engineering researchers. For instance, developers or code reviewers might be well-advised to thoroughly verify commits that are more likely to be buggy. In this paper, we study the correlation between a commit's bugginess and the time of day of the commit, the day of week of the commit, and the experience and commit frequency of the commit authors. 
We survey two widely-used open source projects: the Linux kernel and PostgreSQL. Our main findings include: (1) commits submitted between midnight and 4 AM (referred to as late-night commits) are significantly buggier and commits between 7 AM and noon are less buggy, implying that developers may want to double-check their own late-night commits; (2) daily-committing developers produce less-buggy commits, indicating that we may want to promote the practice of daily-committing developers reviewing other developers' commits; and (3) the bugginess of commits versus day-of-week varies for different software projects.", "The key premise of an organization is to allow more efficient production, including production of high quality software. To achieve that, an organization defines roles and reporting relationships. Therefore, changes in organization's structure are likely to affect product's quality. We propose and investigate a relationship between developer-centric measures of organizational change and the probability of customer-reported defects in the context of a large software project. We find that the proximity to an organizational change is significantly associated with reductions in software quality. We also replicate results of several prior studies of software quality supporting findings that code, change, and developer characteristics affect fault-proneness. In contrast to prior studies we find that distributed development decreases quality. Furthermore, recent departures from an organization were associated with increased probability of customer-reported defects, thus demonstrating that in the observed context the organizational change reduces product quality.", "" ] }
1811.01557
2899474891
Since its introduction, unsupervised representation learning has attracted a lot of attention from the research community, as it is demonstrated to be highly effective and easy-to-apply in tasks such as dimension reduction, clustering, visualization, information retrieval, and semi-supervised learning. In this work, we propose a novel unsupervised representation learning framework called neighbor-encoder, in which domain knowledge can be easily incorporated into the learning process without modifying the general encoder-decoder architecture of the classic autoencoder. In contrast to autoencoder, which reconstructs the input data itself, neighbor-encoder reconstructs the input data's neighbors. As the proposed representation learning problem is essentially a neighbor reconstruction problem, domain knowledge can be easily incorporated in the form of an appropriate definition of similarity between objects. Based on that observation, our framework can leverage any off-the-shelf similarity search algorithms or side information to find the neighbor of an input object. Applications of other algorithms (e.g., association rule mining) in our framework are also possible, given that the appropriate definition of neighbor can vary in different contexts. We have demonstrated the effectiveness of our framework in many diverse domains, including images, text, and time series, and for various data mining tasks including classification, clustering, and visualization. Experimental results show that neighbor-encoder not only outperforms autoencoder in most of the scenarios we consider, but also achieves the state-of-the-art performance on text document clustering.
is usually achieved by optimizing either domain-specific objectives or general unsupervised objectives. For example, in the domain of computer vision and music processing, the unsupervised representation learning problem is formulated as a supervised learning problem with surrogate labels, generated by exploiting the temporal coherence in videos and music @cite_10 @cite_12 @cite_30 @cite_15 @cite_31 . In the case of natural language processing, word embedding can be achieved by optimizing an objective function that ``pushes'' words occurring in a similar context (i.e., surrounded by similar words) closer in the embedding space @cite_23 . Alternatively, general unsupervised objectives are also useful for unsupervised representation learning. For example, minimizing the self-reconstruction error is used in autoencoder @cite_16 @cite_19 @cite_32 , while optimizing the @math -means objective is shown effective in Coates and Ng (2012) and Yang et al. (2017). Other objectives, such as self-organizing map criteria @cite_27 @cite_0 and adversarial training @cite_38 @cite_8 @cite_17 @cite_9 , are also effective objectives for unsupervised representation learning.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_15", "@cite_8", "@cite_9", "@cite_32", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_31", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "219040644", "2099471712", "2575671312", "2412320034", "2202109488", "2145094598", "2607510315", "2139427956", "2725470024", "2950133940", "2774294118", "2110798204", "2951590555", "2198618282", "2173520492" ], "abstract": [ "Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52% mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4% . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. 
This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "This paper presents a novel yet intuitive approach to unsupervised feature learning. Inspired by the human visual system, we explore whether low-level motion-based grouping cues can be used to learn an effective visual representation. Specifically, we use unsupervised motion-based segmentation on videos to obtain segments, which we use as pseudo ground truth to train a convolutional network to segment objects from a single frame. Given the extensive evidence that motion plays a key role in the development of the human visual system, we hope that this straightforward approach to unsupervised learning will be more effective than cleverly designed pretext tasks studied in the literature. Indeed, our extensive experiments show that this is the case. When used for transfer learning on object detection, our representation significantly outperforms previous unsupervised approaches across multiple settings, especially when training data for the target task is scarce.", "The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. 
Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their existing form, GANs have no means of learning the inverse mapping -- projecting data back into the latent space. We propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.", "We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.", "We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. 
Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.", "Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts of supervision. This paper introduces a generic framework to train deep networks, end-to-end, with no supervision. We propose to fix a set of target representations, called Noise As Targets (NAT), and to constrain the deep features to align to them. This domain agnostic approach avoids the standard unsupervised learning issues of trivial solutions and collapsing of features. Thanks to a stochastic batch reassignment strategy and a separable square loss function, it scales to millions of images. The proposed approach produces representations that perform on par with state-of-the-art unsupervised methods on ImageNet and Pascal VOC.", "We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. 
Training a supervised classifier on these features yields 0.64% error on MNIST, and 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.", "", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Generating music medleys is about finding an optimal permutation of a given set of music clips. Toward this goal, we propose a self-supervised learning task, called the music puzzle game, to train neural network models to learn the sequential patterns in music. In essence, such a game requires machines to correctly sort a few multisecond music fragments. In the training stage, we learn the model by sampling multiple non-overlapping fragment pairs from the same songs and seeking to predict whether a given pair is consecutive and is in the correct chronological order. 
For testing, we design a number of puzzle games with different difficulty levels, the most difficult one being music medley, which requires sorting fragments from different songs. On the basis of state-of-the-art Siamese convolutional network, we propose an improved architecture that learns to embed frame-level similarity scores computed from the input fragment pairs to a common space, where fragment pairs in the correct order can be more easily identified. Our result shows that the resulting model, dubbed as the similarity embedding network (SEN), performs better than competing models across different games, including music jigsaw puzzle, music sequencing, and music medley. Example results can be found at our project website, this https URL", "Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. 
Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.", "The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.", "Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. 
With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.", "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations." ] }
1811.01713
2950634454
While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings. Recent work has demonstrated that a distance measure between documents called the Word Mover's Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover's Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.
Two broad classes of unsupervised and supervised methods have been proposed to generate sentence and document representations. The former primarily generate general purpose and domain independent embeddings of word sequences @cite_47 @cite_18 @cite_31 ; many unsupervised training research efforts have focused on either training an auto-encoder to learn the latent structure of a sentence @cite_44 , a paragraph, or document @cite_58 ; or generalizing Word2Vec models to predict words in a paragraph @cite_20 @cite_2 or in neighboring sentences @cite_18 . However, some important information could be lost in the resulting document representation without considering the word order. Our proposed WME overcomes this difficulty by considering the alignments between each pair of words.
{ "cite_N": [ "@cite_47", "@cite_18", "@cite_44", "@cite_2", "@cite_31", "@cite_58", "@cite_20" ], "mid": [ "2103305545", "", "2251939518", "", "2752172973", "2950752421", "2949547296" ], "abstract": [ "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.", "", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4% . 
The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7% , an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "", "The success of neural network methods for computing word embeddings has motivated methods for generating semantic embeddings of longer pieces of text, such as sentences and paragraphs. Surprisingly, (ICLR'16) showed that such complicated methods are outperformed, especially in out-of-domain (transfer learning) settings, by simpler methods involving mild retraining of word embeddings and basic linear regression. The method of requires retraining with a substantial labeled dataset such as Paraphrase Database (, 2013). The current paper goes further, showing that the following completely unsupervised sentence embedding is a formidable baseline: Use word embeddings computed using one of the popular methods on unlabeled corpus like Wikipedia, represent the sentence by a weighted average of the word vectors, and then modify them a bit using PCA/SVD. This weighting improves performance by about 10% to 30% in textual similarity tasks, and beats sophisticated supervised methods including RNN's and LSTM's. It even improves 's embeddings. This simple method should be used as the baseline to beat in future, especially when labeled training data is scarce or nonexistent. 
The paper also gives a theoretical explanation of the success of the above unsupervised method using a latent variable generative model for sentences, which is a simple extension of the model in (TACL'16) with new \"smoothing\" terms that allow for words occurring out of context, as well as high probabilities for words like and, not in all contexts.", "Natural language generation of coherent long texts like paragraphs or longer documents is a challenging problem for recurrent networks models. In this paper, we explore an important step toward this generation task: training an LSTM (Long-short term memory) auto-encoder to preserve and reconstruct multi-sentence paragraphs. We introduce an LSTM model that hierarchically builds an embedding for a paragraph from embeddings for sentences and words, then decodes this embedding to reconstruct the original paragraph. We evaluate the reconstructed paragraph using standard metrics like ROUGE and Entity Grid, showing that neural models are able to encode texts in a way that preserve syntactic, semantic, and discourse coherence. While only a first step toward generating coherent text units from neural models, our work has the potential to significantly impact natural language generation and summarization. Code for the three models described in this paper can be found at www.stanford.edu/jiweil .", "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. 
Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks." ] }
1811.01713
2950634454
While the celebrated Word2Vec technique yields semantically rich representations for individual words, there has been relatively less success in extending it to generate unsupervised sentence or document embeddings. Recent work has demonstrated that a distance measure between documents called the Word Mover's Distance (WMD) that aligns semantically similar words, yields unprecedented KNN classification accuracy. However, WMD is expensive to compute, and it is hard to extend its use beyond a KNN classifier. In this paper, we propose the Word Mover's Embedding (WME), a novel approach to building an unsupervised document (sentence) embedding from pre-trained word embeddings. In our experiments on 9 benchmark text classification datasets and 22 textual similarity tasks, the proposed technique consistently matches or outperforms state-of-the-art techniques, with significantly higher accuracy on problems of short length.
The other line of work has focused on developing compositional supervised models to create a vector representation of sentences @cite_14 @cite_53 . Most of this work proposed composition using recursive neural networks based on parse structure @cite_56 @cite_44 , deep averaging networks over bag-of-words models @cite_36 @cite_51 , convolutional neural networks @cite_15 @cite_16 @cite_35 , and recurrent neural networks using long short-term memory @cite_10 @cite_59 . However, these methods are less well suited for domain adaptation settings.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_36", "@cite_10", "@cite_53", "@cite_56", "@cite_44", "@cite_59", "@cite_15", "@cite_16", "@cite_51" ], "mid": [ "2796167946", "1938755728", "", "2104246439", "2950322090", "1889268436", "2251939518", "2251189452", "2949541494", "2120615054", "2175723921" ], "abstract": [ "Celebrated and its fruitful variants are powerful models to achieve excellent performance on the tasks that map sequences to sequences. However, these are many machine learning tasks with inputs naturally represented in a form of graphs, which imposes significant challenges to existing Seq2Seq models for lossless conversion from its graph form to the sequence. In this work, we present a general end-to-end approach to map the input graph to a sequence of vectors, and then another attention-based LSTM to decode the target sequence from these vectors. Specifically, to address inevitable information loss for data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate the node and graph embeddings by a novel aggregation strategy to incorporate the edge direction information into the node embeddings. We also propose an attention based mechanism that aligns node embeddings and decoding sequence to better cope with large graphs. Experimental results on bAbI task, Shortest Path Task, and Natural Language Generation Task demonstrate that our model achieves the state-of-the-art performance and significantly outperforms other baselines. We also show that with the proposed aggregation strategy, our proposed model is able to quickly converge to good performance.", "We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. 
Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.", "", "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).", "Measuring similarity between texts is an important task for several applications. Available approaches to measure document similarity are inadequate for document pairs that have non-comparable lengths, such as a long document and its summary. This is because of the lexical, contextual and the abstraction gaps between a long document of rich details and its concise summary of abstract information. 
In this paper, we present a document matching approach to bridge this gap, by comparing the texts in a common space of hidden topics. We evaluate the matching algorithm on two matching tasks and find that it consistently and widely outperforms strong baselines. We also highlight the benefits of incorporating domain knowledge to text matching.", "Single-word vector space models have been very successful at learning lexical information. However, they cannot capture the compositional meaning of longer phrases, preventing them from a deeper understanding of language. We introduce a recursive neural network (RNN) model that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Our model assigns a vector and a matrix to every node in a parse tree: the vector captures the inherent meaning of the constituent, while the matrix captures how it changes the meaning of neighboring words or phrases. This matrix-vector RNN can learn the meaning of operators in propositional logic and natural language. The model obtains state of the art performance on three different experiments: predicting fine-grained sentiment distributions of adverb-adjective pairs; classifying sentiment labels of movie reviews and classifying semantic relationships such as cause-effect or topic-message between nouns using the syntactic path between them.", "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. 
When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.", "Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, it is still a challenging task to model long texts, such as sentences and documents. In this paper, we propose a multi-timescale long short-term memory (MT-LSTM) neural network to model long texts. MT-LSTM partitions the hidden states of the standard LSTM into several groups. Each group is activated at different time periods. Thus, MT-LSTM can model very long documents as well as short sentences. Experiments on four benchmark datasets show that our model outperforms the other neural models in text classification task.", "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "The ability to accurately represent sentences is central to language understanding. 
We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.", "We consider the problem of learning general-purpose, paraphrastic sentence embeddings based on supervision from the Paraphrase Database (, 2013). We compare six compositional architectures, evaluating them on annotated textual similarity datasets drawn both from the same distribution as the training data and from a wide range of other domains. We find that the most complex architectures, such as long short-term memory (LSTM) recurrent neural networks, perform best on the in-domain data. However, in out-of-domain scenarios, simple architectures such as word averaging vastly outperform LSTMs. Our simplest averaging model is even competitive with systems tuned for the particular tasks while also being extremely efficient and easy to use. In order to better understand how these architectures compare, we conduct further experiments on three supervised NLP tasks: sentence similarity, entailment, and sentiment classification. We again find that the word averaging models perform well for sentence similarity and entailment, outperforming LSTMs. 
However, on sentiment classification, we find that the LSTM performs very strongly, even recording new state-of-the-art performance on the Stanford Sentiment Treebank. We then demonstrate how to combine our pretrained sentence embeddings with these supervised tasks, using them both as a prior and as a black box feature extractor. This leads to performance rivaling the state of the art on the SICK similarity and entailment tasks. We release all of our resources to the research community with the hope that they can serve as the new baseline for further work on universal sentence embeddings." ] }
1811.01587
2899136019
In this paper we propose a realizable framework, TECU, which embeds task-specific strategies into the update schemes of coordinate descent for optimizing multivariate non-convex problems with coupled objective functions. On the one hand, TECU improves algorithmic efficiency by embedding productive numerical algorithms for optimizing univariate sub-problems with nice properties. On the other hand, it also increases the likelihood of obtaining desired results by embedding advanced techniques from realistic optimization tasks. Integrating numerical algorithms and advanced techniques together, TECU is proposed as a unified framework for solving a class of non-convex problems. Although the task-embedded strategies introduce inaccuracies into the sub-problem optimizations, we provide a realizable criterion to control the errors and, meanwhile, ensure robust performance with rigorous theoretical analyses. By embedding ADMM and a residual-type CNN, respectively, into our algorithmic framework, the experimental results verify both the efficiency and the effectiveness of embedding task-oriented strategies in coordinate descent for solving practical problems.
For solving general multivariate non-convex problems, the most classical algorithm adopting the CD scheme is the proximal alternating method (PAM) @cite_27 . However, it is of limited use for most coupled problems since it requires explicit solutions for every univariate sub-problem. To get around this limitation, PALM linearizes the coupled function in pursuit of explicit solutions @cite_22 . However, it requires computing exact Lipschitz constants during the iterations, which is sometimes time-consuming even when only estimating tight upper bounds @cite_29 @cite_5 . Moreover, improper upper bounds definitely slow down the convergence of PALM. These troubles in estimating Lipschitz constants also exist in CD variants such as BCU @cite_37 and iPALM @cite_31 . Beyond these defects, the updates of existing CD algorithms entirely lose sight of task specificities, i.e., they optimize every univariate sub-problem in the same scheme, which is less efficient in practice.
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_29", "@cite_27", "@cite_5", "@cite_31" ], "mid": [ "2963648037", "2027982384", "2415447328", "2129732816", "1968154520", "2549127690" ], "abstract": [ "Nonconvex optimization arises in many areas of computational science and engineering. However, most nonconvex optimization algorithms are only known to have local convergence or subsequence convergence properties. In this paper, we propose an algorithm for nonconvex optimization and establish its global convergence (of the whole sequence) to a critical point. In addition, we give its asymptotic convergence rate and numerically demonstrate its efficiency. In our algorithm, the variables of the underlying problem are either treated as one block or multiple disjoint blocks. It is assumed that each non-differentiable component of the objective function, or each constraint, applies only to one block of variables. The differentiable components of the objective function, however, can involve multiple blocks of variables together. Our algorithm updates one block of variables at a time by minimizing a certain prox-linear surrogate, along with an extrapolation to accelerate its convergence. The order of update can be either deterministically cyclic or randomly shuffled for each cycle. In fact, our convergence analysis only needs that each block be updated at least once in every fixed number of iterations. We show its global convergence (of the whole sequence) to a critical point under fairly loose conditions including, in particular, the Kurdyka–Łojasiewicz condition, which is satisfied by a broad class of nonconvex nonsmooth applications. These results, of course, remain valid when the underlying problem is convex. We apply our convergence results to the coordinate descent iteration for non-convex regularized linear regression, as well as a modified rank-one residue iteration for nonnegative matrix factorization. We show that both applications have global convergence. 
Numerically, we tested our algorithm on nonnegative matrix and tensor factorization problems, where random shuffling clearly improves the chance to avoid low-quality local solutions.", "We introduce a proximal alternating linearized minimization (PALM) algorithm for solving a broad class of nonconvex and nonsmooth minimization problems. Building on the powerful Kurdyka---?ojasiewicz property, we derive a self-contained convergence analysis framework and establish that each bounded sequence generated by PALM globally converges to a critical point. Our approach allows to analyze various classes of nonconvex-nonsmooth problems and related nonconvex proximal forward---backward algorithms with semi-algebraic problem's data, the later property being shared by many functions arising in a wide variety of fundamental applications. A by-product of our framework also shows that our results are new even in the convex setting. As an illustration of the results, we derive a new and simple globally convergent algorithm for solving the sparse nonnegative matrix factorization problem.", "In recent years, sparse coding has been widely used in many applications ranging from image processing to pattern recognition. Most existing sparse coding based applications require solving a class of challenging non-smooth and non-convex optimization problems. Despite the fact that many numerical methods have been developed for solving these problems, it remains an open problem to find a numerical method which is not only empirically fast, but also has mathematically guaranteed strong convergence. In this paper, we propose an alternating iteration scheme for solving such problems. A rigorous convergence analysis shows that the proposed method satisfies the global convergence property: the whole sequence of iterates is convergent and converges to a critical point. 
Besides the theoretical soundness, the practical benefit of the proposed method is validated in applications including image restoration and recognition. Experiments show that the proposed method achieves similar results with less computation when compared to widely used methods such as K-SVD.", "We study the convergence properties of an alternating proximal minimization algorithm for nonconvex structured functions of the type: L(x,y)=f(x)+Q(x,y)+g(y), where f and g are proper lower semicontinuous functions, defined on Euclidean spaces, and Q is a smooth function that couples the variables x and y. The algorithm can be viewed as a proximal regularization of the usual Gauss-Seidel method to minimize L. We work in a nonconvex setting, just assuming that the function L satisfies the Kurdyka-Łojasiewicz inequality. An entire section illustrates the relevancy of such an assumption by giving examples ranging from semialgebraic geometry to “metrically regular” problems. Our main result can be stated as follows: If L has the Kurdyka-Łojasiewicz property, then each bounded sequence generated by the algorithm converges to a critical point of L. This result is completed by the study of the convergence rate of the algorithm, which depends on the geometrical properties of the function L around its critical points. When specialized to @math and to f, g indicator functions, the algorithm is an alternating projection mehod (a variant of von Neumann's) that converges for a wide class of sets including semialgebraic and tame sets, transverse smooth manifolds or sets with “regular” intersection. To illustrate our results with concrete problems, we provide a convergent proximal reweighted l1 algorithm for compressive sensing and an application to rank reduction problems.", "This paper considers regularized block multiconvex optimization, where the feasible set and objective function are generally nonconvex but convex in each block of variables. 
It also accepts nonconvex blocks and requires these blocks to be updated by proximal minimization. We review some interesting applications and propose a generalized block coordinate descent method. Under certain conditions, we show that any limit point satisfies the Nash equilibrium conditions. Furthermore, we establish global convergence and estimate the asymptotic convergence rate of the method by assuming a property based on the Kurdyka--Łojasiewicz inequality. The proposed algorithms are tested on nonnegative matrix and tensor factorization, as well as matrix and tensor recovery from incomplete observations. The tests include synthetic data and hyperspectral data, as well as image sets from the CBCL and ORL databases. Compared to the existing state-of-the-art algorithms, the proposed algorithms demonstrate superior performance in ...", "In this paper we study nonconvex and nonsmooth optimization problems with semialgebraic data, where the variables vector is split into several blocks of variables. The problem consists of one smooth function of the entire variables vector and the sum of nonsmooth functions for each block separately. We analyze an inertial version of the proximal alternating linearized minimization algorithm and prove its global convergence to a critical point of the objective function at hand. We illustrate our theoretical findings by presenting numerical experiments on blind image deconvolution, on sparse nonnegative matrix factorization and on dictionary learning, which demonstrate the viability and effectiveness of the proposed method." ] }
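The related-work paragraph of this record contrasts PAM with PALM's linearized proximal updates and blockwise Lipschitz step sizes. A minimal sketch of one such alternating scheme on a toy coupled objective (the function `palm`, `soft_threshold`, and the test problem are illustrative assumptions, not code from the cited papers):

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1 (closed form, no inner solver needed)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def palm(A, B, lam, mu, iters=200, seed=0):
    """PALM-style loop for
        min_{x,y}  lam*||x||_1 + 0.5*||A x - B y||^2 + 0.5*mu*||y||^2.
    Each block takes a proximal step on the linearized coupling term,
    with step sizes from the blockwise Lipschitz constants of grad Q."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    y = rng.standard_normal(B.shape[1])
    Lx = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of grad_x Q
    Ly = np.linalg.norm(B.T @ B, 2)   # Lipschitz constant of grad_y Q
    for _ in range(iters):
        gx = A.T @ (A @ x - B @ y)            # grad_x Q(x, y)
        x = soft_threshold(x - gx / Lx, lam / Lx)
        gy = -B.T @ (A @ x - B @ y)           # grad_y Q(x, y), updated x
        v = y - gy / Ly
        y = v / (1.0 + mu / Ly)               # prox of (mu/2)||.||^2
    return x, y
```

With steps 1/Lx and 1/Ly, each block update cannot increase the objective, which is the descent property the cited convergence analyses build on; the paragraph's complaint is precisely that Lx and Ly can be expensive to compute or bound tightly.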
1811.01587
2899136019
In this paper we propose a realizable framework, TECU, which embeds task-specific strategies into the update schemes of coordinate descent for optimizing multivariate non-convex problems with coupled objective functions. On the one hand, TECU improves algorithmic efficiency by embedding productive numerical algorithms for optimizing univariate sub-problems with nice properties. On the other hand, it also increases the likelihood of obtaining desired results by embedding advanced techniques from realistic optimization tasks. Integrating numerical algorithms and advanced techniques together, TECU is proposed as a unified framework for solving a class of non-convex problems. Although the task-embedded strategies introduce inaccuracies into the sub-problem optimizations, we provide a realizable criterion to control the errors and, meanwhile, ensure robust performance with rigorous theoretical analyses. By embedding ADMM and a residual-type CNN, respectively, into our algorithmic framework, the experimental results verify both the efficiency and the effectiveness of embedding task-oriented strategies in coordinate descent for solving practical problems.
Unlike algorithms for general problems, it is common in real-world applications to embed numerical algorithms for optimizing the sub-problems @cite_36 @cite_30 @cite_4 @cite_32 . Such algorithms often make good use of the nice traits of the univariate sub-problems, so they usually possess high efficiency and superior performance. However, their specificity leads to less generality: the well-designed algorithms usually cannot be transferred to other models. Moreover, these specialized algorithms mostly have relatively weak convergence guarantees in theory, so their efficiency often lacks robustness.
{ "cite_N": [ "@cite_36", "@cite_32", "@cite_4", "@cite_30" ], "mid": [ "", "2612911136", "2523532944", "1980212291" ], "abstract": [ "", "In this paper, we propose to introduce intrinsic image decomposition priors into decomposition models for contrast enhancement. Since image decomposition is a highly ill-posed problem, we introduce constraints on both reflectance and illumination layers to yield a highly reliable solution. We regularize the reflectance layer to be piecewise constant by introducing a weighted @math norm constraint on neighboring pixels according to the color similarity, so that the decomposed reflectance would not be affected much by the illumination information. The illumination layer is regularized by a piecewise smoothness constraint. The proposed model is effectively solved by the Split Bregman algorithm. Then, by adjusting the illumination layer, we obtain the enhancement result. To avoid potential color artifacts introduced by illumination adjusting and reduce computing complexity, the proposed decomposition model is performed on the value channel in HSV space. Experiment results demonstrate that the proposed method performs well for a wide variety of images, and achieves better or comparable subjective and objective quality compared with the state-of-the-art methods.", "Images captured under water are usually degraded due to the effects of absorption and scattering. Degraded underwater images show some limitations when they are used for display and analysis. For example, underwater images with low contrast and color cast decrease the accuracy rate of underwater object detection and marine biology recognition. To overcome those limitations, a systematic underwater image enhancement method, which includes an underwater image dehazing algorithm and a contrast enhancement algorithm, is proposed. 
Built on a minimum information loss principle, an effective underwater image dehazing algorithm is proposed to restore the visibility, color, and natural appearance of underwater images. A simple yet effective contrast enhancement algorithm is proposed based on a kind of histogram distribution prior, which increases the contrast and brightness of underwater images. The proposed method can yield two versions of enhanced output. One version with relatively genuine color and natural appearance is suitable for display. The other version with high contrast and brightness can be used for extracting more valuable information and unveiling more details. Simulation experiment, qualitative and quantitative comparisons, as well as color accuracy and application tests are conducted to evaluate the performance of the proposed method. Extensive experiments demonstrate that the proposed method achieves better visual quality, more valuable information, and more accurate color restoration than several state-of-the-art methods, even for underwater images taken under several challenging scenes.", "This paper addresses extracting two layers from an image where one layer is smoother than the other. This problem arises most notably in intrinsic image decomposition and reflection interference removal. Layer decomposition from a single-image is inherently ill-posed and solutions require additional constraints to be enforced. We introduce a novel strategy that regularizes the gradients of the two layers such that one has a long tail distribution and the other a short tail distribution. While imposing the long tail distribution is a common practice, our introduction of the short tail distribution on the second layer is unique. We formulate our problem in a probabilistic framework and describe an optimization scheme to solve this regularization with only a few iterations. 
We apply our approach to the intrinsic image and reflection removal problems and demonstrate high quality layer separation on par with other techniques but being significantly faster than prevailing methods." ] }
1811.01669
2899500298
Pregel's vertex-centric model allows us to implement many interesting graph algorithms, where optimization plays an important role in making it practically useful. Although many optimizations have been developed for dealing with different performance issues, it is hard to compose them together to optimize complex algorithms, where we have to deal with multiple performance issues at the same time. In this paper, we propose a new approach to composing optimizations by making use of the channel interface, as a replacement for Pregel's message passing and aggregator mechanism, which can better structure the communication in Pregel algorithms. We demonstrate that it is convenient to optimize a Pregel program by simply using a proper channel from the channel library, or by composing channels to deal with multiple performance issues. We intensively evaluate the approach through many nontrivial examples. By adopting the channel interface, our system achieves an all-around performance gain for various graph algorithms. In particular, the composition of different optimizations makes the S-V algorithm 2.20x faster than the current best implementation.
Google's Pregel @cite_13 is the first specialized in-memory system for distributed graph processing. It adopts the Bulk-Synchronous Parallel (BSP) model @cite_4 with explicit messages to let users implement graph algorithms in a vertex-centric way. The core design of Pregel has been widely adopted by many open-source frameworks @cite_35 @cite_9 , and most of them inherit the monolithic message-passing interface, meaning that messages serving different purposes are mixed and indistinguishable to the system. As an attempt to optimize communication patterns, one system extends Pregel with additional interfaces (in particular, two dedicated messaging modes), but it is less flexible since the two modes cannot be composed and adding optimizations is inconvenient.
{ "cite_N": [ "@cite_35", "@cite_9", "@cite_13", "@cite_4" ], "mid": [ "2080098453", "", "2170616854", "2045271686" ], "abstract": [ "The vertex-centric programming model is an established computational paradigm recently incorporated into distributed processing frameworks to address challenges in large-scale graph processing. Billion-node graphs that exceed the memory capacity of commodity machines are not well supported by popular Big Data tools like MapReduce, which are notoriously poor performing for iterative graph algorithms such as PageRank. In response, a new type of framework challenges one to “think like a vertex” (TLAV) and implements user-defined programs from the perspective of a vertex rather than a graph. Such an approach improves locality, demonstrates linear scalability, and provides a natural way to express and compute many iterative graph algorithms. These frameworks are simple to program and widely applicable but, like an operating system, are composed of several intricate, interdependent components, of which a thorough understanding is necessary in order to elicit top performance at scale. To this end, the first comprehensive survey of TLAV frameworks is presented. In this survey, the vertex-centric approach to graph processing is overviewed, TLAV frameworks are deconstructed into four main components and respectively analyzed, and TLAV implementations are reviewed and categorized.", "", "Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. 
Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.", "The success of the von Neumann model of sequential computation is attributable to the fact that it is an efficient bridge between software and hardware: high-level languages can be efficiently compiled on to this model; yet it can be efficiently implemented in hardware. The author argues that an analogous bridge between software and hardware is required for parallel computation if that is to become as widely used. This article introduces the bulk-synchronous parallel (BSP) model as a candidate for this role, and gives results quantifying its efficiency both in implementing high-level language features and algorithms, as well as in being implemented in hardware." ] }
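The Pregel abstract in this record describes computation as supersteps in which each vertex folds in the previous round's messages and sends new ones along its out-edges. A minimal single-machine emulation of that loop for PageRank (a sketch of the BSP semantics only; `pagerank_pregel` and the plain-dict graph encoding are assumptions, not Pregel's actual API, and distribution/fault tolerance are omitted):

```python
def pagerank_pregel(graph, d=0.85, supersteps=30):
    """graph: dict vertex -> list of out-neighbours (no dangling vertices).
    Each superstep: read last round's inbox, update the value, and send
    the value share along out-edges; messages are delivered only at the
    next superstep, mimicking the BSP barrier."""
    n = len(graph)
    value = {v: 1.0 / n for v in graph}
    inbox = {v: [] for v in graph}
    for _ in range(supersteps):
        outbox = {v: [] for v in graph}
        for v, out in graph.items():
            # compute(): fold incoming messages into the new value
            value[v] = (1 - d) / n + d * sum(inbox[v])
            if out:
                share = value[v] / len(out)
                for w in out:
                    outbox[w].append(share)
        inbox = outbox   # barrier: deliver messages for the next round
    return value
```

Because the fixed-point iteration is a contraction, the values converge to the PageRank vector regardless of how the first (empty) inbox initializes them.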
1811.01669
2899500298
Pregel's vertex-centric model allows us to implement many interesting graph algorithms, where optimization plays an important role in making it practically useful. Although many optimizations have been developed for dealing with different performance issues, it is hard to compose them together to optimize complex algorithms, where we have to deal with multiple performance issues at the same time. In this paper, we propose a new approach to composing optimizations by making use of the channel interface, as a replacement for Pregel's message passing and aggregator mechanism, which can better structure the communication in Pregel algorithms. We demonstrate that it is convenient to optimize a Pregel program by simply using a proper channel from the channel library, or by composing channels to deal with multiple performance issues. We intensively evaluate the approach through many nontrivial examples. By adopting the channel interface, our system achieves an all-around performance gain for various graph algorithms. In particular, the composition of different optimizations makes the S-V algorithm 2.20x faster than the current best implementation.
To support intuitive message slicing in Pregel-like systems, Telos @cite_36 proposes a layered architecture where interleaving tasks are implemented separately, each having a user-defined compute() function with a dedicated message buffer. However, it lacks an essential feature for optimization: users cannot modify the implementation of the message buffer. Husky @cite_23 is a general-purpose distributed framework built around the channel interface; it supports several communication primitives to combine the strengths of graph-parallel and machine learning systems. We extend this idea for composing optimizations in a graph-parallel system and propose our optimized channels for three common performance issues.
{ "cite_N": [ "@cite_36", "@cite_23" ], "mid": [ "2293880162", "2232745072" ], "abstract": [ "The processing of graph in a parallel and distributed fashion is a constantly rising trend, due to the size of the today’s graphs. This paper proposes a multi-layer graph overlay approach to support the orchestration of distributed, vertex-centric computations targeting large graphs. Our approach takes inspiration from the overlay networks, a widely exploited approach for information dissemination, aggregation and computing orchestration in massively distributed systems. We propose Telos, an environment supporting the definition of multi-layer graph overlays which provides each vertex with a layered, vertex-centric, view of the graph. Telos is defined on the top of Apache Spark and has been evaluated by considering two well-known graph problems. We present a set of experimental results showing the effectiveness of our approach.", "Finding efficient, expressive and yet intuitive programming models for data-parallel computing system is an important and open problem. Systems like Hadoop and Spark have been widely adopted for massive data processing, as coarse-grained primitives like map and reduce are succinct and easy to master. However, sometimes over-simplified API hinders programmers from more fine-grained control and designing more efficient algorithms. Developers may have to resort to sophisticated domain-specific languages (DSLs), or even low-level layers like MPI, but this raises development cost---learning many mutually exclusive systems prolongs the development schedule, and the use of low-level tools may result in bugprone programming. This motivated us to start the Husky open-source project, which is an attempt to strike a better balance between high performance and low development cost. Husky is developed mainly for in-memory large scale data mining, and also serves as a general research platform for designing efficient distributed algorithms. 
We show that many existing frameworks can be easily implemented and bridged together inside Husky, and Husky is able to achieve similar or even better performance compared with domain-specific systems." ] }
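The channel idea discussed in this record, i.e., one dedicated message buffer per communication purpose with a pluggable combiner, can be sketched as follows. The `Channel` class and its send/recv/flush API are a hypothetical simplification, not the actual interface of Husky, Telos, or the paper's system:

```python
class Channel:
    """Hypothetical per-purpose message buffer with a pluggable
    combiner, so different communication patterns stay separate
    instead of sharing one monolithic Pregel inbox."""
    def __init__(self, combiner=None):
        self.combiner = combiner      # e.g. min, sum; None = keep all
        self.cur, self.nxt = {}, {}

    def send(self, dst, msg):
        # Combine at the sender side when a combiner is given,
        # otherwise buffer every message for the destination.
        if self.combiner and dst in self.nxt:
            self.nxt[dst] = self.combiner(self.nxt[dst], msg)
        elif self.combiner:
            self.nxt[dst] = msg
        else:
            self.nxt.setdefault(dst, []).append(msg)

    def recv(self, v):
        # Messages delivered at the last barrier (None if nothing arrived)
        return self.cur.get(v)

    def flush(self):
        # Superstep barrier: next-round buffer becomes the current inbox
        self.cur, self.nxt = self.nxt, {}
```

A program could then run a min-combined channel for connected-components labels alongside a separate uncombined channel for requests, flushing each independently at the superstep barrier, which is the kind of composition the surrounding text argues for.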
1811.01669
2899500298
Pregel's vertex-centric model allows us to implement many interesting graph algorithms, where optimization plays an important role in making it practically useful. Although many optimizations have been developed for dealing with different performance issues, it is hard to compose them together to optimize complex algorithms, where we have to deal with multiple performance issues at the same time. In this paper, we propose a new approach to composing optimizations by making use of the channel interface, as a replacement for Pregel's message passing and aggregator mechanism, which can better structure the communication in Pregel algorithms. We demonstrate that it is convenient to optimize a Pregel program by simply using a proper channel from the channel library, or by composing channels to deal with multiple performance issues. We intensively evaluate the approach through many nontrivial examples. By adopting the channel interface, our system achieves an all-around performance gain for various graph algorithms. In particular, the composition of different optimizations makes the S-V algorithm 2.20x faster than the current best implementation.
There has been much research on optimizations for Pregel-like systems, and our optimized channels draw inspiration from this line of research, such as sender-side message combining (a.k.a. vertex replication, mirroring) @cite_22 @cite_24 @cite_29 @cite_30 , the request-respond paradigm @cite_13 , the block-centric model @cite_0 @cite_10 @cite_17 , and so on. In particular, our scatter-combine channel recognizes the static messaging pattern and reduces both the computational cost and the message size by pre-processing, which is novel and turns out to be effective for communication-intensive algorithms like PageRank and S-V. We also demonstrate how complex algorithms like S-V and SCC can be optimized by this technique, while most existing systems only focus on rather simple algorithms.
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_10", "@cite_29", "@cite_24", "@cite_0", "@cite_13", "@cite_17" ], "mid": [ "", "2127653767", "2259576664", "2096544401", "1969970763", "217817341", "2170616854", "" ], "abstract": [ "", "Many practical computing problems concern large graph. Standard problems include web graph analysis and social networks analysis like Facebook, Twitter. The scale of these graph poses challenge to their efficient processing. To efficiently process large-scale graph, we create X-Pregel, a graph processing system based on Google's Computing Pregel model [1], by using the state-of-the-art PGAS programming language X10. We do not purely implement Google Pregel by using X10 language, but we also introduce two new features that do not exists in the original model to optimize the performance: (1) an optimization to reduce the number of messages which is exchanged among workers, (2) a dynamic re-partitioning scheme that effectively reassign vertices to different workers during the computation. Our performance evaluation demonstrates that our optimization method of sending messages achieves up to 200 speed up on Pagerank by reducing the network I O to 10 times in comparison with the default method of sending messages when processing SCALE20 Kronecker graph [2](vertices = 1,048,576, edges = 33,554,432). It also demonstrates that our system processes large graph faster than prior implementation of Pregel such as GPS [3](stands for graph processing system) and Giraph [4].", "The rapid growth in the volume of many real-world graphs (e.g., social networks, web graphs, and spatial networks) has led to the development of various vertex-centric distributed graph computing systems in recent years. However, real-world graphs from different domains have very different characteristics, which often create bottlenecks in vertex-centric parallel graph computation. 
We identify three such important characteristics from a wide spectrum of real-world graphs, namely (1)skewed degree distribution, (2)large diameter, and (3)(relatively) high density. Among them, only (1) has been studied by existing systems, but many real-world power-law graphs also exhibit the characteristics of (2) and (3). In this paper, we propose a block-centric framework, called Blogel, which naturally handles all the three adverse graph characteristics. Blogel programmers may think like a block and develop efficient algorithms for various graph problems. We propose parallel algorithms to partition an arbitrary graph into blocks efficiently, and block-centric programs are then run over these blocks. Our experiments on large real-world graphs verified that Blogel is able to achieve orders of magnitude performance improvements over the state-of-the-art distributed graph computing systems.", "While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. 
Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations.", "GPS (for Graph Processing System) is a complete open-source system we developed for scalable, fault-tolerant, and easy-to-program execution of algorithms on extremely large graphs. This paper serves the dual role of describing the GPS system, and presenting techniques and experimental results for graph partitioning in distributed graph-processing systems like GPS. GPS is similar to Google's proprietary Pregel system, with three new features: (1) an extended API to make global computations more easily expressed and more efficient; (2) a dynamic repartitioning scheme that reassigns vertices to different workers during the computation, based on messaging patterns; and (3) an optimization that distributes adjacency lists of high-degree vertices across all compute nodes to improve performance. In addition to presenting the implementation of GPS and its novel features, we also present experimental results on the performance effects of both static and dynamic graph partitioning schemes, and we describe the compilation of a high-level domain-specific programming language to GPS, enabling easy expression of complex algorithms.", "To meet the challenge of processing rapidly growing graph and network data created by modern applications, a number of distributed graph processing systems have emerged, such as Pregel and GraphLab. All these systems divide input graphs into partitions, and employ a \"think like a vertex\" programming model to support iterative graph computation. This vertex-centric model is easy to program and has been proved useful for many graph algorithms. However, this model hides the partitioning information from the users, thus prevents many algorithm-specific optimizations. This often results in longer execution time due to excessive network messages (e.g. 
in Pregel) or heavy scheduling overhead to ensure data consistency (e.g. in GraphLab). To address this limitation, we propose a new \"think like a graph\" programming paradigm. Under this graph-centric model, the partition structure is opened up to the users, and can be utilized so that communication within a partition can bypass the heavy message passing or scheduling machinery. We implemented this model in a new system, called Giraph++, based on Apache Giraph, an open source implementation of Pregel. We explore the applicability of the graph-centric model to three categories of graph algorithms, and demonstrate its flexibility and superior performance, especially on well-partitioned data. For example, on a web graph with 118 million vertices and 855 million edges, the graph-centric version of connected component detection algorithm runs 63X faster and uses 204X fewer network messages than its vertex-centric counterpart.", "Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.", "" ] }
1811.01669
2899500298
Pregel's vertex-centric model allows us to implement many interesting graph algorithms, where optimization plays an important role in making it practically useful. Although many optimizations have been developed for dealing with different performance issues, it is hard to compose them together to optimize complex algorithms, where we have to deal with multiple performance issues at the same time. In this paper, we propose a new approach to composing optimizations, by making use of the channel interface, as a replacement of Pregel's message passing and aggregator mechanism, which can better structure the communication in Pregel algorithms. We demonstrate that it is convenient to optimize a Pregel program by simply using a proper channel from the channel library or composing them to deal with multiple performance issues. We intensively evaluate the approach through many nontrivial examples. By adopting the channel interface, our system achieves an all-around performance gain for various graph algorithms. In particular, the composition of different optimizations makes the S-V algorithm 2.20x faster than the current best implementation.
There are also graph systems that use a functional interface with high-level primitives for manipulating the entire graph, such as GraphX @cite_26 (a library on top of Apache Spark @cite_15 ) and its extension HelP @cite_3 . However, their primitives are hard to compose. Furthermore, experimental results @cite_23 show that they are less efficient than other systems even on simple algorithms like PageRank. Sparse-matrix-based frameworks (e.g., CombBLAS @cite_20 and PEGASUS @cite_14 ), which provide linear algebra primitives, are also popular for handling graphs, but the lack of graph semantics makes deep optimization difficult.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_3", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "1982003698", "2167927436", "", "2232745072", "2189465200", "2141380216" ], "abstract": [ "From social networks to targeted advertising, big graphs capture the structure in data and are central to recent advances in machine learning and data mining. Unfortunately, directly applying existing data-parallel tools to graph computation tasks can be cumbersome and inefficient. The need for intuitive, scalable tools for graph computation has lead to the development of new graph-parallel systems (e.g., Pregel, PowerGraph) which are designed to efficiently execute graph algorithms. Unfortunately, these new graph-parallel systems do not address the challenges of graph construction and transformation which are often just as problematic as the subsequent computation. Furthermore, existing graph-parallel systems provide limited fault-tolerance and support for interactive data mining. We introduce GraphX, which combines the advantages of both data-parallel and graph-parallel systems by efficiently expressing graph computation within the Spark data-parallel framework. We leverage new ideas in distributed graph representation to efficiently distribute graphs as tabular data-structures. Similarly, we leverage advances in data-flow systems to exploit in-memory computation and fault-tolerance. We provide powerful new operations to simplify graph construction and transformation. Using these primitives we implement the PowerGraph and Pregel abstractions in less than 20 lines of code. Finally, by exploiting the Scala foundation of Spark, we enable users to interactively load, transform, and compute on massive graphs.", "In this paper, we describe PEGASUS, an open source Peta Graph Mining library which performs typical graph mining tasks such as computing the diameter of the graph, computing the radius of each node and finding the connected components. 
As the size of graphs reaches several Giga-, Tera- or Peta-bytes, the necessity for such a library grows too. To the best of our knowledge, PEGASUS is the first such library, implemented on the top of the Hadoop platform, the open source version of MapReduce. Many graph mining operations (PageRank, spectral clustering, diameter estimation, connected components etc.) are essentially a repeated matrix-vector multiplication. In this paper we describe a very important primitive for PEGASUS, called GIM-V (Generalized Iterated Matrix-Vector multiplication). GIM-V is highly optimized, achieving (a) good scale-up on the number of available machines (b) linear running time on the number of edges, and (c) more than 5 times faster performance over the non-optimized version of GIM-V. Our experiments ran on M45, one of the top 50 supercomputers in the world. We report our findings on several real graphs, including one of the largest publicly available Web Graphs, thanks to Yahoo!, with 6,7 billion edges.", "", "Finding efficient, expressive and yet intuitive programming models for data-parallel computing system is an important and open problem. Systems like Hadoop and Spark have been widely adopted for massive data processing, as coarse-grained primitives like map and reduce are succinct and easy to master. However, sometimes over-simplified API hinders programmers from more fine-grained control and designing more efficient algorithms. Developers may have to resort to sophisticated domain-specific languages (DSLs), or even low-level layers like MPI, but this raises development cost---learning many mutually exclusive systems prolongs the development schedule, and the use of low-level tools may result in bugprone programming. This motivated us to start the Husky open-source project, which is an attempt to strike a better balance between high performance and low development cost. 
Husky is developed mainly for in-memory large scale data mining, and also serves as a general research platform for designing efficient distributed algorithms. We show that many existing frameworks can be easily implemented and bridged together inside Husky, and Husky is able to achieve similar or even better performance compared with domain-specific systems.", "MapReduce and its variants have been highly successful in implementing large-scale data-intensive applications on commodity clusters. However, most of these systems are built around an acyclic data flow model that is not suitable for other popular applications. This paper focuses on one such class of applications: those that reuse a working set of data across multiple parallel operations. This includes many iterative machine learning algorithms, as well as interactive data analysis tools. We propose a new framework called Spark that supports these applications while retaining the scalability and fault tolerance of MapReduce. To achieve these goals, Spark introduces an abstraction called resilient distributed datasets (RDDs). An RDD is a read-only collection of objects partitioned across a set of machines that can be rebuilt if a partition is lost. Spark can outperform Hadoop by 10x in iterative machine learning jobs, and can be used to interactively query a 39 GB dataset with sub-second response time.", "This paper presents a scalable high-performance software library to be used for graph analysis and data mining. Large combinatorial graphs appear in many applications of high-performance computing, including computational biology, informatics, analytics, web search, dynamical systems, and sparse matrix methods. Graph computations are difficult to parallelize using traditional approaches due to their irregular nature and low operational intensity. Many graph computations, however, contain sufficient coarse-grained parallelism for thousands of processors, which can be uncovered by using the right primitives. 
We describe the parallel Combinatorial BLAS, which consists of a small but powerful set of linear algebra primitives specifically targeting graph and data mining applications. We provide an extensible library interface and some guiding principles for future development. The library is evaluated using two important graph algorithms, in terms of both performance and ease-of-use. The scalability and raw performance of the example applications, using the Combinatorial BLAS, are unprecedented on distributed memory clusters." ] }
1811.01571
2962889053
We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category followed by view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) Stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while having substantially lower GPU memory as well as network parameters. Despite its lightness, the experiments on 3D object classification and shape retrievals demonstrate the high performance of the proposed method.
While previous works often combine hand-crafted features or descriptors with a machine learning classifier @cite_22 @cite_25 @cite_35 @cite_30 , point cloud-based methods operate directly on point clouds in an end-to-end manner. In @cite_14 @cite_27 @cite_20 , the authors designed novel neural network architectures suitable for handling unordered point sets in 3D. Features based on point clouds often require spatial neighborhood queries, which can be difficult to handle for inputs with large numbers of points.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_22", "@cite_27", "@cite_25", "@cite_20" ], "mid": [ "1564871316", "2050690116", "2950642167", "2102402541", "", "2154458843", "2606987267" ], "abstract": [ "Recognition of three dimensional (3D) objects in noisy and cluttered scenes is a challenging problem in 3D computer vision. One approach that has been successful in past research is the regional shape descriptor. In this paper, we introduce two new regional shape descriptors: 3D shape contexts and harmonic shape contexts. We evaluate the performance of these descriptors on the task of recognizing vehicles in range scans of scenes using a database of 56 cars. We compare the two novel descriptors to an existing descriptor, the spin image, showing that the shape context based descriptors have a higher recognition rate on noisy scenes and that 3D shape contexts outperform the others on cluttered scenes.", "The selection of suitable features and their parameters for the classification of three-dimensional laser range data is a crucial issue for high-quality results. In this paper we compare the performance of different histogram descriptors and their parameters on three urban datasets recorded with various sensors—sweeping SICK lasers, tilting SICK lasers and a Velodyne 3D laser range scanner. These descriptors are 1D, 2D, and 3D histograms capturing the distribution of normals or points around a query point. We also propose a novel histogram descriptor, which relies on the spectral values in different scales. We argue that choosing a larger support radius and a z-axis based global reference frame axis can boost the performance of all kinds of investigated classification models significantly. The 3D histograms relying on the point distribution, normal orientations, or spectral values, turned out to be the best choice for the classification in urban environments.", "Point cloud is an important type of geometric data structure. 
Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.", "This paper investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The system is decomposed into four steps: locating, segmenting, characterizing, and classifying clusters of 3D points. Specifically, we first cluster nearby points to form a set of potential object locations (with hierarchical clustering). Then, we segment points near those locations into foreground and background sets (with a graph-cut algorithm). Next, we build a feature vector for each point cluster (based on both its shape and its context). Finally, we label the feature vectors using a classifier trained on a set of manually labeled objects. The paper presents several alternative methods for each step. We quantitatively evaluate the system and tradeoffs of different alternatives in a truthed part of a scan of Ottawa that contains approximately 100 million points and 1000 objects of interest. 
Then, we use this truth data as a training set to recognize objects amidst approximately 1 billion points of the remainder of the Ottawa scan.", "", "Object recognition is a critical next step for autonomous robots, but a solution to the problem has remained elusive. Prior 3D-sensor-based work largely classifies individual point cloud segments or uses class-specific trackers. In this paper, we take the approach of classifying the tracks of all visible objects. Our new track classification method, based on a mathematically principled method of combining log odds estimators, is fast enough for real time use, is non-specific to object class, and performs well (98.5 accuracy) on the task of classifying correctly-tracked, well-segmented objects into car, pedestrian, bicyclist, and background classes. We evaluate the classifier's performance using the Stanford Track Collection, a new dataset of about 1.3 million labeled point clouds in about 14,000 tracks recorded from an autonomous vehicle research platform. This dataset, which we make publicly available, contains tracks extracted from about one hour of 360-degree, 10Hz depth information recorded both while driving on busy campus streets and parked at busy intersections.", "We present a new deep learning architecture (called Kd-network) that is designed for 3D model recognition tasks and works with unstructured point clouds. The new architecture performs multiplicative transformations and share parameters of these transformations according to the subdivisions of the point clouds imposed onto them by Kd-trees. Unlike the currently dominant convolutional architectures that usually require rasterization on uniform two-dimensional or three-dimensional grids, Kd-networks do not rely on such grids in any way and therefore avoid poor scaling behaviour. 
In a series of experiments with popular shape recognition benchmarks, Kd-networks demonstrate competitive performance in a number of shape recognition tasks such as shape classification, shape retrieval and shape part segmentation." ] }
1811.01571
2962889053
We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category followed by view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) Stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while having substantially lower GPU memory as well as network parameters. Despite its lightness, the experiments on 3D object classification and shape retrievals demonstrate the high performance of the proposed method.
In 3D ShapeNets @cite_13 , the authors proposed a method that learns global features from voxelized 3D shapes based on the 3D convolutional restricted Boltzmann machine. Similarly, Maturana and Scherer @cite_12 proposed VoxNet, which integrates a volumetric occupancy grid representation with a supervised 3D CNN. In a follow-up, @cite_4 extended VoxNet by introducing an auxiliary task: an orientation loss is added to the general classification loss, so that the architecture predicts both the pose and the class of the object. @cite_15 proposed the Deep Local feature Aggregation Network (DLAN), which combines rotation-invariant 3D local features and their aggregation in a single architecture.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_13", "@cite_12" ], "mid": [ "2612326916", "2336098239", "2951755740", "2211722331" ], "abstract": [ "", "Recent work has shown good recognition results in 3D object recognition using 3D convolutional networks. In this paper, we show that the object orientation plays an important role in 3D recognition. More specifically, we argue that objects induce different features in the network under rotation. Thus, we approach the category-level classification task as a multi-task problem, in which the network is trained to predict the pose of the object in addition to the class label as a parallel task. We show that this yields significant improvements in the classification results. We test our suggested architecture on several datasets representing various 3D data sources: LiDAR data, CAD models, and RGB-D images. We report state-of-the-art results on classification as well as significant improvements in precision and speed over the baseline on 3D detection.", "3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. 
It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.", "Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second." ] }
1811.01571
2962889053
We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category followed by view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) Stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while having substantially lower GPU memory as well as network parameters. Despite its lightness, the experiments on 3D object classification and shape retrievals demonstrate the high performance of the proposed method.
@cite_0 proposed a fully convolutional denoising auto-encoder to perform unsupervised global feature learning. In addition, 3D variational auto-encoders and generative adversarial networks have been adopted by @cite_33 and @cite_7 , respectively. Furthermore, recent works @cite_24 @cite_1 exploit the sparsity of 3D input using the octree data structure to reduce the computational complexity and speed up the learning of global features.
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_1", "@cite_0", "@cite_24" ], "mid": [ "2511691466", "2546066744", "2556802233", "2338532005", "2737234477" ], "abstract": [ "When working with three-dimensional data, choice of representation is key. We explore voxel-based models, and present evidence for the viability of voxellated representations in applications including shape modeling and object classification. Our key contributions are methods for training voxel-based variational autoencoders, a user interface for exploring the latent space learned by the autoencoder, and a deep convolutional neural network architecture for object classification. We address challenges unique to voxel-based representations, and empirically evaluate our models on the ModelNet benchmark, where we demonstrate a 51.5 relative improvement in the state of the art for object classification.", "We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. 
Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods.", "We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.", "With the advent of affordable depth sensors, 3D capture becomes more and more ubiquitous and already has made its way into commercial products. Yet, capturing the geometry or complete shapes of everyday objects using scanning devices (e.g. Kinect) still comes with several challenges that result in noise or even incomplete shapes.", "We present O-CNN, an Octree-based Convolutional Neural Network (CNN) for 3D shape analysis. Built upon the octree representation of 3D shapes, our method takes the average normal vectors of a 3D model sampled in the finest leaf octants as input and performs 3D CNN operations on the octants occupied by the 3D shape surface. We design a novel octree data structure to efficiently store the octant information and CNN features into the graphics memory and execute the entire O-CNN training and evaluation on the GPU. O-CNN supports various CNN structures and works for 3D shapes in different representations. 
By restraining the computations on the octants occupied by 3D surfaces, the memory and computational costs of the O-CNN grow quadratically as the depth of the octree increases, which makes the 3D CNN feasible for high-resolution 3D models. We compare the performance of the O-CNN with other existing 3D CNN solutions and demonstrate the efficiency and efficacy of O-CNN in three shape analysis tasks, including object classification, shape retrieval, and shape segmentation." ] }
1811.01571
2962889053
We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category followed by view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) Stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while having substantially lower GPU memory as well as network parameters. Despite its lightness, the experiments on 3D object classification and shape retrievals demonstrate the high performance of the proposed method.
Image-based methods have been considered one of the fundamental approaches to 3D object classification. The Light Field descriptor (LFD) @cite_5 used multiple views rendered around a 3D shape, and evaluates the dissimilarity between two shapes by greedily comparing the corresponding view sets, rather than learning global features that combine multi-view information. @cite_18 used a similar approach, but measured the similarity between two 3D shapes with the Hausdorff distance between the corresponding view sets.
{ "cite_N": [ "@cite_5", "@cite_18" ], "mid": [ "2021122545", "2342223463" ], "abstract": [ "A large number of 3D models are created and available on the Web, since more and more 3D modelling and digitizing tools are developed for ever increasing applications. The techniques for content-based 3D model retrieval then become necessary. In this paper, a visual similarity-based 3D model retrieval system is proposed. This approach measures the similarity among 3D models by visual similarity, and the main idea is that if two 3D models are similar, they also look similar from all viewing angles. Therefore, one hundred orthogonal projections of an object, excluding symmetry, are encoded both by Zernike moments and Fourier descriptors as features for later retrieval. The visual similarity-based approach is robust against similarity transformation, noise, model degeneracy etc., and provides 42 , 94 and 25 better performance (precision-recall evaluation diagram) than three other competing approaches: (1)the spherical harmonics approach developed by , (2)the MPEG-7 Shape 3D descriptors, and (3)the MPEG-7 Multiple View Descriptor. The proposed system is on the Web for practical trial use (http: 3d.csie.ntu.edu.tw), and the database contains more than 10,000 publicly available 3D models collected from WWW pages. Furthermore, a user friendly interface is provided to retrieve 3D models by drawing 2D shapes. The retrieval is fast enough on a server with Pentium IV 2.4GHz CPU, and it takes about 2 seconds and 0.1 seconds for querying directly by a 3D model and by hand drawn 2D shapes, respectively.", "Projective analysis is an important solution for 3D shape retrieval, since human visual perceptions of 3D shapes rely on various 2D observations from different view points. 
Although multiple informative and discriminative views are utilized, most projection-based retrieval systems suffer from heavy computational cost, thus cannot satisfy the basic requirement of scalability for search engines. In this paper, we present a real-time 3D shape search engine based on the projective images of 3D shapes. The real-time property of our search engine results from the following aspects: (1) efficient projection and view feature extraction using GPU acceleration, (2) the first inverted file, referred as F-IF, is utilized to speed up the procedure of multi-view matching, (3) the second inverted file (S-IF), which captures a local distribution of 3D shapes in the feature manifold, is adopted for efficient context-based reranking. As a result, for each query the retrieval task can be finished within one second despite the necessary cost of IO overhead. We name the proposed 3D shape search engine, which combines GPU acceleration and Inverted File (Twice), as GIFT. Besides its high efficiency, GIFT also outperforms the state-of-the-art methods significantly in retrieval accuracy on various shape benchmarks and competitions." ] }
1811.01571
2962889053
We propose an efficient Stereographic Projection Neural Network (SPNet) for learning representations of 3D objects. We first transform a 3D input volume into a 2D planar image using stereographic projection. We then present a shallow 2D convolutional neural network (CNN) to estimate the object category followed by view ensemble, which combines the responses from multiple views of the object to further enhance the predictions. Specifically, the proposed approach consists of four stages: (1) Stereographic projection of a 3D object, (2) view-specific feature learning, (3) view selection and (4) view ensemble. The proposed approach performs comparably to the state-of-the-art methods while having substantially lower GPU memory as well as network parameters. Despite its lightness, the experiments on 3D object classification and shape retrievals demonstrate the high performance of the proposed method.
@cite_9 proposed a CNN architecture that aggregates information from multiple views rendered from a 3D object, achieving higher recognition performance than single-view architectures. By decomposing each view sequence into a set of view pairs, @cite_8 classified each pair independently and learned an object classifier by weighting the contribution of each pair, which allows 3D shape recognition over arbitrary camera viewpoints. To perform pooling more efficiently, @cite_2 proposed a dominant set clustering technique where pooling is performed within each cluster individually. @cite_34 proposed RotationNet, which takes multi-view images of an object and jointly estimates its object category and pose. RotationNet learns viewpoint labels in an unsupervised manner; moreover, it learns view-specific feature representations shared across classes to boost performance.
{ "cite_N": [ "@cite_9", "@cite_34", "@cite_2", "@cite_8" ], "mid": [ "1644641054", "2964342398", "2893477965", "2962724911" ], "abstract": [ "A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.", "We propose a Convolutional Neural Network (CNN)-based model \"RotationNet,\" which takes multi-view images of an object as input and jointly estimates its pose and object category. Unlike previous approaches that use known viewpoint labels for training, our method treats the viewpoint labels as latent variables, which are learned in an unsupervised manner during the training using an unaligned object dataset. RotationNet is designed to use only a partial set of multi-view images for inference, and this property makes it useful in practical scenarios where only partial views are available. 
Moreover, our pose alignment strategy enables one to obtain view-specific feature representations shared across classes, which is important to maintain high accuracy in both object categorization and pose estimation. Effectiveness of RotationNet is demonstrated by its superior performance to the state-of-the-art methods of 3D object classification on 10- and 40-class ModelNet datasets. We also show that RotationNet, even trained without known poses, achieves the state-of-the-art performance on an object pose estimation dataset.", "", "A multi-view image sequence provides a much richer capacity for object recognition than from a single image. However, most existing solutions to multi-view recognition typically adopt hand-crafted, model-based geometric methods, which do not readily embrace recent trends in deep learning. We propose to bring Convolutional Neural Networks to generic multi-view recognition, by decomposing an image sequence into a set of image pairs, classifying each pair independently, and then learning an object classifier by weighting the contribution of each pair. This allows for recognition over arbitrary camera trajectories, without requiring explicit training over the potentially infinite number of camera paths and lengths. Building these pairwise relationships then naturally extends to the next-best-view problem in an active recognition framework. To achieve this, we train a second Convolutional Neural Network to map directly from an observed image to next viewpoint. Finally, we incorporate this into a trajectory optimisation task, whereby the best recognition confidence is sought for a given trajectory length. We present state-of-the-art results in both guided and unguided multi-view recognition on the ModelNet dataset, and show how our method can be used with depth images, greyscale images, or both." ] }
1811.01468
2899014158
A ubiquitous task in processing electronic medical data is the assignment of standardized codes representing diagnoses and or procedures to free-text documents such as medical reports. This is a difficult natural language processing task that requires parsing long, heterogeneous documents and selecting a set of appropriate codes from tens of thousands of possibilities---many of which have very few positive training samples. We present a deep learning system that advances the state of the art for the MIMIC-III dataset, achieving a new best micro F1-measure of 55.85 , significantly outperforming the previous best result ( 2018). We achieve this through a number of enhancements, including two major novel contributions: multi-view convolutional channels, which effectively learn to adjust kernel sizes throughout the input; and attention regularization, mediated by natural-language code descriptions, which helps overcome sparsity for thousands of uncommon codes. These and other modifications are selected to address difficulties inherent to both automated coding specifically and deep learning generally. Finally, we investigate our accuracy results in detail to individually measure the impact of these contributions and point the way towards future algorithmic improvements.
There has been significant work towards the automated coding problem @cite_3 @cite_10 @cite_13 @cite_6 @cite_9 @cite_4 @cite_12 . We review some of the recent relevant work.
{ "cite_N": [ "@cite_4", "@cite_10", "@cite_9", "@cite_6", "@cite_3", "@cite_13", "@cite_12" ], "mid": [ "2784499877", "", "2951125449", "2755247101", "2096664202", "2509591188", "" ], "abstract": [ "Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient’s record. We propose a representation of patients’ entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two US academic medical centers with 216,221 adult patients hospitalized for at least 24 h. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting: in-hospital mortality (area under the receiver operator curve [AUROC] across sites 0.93–0.94), 30-day unplanned readmission (AUROC 0.75–0.76), prolonged length of stay (AUROC 0.85–0.86), and all of a patient’s final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed traditional, clinically-used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios. 
In a case study of a particular prediction, we demonstrate that neural networks can be used to identify relevant information from the patient’s chart.", "", "Diagnosis of a clinical condition is a challenging task, which often requires significant medical investigation. Previous work related to diagnostic inferencing problems mostly consider multivariate observational data (e.g. physiological signals, lab tests etc.). In contrast, we explore the problem using free-text medical notes recorded in an electronic health record (EHR). Complex tasks like these can benefit from structured knowledge bases, but those are not scalable. We instead exploit raw text from Wikipedia as a knowledge source. Memory networks have been demonstrated to be effective in tasks which require comprehension of free-form text. They use the final iteration of the learned representation to predict probable classes. We introduce condensed memory neural networks (C-MemNNs), a novel model with iterative condensation of memory representations that preserves the hierarchy of features in the memory. Experiments on the MIMIC-III dataset show that the proposed model outperforms other variants of memory networks to predict the most probable diagnoses given a complex clinical scenario.", "Abstract A multitude of information sources is present in the electronic health record (EHR), each of which can contain clues to automatically assign diagnosis and procedure codes. These sources however show information overlap and quality differences, which complicates the retrieval of these clues. Through feature selection, a denser representation with a consistent quality and less information overlap can be obtained. We introduce and compare coverage-based feature selection methods, based on confidence and information gain. 
These approaches were evaluated over a range of medical specialties, with seven different medical specialties for ICD-9-CM code prediction (six at the Antwerp University Hospital and one in the MIMIC-III dataset) and two different medical specialties for ICD-10-CM code prediction. Using confidence coverage to integrate all sources in an EHR shows a consistent improvement in F-measure (49.83 for diagnosis codes on average), both compared with the baseline (44.25 for diagnosis codes on average) and with using the best standalone source (44.41 for diagnosis codes on average). Confidence coverage creates a concise patient stay representation independent of a rigid framework such as UMLS, and contains easily interpretable features. Confidence coverage has several advantages to a baseline setup. In our baseline setup, feature selection was limited to a filter removing features with less than five total occurrences in the trainingset. Prediction results improved consistently when using multiple heterogeneous sources to predict clinical codes, while reducing the number of features and the processing time.", "Background and objective The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. Methods We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently of each other (flat classifier), and one that leverages the hierarchical nature of ICD9 codes into its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances among gold-standard and predicted codes and their locations in the ICD9 tree. 
Experimental setup, code for modeling, and evaluation scripts are made available to the research community. Results The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5 and 27.6 , respectively, when trained on 20 533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Conclusions Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.", "With the latest developments in database technologies, it becomes easier to store the medical records of hospital patients from their first day of admission than was previously possible. In Intensive Care Units (ICU), modern medical information systems can record patient events in relational databases every second. Knowledge mining from these huge volumes of medical data is beneficial to both caregivers and patients. Given a set of electronic patient records, a system that effectively assigns the disease labels can facilitate medical database management and also benefit other researchers, e.g., pathologists. In this paper, we have proposed a framework to achieve that goal. Medical chart and note data of a patient are used to extract distinctive features. To encode patient features, we apply a Bag-of-Words encoding method for both chart and note data. We also propose a model that takes into account both global information and local correlations between diseases. 
Correlated diseases are characterized by a graph structure that is embedded in our sparsity-based framework. Our algorithm captures the disease relevance when labeling disease codes rather than making individual decision with respect to a specific disease. At the same time, the global optimal values are guaranteed by our proposed convex objective function. Extensive experiments have been conducted on a real-world large-scale ICU database. The evaluation results demonstrate that our method improves multi-label classification results by successfully incorporating disease correlations.", "" ] }
1811.01484
2899188444
We quantify the accuracy of various simulators compared to a real world robotic reaching and interaction task. Simulators are used in robotics to design solutions for real world hardware without the need for physical access. The 'reality gap' prevents solutions developed or learnt in simulation from performing well, or at all, when transferred to real-world hardware. Making use of a Kinova robotic manipulator and a motion capture system, we record a ground truth enabling comparisons with various simulators, and present quantitative data for various manipulation-oriented robotic tasks. We show the relative strengths and weaknesses of numerous contemporary simulators, highlighting areas of significant discrepancy, and assisting researchers in the field in their selection of appropriate simulators for their use cases. All code and parameter listings are publicly available from: this https URL .
Robotics is an embodied discipline focused on building systems that act in the physical world. However, for numerous reasons highlighted in Section I, simulation is a key tool in many successful robotic engineering and integration efforts. Simulation is fast, cheap, and allows for rapid prototyping and iteration over the composition and control of a robotic system. These benefits are perhaps most strongly felt when learning is used, due to the data-hungry nature of many contemporary learning approaches. Because simulators necessarily abstract various features (e.g., sensory delays, actuator slop) away from physical reality, there exists a gap between what is simulated and how the final system performs in the real world. Of course, we can in some situations learn directly on real hardware; however, this requires sophisticated learning testbeds @cite_25 @cite_5 @cite_4 and, depending on the amount of data required, may be prohibitive in terms of required resources @cite_3 . Here we focus on learning in simulation.
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_25", "@cite_3" ], "mid": [ "2949098821", "2738495647", "2204537438", "2293467699" ], "abstract": [ "Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.", "Evolutionary algorithms have previously shown promise in generating controllers for legged robots. Multiple evaluations across many evolutionary generations are typically required — simulators are frequently used to accommodate this. However, performance degradation is frequently observed when transferring controllers from simulation to reality due to inconsistencies between the two. 
In this paper we demonstrate a testbed that permits repeated, direct evolution of hexapod controllers as a closed-loop system. The testbed uses a two-stage evolutionary process. In stage 1, a multi-objective evolutionary algorithm spreads a population of controllers across a space of desirable criteria. The second stage allows for specific criteria to be selected for on a per-mission basis, with promising initial controller parameters taken from the first stage. As the optimisation occurs directly on the robot, performance is guaranteed. Furthermore, controllers can be made specific to irregularities in e.g., motor wear, and robot mass distribution, creating controllers that are sensitive to the hardware state of the individual robot.", "We describe an experimental platform that uses an evolutionary algorithm to automatically tune the gains of a cascaded PID quadcopter controller. All parameters are tuned simultaneously, few platform assumptions are necessary, and no modeling is required. The platform is able to run back-to-back experiments for over 24 hours without human intervention. In a sample experiment, we apply the system to solve a hovering task — the behaviors generated by an initially-random population of gain vectors are evaluated and gradually improved, with the attainment of high fitness hover controllers reported within 12 hours.", "We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. 
We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing." ] }
1811.01484
2899188444
We quantify the accuracy of various simulators compared to a real world robotic reaching and interaction task. Simulators are used in robotics to design solutions for real world hardware without the need for physical access. The 'reality gap' prevents solutions developed or learnt in simulation from performing well, or at all, when transferred to real-world hardware. Making use of a Kinova robotic manipulator and a motion capture system, we record a ground truth enabling comparisons with various simulators, and present quantitative data for various manipulation-oriented robotic tasks. We show the relative strengths and weaknesses of numerous contemporary simulators, highlighting areas of significant discrepancy, and assisting researchers in the field in their selection of appropriate simulators for their use cases. All code and parameter listings are publicly available from: this https URL .
This 'reality gap' is of increasing importance, as current deep learning approaches require a significant amount of data to achieve acceptable performance. Although increased computing power has narrowed this gap by facilitating more complex, high-fidelity simulations @cite_10 , the issue remains unsolved.
{ "cite_N": [ "@cite_10" ], "mid": [ "2810627076" ], "abstract": [ "This research considers the task of evolving the physical structure of a robot to enhance its performance in various environments, which is a significant problem in the field of Evolutionary Robotics. Inspired by the fields of evolutionary art and sculpture, we evolve only targeted parts of a robot, which simplifies the optimisation problem compared to traditional approaches that must simultaneously evolve both (actuated) body and brain. Exploration fidelity is emphasised in areas of the robot most likely to benefit from shape optimisation, whilst exploiting existing robot structure and control. Our approach uses a Genetic Algorithm to optimise collections of Bezier splines that together define the shape of a legged robot's tibia, and leg performance is evaluated in parallel in a high-fidelity simulator. The leg is represented in the simulator as 3D-printable file, and as such can be readily instantiated in reality. Provisional experiments in three distinct environments show the evolution of environment-specific leg structures that are both high-performing and notably different to those evolved in the other environments. This proof-of-concept represents an important step towards the environment-dependent optimisation of performance-critical components for a range of ubiquitous, standard, and already-capable robots that can carry out a wide variety of tasks." ] }
1811.01484
2899188444
We quantify the accuracy of various simulators compared to a real world robotic reaching and interaction task. Simulators are used in robotics to design solutions for real world hardware without the need for physical access. The 'reality gap' prevents solutions developed or learnt in simulation from performing well, or at all, when transferred to real-world hardware. Making use of a Kinova robotic manipulator and a motion capture system, we record a ground truth enabling comparisons with various simulators, and present quantitative data for various manipulation-oriented robotic tasks. We show the relative strengths and weaknesses of numerous contemporary simulators, highlighting areas of significant discrepancy, and assisting researchers in the field in their selection of appropriate simulators for their use cases. All code and parameter listings are publicly available from: this https URL .
This sim-to-real transfer problem has recently been tackled by numerous research groups. Earlier approaches mainly highlighted the issues around this transfer @cite_22 , while more recent efforts propose solutions, including domain adaptation and Generative Adversarial Networks (GANs), which require both real-world and simulated data @cite_14 . Results show the early promise of these techniques: using domain adaptation, Bousmalis @cite_20 were able to achieve a success rate of 76.7% for real-world grasping trained in simulation. An alternative method to randomisation is the optimisation of the simulated environments, with the goal of emulating the real world more closely. This approach requires real-world data for the simulator to fit to the real-world observations, making it robot- and application-specific @cite_7 @cite_13 .
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_13", "@cite_20" ], "mid": [ "2737215781", "1999156278", "31984690", "", "2962899390" ], "abstract": [ "While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuo-motor policies for real robotic systems without relying entirely on large real-world robot datasets.", "We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. 
Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.", "Abstract In this work a new method to evolutionary robotics is proposed, it combines into asingle framework, learning from reality and simulations. An illusory sub-system is incorporated as an integral part of an autonomous system. The adaptation of the illusory system results from minimizing differences of robot behavior evaluations in reality and in simulations. Behavior guides the illusory adaptation by sampling task-relevant instances of the world. Thus explicit calibration is not required. We remark two attributes of the presented methodology: (i) it is a promising approach for crossing the reality-gap among simulation and reality in evolutionary robotics, and (ii) it allows to generate automatically models and theories of the real robot environment expressed as simulations. We present validation experiments on locomotive behavior acquisition for legged robots.", "", "Instrumenting and collecting annotated visual grasping datasets to train modern machine learning algorithms can be extremely time-consuming and expensive. An appealing alternative is to use off-the-shelf simulators to render synthetic data for which ground-truth annotations are generated automatically. Unfortunately, models trained purely on simulated data often fail to generalize to the real world. We study how randomized simulated environments and domain adaptation methods can be extended to train a grasping system to grasp novel objects from raw monocular RGB images. 
We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN. We show that, by using synthetic data and domain adaptation, we are able to reduce the number of real-world samples needed to achieve a given level of performance by up to 50 times, using only randomly generated simulated objects. We also show that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with 939,777 labeled real-world samples." ] }
1811.01484
2899188444
We quantify the accuracy of various simulators compared to a real world robotic reaching and interaction task. Simulators are used in robotics to design solutions for real world hardware without the need for physical access. The 'reality gap' prevents solutions developed or learnt in simulation from performing well, or at all, when transferred to real-world hardware. Making use of a Kinova robotic manipulator and a motion capture system, we record a ground truth enabling comparisons with various simulators, and present quantitative data for various manipulation-oriented robotic tasks. We show the relative strengths and weaknesses of numerous contemporary simulators, highlighting areas of significant discrepancy, and assisting researchers in the field in their selection of appropriate simulators for their use cases. All code and parameter listings are publicly available from: this https URL .
Reviews of physics engines in the past have proven many times over that no one engine is capable of modelling all scenarios. Boeing @cite_18 compared PhysX, Bullet, JigLib, Newton, Open Dynamics Engine (ODE), Tokamak and True Axis; they reported that Bullet performed best overall, although no physics engine was best at all tasks. Chung @cite_6 likewise found, when testing Bullet, Dynamic Animation and Robotics Toolkit (DART), MuJoCo, and ODE, that no single engine performed best at all tasks, stating that for different tasks and different conditions a different physics engine was found to be better. These findings are further corroborated by Gonzalez-Badillo @cite_8 , who showed that PhysX performs better than Bullet for non-complex geometries but is unable to simulate more complex geometries to the same degree as Bullet.
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_8" ], "mid": [ "", "2386323994", "1971286791" ], "abstract": [ "", "Contact behaviors in physics simulations are important for real-time interactive applications, especially in virtual reality applications where user's body parts are tracked and interact with the environment via contact. For these contact simulations, it is ideal to have small changes in initial condition yield predictable changes in the output. Predictable simulation is key for success in iterative learning processes as well, such as learning controllers for manipulations or locomotion tasks. Here, we present an extensive comparison of contact simulations using Bullet Physics, Dynamic Animation and Robotics Toolkit DART, MuJoCo, and Open Dynamics Engine, with a focus on predictability of behavior. We first tune each engine to match an analytical solution as closely as possible and then compare the results for a more complex simulation. We found that in the commonly available physics engines, small changes in initial condition can sometimes induce different sequences of contact events to occur and ultimately lead to a vastly different result. Our results confirmed that parameter settings do matter a great deal and suggest that there may be a trade-off between accuracy and predictability. Copyright © 2016 John Wiley & Sons, Ltd.", "Purpose – In this study, a new methodology to evaluate the performance of physics simulation engines (PSEs) when used in haptic virtual assembly applications is proposed. This methodology can be used to assess the performance of any physics engine. To prove the feasibility of the proposed methodology, two third-party PSEs – Bullet and PhysX™ – were evaluated. The paper aims to discuss these issues. Design methodology approach – Eight assembly tests comprising variable geometric and dynamic complexity were conducted.
The strengths and weaknesses of each simulation engine for haptic virtual assembly were identified by measuring different parameters such as task completion time, influence of weight perception and force feedback. Findings – The proposed tests have led to the development of a standard methodology by which physics engines can be compared and evaluated. The results have shown that when the assembly comprises complex shapes, Bullet has better performance than PhysX. It was also observed that the..." ] }
1811.01427
2936775203
We describe a @math -query monotonicity tester for Boolean functions @math on the @math -hypergrid. This is the first @math monotonicity tester with query complexity independent of @math . Motivated by this independence of @math , we initiate the study of monotonicity testing of measurable Boolean functions @math over the continuous domain, where the distance is measured with respect to a product distribution over @math . We give a @math -query monotonicity tester for such functions. Our main technical result is a domain reduction theorem for monotonicity. For any function @math , let @math be its distance to monotonicity. Consider the restriction @math of the function on a random @math sub-hypergrid of the original domain. We show that for @math , the expected distance of the restriction is @math . Previously, such a result was only known for @math (Berman-Raskhodnikova-Yaroslavtsev, STOC 2014). Our result for testing Boolean functions over @math then follows by applying the @math -query hypergrid tester of Black-Chakrabarty-Seshadhri (SODA 2018). To obtain the result for testing Boolean functions over @math , we use standard measure theoretic tools to reduce monotonicity testing of a measurable function @math to monotonicity testing of a discretized version of @math over a hypergrid domain @math for large, but finite, @math (that may depend on @math ). The independence of @math in the hypergrid tester is crucial to getting the final tester over @math .
We give a short summary of Boolean monotonicity testing over the hypercube. The problem was introduced by @cite_2 (refer to Raskhodnikova's thesis @cite_14 ), with an @math -query tester. The first improvement over that bound was the @math tester of Chakrabarty and Seshadhri @cite_6 , achieved via a directed analogue of Margulis' isoperimetric theorem. Chen-Servedio-Tan improved the analysis to get an @math bound @cite_27 . A breakthrough result of Khot-Minzer-Safra gave an @math tester @cite_29 . All these testers are non-adaptive and one-sided. A (nearly) matching lower bound of @math for this case had been proved earlier @cite_26 . The first polynomial two-sided lower bound was given by Chen-Servedio-Tan, and was subsequently improved to @math in @cite_1 . The first polynomial lower bound of @math for adaptive testers was given recently by Belovs-Blais @cite_15 , and was improved to @math by Chen-Waingarten-Xie @cite_31 .
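The testers summarized above all build on querying comparable pairs of points. The simplest such routine, the classic non-adaptive, one-sided edge tester, can be sketched in a few lines. This is a generic illustration of the idea (sample random hypercube edges and look for a violation), not the algorithm of any single cited paper; the functions `maj` and `par` are toy examples.

```python
import random

def edge_tester(f, n, queries):
    """One-sided, non-adaptive edge tester for monotonicity on {0,1}^n.

    Samples random edges (x with coordinate i set to 0, then to 1) and
    rejects as soon as it sees f(lower) > f(upper).
    """
    for _ in range(queries):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = x[:], x[:]
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # a violated edge certifies non-monotonicity
    return True  # monotone functions are always accepted (one-sided error)

maj = lambda x: int(sum(x) >= 2)  # majority: monotone on 3 bits
par = lambda x: sum(x) % 2        # parity: far from monotone on 3 bits
```

A monotone function can never be rejected, while for a function that is far from monotone a constant fraction of edges is violated, so enough random samples reject it with high probability.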
{ "cite_N": [ "@cite_31", "@cite_14", "@cite_26", "@cite_29", "@cite_1", "@cite_6", "@cite_27", "@cite_2", "@cite_15" ], "mid": [ "2593853527", "", "2038521541", "2212352090", "", "2515694000", "2085693797", "", "2267350596" ], "abstract": [ "We prove a lower bound of Ω(n^{1/3}) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(n^{1/4}) for the same problem by Belovs and Blais (STOC'16). Our result builds on a new family of random Boolean functions that can be viewed as a two-level extension of Talagrand's random DNFs. Beyond monotonicity we prove a lower bound of Ω(√n) for two-sided, adaptive algorithms and a lower bound of Ω(n) for one-sided, non-adaptive algorithms for testing unateness, a natural generalization of monotonicity. The latter matches the linear upper bounds by Khot and Shinkar (RANDOM'16) and by Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova, and Seshadhri (2017).", "", "The field of property testing studies algorithms that distinguish, using a small number of queries, between inputs which satisfy a given property, and those that are 'far' from satisfying the property. Testing properties that are defined in terms of monotonicity has been extensively investigated, primarily in the context of the monotonicity of a sequence of integers, or the monotonicity of a function over the n-dimensional hypercube {1,…,m}^n. These works resulted in monotonicity testers whose query complexity is at most polylogarithmic in the size of the domain. We show that in its most general setting, testing that Boolean functions are close to monotone is equivalent, with respect to the number of required queries, to several other testing problems in logic and graph theory. 
These problems include: testing that a Boolean assignment of variables is close to an assignment that satisfies a specific 2-CNF formula, testing that a set of vertices is close to one that is a vertex cover of a specific graph, and testing that a set of vertices is close to a clique. We then investigate the query complexity of monotonicity testing of both Boolean and integer functions over general partial orders. We give algorithms and lower bounds for the general problem, as well as for some interesting special cases. In proving a general lower bound, we construct graphs with combinatorial properties that may be of independent interest.", "We show a directed and robust analogue of a boolean isoperimetric type theorem of Talagrand. As an application, we give a monotonicity testing algorithm that makes, up to polylog factors, √n/ε^2 non-adaptive queries to a function f: {0,1}^n → {0,1}, always accepts a monotone function and rejects a function that is ε-far from being monotone with constant probability.", "", "A Boolean function @math is said to be @math -far from monotone if @math needs to be modified in at least @math -fraction of the points to make it monotone. We design a randomized tester that is given oracle access to @math and an input parameter @math and has the following guarantee: It outputs Yes if the function is monotonically nondecreasing and outputs No with probability @math , if the function is @math -far from monotone. This nonadaptive, one-sided tester makes @math queries to the oracle.", "We consider the problem of testing whether an unknown Boolean function f : {-1, 1}^n → {-1, 1} is monotone versus ε-far from every monotone function. The two main results of this paper are a new lower bound and a new algorithm for this well-studied problem. 
Lower bound: We prove an Ω(n^{1/5}) lower bound on the query complexity of any non-adaptive two-sided error algorithm for testing whether an unknown Boolean function f is monotone versus constant-far from monotone. This gives an exponential improvement on the previous lower bound of Ω(log n) due to [1]. We show that the same lower bound holds for monotonicity testing of Boolean-valued functions over hypergrid domains {1,…,m}^n for all m ≥ 2. Upper bound: We present an O(n^{5/6}) poly(1/ε)-query algorithm that tests whether an unknown Boolean function f is monotone versus ε-far from monotone. Our algorithm, which is non-adaptive and makes one-sided error, is a modified version of the algorithm of Chakrabarty and Seshadhri [2], which makes O(n^{7/8}) poly(1/ε) queries.", "", "We show that every algorithm for testing n-variate Boolean functions for monotonicity has query complexity Ω(n^{1/4}). All previous lower bounds for this problem were designed for non-adaptive algorithms and, as a result, the best previous lower bound for general (possibly adaptive) monotonicity testers was only Ω(log n). Combined with the query complexity of the non-adaptive monotonicity tester of Khot, Minzer, and Safra (FOCS 2015), our lower bound shows that adaptivity can result in at most a quadratic reduction in the query complexity for testing monotonicity. By contrast, we show that there is an exponential gap between the query complexity of adaptive and non-adaptive algorithms for testing regular linear threshold functions (LTFs) for monotonicity. Chen, De, Servedio, and Tan (STOC 2015) recently showed that non-adaptive algorithms require almost Ω(n^{1/2}) queries for this task. We introduce a new adaptive monotonicity testing algorithm which has query complexity O(log n) when the input is a regular LTF." ] }
1811.01427
2936775203
We describe a @math -query monotonicity tester for Boolean functions @math on the @math -hypergrid. This is the first @math monotonicity tester with query complexity independent of @math . Motivated by this independence of @math , we initiate the study of monotonicity testing of measurable Boolean functions @math over the continuous domain, where the distance is measured with respect to a product distribution over @math . We give a @math -query monotonicity tester for such functions. Our main technical result is a domain reduction theorem for monotonicity. For any function @math , let @math be its distance to monotonicity. Consider the restriction @math of the function on a random @math sub-hypergrid of the original domain. We show that for @math , the expected distance of the restriction is @math . Previously, such a result was only known for @math (Berman-Raskhodnikova-Yaroslavtsev, STOC 2014). Our result for testing Boolean functions over @math then follows by applying the @math -query hypergrid tester of Black-Chakrabarty-Seshadhri (SODA 2018). To obtain the result for testing Boolean functions over @math , we use standard measure theoretic tools to reduce monotonicity testing of a measurable function @math to monotonicity testing of a discretized version of @math over a hypergrid domain @math for large, but finite, @math (that may depend on @math ). The independence of @math in the hypergrid tester is crucial to getting the final tester over @math .
For Boolean monotonicity testing over general hypergrids, a non-adaptive, one-sided @math -query tester was given in @cite_13 . This was improved to @math by Berman-Raskhodnikova-Yaroslavtsev @cite_21 . They also prove an @math separation between adaptive and non-adaptive monotonicity testers for @math : they show an @math adaptive tester (for any constant @math ), and an @math lower bound for non-adaptive monotonicity testers. Previous work by the present authors gives a monotonicity tester with query complexity @math via directed isoperimetric inequalities for augmented hypergrids @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_13" ], "mid": [ "2781320126", "648134895", "1586809440" ], "abstract": [ "We study monotonicity testing of Boolean functions over the hypergrid [n]^d and design a non-adaptive tester with 1-sided error whose query complexity is O(d^{5/6}) · poly(log n, 1/e). Previous to our work, the best known testers had query complexity linear in d but independent of n. We improve upon these testers as long as n = 2^{d^{o(1)}}. To obtain our results, we work with what we call the augmented hypergrid, which adds extra edges to the hypergrid. Our main technical contribution is a Margulis-style isoperimetric result for the augmented hypergrid, and our tester, like previous testers for the hypercube domain, performs directed random walks on this structure.", "", "We present improved algorithms for testing monotonicity of functions. Namely, given the ability to query an unknown function f: Σ^n ↦ Ξ, where Σ and Ξ are finite ordered sets, the test always accepts a monotone f, and rejects f with high probability if it is e-far from being monotone (i.e., every monotone function differs from f on more than an e fraction of the domain). For any e > 0, the query complexity of the test is O((n/e) · log ∣Σ∣ · log ∣Ξ∣). The previous best known bound was Õ((n^2/e) · log^2 ∣Σ∣)." ] }
1811.01249
2898843400
In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost and performance tradeoff. This paper introduces a novel and scalable approach for cost-aware feature acquisition at test time. The method incrementally asks for features based on the available context, that is, the known feature values. The proposed method is based on sensitivity analysis in neural networks and density estimation using denoising autoencoders with binary representation layers. In the proposed architecture, a denoising autoencoder is used to handle unknown features (i.e., features that are yet to be acquired), and the sensitivity of predictions with respect to each unknown feature is used as a context-dependent measure of informativeness. We evaluated the proposed method on eight different real-world data sets as well as one synthesized data set and compared its performance with several other approaches in the literature. According to the results, the suggested method is capable of efficiently acquiring features at test time in a cost- and context-aware fashion.
One approach to incorporating feature acquisition costs, or feature costs in general, is to account for them during the training phase, trading off prediction accuracy against prediction cost. An example of such approaches is limiting the number of features actually used by the predictor model via @math regularization @cite_35 . In this method, the @math regularization forces the weights corresponding to certain features to zero, and hence those features can be omitted during the test phase. Other methods in the literature define and solve optimization problems over both the prediction performance and the prediction cost @cite_24 @cite_3 @cite_17 . Nevertheless, in all these methods, the final set of selected features is fixed, so they fail to capture and take advantage of the contextual information available at test time.
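A minimal sketch of the l1 idea described above, on a synthetic regression task: the penalty zeros out the weights of uninformative features, so those features never need to be acquired at test time. The `lasso_ista` solver and all data below are illustrative, not taken from any cited paper.

```python
import numpy as np

def lasso_ista(X, y, lam, lr=0.01, iters=2000):
    """l1-regularised least squares via proximal gradient (ISTA).

    The l1 penalty drives the weights of uninformative (or too costly)
    features to exactly zero, so those features can be dropped at test time.
    """
    m = X.shape[0]
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / m                # gradient of the squared loss
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic task: only features 0 and 1 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]
w = lasso_ista(X, y, lam=0.5)
kept = np.flatnonzero(np.abs(w) > 1e-6)             # features to acquire at test time
```

The selected set `kept` is fixed once training ends, which is exactly the limitation the paragraph above points out: the same features are bought for every test instance, regardless of context.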
{ "cite_N": [ "@cite_24", "@cite_35", "@cite_3", "@cite_17" ], "mid": [ "1519196800", "2063978378", "", "2077808043" ], "abstract": [ "Most classification algorithms are \"passive\", in that they assign a class label to each instance based only on the description given, even if that description is incomplete. By contrast, an active classifier can--at some cost--obtain the values of some unspecified attributes, before deciding upon a class label. This can be useful, for instance, when deciding whether to gather information relevant to a medical procedure or experiment. The expected utility of using an active classifier depends on both the cost required to obtain the values of additional attributes and the penalty incurred if the classifier outputs the wrong classification. This paper analyzes the problem of learning optimal active classifiers, using a variant of the probably-approximately-correct (PAC) model. After defining the framework, we show that this task can be achieved efficiently when the active classifier is allowed to perform only (at most) a constant number of tests. We then show that, in more general environments, this task of learning optimal active classifiers is often intractable.", "The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. 
Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates.", "", "There are many sensing challenges for which one must balance the effectiveness of a given measurement with the associated sensing cost. For example, when performing a diagnosis a doctor must balance the cost and benefit of a given test (measurement), and the decision to stop sensing (stop performing tests) must account for the risk to the patient and doctor (malpractice) for a given diagnosis based on observed data. This motivates a cost-sensitive classification problem in which the features (sensing results) are not given a priori; the algorithm determines which features to acquire next, as well as when to stop sensing and make a classification decision based on previous observations (accounting for the costs of various types of errors, as well as the rewards of being correct). 
We formally define the cost-sensitive classification problem and solve it via a partially observable Markov decision process (POMDP). While the POMDP constitutes an intuitively appealing formulation, the intrinsic properties of classification tasks resist application of it to this problem. We circumvent the difficulties of the POMDP via a myopic approach, with an adaptive stopping criterion linked to the standard POMDP. The myopic algorithm is computationally feasible, easily handles continuous features, and seamlessly avoids repeated actions. Experiments with several benchmark data sets show that the proposed method yields state-of-the-art performance, and importantly our method uses only a small fraction of the features that are generally used in competitive approaches." ] }
1811.01249
2898843400
In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost and performance tradeoff. This paper introduces a novel and scalable approach for cost-aware feature acquisition at test time. The method incrementally asks for features based on the available context, that is, the known feature values. The proposed method is based on sensitivity analysis in neural networks and density estimation using denoising autoencoders with binary representation layers. In the proposed architecture, a denoising autoencoder is used to handle unknown features (i.e., features that are yet to be acquired), and the sensitivity of predictions with respect to each unknown feature is used as a context-dependent measure of informativeness. We evaluated the proposed method on eight different real-world data sets as well as one synthesized data set and compared its performance with several other approaches in the literature. According to the results, the suggested method is capable of efficiently acquiring features at test time in a cost- and context-aware fashion.
One intuitive approach to incorporating feature costs during the training phase, while still considering the available context during the test phase, is to use the idea of decision trees. One of the best-known examples of this approach is the face-detection cascade classifier of Viola and Jones @cite_21 . While their goal was to increase prediction speed by rejecting negative samples as early as possible within a cascade of classifiers, many papers followed their architecture and incorporated feature costs when constructing cascade predictors @cite_30 @cite_28 . One main drawback of cascade approaches is that they are only applicable to problems with a considerable class imbalance, such as face detection or spam email detection, where the number of negative samples is significantly higher than the number of positive samples. However, in many real-world applications, such as document or image classification, the classes are relatively balanced. To overcome this issue, the authors of @cite_36 @cite_0 suggested classifier trees instead of classifier cascades to handle problems where cascades are not applicable.
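The cascade idea can be sketched with a hypothetical two-stage classifier. The stage functions, rejection threshold, and costs below are all made up for illustration; they are not from any cited system.

```python
def cascade_predict(x, cheap_stage, full_stage, reject_below=0.1,
                    cheap_cost=1.0, full_cost=10.0):
    """Two-stage cost-aware cascade in the spirit of Viola-Jones.

    The cheap stage looks only at inexpensive features; inputs it scores
    as confident negatives exit early and never pay for the expensive
    features. Returns (label, feature-acquisition cost paid).
    """
    if cheap_stage(x) < reject_below:
        return 0, cheap_cost                  # early negative exit
    return full_stage(x), cheap_cost + full_cost

# Hypothetical stages: the cheap stage thresholds feature 0, while the
# full stage (standing in for costly feature extraction) uses feature 1.
cheap = lambda x: x[0]
full = lambda x: int(x[1] > 0.5)

easy_negative = cascade_predict((0.02, 0.9), cheap, full)
hard_example = cascade_predict((0.70, 0.9), cheap, full)
```

With heavy class imbalance, most inputs take the early exit and the average cost stays close to `cheap_cost`; with balanced classes, most inputs reach the expensive stage, which is exactly why cascades lose their advantage there.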
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_36", "@cite_21", "@cite_0" ], "mid": [ "2182356372", "2949654875", "2128466927", "2137401668", "2101474437" ], "abstract": [ "Machine learning algorithms are increasingly used in large-scale industrial settings. Here, the operational cost during test-time has to be taken into account when an algorithm is designed. This operational cost is affected by the average running time and the computation time required for feature extraction. When a diverse set of features is used, the latter can vary drastically. In this paper we propose an algorithm that constructs a cascade of classifiers which explicitly trades-off operational cost and classifier accuracy while accounting for on-demand feature extraction costs. Different from previous work, our algorithm reoptimizes trained classifiers and allows expensive features to be scheduled at any stage within the cascade to minimize overall cost. Experiments on actual web-search ranking data sets demonstrate that our framework leads to drastic test-time improvements.", "As machine learning algorithms enter applications in industrial settings, there is increased interest in controlling their cpu-time during testing. The cpu-time consists of the running time of the algorithm and the extraction time of the features. The latter can vary drastically when the feature set is diverse. In this paper, we propose an algorithm, the Greedy Miser, that incorporates the feature extraction cost during training to explicitly minimize the cpu-time during testing. The algorithm is a straightforward extension of stage-wise regression and is equally suitable for regression or multi-class classification. Compared to prior work, it is significantly more cost-effective and scales to larger data sets.", "Machine learning algorithms have successfully entered industry through many real-world applications (e.g., search engines and product recommendations). 
In these applications, the test-time CPU cost must be budgeted and accounted for. In this paper, we examine two main components of the test-time CPU cost, classifier evaluation cost and feature extraction cost, and show how to balance these costs with the classifier accuracy. Since the computation required for feature extraction dominates the test-time cost of a classifier in these settings, we develop two algorithms to efficiently balance the performance with the test-time cost. Our first contribution describes how to construct and optimize a tree of classifiers, through which test inputs traverse along individual paths. Each path extracts different features and is optimized for a specific sub-partition of the input space. Our second contribution is a natural reduction of the tree of classifiers into a cascade. The cascade is particularly useful for class-imbalanced data sets as the majority of instances can be early-exited out of the cascade when the algorithm is sufficiently confident in its prediction. Because both approaches only compute features for inputs that benefit from them the most, we find our trained classifiers lead to high accuracies at a small fraction of the computational cost.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. 
A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "In a large visual multi-class detection framework, the timeliness of results can be crucial. Our method for timely multi-class detection aims to give the best possible performance at any single point after a start time; it is terminated at a deadline time. Toward this goal, we formulate a dynamic, closed-loop policy that infers the contents of the image in order to decide which detector to deploy next. In contrast to previous work, our method significantly diverges from the predominant greedy strategies, and is able to learn to take actions with deferred values. We evaluate our method with a novel timeliness measure, computed as the area under an Average Precision vs. Time curve. Experiments are conducted on the PASCAL VOC object detection dataset. If execution is stopped when only half the detectors have been run, our method obtains 66 better AP than a random ordering, and 14 better performance than an intelligent baseline. On the timeliness measure, our method obtains at least 11 better performance. Our method is easily extensible, as it treats detectors and classifiers as black boxes and learns from execution traces using reinforcement learning." ] }
1811.01249
2898843400
In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost and performance tradeoff. This paper introduces a novel and scalable approach for cost-aware feature acquisition at test time. The method incrementally asks for features based on the available context, that is, the known feature values. The proposed method is based on sensitivity analysis in neural networks and density estimation using denoising autoencoders with binary representation layers. In the proposed architecture, a denoising autoencoder is used to handle unknown features (i.e., features that are yet to be acquired), and the sensitivity of predictions with respect to each unknown feature is used as a context-dependent measure of informativeness. We evaluated the proposed method on eight different real-world data sets as well as one synthesized data set and compared its performance with several other approaches in the literature. According to the results, the suggested method is capable of efficiently acquiring features at test time in a cost- and context-aware fashion.
To address these issues, there has recently been considerable attention to using learning methods to solve the generic problem of cost-sensitive and context-aware feature acquisition. He @cite_27 suggested a method based on imitation learning that trains a model to predict the optimal feature-query decision given the features available so far. Contardo @cite_9 @cite_6 introduced the idea of formulating feature acquisition as a reinforcement learning problem and solving it separately. While these methods truly incorporate test-time context information into their decisions, they require the extra effort of training a feature-query model in addition to the target predictor.
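As a rough illustration of context-aware acquisition (a deliberate simplification, not the exact model of any cited paper, which variously use imitation learning, reinforcement learning, or autoencoder-based sensitivities), a greedy loop can impute unknown features and repeatedly buy the feature with the best sensitivity-per-cost. The function names, the mean-imputation scheme, and the toy predictor are all hypothetical.

```python
import numpy as np

def acquire_greedily(predict, oracle, x_known, costs, feature_means, budget):
    """Greedy, context-aware feature acquisition (simplified sketch).

    Unknown features are imputed with their means; each unknown feature's
    informativeness is the prediction's sensitivity to a unit perturbation
    of its imputed value, divided by its acquisition cost. The best
    affordable feature is observed via `oracle` until nothing fits the
    remaining budget. x_known maps feature index -> observed value.
    """
    d = len(feature_means)
    spent = 0.0
    while True:
        x = np.array([x_known.get(j, feature_means[j]) for j in range(d)])
        base = predict(x)
        best, best_score = None, 0.0
        for j in range(d):
            if j in x_known or spent + costs[j] > budget:
                continue
            x_pert = x.copy()
            x_pert[j] += 1.0                       # unit perturbation of the imputed value
            score = abs(predict(x_pert) - base) / costs[j]
            if score > best_score:
                best, best_score = j, score
        if best is None:
            return x_known, spent
        x_known[best] = oracle(best)               # pay to observe the true value
        spent += costs[best]

# Toy run: the predictor weights feature 0 most heavily, so it is bought
# first; feature 2 is ignored by the predictor and never acquired.
predict = lambda x: 2.0 * x[0] + 0.1 * x[1]
known, spent = acquire_greedily(predict, [5.0, 7.0, 9.0].__getitem__,
                                {}, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0], budget=2.0)
```

Because sensitivities are recomputed after every purchase, the order of acquisition depends on the values already observed, which is the context-awareness the surrounding discussion is about.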
{ "cite_N": [ "@cite_9", "@cite_27", "@cite_6" ], "mid": [ "2526665672", "2325933960", "2503743130" ], "abstract": [ "We propose to tackle the cost-sensitive learning problem, where each feature is associated to a particular acquisition cost. We propose a new model with the following key properties: (i) it acquires features in an adaptive way, (ii) features can be acquired per block (several at a time) so that this model can deal with high dimensional data, and (iii) it relies on representation-learning ideas. The effectiveness of this approach is demonstrated on several experiments considering a variety of datasets and with different cost settings.", "We present an instance-specific test-time dynamic feature selection algorithm. Our algorithm sequentially chooses features given previously selected features and their values. It stops the selection process to make a prediction according to a user-specified accuracy-cost trade-off. We cast the sequential decision-making problem as a Markov Decision Process and apply imitation learning techniques. We address the problem of learning and inference jointly in a simple multiclass classification setting. Experimental results on UCI datasets show that our approach achieves the same or higher accuracy using only a small fraction of features than static feature selection methods.", "We propose a reinforcement learning based approach to tackle the cost-sensitive learning problem where each input feature has a specific cost. The acquisition process is handled through a stochastic policy which allows features to be acquired in an adaptive way. The general architecture of our approach relies on representation learning to enable performing prediction on any partially observed sample, whatever the set of its observed features are. The resulting model is an original mix of representation learning and of reinforcement learning ideas. It is learned with policy gradient techniques to minimize a budgeted inference cost. 
We demonstrate the effectiveness of our proposed method with several experiments on a variety of datasets for the sparse prediction problem where all features have the same cost, but also for some cost-sensitive settings." ] }
1811.01249
2898843400
In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost and performance tradeoff. This paper introduces a novel and scalable approach for cost-aware feature acquisition at test time. The method incrementally asks for features based on the available context, i.e., the feature values that are already known. The proposed method is based on sensitivity analysis in neural networks and density estimation using denoising autoencoders with binary representation layers. In the proposed architecture, a denoising autoencoder is used to handle unknown features (i.e., features that are yet to be acquired), and the sensitivity of predictions with respect to each unknown feature is used as a context-dependent measure of informativeness. We evaluated the proposed method on eight different real-world data sets as well as one synthesized data set and compared its performance with several other approaches in the literature. According to the results, the suggested method is capable of efficiently acquiring features at test time in a cost- and context-aware fashion.
An alternative idea for measuring the informativeness of features given the context is using sensitivity analysis at test time to measure the influence of each feature on the predictions. Early et al. @cite_31 introduced a method based on sensitivity analysis that exhaustively measures the impact of acquiring each feature on the prediction outcome. Their solution does not require training any other model, and it works in conjunction with almost any supervised learning algorithm. However, exhaustive sensitivity measurement is computationally expensive, making it impractical to examine the sensitivity with respect to each unknown feature in problems with a large number of features.
{ "cite_N": [ "@cite_31" ], "mid": [ "2515289904" ], "abstract": [ "Predictive algorithms are a critical part of the ubiquitous computing vision, enabling appropriate action on behalf of users. A common class of algorithms, which has seen uptake in ubiquitous computing, is supervised machine learning algorithms. Such algorithms are trained to make predictions based on a set of features (selected at training time). However, features needed at prediction time (such as mobile information that impacts battery life, or information collected from users via experience sampling) may be costly to collect. In addition, both cost and value of a feature may change dynamically based on real-world context (such as battery life or user location) and prediction context (what features are already known, and what their values are). We contribute a framework for dynamically trading off feature cost against prediction quality at prediction time. We demonstrate this work in the context of three prediction tasks: providing prospective tenants estimates for energy costs in potential homes, estimating momentary stress levels from both sensed and user-provided mobile data, and classifying images to facilitate opportunistic device interactions. Our results show that while our approach to cost-sensitive feature selection is up to 45 less costly than competing approaches, error rates are equivalent or better." ] }
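The exhaustive sensitivity analysis sketched in the passage above (perturb each unknown feature, observe the change in the prediction, and weigh it against acquisition cost) can be illustrated with a toy model. The logistic predictor, zero-imputation of unknown features, perturbation size, and cost vector below are illustrative assumptions, not the cited implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def next_feature_to_acquire(w, b, x, known, costs, delta=1.0):
    """Pick the unknown feature whose perturbation changes the
    prediction most per unit acquisition cost (exhaustive sensitivity)."""
    # Impute unknown features with 0 (a stand-in for a learned density model).
    x_imp = np.where(known, x, 0.0)
    base = sigmoid(w @ x_imp + b)
    best, best_score = None, -np.inf
    for j in np.flatnonzero(~known):
        x_pert = x_imp.copy()
        x_pert[j] += delta                      # perturb one unknown feature
        sens = abs(sigmoid(w @ x_pert + b) - base)
        score = sens / costs[j]                 # cost-aware informativeness
        if score > best_score:
            best, best_score = j, score
    return best

w = np.array([2.0, 0.1, -1.5])
b = 0.0
x = np.array([0.7, 0.2, 0.9])                   # true values (only some known)
known = np.array([False, True, False])
costs = np.array([1.0, 1.0, 4.0])
print(next_feature_to_acquire(w, b, x, known, costs))  # → 0
```

For d unknown features this costs d forward passes per acquisition step, which is exactly the expense that the gradient-based follow-up work avoids by ranking all unknown features with a single backward pass.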
1811.01249
2898843400
In real-world scenarios, different features have different acquisition costs at test time, which necessitates cost-aware methods to optimize the cost and performance tradeoff. This paper introduces a novel and scalable approach for cost-aware feature acquisition at test time. The method incrementally asks for features based on the available context, i.e., the feature values that are already known. The proposed method is based on sensitivity analysis in neural networks and density estimation using denoising autoencoders with binary representation layers. In the proposed architecture, a denoising autoencoder is used to handle unknown features (i.e., features that are yet to be acquired), and the sensitivity of predictions with respect to each unknown feature is used as a context-dependent measure of informativeness. We evaluated the proposed method on eight different real-world data sets as well as one synthesized data set and compared its performance with several other approaches in the literature. According to the results, the suggested method is capable of efficiently acquiring features at test time in a cost- and context-aware fashion.
In this paper we suggest a novel approach that is based on the idea of sensitivity analysis. The proposed approach incrementally asks for features based on the feature acquisition costs and the expected effect each feature can induce on the prediction. Furthermore, the devised method uses back-propagation of gradients and binary representation layers in neural networks to address the computational load as well as scalability concerns. In an earlier work @cite_19 , we introduced the idea of sensitivity analysis as a method for dynamic feature selection. However, in this paper, we extend the idea by considering feature acquisition costs, introducing improvements such as feature encoding, and conducting more detailed experiments and analysis.
{ "cite_N": [ "@cite_19" ], "mid": [ "2792899180" ], "abstract": [ "The decision to select which features to use and query can be effectively addressed based on the available features or context. This paper presents a novel approach based on denoising autoencoders and sensitivity analysis in neural networks to efficiently query for unknown features given the context. In this setting, a denoising autoencoder is responsible for handling unknown features. On the other hand, the sensitivity of output predictions with respect to each unknown feature is used as a measure of feature importance. We evaluated the suggested method on human activity recognition and handwritten digit recognition tasks. According to the results, using the proposed method can reduce the number of extracted features in these datasets by approximately 70 and 60 , respectively. This reduction in the number of required features can be crucially important in mobile and battery-powered IoT systems as it reduces the amount of required data acquisition and computational load substantially." ] }
1811.01498
2898935494
In-band full duplex wireless is of utmost interest to future wireless communication and networking due to its great potential for spectrum efficiency. IBFD wireless, however, is throttled by its key challenge, namely self-interference. Therefore, effective self-interference cancellation is the key to enabling IBFD wireless. This paper proposes a real-time non-linear self-interference cancellation solution based on deep learning. In this solution, a self-interference channel is modeled by a deep neural network (DNN). Synchronized self-interference channel data is first collected to train the DNN of the self-interference channel. Afterwards, the trained DNN is used to cancel the self-interference at a wireless node. This solution has been implemented on a USRP SDR testbed and evaluated in the real world in multiple scenarios with various modulations, transmitting information including numbers, texts, and images. It achieves 17 dB of digital cancellation, which is very close to the self-interference power and nearly cancels the self-interference at an SDR node in the testbed. The solution yields an average 8.5% bit error rate (BER) over many scenarios and different modulation schemes.
Since the first systematic study of IBFD wireless on a narrow band @cite_14 , several solutions have been proposed for IBFD wireless on larger bandwidths. These IBFD wireless SIC solutions can be classified into 3 different categories @cite_11 : (1) antenna cancellation, (2) analog cancellation and (3) digital cancellation.
{ "cite_N": [ "@cite_14", "@cite_11" ], "mid": [ "2068678243", "2135328009" ], "abstract": [ "Division-free duplex is proposed for future wireless systems, thus providing simultaneous duplex radio transmission on a division-free basis. The required duplex isolation can be achieved by electronic interference cancellation operating at both RF and baseband, with results from an experimental RF system yielding some 72 dB duplex isolation at 1.8 GHz for 200 kHz channelisation.", "In-band full-duplex (IBFD) transmission represents an attractive option for increasing the throughput of wireless communication systems. A key challenge for IBFD transmission is reducing self-interference. Fortunately, the power associated with residual self-interference can be effectively canceled for feasible IBFD transmission with combinations of various advanced passive, analog, and digital self-interference cancellation schemes. In this survey paper, we first review the basic concepts of IBFD transmission with shared and separated antennas and advanced self-interference cancellation schemes. Furthermore, we also discuss the effects of IBFD transmission on system performance in various networks such as bidirectional, relay, and cellular topology networks. This survey covers a wide array of technologies that have been proposed in the literature as feasible for IBFD transmission and evaluates the performance of the IBFD systems compared to conventional half-duplex transmission in connection with theoretical aspects such as the achievable sum rate, network capacity, system reliability, and so on. We also discuss the research challenges and opportunities associated with the design and analysis of IBFD systems in a variety of network topologies. This work also explores the development of MAC protocols for an IBFD system in both infrastructure-based and ad hoc networks. 
Finally, we conclude our survey by reviewing the advantages of IBFD transmission when applied for different purposes, such as spectrum sensing, network secrecy, and wireless power transfer." ] }
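A least-squares linear channel estimate gives a minimal sketch of the digital-cancellation category above: estimate the self-interference (SI) channel from known transmit samples, subtract the reconstructed SI from the received signal, and report the cancellation in dB. The toy 3-tap channel and noise level are assumptions; the deep-learning solution discussed in this record replaces this linear estimator with a DNN to capture non-linear SI:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known transmitted self-interference signal and a toy 3-tap SI channel.
n, taps = 2000, 3
x = rng.standard_normal(n)
h_true = np.array([0.9, -0.4, 0.15])
si = np.convolve(x, h_true, mode="full")[:n]
rx = si + 0.01 * rng.standard_normal(n)      # received = SI + noise floor

# Regression matrix of delayed transmit samples: X[t, k] = x[t - k].
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[: n - k]]) for k in range(taps)]
)
h_est, *_ = np.linalg.lstsq(X, rx, rcond=None)

residual = rx - X @ h_est                    # digital cancellation step
canc_db = 10 * np.log10(np.mean(rx**2) / np.mean(residual**2))
print(round(canc_db, 1))                     # tens of dB for this linear toy channel
```

Cancellation is conventionally reported as this before/after power ratio in dB; a real transceiver's non-linearities are what keep a purely linear canceller well below the noise floor and motivate the learned model.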
1811.01498
2898935494
In-band full duplex wireless is of utmost interest to future wireless communication and networking due to its great potential for spectrum efficiency. IBFD wireless, however, is throttled by its key challenge, namely self-interference. Therefore, effective self-interference cancellation is the key to enabling IBFD wireless. This paper proposes a real-time non-linear self-interference cancellation solution based on deep learning. In this solution, a self-interference channel is modeled by a deep neural network (DNN). Synchronized self-interference channel data is first collected to train the DNN of the self-interference channel. Afterwards, the trained DNN is used to cancel the self-interference at a wireless node. This solution has been implemented on a USRP SDR testbed and evaluated in the real world in multiple scenarios with various modulations, transmitting information including numbers, texts, and images. It achieves 17 dB of digital cancellation, which is very close to the self-interference power and nearly cancels the self-interference at an SDR node in the testbed. The solution yields an average 8.5% bit error rate (BER) over many scenarios and different modulation schemes.
As far as the performance of IBFD wireless solutions is concerned, some works have reported that the channel capacity could be doubled on a single-hop link, while in mesh networks the actual benefit of IBFD wireless may be affected by spatial reuse and asynchronous contention @cite_6 . Other works @cite_2 @cite_13 show a 1.87-times throughput improvement and 1.42 times the capacity of a half-duplex system at a transmission power of 20 dBm. A recent work @cite_0 shows that in certain situations, IBFD wireless can double the downlink and uplink throughputs of static TDD. Another recent work reported a performance improvement by 30
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_6", "@cite_2" ], "mid": [ "2596686446", "", "1992439952", "2160030828" ], "abstract": [ "In-band full duplex has emerged as a solution for high data rate and low access delay for 5G wireless networks after its feasibility has been demonstrated. However, the impact of the in-band full duplex on the system-level performance of multi-cell wireless networks has not been investigated thoroughly. In this paper, we conduct an extensive simulation study to investigate the performance of in-band full duplex for indoor 5G small cell wireless networks. Particularly, we compare the in-band full duplex with static and dynamic time division duplexing schemes which require much less hardware complexity. We examine the effects of beamforming and interference cancellation under various traffic demands and asymmetry situations in the performance comparison. Our objective is to identify under which condition and with which technology support the in-band full duplex becomes advantageous over the simpler duplexing schemes. Numerical results indicate that for highly utilized wireless networks, in-band full duplex should be combined with interference cancellation and beamforming in order to achieve a performance gain over traditional duplexing schemes. Only then in-band full duplex is considered to be advantageous at any number of active mobile stations in the network and any downlink to uplink traffic proportion. Our results also suggest that in order to achieve a performance gain with the in-band full duplex in both links, the transmit power of the access points and the mobile stations should be comparable.", "", "Half duplex radios use two separate frequency bands to enable simultaneous transmission and reception. Recently, full duplex radio that allows a wireless node to transmit and receive simultaneously in one frequency band was proposed. 
We propose a MAC protocol for full duplex radio networks, FDMAC, which is a simple and efficient protocol compatible with IEEE 802.11 MAC protocol. We analyze the performance of full duplex wireless networks with FD-MAC considering both the physical layer and the MAC layer.", "This paper presents the design and implementation of the first in-band full duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11ac PHYs and achieves close to the theoretical doubling of throughput in all practical deployment scenarios. Our design uses a single antenna for simultaneous TX RX (i.e., the same resources as a standard half duplex system). We also propose novel analog and digital cancellation techniques that cancel the self interference to the receiver noise floor, and therefore ensure that there is no degradation to the received signal. We prototype our design by building our own analog circuit boards and integrating them with a fully WiFi-PHY compatible software radio implementation. We show experimentally that our design works robustly in noisy indoor environments, and provides close to the expected theoretical doubling of throughput in practice." ] }
1811.01355
2898937948
Clickbaits are catchy headlines that are frequently used by social media outlets to lure their viewers into clicking them, thus leading them to dubious content. Such venal schemes thrive on exploiting the curiosity of naive social media users, directing traffic to web pages that would not be visited otherwise. In this paper, we propose a novel, semi-supervised classification-based approach that employs attentions sampled from a Gumbel-Softmax distribution to distill contexts that are fairly important in clickbait detection. An additional loss over the attention weights is used to encode prior knowledge. Furthermore, we propose a confidence network that enables learning over weak labels and improves robustness to noisy labels. We show that with merely 30% of strongly labeled samples we can achieve over 97% of the accuracy of current state-of-the-art methods in clickbait detection.
The problem of clickbait detection has been primarily studied in two forms. One of them involves a pair of post and target phrases, in which the objective is to identify whether the post text (visible to the user) is in some way related to the target content (text/images on the landing page). Using this formulation, @cite_20 suggested that the features involved in clickbait detection can be broadly classified into: content features (quotes, capitalization), similarity between the source and target representations, informality, forward reference, URLs, etc. A gradient boosted tree was trained using these features. @cite_6 highlighted the use of features based on linguistic and structural differences between clickbait and non-clickbait headlines. Using n-grams and word patterns, they successfully trained an SVM classifier with a Radial Basis Function (RBF) kernel that outperformed the baseline rule-based method in @cite_5 .
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_20" ], "mid": [ "", "2952861497", "2569999341" ], "abstract": [ "", "Most of the online news media outlets rely heavily on the revenues generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract the readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don't live up to the expectation of the readers, and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and the personalized blocking approaches perform very well achieving 93% accuracy in detecting and 89% accuracy in blocking clickbaits.", "Clickbaits are articles with misleading titles, exaggerating the content on the landing page. Their goal is to entice users to click on the title in order to monetize the landing page. The content on the landing page is usually of low quality. Their presence in user homepage stream of news aggregator sites (e.g., Yahoo news, Google news) may adversely impact user experience. Hence, it is important to identify and demote or block them on homepages. In this paper, we present a machine-learning model to detect clickbaits. 
We use a variety of features and show that the degree of informality of a web-page (as measured by different metrics) is a strong indicator of it being a clickbait. We conduct extensive experiments to evaluate our approach and analyze properties of clickbait and non-clickbait articles. Our model achieves high performance (74.9% F-1 score) in predicting clickbaits." ] }
1811.01355
2898937948
Clickbaits are catchy headlines that are frequently used by social media outlets to lure their viewers into clicking them, thus leading them to dubious content. Such venal schemes thrive on exploiting the curiosity of naive social media users, directing traffic to web pages that would not be visited otherwise. In this paper, we propose a novel, semi-supervised classification-based approach that employs attentions sampled from a Gumbel-Softmax distribution to distill contexts that are fairly important in clickbait detection. An additional loss over the attention weights is used to encode prior knowledge. Furthermore, we propose a confidence network that enables learning over weak labels and improves robustness to noisy labels. We show that with merely 30% of strongly labeled samples we can achieve over 97% of the accuracy of current state-of-the-art methods in clickbait detection.
The other form comprises making decisions purely from the headline content @cite_7 , @cite_20 . Our work is based on this paradigm and performs on par with methods that follow the former approach. According to @cite_21 , Recurrent Neural Network (RNN) based sentence embeddings of the headlines are expressive enough to learn neural-network classifiers that separate the two classes. @cite_12 explored convolutional networks that convolve over word embeddings to learn n-grams, sub-words, and token patterns that are consistent with clickbaits. @cite_8 worked on the Twitter dataset and treated clickbait detection as a regression problem instead of binary classification; hence, they proposed a model that outputs scores indicative of how clickbaity a tweet is. @cite_2 used a bi-directional LSTM to improve upon the results published by @cite_6 , on the (Section ).
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_21", "@cite_6", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "2765545856", "", "2131774270", "2952861497", "2560440203", "2569999341", "2755427716" ], "abstract": [ "Clickbait detection in tweets remains an elusive challenge. In this paper, we describe the solution for the Zingel Clickbait Detector at the Clickbait Challenge 2017, which is capable of evaluating each tweet's level of click baiting. We first reformat the regression problem as a multi-classification problem, based on the annotation scheme. To perform multi-classification, we apply a token-level, self-attentive mechanism on the hidden states of bi-directional Gated Recurrent Units (biGRU), which enables the model to generate tweets' task-specific vector representations by attending to important tokens. The self-attentive neural network can be trained end-to-end, without involving any manual feature engineering. Our detector ranked first in the final evaluation of Clickbait Challenge 2017.", "", "In the first part of this paper, a regular recurrent neural network (RNN) is extended to a bidirectional recurrent neural network (BRNN). The BRNN can be trained without the limitation of using input information just up to a preset future frame. This is accomplished by training it simultaneously in positive and negative time direction. Structure and training procedure of the proposed network are explained. In regression and classification experiments on artificial data, the proposed structure gives better results than other approaches. For real data, classification experiments for phonemes from the TIMIT database show the same tendency. In the second part of this paper, it is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution. 
For this part, experiments on real data are reported.", "Most of the online news media outlets rely heavily on the revenues generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract the readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don't live up to the expectation of the readers, and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and the personalized blocking approaches perform very well achieving 93 accuracy in detecting and 89 accuracy in blocking clickbaits.", "Online content publishers often use catchy headlines for their articles in order to attract users to their websites. These headlines, popularly known as clickbaits, exploit a user’s curiosity gap and lure them to click on links that often disappoint them. Existing methods for automatically detecting clickbaits rely on heavy feature engineering and domain knowledge. Here, we introduce a neural network architecture based on Recurrent Neural Networks for detecting clickbaits. 
Our model relies on distributed word representations learned from a large unannotated corpora, and character embeddings learned via Convolutional Neural Networks. Experimental results on a dataset of news headlines show that our model outperforms existing techniques for clickbait detection with an accuracy of 0.98 with F1-score of 0.98 and ROC-AUC of 0.99.", "Clickbaits are articles with misleading titles, exaggerating the content on the landing page. Their goal is to entice users to click on the title in order to monetize the landing page. The content on the landing page is usually of low quality. Their presence in user homepage stream of news aggregator sites (e.g., Yahoo news, Google news) may adversely impact user experience. Hence, it is important to identify and demote or block them on homepages. In this paper, we present a machine-learning model to detect clickbaits. We use a variety of features and show that the degree of informality of a web-page (as measured by different metrics) is a strong indicator of it being a clickbait. We conduct extensive experiments to evaluate our approach and analyze properties of clickbait and non-clickbait articles. Our model achieves high performance (74.9 F-1 score) in predicting clickbaits.", "Today's general-purpose deep convolutional neural networks (CNN) for image classification and object detection are trained offline on large static datasets. Some applications, however, will require training in real-time on live video streams with a human-in-the-loop. We refer to this class of problem as Time-ordered Online Training (ToOT) - these problems will require a consideration of not only the quantity of incoming training data, but the human effort required to tag and use it. In this paper, we define training benefit as a metric to measure the effectiveness of a sequence in using each user interaction. 
We demonstrate and evaluate a system tailored to performing ToOT in the field, capable of training an image classifier on a live video stream through minimal input from a human operator. We show that by exploiting the time-ordered nature of the video stream through optical flow-based object tracking, we can increase the effectiveness of human actions by about 8 times." ] }
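The Gumbel-Softmax attention sampling mentioned in this record's abstract can be sketched in a few lines of numpy; the temperature and the example headline logits are illustrative assumptions:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    """Draw a differentiable, near-one-hot sample from a categorical
    distribution via the Gumbel-Softmax (Concrete) relaxation."""
    u = rng.uniform(size=logits.shape).clip(1e-12, 1 - 1e-12)
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                    # numerically stable softmax
    return y / y.sum()

# Token-level attention logits over a 5-word headline.
logits = np.array([0.2, 2.5, 0.1, 0.3, 1.8])
att = gumbel_softmax(logits, tau=0.5)
assert np.isclose(att.sum(), 1.0)
print(att)
```

Lower temperatures push each sample toward a one-hot attention over a single token, while the relaxation keeps the sampling step differentiable so the attention distribution can be trained end to end.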
1811.01355
2898937948
Clickbaits are catchy headlines that are frequently used by social media outlets to lure their viewers into clicking them, thus leading them to dubious content. Such venal schemes thrive on exploiting the curiosity of naive social media users, directing traffic to web pages that would not be visited otherwise. In this paper, we propose a novel, semi-supervised classification-based approach that employs attentions sampled from a Gumbel-Softmax distribution to distill contexts that are fairly important in clickbait detection. An additional loss over the attention weights is used to encode prior knowledge. Furthermore, we propose a confidence network that enables learning over weak labels and improves robustness to noisy labels. We show that with merely 30% of strongly labeled samples we can achieve over 97% of the accuracy of current state-of-the-art methods in clickbait detection.
Some of the most recent approaches that have been successful in modelling curiosity are based on the "novelty" and "surprise" components of a headline. @cite_29 used topic modelling to identify the topic distribution for each headline in the corpus. Distance metrics on the space of probability distributions, such as KL divergence and Hellinger distance, were used as features to express novelty, while surprise was primarily modeled using bi-gram frequency counts. Based on these features, an SVM/LR classifier was trained on a small section of the training data.
{ "cite_N": [ "@cite_29" ], "mid": [ "2808118765" ], "abstract": [ "The impact of continually evolving digital technologies and the proliferation of communications and content has now been widely acknowledged to be central to understanding our world. What is less acknowledged is that this is based on the successful arousing of curiosity both at the collective and individual levels. Advertisers, communication professionals and news editors are in constant competition to capture attention of the digital population perennially shifty and distracted. This paper, tries to understand how curiosity works in the digital world by attempting the first ever work done on quantifying human curiosity, basing itself on various theories drawn from humanities and social sciences. Curious communication pushes people to spot, read and click the message from their social feed or any other form of online presentation. Our approach focuses on measuring the strength of the stimulus to generate reader curiosity by using unsupervised and supervised machine learning algorithms, but is also informed by philosophical, psychological, neural and cognitive studies on this topic. Manually annotated news headlines - clickbaits - have been selected for the study, which are known to have drawn huge reader response. A binary classifier was developed based on human curiosity (unlike the work done so far using words and other linguistic features). Our classifier shows an accuracy of 97 . This work is part of the research in computational humanities on digital politics quantifying the emotions of curiosity and outrage on digital media." ] }
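The novelty features described above (distances between topic distributions) are straightforward to compute; the example headline and background-corpus distributions below are illustrative, standing in for LDA outputs:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q); eps guards against log(0) for sparse topic vectors."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def hellinger(p, q):
    """Hellinger distance between two discrete distributions, in [0, 1]."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

# Topic distributions (e.g., from LDA) for a headline vs. the corpus average.
headline = np.array([0.70, 0.10, 0.10, 0.10])
corpus   = np.array([0.25, 0.25, 0.25, 0.25])
print(kl_divergence(headline, corpus), hellinger(headline, corpus))
```

Hellinger distance is bounded and symmetric, which makes it easier to threshold as a classifier feature than the unbounded, asymmetric KL divergence.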
1811.01287
2899379687
Recent advances in representation learning on graphs, mainly leveraging graph convolutional networks, have brought a substantial improvement on many graph-based benchmark tasks. While novel approaches to learning node embeddings are highly suitable for node classification and link prediction, their application to graph classification (predicting a single label for the entire graph) remains mostly rudimentary, typically using a single global pooling step to aggregate node features or a hand-designed, fixed heuristic for hierarchical coarsening of the graph structure. An important step towards ameliorating this is differentiable graph coarsening---the ability to reduce the size of the graph in an adaptive, data-dependent manner within a graph neural network pipeline, analogous to image downsampling within CNNs. However, the previous prominent approach to pooling has quadratic memory requirements during training and is therefore not scalable to large graphs. Here we combine several recent advances in graph neural network design to demonstrate that competitive hierarchical graph classification results are possible without sacrificing sparsity. Our results are verified on several established graph classification benchmarks, and highlight an important direction for future research in graph-based neural networks.
In this work, we leverage recent advances in graph neural network design @cite_7 @cite_15 @cite_2 to demonstrate that sparsity need not be sacrificed to obtain good performance on end-to-end graph convolutional architectures with pooling. We demonstrate performance that is comparable to variants of DiffPool on four standard graph classification benchmarks, all while using a graph CNN that only requires @math storage (comparable to the storage complexity of the input graph).
{ "cite_N": [ "@cite_15", "@cite_7", "@cite_2" ], "mid": [ "", "2962767366", "2804057010" ], "abstract": [ "", "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of \"neighboring\" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. 
Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance." ] }
1811.01396
2899277337
Handwritten Word Recognition and Spotting is a challenging field dealing with handwritten text possessing irregular and complex shapes. The design of deep neural network models makes it necessary to extend training datasets in order to introduce variations and increase the number of samples; word-retrieval is therefore very difficult in low-resource scripts. Much of the existing literature comprises preprocessing strategies which are seldom sufficient to cover all possible variations. We propose the Adversarial Feature Deformation Module (AFDM) that learns ways to elastically warp extracted features in a scalable manner. The AFDM is inserted between intermediate layers and trained alternatively with the original framework, boosting its capability to better learn highly informative features rather than trivial ones. We test our meta-framework, which is built on top of popular word-spotting and word-recognition frameworks and enhanced by the AFDM, not only on extensive Latin word datasets but also sparser Indic scripts. We record results for varying training data sizes, and observe that our enhanced network generalizes much better in the low-data regime; the overall word-error rates and mAP scores are observed to improve as well.
The practice of augmentation to extend datasets is seen even in the case of large, extensive datasets @cite_44 @cite_43 and in works focusing on Chinese handwritten character recognition, where standard datasets contain close to 4000 classes. In a different class of approaches, online hard example mining (OHEM) has proved effective, boosting accuracy by targeting the fewer hard examples in a dataset, as shown in @cite_4 @cite_20 @cite_21 @cite_36 . With the advent of adversarial learning and GANs in recent years, several approaches have incorporated generative modeling to create realistic synthetic data @cite_32 @cite_6 @cite_34 , following the architectural guidelines described by Goodfellow et al. for stable GAN training @cite_27 . Papers such as @cite_37 use GANs to augment limited datasets by conditioning on a sample image from a class to output further samples belonging to the same class.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_36", "@cite_21", "@cite_32", "@cite_6", "@cite_44", "@cite_43", "@cite_27", "@cite_34", "@cite_20" ], "mid": [ "2770173563", "2174940656", "", "", "2579352881", "", "2548026590", "", "2099471712", "", "" ], "abstract": [ "Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation krizhevsky2012imagenet alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given there is potential to generate a much broader set of augmentations, we design and train a generative model to do data augmentation. The model, based on image conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise it to generate other within-class data items. As this generative process does not depend on the classes themselves, it can be applied to novel unseen classes of data. We show that a Data Augmentation Generative Adversarial Network (DAGAN) augments standard vanilla classifiers well. We also show a DAGAN can enhance few-shot learning systems such as Matching Networks. We demonstrate these approaches on Omniglot, on EMNIST having learnt the DAGAN on Omniglot, and VGG-Face data. In our experiments we can see over 13 increase in accuracy in the low-data regime experiments in Omniglot (from 69 to 82 ), EMNIST (73.9 to 76 ) and VGG-Face (4.5 to 12 ); in Matching Networks for Omniglot we observe an increase of 0.5 (from 96.9 to 97.4 ) and an increase of 1.8 in EMNIST (from 59.5 to 61.3 ).", "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. 
While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5.", "", "", "It's useful to automatically transform an image from its original form to some synthetic form (style, partial contents, etc.), while keeping the original structure or semantics. We define this requirement as the \"image-to-image translation\" problem, and propose a general approach to achieve it, based on deep convolutional and conditional generative adversarial networks (GANs), which has gained a phenomenal success to learn mapping images from noise input since 2014. In this work, we develop a two step (unsupervised) learning method to translate images between different domains by using unlabeled images without specifying any correspondence between them, so that to avoid the cost of acquiring labeled data. Compared with prior works, we demonstrated the capacity of generality in our model, by which variance of translations can be conduct by a single type of model. 
Such capability is desirable in applications like bidirectional translation", "", "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.", "", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "", "" ] }
1811.01396
2899277337
Handwritten Word Recognition and Spotting is a challenging field dealing with handwritten text possessing irregular and complex shapes. The design of deep neural network models makes it necessary to extend training datasets in order to introduce variations and increase the number of samples; word-retrieval is therefore very difficult in low-resource scripts. Much of the existing literature comprises preprocessing strategies which are seldom sufficient to cover all possible variations. We propose the Adversarial Feature Deformation Module (AFDM) that learns ways to elastically warp extracted features in a scalable manner. The AFDM is inserted between intermediate layers and trained alternatively with the original framework, boosting its capability to better learn highly informative features rather than trivial ones. We test our meta-framework, which is built on top of popular word-spotting and word-recognition frameworks and enhanced by the AFDM, not only on extensive Latin word datasets but also sparser Indic scripts. We record results for varying training data sizes, and observe that our enhanced network generalizes much better in the low-data regime; the overall word-error rates and mAP scores are observed to improve as well.
A recent work by Wang et al. @cite_31 describes an adversarial model that generates hard examples by using a generator @cite_27 to incorporate occlusions as well as spatial deformations into the feature space, forcing the detector to adapt to uncommon and rare deformations in actual inputs to the model. In our framework, we use a similar strategy to make our word-retrieval detector robust and invariant to the wide range of variations seen in natural images of handwritten text. Another similar approach @cite_50 explores the use of adversarial learning in visual tracking and object detection, and attempts to alleviate the class-imbalance problem in datasets, where the amount of data in one class far exceeds that in another. Having a larger number of easy-to-recognize samples in a dataset hampers the training process, as the detector remains unaware of the more valuable "hard" examples.
{ "cite_N": [ "@cite_27", "@cite_31", "@cite_50" ], "mid": [ "2099471712", "2607037079", "2796919380" ], "abstract": [ "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "How do we learn an object detector that is invariant to occlusions and deformations? Our current solution is to use a data-driven strategy – collect large-scale datasets which have object instances under different conditions. The hope is that the final classifier can use these examples to learn invariances. But is it really possible to see all the occlusions in a dataset? We argue that like categories, occlusions and object deformations also follow a long-tail. Some occlusions and deformations are so rare that they hardly happen, yet we want to learn a model invariant to such occurrences. In this paper, we propose an alternative solution. We propose to learn an adversarial network that generates examples with occlusions and deformations. The goal of the adversary is to generate examples that are difficult for the object detector to classify. 
In our framework both the original detector and adversary are learned in a joint manner. Our experimental results indicate a 2.3 mAP boost on VOC07 and a 2.6 mAP boost on VOC2012 object detection challenge compared to the Fast-RCNN pipeline.", "The tracking-by-detection framework consists of two stages, i.e., drawing samples around the target object in the first stage and classifying each sample as the target object or as background in the second stage. The performance of existing trackers using deep classification networks is limited by two aspects. First, the positive samples in each frame are highly spatially overlapped, and they fail to capture rich appearance variations. Second, there exists extreme class imbalance between positive and negative samples. This paper presents the VITAL algorithm to address these two problems via adversarial learning. To augment positive samples, we use a generative network to randomly generate masks, which are applied to adaptively dropout input features to capture a variety of appearance changes. With the use of adversarial learning, our network identifies the mask that maintains the most robust features of the target objects over a long temporal span. In addition, to handle the issue of class imbalance, we propose a high-order cost sensitive loss to decrease the effect of easy negative samples to facilitate training the classification network. Extensive experiments on benchmark datasets demonstrate that the proposed tracker performs favorably against state-of-the-art approaches." ] }
1811.01183
2899270352
Identifying and extracting data elements such as study descriptors in publication full texts is a critical yet manual and labor-intensive step required in a number of tasks. In this paper we address the question of identifying data elements in an unsupervised manner. Specifically, provided a set of criteria describing specific study parameters, such as species, route of administration, and dosing regimen, we develop an unsupervised approach to identify text segments (sentences) relevant to the criteria. A binary classifier trained to identify publications that met the criteria performs better when trained on the candidate sentences than when trained on sentences randomly picked from the text, supporting the intuition that our method is able to accurately identify study descriptors.
A number of previous studies have focused on unsupervised extraction of relations such as protein-protein interactions (PPI) from biomedical texts. For example, @cite_1 have utilized several techniques, namely kernel-based pattern clustering and dependency parsing, to extract PPI from biomedical texts. @cite_12 have introduced a system for unsupervised extraction of entities, and relations between these entities, from clinical texts written in Italian, which utilized a thesaurus for entity extraction and clustering methods for relation extraction. @cite_0 also used clinical texts and proposed a generative model for unsupervised relation extraction. Another approach focusing on relation extraction has been proposed by @cite_24 . Their approach constructs a graph that is then used to derive domain-independent patterns for extracting protein-protein interactions.
{ "cite_N": [ "@cite_0", "@cite_24", "@cite_1", "@cite_12" ], "mid": [ "1596007284", "2046740651", "2060515712", "" ], "abstract": [ "This paper presents a generative model for the automatic discovery of relations between entities in electronic medical records. The model discovers relation instances and their types by determining which context tokens express the relation. Additionally, the valid semantic classes for each type of relation are determined. We show that the model produces clusters of relation trigger words which better correspond with manually annotated relations than several existing clustering techniques. The discovered relations reveal some of the implicit semantic structure present in patient records.", "The vast number of published medical documents is considered a vital source for relationship discovery. This paper presents a statistical unsupervised system, called BioNoculars, for extracting protein-protein interactions from biomedical text. BioNoculars uses graph-based mutual reinforcement to make use of redundancy in data to construct extraction patterns in a domain independent fashion. The system was tested using MEDLINE abstract for which the protein-protein interactions that they contain are listed in the database of interacting proteins and protein-protein interactions (DIPPPI). The system reports an F-Measure of 0.55 on test MEDLINE abstracts.", "The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. Pattern clustering algorithm is based on Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. 
Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) Protein–protein interactions extraction, and (2) Gene–suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods. The three supervised methods are rule based, SVM based, and Kernel based separately. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation on gene–suicide association extraction on a smaller dataset from Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than co-occurrence based method.", "" ] }
1811.01183
2899270352
Identifying and extracting data elements such as study descriptors in publication full texts is a critical yet manual and labor-intensive step required in a number of tasks. In this paper we address the question of identifying data elements in an unsupervised manner. Specifically, provided a set of criteria describing specific study parameters, such as species, route of administration, and dosing regimen, we develop an unsupervised approach to identify text segments (sentences) relevant to the criteria. A binary classifier trained to identify publications that met the criteria performs better when trained on the candidate sentences than when trained on sentences randomly picked from the text, supporting the intuition that our method is able to accurately identify study descriptors.
A similar but distinct approach to unsupervised extraction is distant supervision. Like unsupervised extraction methods, distant supervision methods do not require any manually labeled data; instead, they make use of weakly labeled data, such as data extracted from a knowledge base. Distant supervision has been applied to relation extraction @cite_6 , extraction of gene interactions @cite_18 , PPI extraction @cite_3 @cite_4 , and identification of PICO elements @cite_10 . The advantage of our approach over the distantly supervised methods is that it does not require an underlying knowledge base or a similar source of data.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_3", "@cite_6", "@cite_10" ], "mid": [ "2098679902", "1576450670", "", "2028861968", "2484269232" ], "abstract": [ "Motivation: A complete repository of gene–gene interactions is key for understanding cellular processes, human disease and drug response. These gene–gene interactions include both protein–protein interactions and transcription factor interactions. The majority of known interactions are found in the biomedical literature. Interaction databases, such as BioGRID and ChEA, annotate these gene–gene interactions; however, curation becomes difficult as the literature grows exponentially. DeepDive is a trained system for extracting information from a variety of sources, including text. In this work, we used DeepDive to extract both protein–protein and transcription factor interactions from over 100 000 full-text PLOS articles. Methods: We built an extractor for gene–gene interactions that identified candidate gene–gene relations within an input sentence. For each candidate relation, DeepDive computed a probability that the relation was a correct interaction. We evaluated this system against the Database of Interacting Proteins and against randomly curated extractions. Results: Our system achieved 76 precision and 49 recall in extracting direct and indirect interactions involving gene symbols co-occurring in a sentence. For randomly curated extractions, the system achieved between 62 and 83 precision based on direct or indirect interactions, as well as sentence-level and document-level precision. Overall, our system extracted 3356 unique gene pairs using 724 features from over 100 000 full-text articles. 
Availability and implementation: Application source code is publicly available at https: github.com edoughty deepdive_genegene_app Contact: ude.drofnats@namtla.ssur Supplementary information: Supplementary data are available at Bioinformatics online.", "Relation extraction is frequently and successfully addressed by machine learning methods. The downside of this approach is the need for annotated training data, typically generated in tedious manual, cost intensive work. Distantly supervised approaches make use of weakly annotated data, like automatically annotated corpora. Recent work in the biomedical domain has applied distant supervision for protein-protein interaction (PPI) with reasonable results making use of the IntAct database. Such data is typically noisy and heuristics to filter the data are commonly applied. We propose a constraint to increase the quality of data used for training based on the assumption that no self-interaction of real-world objects are described in sentences. In addition, we make use of the University of Kansas Proteomics Service (KUPS) database. These two steps show an increase of 7 percentage points (pp) for the PPI corpus AIMed. We demonstrate the broad applicability of our approach by using the same workflow for the analysis of drug-drug interactions, utilizing relationships available from the drug database DrugBank. We achieve 37.31 in F1 measure without manually annotated training data on an independent test set.", "", "We develop a novel distant supervised model that integrates the results from open information extraction techniques to perform relation extraction task from biomedical literature. Unlike state-of-the-art models for relation extraction in biomedical domain which are mainly based on supervised methods, our approach does not require manually-labeled instances. In addition, our model incorporates a grouping strategy to take into consideration the coordinating structure among entities co-occurred in one sentence. 
We apply our approach to extract gene expression relationship between genes and brain regions from literature. Results show that our methods can achieve promising performance over baselines of Transductive Support Vector Machine and with non-grouping strategy.", "Systematic reviews underpin Evidence Based Medicine (EBM) by addressing precise clinical questions via comprehensive synthesis of all relevant published evidence. Authors of systematic reviews typically define a Population Problem, Intervention, Comparator, and Outcome (a PICO criteria) of interest, and then retrieve, appraise and synthesize results from all reports of clinical trials that meet these criteria. Identifying PICO elements in the full-texts of trial reports is thus a critical yet time-consuming step in the systematic review process. We seek to expedite evidence synthesis by developing machine learning models to automatically extract sentences from articles relevant to PICO elements. Collecting a large corpus of training data for this task would be prohibitively expensive. Therefore, we derive distant supervision (DS) with which to train models using previously conducted reviews. DS entails heuristically deriving 'soft' labels from an available structured resource. However, we have access only to unstructured, free-text summaries of PICO elements for corresponding articles; we must derive from these the desired sentence-level annotations. To this end, we propose a novel method - supervised distant supervision (SDS) - that uses a small amount of direct supervision to better exploit a large corpus of distantly labeled instances by learning to pseudo-annotate articles using the available DS. We show that this approach tends to outperform existing methods with respect to automated PICO extraction." ] }
1811.01254
2951479848
Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Either when acting as mobile manipulators or just as all-terrain ground vehicles, these machines need to precisely track the desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move in challenging environments, all while keeping balance. A crucial aspect for these tasks is that all onboard sensors must be properly calibrated and synchronized to provide consistent signals for all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent from external devices other than the fiducial marker.
More recently, Blöchlinger et al. @cite_7 applied the same idea to the quadruped robot StarlETH. They attached a pair of circular fiducial marker arrays to the front feet and minimized the reprojection error between the markers detected by a monocular camera and their projection obtained from forward kinematics. The cost function optimizes a set of 33 parameters, including link lengths, encoder offsets, time offsets, and the robot-camera relative pose. Since the focus was on identifying the kinematic parameters, no results are reported for the robot-camera calibration. The accuracy of the link lengths is assessed by comparison with the CAD specifications, yielding a margin of 5%.
{ "cite_N": [ "@cite_7" ], "mid": [ "2408767391" ], "abstract": [ "Legged robots rely on an accurate calibration of the system’s kinematics for reliable motion tracking of dynamic gaits and for precise foot placement when moving in rough terrain. In our automatic foot-eye calibration approach, a monocular camera is attached to the robot and observes the robot’s moving feet, which are equipped with Augmented Reality (AR) markers. The measurements are used to formulate a non-linear least squares problem over a xed time window in order to estimate the 33 unknown parameters. This is eciently solved with the Levenberg-Marquardt algorithm and we get estimates for both the kinematic and the camera parameters. The approach is successfully evaluated on a real quadruped robot." ] }
1811.01254
2951479848
Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Either when acting as mobile manipulators or just as all-terrain ground vehicles, these machines need to precisely track the desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move in challenging environments, all while keeping balance. A crucial aspect for these tasks is that all onboard sensors must be properly calibrated and synchronized to provide consistent signals for all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent from external devices other than the fiducial marker.
To reduce the uncertainty due to kinematics, in our previous work @cite_14 , we developed a method based on fiducial markers and an external capture system. The fiducial marker is not attached to the robot body, but is placed externally with a rim of Motion Capture (MoCap) markers (small reflective balls). MoCap markers are also placed on the robot at a known location. When the camera detects the fiducial marker, the robot-camera affine transform is recovered from the chain formed by the camera-to-fiducial, the Vicon-to-fiducial and the robot-to-Vicon transforms. An obvious drawback of this method is the need for expensive MoCap equipment.
{ "cite_N": [ "@cite_14" ], "mid": [ "1906051104" ], "abstract": [ "We present a real-time SLAM system that combines an improved version of the Iterative Closest Point (ICP) and inertial dead reckoning to localize our dynamic quadrupedal machine in a local map. Despite the strong and fast motions induced by our 80 kg hydraulic legged robot, the SLAM system is robust enough to keep the position error below 5 within the local map that surrounds the robot. The 3D map of the terrain, computed at the camera frame rate is suitable for vision based planned locomotion. The inertial measurements are used before and after the ICP registration, to provide a good initial guess, to correct the output and to detect registration failures which can potentially corrupt the map. The performance in terms of time and accuracy are also doubled by preprocessing the point clouds with a background subtraction prior to performing the ICP alignment. Our local mapping approach, in spite of having a global frame of reference fixed onto the ground, aligns the current map to the body frame, and allows us to push the drift away from the most recent camera scan. The system has been tested on our robot by performing a trot around obstacles and validated against a motion capture system." ] }
1811.01254
2951479848
Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Either when acting as mobile manipulators or just as all-terrain ground vehicles, these machines need to precisely track the desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move in challenging environments, all while keeping balance. A crucial aspect for these tasks is that all onboard sensors must be properly calibrated and synchronized to provide consistent signals for all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent from external devices other than the fiducial marker.
In the visual-inertial calibration domain, the objective is to estimate the relative transformation between an IMU and a camera. If the relative pose between the IMU and the base link is known, this method is equivalent to robot-camera calibration. Since no kinematics is involved, the primary application is Micro Aerial Vehicles (MAVs): in this context, the different frequencies and time delays between inertial and camera data need to be taken into account @cite_8 .
{ "cite_N": [ "@cite_8" ], "mid": [ "1982852503" ], "abstract": [ "We examine the problem of determining the relative time delay between IMU and camera data streams. The primary difficulty is that the correspondences between measurements from the sensors are not initially known, and hence, the time delay cannot be computed directly. We instead formulate time delay calibration as a registration problem, and introduce a calibration algorithm that operates by aligning curves in a three-dimensional orientation space. Results from simulation studies and from experiments with real hardware demonstrate that the delay can be accurately calibrated. cessing, integration, filtering, etc. If the delays are incorrectly estimated, then any fusion algorithm will produce suboptimal results. At best, this will degrade the accuracy of the output—in the worst case, it may lead to significant navigation errors. In this paper, we examine the problem of determining the relative time delay between inertial and visual sensor measure- ments. It may be possible in some cases to directly synchronize the sensors using a hardware timing signal. Our interest here is primarily in low-cost \"black box\" IMUs and cameras, for which direct time synchronization is not possible. We assume that there is a constant but unknown time shift (delay) when the different sensor measurements arrive and are timestamped at a centralized receiver." ] }
1811.01254
2951479848
Legged robots are becoming popular not only in research, but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Either when acting as mobile manipulators or just as all-terrain ground vehicles, these machines need to precisely track the desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move in challenging environments, all while keeping balance. A crucial aspect for these tasks is that all onboard sensors must be properly calibrated and synchronized to provide consistent signals for all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully perform sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the mutual position of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent from external devices other than the fiducial marker.
Similarly to @cite_2 , Kelly et al. @cite_5 proposed an approach based on the Unscented Kalman Filter (UKF), which has been tested both with and without prior knowledge of a calibration marker.
{ "cite_N": [ "@cite_5", "@cite_2" ], "mid": [ "2128516964", "2140924050" ], "abstract": [ "Visual and inertial sensors, in combination, are well-suited for many robot navigation and mapping tasks. However, correct data fusion, and hence overall system performance, depends on accurate calibration of the 6-DOF transform between the sensors (one or more camera(s) and an inertial measurement unit). Obtaining this calibration information is typically difficult and time-consuming. In this paper, we describe an algorithm, based on the unscented Kalman filter (UKF), for camera-IMU simultaneous localization, mapping and sensor relative pose self-calibration. We show that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can all be recovered from camera and IMU measurements alone. This is possible without any prior knowledge about the environment in which the robot is operating. We present results from experiments with a monocular camera and a low-cost solid-state IMU, which demonstrate accurate estimation of the calibration parameters and the local scene structure.", "Vision-aided inertial navigation systems (V-INSs) can provide precise state estimates for the 3-D motion of a vehicle when no external references (e.g., GPS) are available. This is achieved by combining inertial measurements from an inertial measurement unit (IMU) with visual observations from a camera under the assumption that the rigid transformation between the two sensors is known. Errors in the IMU-camera extrinsic calibration process cause biases that reduce the estimation accuracy and can even lead to divergence of any estimator processing the measurements from both sensors. In this paper, we present an extended Kalman filter for precisely determining the unknown transformation between a camera and an IMU. 
Contrary to previous approaches, we explicitly account for the time correlation of the IMU measurements and provide a figure of merit (covariance) for the estimated transformation. The proposed method does not require any special hardware (such as spin table or 3-D laser scanner) except a calibration target. Furthermore, we employ the observability rank criterion based on Lie derivatives and prove that the nonlinear system describing the IMU-camera calibration process is observable. Simulation and experimental results are presented that validate the proposed method and quantify its accuracy." ] }
1811.01533
2898843852
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.
Since AlexNet @cite_0 won the ImageNet competition in 2012, deep learning has seen many successful applications in different domains @cite_10 , such as reaching human-level performance in image recognition problems @cite_9 as well as in different natural language processing tasks @cite_17 @cite_3 . Motivated by this success in many different domains, deep learning has recently been applied to the TSC problem @cite_24 @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_3", "@cite_0", "@cite_24", "@cite_10", "@cite_17" ], "mid": [ "2515503816", "2097117768", "2133564696", "2163605009", "2783187651", "", "2130942839" ], "abstract": [ "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolu-tional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Neural machine translation is a recently proposed approach to machine translation. 
Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overriding in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. 
We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry.", "Some deep convolutional neural networks were proposed for time-series classification and class imbalanced problems. However, those models performed degraded and even failed to recognize the minority class of an imbalanced temporal sequences dataset. Minority samples would bring troubles for temporal deep learning classifiers due to the equal treatments of majority and minority class. Until recently, there were few works applying deep learning on imbalanced time-series classification (ITSC) tasks. Here, this paper aimed at tackling ITSC problems with deep learning. An adaptive cost-sensitive learning strategy was proposed to modify temporal deep learning models. Through the proposed strategy, classifiers could automatically assign misclassification penalties to each class. In the experimental section, the proposed method was utilized to modify five neural networks. They were evaluated on a large volume, real-life and imbalanced time-series dataset with six metrics. Each single network was also tested alone and combined with several mainstream data samplers. Experimental results illustrated that the proposed cost-sensitive modified networks worked well on ITSC tasks. Compared to other methods, the cost-sensitive convolution neural network and residual network won out in the terms of all metrics. Consequently, the proposed cost-sensitive learning strategy can be used to modify deep learning classifiers from cost-insensitive to cost-sensitive. Those cost-sensitive convolutional networks can be effectively applied to address ITSC issues.", "", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. 
In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1811.01533
2898843852
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.
Other deep learning models have also been validated on the UCR archive @cite_39 . In @cite_26 , a deep CNN was designed and trained from scratch to classify time series. In order to avoid the overfitting problem, the authors proposed different data augmentation techniques that warped and split the time series. In @cite_24 , the authors took the FCN model and modified the cost function in order to take into account the imbalanced classification of time series.
{ "cite_N": [ "@cite_24", "@cite_26", "@cite_39" ], "mid": [ "2783187651", "2515503816", "" ], "abstract": [ "Some deep convolutional neural networks were proposed for time-series classification and class imbalanced problems. However, those models performed degraded and even failed to recognize the minority class of an imbalanced temporal sequences dataset. Minority samples would bring troubles for temporal deep learning classifiers due to the equal treatments of majority and minority class. Until recently, there were few works applying deep learning on imbalanced time-series classification (ITSC) tasks. Here, this paper aimed at tackling ITSC problems with deep learning. An adaptive cost-sensitive learning strategy was proposed to modify temporal deep learning models. Through the proposed strategy, classifiers could automatically assign misclassification penalties to each class. In the experimental section, the proposed method was utilized to modify five neural networks. They were evaluated on a large volume, real-life and imbalanced time-series dataset with six metrics. Each single network was also tested alone and combined with several mainstream data samplers. Experimental results illustrated that the proposed cost-sensitive modified networks worked well on ITSC tasks. Compared to other methods, the cost-sensitive convolution neural network and residual network won out in the terms of all metrics. Consequently, the proposed cost-sensitive learning strategy can be used to modify deep learning classifiers from cost-insensitive to cost-sensitive. Those cost-sensitive convolutional networks can be effectively applied to address ITSC issues.", "Time series classification has been around for decades in the data-mining and machine learning communities. In this paper, we investigate the use of convolutional neural networks (CNN) for time series classification. 
Such networks have been widely used in many domains like computer vision and speech recognition, but only a little for time series classification. We design a convolu-tional neural network that consists of two convolutional layers. One drawback with CNN is that they need a lot of training data to be efficient. We propose two ways to circumvent this problem: designing data-augmentation techniques and learning the network in a semi-supervised way using training time series from different datasets. These techniques are experimentally evaluated on a benchmark of time series datasets.", "" ] }
1811.01533
2898843852
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.
Outside the UCR archive @cite_39 , deep learning has reached state-of-the-art performance on several datasets in different domains @cite_12 . For spatio-temporal series forecasting problems, such as meteorology and oceanography, deep neural networks were proposed in @cite_5 . For human activity recognition from wearable sensors, deep learning is replacing feature engineering approaches @cite_32 , where features are no longer hand-designed but rather learned by deep learning models trained through back-propagation. Another type of time series data is found in Electronic Health Records, where a generative adversarial network combined with a CNN @cite_19 was recently trained for risk prediction based on patients' historical medical records.
{ "cite_N": [ "@cite_32", "@cite_39", "@cite_19", "@cite_5", "@cite_12" ], "mid": [ "2795342689", "", "2963034797", "2964197989", "2026430219" ], "abstract": [ "Abstract Human activity recognition systems are developed as part of a framework to enable continuous monitoring of human behaviours in the area of ambient assisted living, sports injury detection, elderly care, rehabilitation, and entertainment and surveillance in smart home environments. The extraction of relevant features is the most challenging part of the mobile and wearable sensor-based human activity recognition pipeline. Feature extraction influences the algorithm performance and reduces computation time and complexity. However, current human activity recognition relies on handcrafted features that are incapable of handling complex activities especially with the current influx of multimodal and high dimensional sensor data. With the emergence of deep learning and increased computation powers, deep learning and artificial intelligence methods are being adopted for automatic feature learning in diverse areas like health, image classification, and recently, for feature extraction and classification of simple and complex human activity recognition in mobile and wearable sensors. Furthermore, the fusion of mobile or wearable sensors and deep learning methods for feature learning provide diversity, offers higher generalisation, and tackles challenging issues in human activity recognition. The focus of this review is to provide in-depth summaries of deep learning methods for mobile and wearable sensor-based human activity recognition. The review presents the methods, uniqueness, advantages and their limitations. We not only categorise the studies into generative, discriminative and hybrid methods but also highlight their important advantages. Furthermore, the review presents classification and evaluation procedures and discusses publicly available datasets for mobile sensor human activity recognition. 
Finally, we outline and explain some challenges to open research problems that require further research and improvements.", "", "The rapid growth of Electronic Health Records (EHRs), as well as the accompanied opportunities in Data-Driven Healthcare (DDH), has been attracting widespread interests and attentions. Recent progress in the design and applications of deep learning methods has shown promising results and is forcing massive changes in healthcare academia and industry, but most of these methods rely on massive labeled data. In this work, we propose a general deep learning framework which is able to boost risk prediction performance with limited EHR data. Our model takes a modified generative adversarial network namely ehrGAN, which can provide plausible labeled EHR data by mimicking real patient records, to augment the training dataset in a semi-supervised learning manner. We use this generative model together with a convolutional neural network (CNN) based prediction model to improve the onset prediction performance. Experiments on two real healthcare datasets demonstrate that our proposed framework produces realistic data samples and achieves significant improvements on classification tasks with the generated data over several stat-of-the-art baselines.", "We introduce a dynamical spatio-temporal model formalized as a recurrent neural network for forecasting time series of spatial processes, i.e. series of observations sharing temporal and spatial dependencies. The model learns these dependencies through a structured latent dynamical component, while a decoder predicts the observations from the latent representations. We consider several variants of this model, corresponding to different prior hypothesis about the spatial relations between the series. 
The model is evaluated and compared to state-of-the-art baselines, on a variety of forecasting problems representative of different application areas: epidemiology, geo-spatial statistics and car-traffic prediction. Besides these evaluations, we also describe experiments showing the ability of this approach to extract relevant spatial relations.", "This paper gives a review of the recent developments in deep learning and unsupervised feature learning for time-series problems. While these techniques have shown promise for modeling static data, such as computer vision, applying them to time-series data is gaining increasing attention. This paper overviews the particular challenges present in time-series data and provides a review of the works that have either applied time-series data to unsupervised feature learning algorithms or alternatively have contributed to modifications of feature learning algorithms to take into account the challenges present in time-series data." ] }
1811.01533
2898843852
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.
In short, deep learning is being applied to time series data with very successful results in several different domains. In fact, the convolutional neural network's ability to learn temporal invariant features is one of the main reasons behind its recent success, as well as the availability of big data across different domains @cite_13 .
{ "cite_N": [ "@cite_13" ], "mid": [ "2892035503" ], "abstract": [ "Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR UEA archive) and 12 multivariate time series datasets. By training 8730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date." ] }
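The "temporal invariant features" mentioned above follow from the shift-equivariance of convolution: a filter's response moves along the time axis together with the pattern it detects, so a subsequent pooling step yields features that do not depend on when the pattern occurs. A minimal numpy sketch (illustrative only; `np.convolve` stands in for a learned CNN filter, and the kernel values are arbitrary):

```python
import numpy as np

# A toy time series: a single spike at t=5, and the same spike shifted to t=8.
signal = np.zeros(20)
signal[5] = 1.0
shifted = np.roll(signal, 3)

# A hand-picked "learned" filter; in a real CNN this would be trained.
kernel = np.array([1.0, 2.0, 1.0])

# Feature maps produced by sliding the filter over each series.
resp = np.convolve(signal, kernel, mode="same")
resp_shifted = np.convolve(shifted, kernel, mode="same")

# Equivariance: the feature map shifts exactly with the input...
equivariant = np.allclose(np.roll(resp, 3), resp_shifted)

# ...so a global max-pool gives the same feature either way (invariance).
pooled, pooled_shifted = resp.max(), resp_shifted.max()
```

Here `pooled == pooled_shifted`, which is the invariance property that lets a CNN recognize a discriminative subsequence regardless of its position in the series.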
1811.01533
2898843852
Transfer learning for deep neural networks is the process of first training a base network on a source dataset, and then transferring the learned features (the network’s weights) to a second network to be trained on a target dataset. This idea has been shown to improve deep neural network’s generalization capabilities in many computer vision tasks such as image recognition and object localization. Apart from these applications, deep Convolutional Neural Networks (CNNs) have also recently gained popularity in the Time Series Classification (TSC) community. However, unlike for image recognition problems, transfer learning techniques have not yet been investigated thoroughly for the TSC task. This is surprising as the accuracy of deep learning models for TSC could potentially be improved if the model is fine-tuned from a pre-trained neural network instead of training it from scratch. In this paper, we fill this gap by investigating how to transfer deep CNNs for the TSC task. To evaluate the potential of transfer learning, we performed extensive experiments using the UCR archive which is the largest publicly available TSC benchmark containing 85 datasets. For each dataset in the archive, we pre-trained a model and then fine-tuned it on the other datasets resulting in 7140 different deep neural networks. These experiments revealed that transfer learning can improve or degrade the models predictions depending on the dataset used for transfer. Therefore, in an effort to predict the best source dataset for a given target dataset, we propose a new method relying on Dynamic Time Warping to measure inter-datasets similarities. We describe how our method can guide the transfer to choose the best source dataset leading to an improvement in accuracy on 71 out of 85 datasets.
Before getting into the details of the recent applications for transfer learning, we give a formal definition of the latter @cite_34 . Throughout this paper, we will refer to the source dataset as the dataset we are transferring the pre-trained model from, and to the target dataset as the dataset we are transferring the pre-trained model to.
{ "cite_N": [ "@cite_34" ], "mid": [ "2395579298" ], "abstract": [ "Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments." ] }
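The transfer workflow defined above (train a base model on the source dataset, copy its learned weights into a second model, then fine-tune on the target dataset) can be sketched with a toy one-layer model in numpy. This is a minimal illustration, not the paper's method; the class and function names (`TinyModel`, `fit`, `transfer`) are invented for the example:

```python
import numpy as np

class TinyModel:
    """A one-layer linear model trained by gradient descent on squared error."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_in, n_out))

    def fit(self, x, y, lr=0.1, steps=100):
        for _ in range(steps):
            grad = x.T @ (x @ self.w - y) / len(x)
            self.w -= lr * grad
        return self

def transfer(base, target):
    # "Transferring the learned features": initialize the target network
    # with the base network's weights instead of random values.
    target.w = base.w.copy()
    return target

# Source task: learn weights [1, 2, 3] from scratch.
xs = np.random.default_rng(1).normal(size=(64, 3))
ys = xs @ np.array([[1.0], [2.0], [3.0]])
base = TinyModel(3, 1).fit(xs, ys)

# Target task is similar (weights [1.1, 2.1, 2.9]); start from the
# pre-trained base and fine-tune with only a few gradient steps.
xt = np.random.default_rng(2).normal(size=(64, 3))
yt = xt @ np.array([[1.1], [2.1], [2.9]])
fine_tuned = transfer(base, TinyModel(3, 1, seed=42)).fit(xt, yt, steps=20)
```

Because the source and target tasks are similar, 20 fine-tuning steps suffice; the same recipe on a dissimilar source can start the target model far from a good solution, mirroring the paper's observation that transfer can improve or degrade accuracy depending on the source dataset.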