Dataset schema:
- aid: string (lengths 9 to 15)
- mid: string (lengths 7 to 10)
- abstract: string (lengths 78 to 2.56k)
- related_work: string (lengths 92 to 1.77k)
- ref_abstract: dict
aid: 1508.02131
mid: 2952403517
Structural kernels are a flexible learning paradigm that has been widely used in Natural Language Processing. However, the problem of model selection in kernel-based methods is usually overlooked. Previous approaches mostly rely on setting default values for kernel hyperparameters or using grid search, which is slow and coarse-grained. In contrast, Bayesian methods allow efficient model selection by maximizing the evidence on the training data through gradient-based methods. In this paper we show how to perform this in the context of structural kernels by using Gaussian Processes. Experimental results on tree kernels show that this procedure results in better prediction performance compared to hyperparameter optimization via grid search. The framework proposed in this paper can be adapted to other structures besides trees, e.g., strings and graphs, thereby extending the utility of kernel-based methods.
Interest in model selection procedures for kernel-based methods has grown in recent years. One widely used approach is Multiple Kernel Learning (MKL) @cite_8 . MKL models the data with combinations of kernels and provides algorithms to tune the kernel coefficients. This differs from our method, which focuses on learning the hyperparameters of a single structural kernel. An approach similar to ours combines oligo kernels (a kind of n-gram kernel) with MKL, derives their gradients and optimizes towards a kernel alignment metric. Unlike our approach, it restricts the length of the n-grams being considered, whereas we rely on dynamic programming to explore the whole substructure space; it also does not take the underlying learning algorithm into account. Another recently proposed approach to model selection is random search @cite_31 . Like grid search, it has the drawback of not employing gradient information, as it is designed for arbitrary kinds of hyperparameters (including categorical ones).
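The grid-search versus random-search contrast discussed above can be sketched in a few lines. The objective below is a made-up stand-in for validation performance (every name and value here is hypothetical, chosen only to illustrate that random search samples each dimension more densely under the same evaluation budget):

```python
import random

def evaluate(params):
    # Hypothetical validation score: peaks at lr=0.1, depth=6.
    lr, depth = params["lr"], params["depth"]
    return -((lr - 0.1) ** 2) - 0.001 * (depth - 6) ** 2

# Grid search: exhaustive over a coarse, fixed lattice (9 evaluations).
grid = [{"lr": lr, "depth": d} for lr in (0.01, 0.1, 1.0) for d in (2, 6, 10)]
best_grid = max(grid, key=evaluate)

# Random search: same budget of 9 evaluations, but 9 distinct values
# per dimension instead of 3, so it probes each axis more finely.
random.seed(0)
randoms = [{"lr": 10 ** random.uniform(-3, 0), "depth": random.randint(2, 12)}
           for _ in range(9)]
best_rand = max(randoms, key=evaluate)
```

Note that neither strategy uses gradient information, which is exactly the drawback the paragraph above attributes to them.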
{ "cite_N": [ "@cite_31", "@cite_8" ], "mid": [ "2097998348", "2109743529" ], "abstract": [ "Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. 
We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.", "In recent years, several methods have been proposed to combine multiple kernels instead of using a single one. These different kernels may correspond to using different notions of similarity or may be using information coming from multiple sources (different representations or different feature subsets). In trying to organize and highlight the similarities and differences between them, we give a taxonomy of and review several multiple kernel learning algorithms. We perform experiments on real data sets for better illustration and comparison of existing algorithms. We see that though there may not be large differences in terms of accuracy, there is difference between them in complexity as given by the number of stored support vectors, the sparsity of the solution as given by the number of used kernels, and training time complexity. We see that overall, using multiple kernels instead of a single one is useful and believe that combining kernels in a nonlinear or data-dependent way seems more promising than linear combination in fusing information provided by simple linear kernels, whereas linear methods are more reasonable when combining complex Gaussian kernels." ] }
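The evidence-maximization procedure the abstract above advocates can be sketched for an ordinary RBF kernel on vectorial inputs. This is a minimal illustration of gradient-based hyperparameter selection via the GP log marginal likelihood, not the paper's structural-kernel implementation (a tree kernel would replace the RBF Gram matrix; the synthetic data and starting values are invented for the example):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)

def neg_log_evidence(log_params):
    # Negative log marginal likelihood of a GP regression model
    # with an RBF kernel and Gaussian observation noise.
    lengthscale, noise = np.exp(log_params)
    sq_dists = (X - X.T) ** 2                       # (20, 20) pairwise squared distances
    K = np.exp(-0.5 * sq_dists / lengthscale**2) + (noise + 1e-6) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

# Maximize the evidence (minimize its negative) over log-hyperparameters.
start = np.log([1.0, 0.1])
res = minimize(neg_log_evidence, start, method="L-BFGS-B")
lengthscale, noise = np.exp(res.x)
```

Optimizing in log-space keeps both hyperparameters positive; for structural kernels the only extra ingredient is the kernel's gradient with respect to its own hyperparameters.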
Structural kernels have been successfully employed in a number of NLP tasks. The original SSTK was first used to rerank the output of syntactic parsers. Recently, this reranking idea was also applied to discourse parsing @cite_3 . Other tree kernel applications include Semantic Role Labelling @cite_17 and Relation Extraction @cite_9 . String kernels have mostly been used in Text Classification @cite_34 @cite_4 , while graph kernels have been used for recognizing Textual Entailment @cite_5 . However, these previous works focused on frequentist methods such as SVMs or the voted perceptron, while we employ a Bayesian approach.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_3", "@cite_5", "@cite_34", "@cite_17" ], "mid": [ "2005752251", "2250646484", "2251620509", "", "", "1987486061" ], "abstract": [ "Biological data mining using kernel methods can be improved by a task-specific choice of the kernel function. Oligo kernels for genomic sequence analysis have proven to have a high discriminative power and to provide interpretable results. Oligo kernels that consider subsequences of different lengths can be combined and parameterized to increase their flexibility. For adapting these parameters efficiently, gradient-based optimization of the kernel-target alignment is proposed. The power of this new, general model selection procedure and the benefits of fitting kernels to problem classes are demonstrated by adapting oligo kernels for bacterial gene start detection.", "Relation Extraction (RE) is the task of extracting semantic relationships between entities in text. Recent studies on relation extraction are mostly supervised. The clear drawback of supervised methods is the need of training data: labeled data is expensive to obtain, and there is often a mismatch between the training data and the data the system will be applied to. This is the problem of domain adaptation. In this paper, we propose to combine (i) term generalization approaches such as word clustering and latent semantic analysis (LSA) and (ii) structured kernels to improve the adaptability of relation extractors to new text genres domains. The empirical evaluation on ACE 2005 domains shows that a suitable combination of syntax and lexical generalization is very promising for domain adaptation.", "In this paper, we present a discriminative approach for reranking discourse trees generated by an existing probabilistic discourse parser. The reranker relies on tree kernels (TKs) to capture the global dependencies between discourse units in a tree. 
In particular, we design new computational structures of discourse trees, which combined with standard TKs, originate novel discourse TKs. The empirical evaluation shows that our reranker can improve the state-of-the-art sentence-level parsing accuracy from 79.77% to 82.15%, a relative error reduction of 11.8%, which in turn pushes the state-of-the-art document-level accuracy from 55.8% to 57.3%.", "", "", "The availability of large scale data sets of manually annotated predicate-argument structures has recently favored the use of machine learning approaches to the design of automated semantic role labeling (SRL) systems. The main research in this area relates to the design choices for feature representation and for effective decompositions of the task in different learning models. Regarding the former choice, structural properties of full syntactic parses are largely employed as they represent ways to encode different principles suggested by the linking theory between syntax and semantics. The latter choice relates to several learning schemes over global views of the parses. For example, re-ranking stages operating over alternative predicate-argument sequences of the same sentence have shown to be very effective. In this article, we propose several kernel functions to model parse tree properties in kernel-based machines, for example, perceptrons or support vector machines. In particular, we define different kinds of tree kernels as general approaches to feature engineering in SRL. Moreover, we extensively experiment with such kernels to investigate their contribution to individual stages of an SRL architecture both in isolation and in combination with other traditional manually coded features. The results for boundary recognition, classification, and re-ranking stages provide systematic evidence about the significant impact of tree kernels on the overall accuracy, especially when the amount of training data is small. 
As a conclusive result, tree kernels allow for a general and easily portable feature engineering method which is applicable to a large family of natural language processing tasks." ] }
Gaussian Processes are nowadays a major framework in machine learning, with applications in Robotics @cite_0 , Geolocation @cite_23 and Computer Vision @cite_21 . Only very recently have they been successfully employed in a few NLP tasks, such as translation quality estimation @cite_27 @cite_12 , detection of temporal patterns in text @cite_24 , semantic similarity @cite_25 and emotion analysis @cite_35 . In terms of feature representations, this previous work focused on vectorial inputs and applied well-known kernels for such inputs, e.g. the RBF kernel. Our approach is orthogonal to these previous ones, since kernels can easily be combined in different ways.
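The closing remark about combining kernels rests on the closure properties of positive semi-definite kernels: sums and products of valid kernels are again valid kernels, so a structural kernel and a vectorial one can be mixed directly. A toy sketch, where the character n-gram kernel is a deliberately simplified stand-in for a real structural kernel:

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=1.0):
    # Standard RBF kernel on vectorial inputs.
    return np.exp(-0.5 * np.sum((x - y) ** 2) / lengthscale**2)

def ngram_kernel(s, t, n=2):
    # Toy structural kernel on strings: count shared character n-grams.
    grams = lambda u: {u[i:i + n] for i in range(len(u) - n + 1)}
    return len(grams(s) & grams(t))

def combined(xv, yv, xs, ys):
    # A sum of valid kernels is a valid kernel, so vectorial and
    # structural similarity can be fused in one Gram matrix.
    return rbf_kernel(xv, yv) + ngram_kernel(xs, ys)
```

For example, `combined(features_a, features_b, text_a, text_b)` would score a pair of documents by both their feature vectors and their surface strings in a single GP model.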
{ "cite_N": [ "@cite_35", "@cite_21", "@cite_0", "@cite_24", "@cite_27", "@cite_23", "@cite_25", "@cite_12" ], "mid": [ "2133675239", "", "2162717641", "69755908", "2251311344", "2167387227", "2250880280", "2251074483" ], "abstract": [ "We propose a model for jointly predicting multiple emotions in natural language sentences. Our model is based on a low-rank coregionalisation approach, which combines a vector-valued Gaussian Process with a rich parameterisation scheme. We show that our approach is able to learn correlations and anti-correlations between emotions on a news headlines dataset. The proposed model outperforms both singletask baselines and other multi-task approaches.", "", "Blimps are a promising platform for aerial robotics and have been studied extensively for this purpose. Unlike other aerial vehicles, blimps are relatively safe and also possess the ability to loiter for long periods. These advantages, however, have been difficult to exploit because blimp dynamics are complex and inherently non-linear. The classical approach to system modeling represents the system as an ordinary differential equation (ODE) based on Newtonian principles. A more recent modeling approach is based on representing state transitions as a Gaussian process (GP). In this paper, we present a general technique for system identification that combines these two modeling approaches into a single formulation. This is done by training a Gaussian process on the residual between the non-linear model and ground truth training data. The result is a GP-enhanced model that provides an estimate of uncertainty in addition to giving better state predictions than either ODE or GP alone. We show how the GP-enhanced model can be used in conjunction with reinforcement learning to generate a blimp controller that is superior to those learned with ODE or GP models alone.", "Temporal variations of text are usually ignored in NLP applications. 
However, text use changes with time, which can affect many applications. In this paper we model periodic distributions of words over time. Focusing on hashtag frequency in Twitter, we first automatically identify the periodic patterns. We use this for regression in order to forecast the volume of a hashtag based on past data. We use Gaussian Processes, a state-of-the-art Bayesian non-parametric model, with a novel periodic kernel. We demonstrate this in a text classification setting, assigning the tweet hashtag based on the rest of its text. This method shows significant improvements over competitive baselines.", "Annotating linguistic data is often a complex, time consuming and expensive endeavour. Even with strict annotation guidelines, human subjects often deviate in their analyses, each bringing different biases, interpretations of the task and levels of consistency. We present novel techniques for learning from the outputs of multiple annotators while accounting for annotator specific behaviour. These techniques use multi-task Gaussian Processes to learn jointly a series of annotator and metadata specific models, while explicitly representing correlations between models which can be learned directly from data. Our experiments on two machine translation quality estimation datasets show uniform significant accuracy gains from multi-task learning, and consistently outperform strong baselines.", "In this article, we present a novel approach to solving the localization problem in cellular networks. The goal is to estimate a mobile user's position, based on measurements of the signal strengths received from network base stations. Our solution works by building Gaussian process models for the distribution of signal strengths, as obtained in a series of calibration measurements. In the localization stage, the user's position can be estimated by maximizing the likelihood of received signal strengths with respect to the position. 
We investigate the accuracy of the proposed approach on data obtained within a large indoor cellular network.", "We report results obtained by the UoW method in SemEval-2014’s Task 10 – Multilingual Semantic Textual Similarity. We propose to model Semantic Textual Similarity in the context of Multi-task Learning in order to deal with inherent challenges of the task such as unbalanced performance across domains and the lack of training data for some domains (i.e. unknown domains). We show that the Multi-task Learning approach outperforms previous work on the 2012 dataset, achieves a robust performance on the 2013 dataset and competitive results on the 2014 dataset. We highlight the importance of the challenge of unknown domains, as it affects overall performance substantially.", "We describe our systems for the WMT14 Shared Task on Quality Estimation (subtasks 1.1, 1.2 and 1.3). Our submissions use the framework of Multi-task Gaussian Processes, where we combine multiple datasets in a multi-task setting. Due to the large size of our datasets we also experiment with Sparse Gaussian Processes, which aim to speed up training and prediction by providing sensible sparse approximations." ] }
1508.02593
2951045955
Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have gained increasing attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type-constraints can generally support the statistical modeling with latent variable models. More precisely, we integrated prior knowledge in the form of type-constraints into various state-of-the-art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models, by up to 77% in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type-constraints are neither always available nor always complete; e.g., they can become fuzzy when entities lack proper typing. We show that in these cases it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations made in the data.
A number of other latent variable models have been proposed for the statistical modeling of KGs. @cite_13 recently proposed a neural tensor network, which we did not consider in our study since it has been observed not to scale to larger datasets @cite_1 @cite_6 . Instead, we exploit the less complex and more scalable neural network model proposed in @cite_6 , which achieves results comparable to the neural tensor network of @cite_13 . TransE @cite_16 has been the target of other recent research activities: @cite_9 proposed a framework for relationship modeling that combines aspects of TransE and the neural tensor network of @cite_13 . @cite_0 proposed TransH, which improves TransE's capability to model reflexive, one-to-many, many-to-one and many-to-many relation-types by introducing a relation-type-specific hyperplane on which the translation is performed. This work was further extended in @cite_19 with TransR, which represents entities and relation-types in separate spaces and performs the translation in the relation space. An extensive review of representation learning with KGs can be found in @cite_23 .
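The translation-based scoring discussed above is compact enough to sketch directly. With random toy embeddings (all sizes and indices below are invented for illustration), TransE scores a triple (h, r, t) by how well h + r ≈ t holds, and TransH first projects head and tail onto a relation-specific hyperplane before translating:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 8, 5, 2
E = rng.standard_normal((n_entities, dim))   # entity embeddings
R = rng.standard_normal((n_relations, dim))  # relation (translation) embeddings

def transe_score(h, r, t):
    # TransE: lower is better; a true triple should satisfy h + r ≈ t.
    return np.linalg.norm(E[h] + R[r] - E[t])

def transh_score(h, r, t, w):
    # TransH: project head and tail onto the hyperplane with normal w,
    # then translate on that hyperplane. The projection lets one entity
    # take different effective positions for different relation-types.
    w = w / np.linalg.norm(w)
    h_proj = E[h] - (E[h] @ w) * w
    t_proj = E[t] - (E[t] @ w) * w
    return np.linalg.norm(h_proj + R[r] - t_proj)
```

TransR (mentioned above) goes one step further and replaces the projection by a full relation-specific matrix mapping entities into a separate relation space.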
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_19", "@cite_23", "@cite_16", "@cite_13" ], "mid": [ "2951077644", "2102363952", "2016753842", "2283196293", "2184957013", "2097286355", "2127795553", "2127426251" ], "abstract": [ "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning.", "While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. 
By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-the-art method by 10 points when used as a subcomponent.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.", "We deal with embedding a large scale knowledge graph composed of entities and relations into a continuous vector space. TransE is a promising method proposed recently, which is very efficient while achieving state-of-the-art predictive performance. We discuss some mapping properties of relations which should be considered in embedding, such as reflexive, one-to-many, many-to-one, and many-to-many. 
We note that TransE does not do well in dealing with these properties. Some complex models are capable of preserving these mapping properties but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, in this paper we propose TransH which models a relation as a hyperplane together with a translation operation on it. In this way, we can well preserve the above mapping properties of relations with almost the same model complexity of TransE. Additionally, as a practical knowledge graph is often far from completed, how to construct negative examples to reduce false negative labels in training is very important. Utilizing the one-to-many/many-to-one mapping property of a relation, we propose a simple trick to reduce the possibility of false negative labeling. We conduct extensive experiments on link prediction, triplet classification and fact extraction on benchmark datasets like WordNet and Freebase. Experiments show TransH delivers significant improvements over TransE on predictive accuracy with comparable capability to scale up.
In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction.", "Relational machine learning studies methods for the statistical analysis of relational, or graph-structured, data. In this paper, we provide a review of how such statistical models can be “trained” on large knowledge graphs, and then used to predict new facts about the world (which is equivalent to predicting new edges in the graph). In particular, we discuss two different kinds of statistical relational models, both of which can scale to massive datasets. The first is based on tensor factorization methods and related latent variable models. The second is based on mining observable patterns in the graph. We also show how to combine these latent and observable models to get improved modeling power at decreased computational cost. Finally, we discuss how such statistical models of graphs can be combined with text-based information extraction methods for automatically constructing knowledge graphs from the Web. In particular, we discuss Google’s Knowledge Vault project.
Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively." ] }
Domain and range constraints, as given by the KG's schema or via a local closed-world assumption, have been exploited very recently in RESCAL @cite_1 @cite_12 , but to the best of our knowledge they have not yet been integrated into other latent variable methods, nor has their general value for these models been recognized.
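The two mechanisms named above (schema-given domain/range constraints, with a local closed-world assumption as fallback) can be illustrated on a toy KG. Every entity, relation and schema entry below is invented for the example:

```python
# Toy KG: filter link-prediction tail candidates by the schema's range
# constraint; when the schema gives no constraint for a relation, fall
# back to a local closed-world assumption (LCWA) that approximates the
# range by the tail entities actually observed with that relation.
triples = [("berlin", "capital_of", "germany"),
           ("paris", "capital_of", "france"),
           ("einstein", "born_in", "ulm")]
entity_type = {"berlin": "city", "paris": "city", "ulm": "city",
               "germany": "country", "france": "country", "einstein": "person"}
schema = {"capital_of": ("city", "country")}  # (domain, range); born_in untyped

def candidate_tails(relation):
    if relation in schema:
        # Schema-based range constraint: only entities of the range type.
        range_type = schema[relation][1]
        return {e for e, t in entity_type.items() if t == range_type}
    # LCWA fallback: only tails observed for this relation in the data.
    return {t for _, r, t in triples if r == relation}
```

Restricting the candidate set this way is what lets a low-complexity latent variable model spend its capacity only on type-plausible triples.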
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2102363952", "2536888436" ], "abstract": [ "While relation extraction has traditionally been viewed as a task relying solely on textual data, recent work has shown that by taking as input existing facts in the form of entity-relation triples from both knowledge bases and textual data, the performance of relation extraction can be improved significantly. Following this new paradigm, we propose a tensor decomposition approach for knowledge base embedding that is highly scalable, and is especially suitable for relation extraction. By leveraging relational domain knowledge about entity type information, our learning algorithm is significantly faster than previous approaches and is better able to discover new relations missing from the database. In addition, when applied to a relation extraction task, our approach alone is comparable to several existing systems, and improves the weighted mean average precision of a state-of-theart method by 10 points when used as a subcomponent.", "The statistical modeling of large multi-relational datasets has increasingly gained attention in recent years. Typical applications involve large knowledge bases like DBpedia, Freebase, YAGO and the recently introduced Google Knowledge Graph that contain millions of entities, hundreds and thousands of relations, and billions of relational tuples. Collective factorization methods have been shown to scale up to these large multi-relational datasets, in particular in form of tensor approaches that can exploit the highly scalable alternating least squares (ALS) algorithms for calculating the factors. In this paper we extend the recently proposed state-of-the-art RESCAL tensor factorization to consider relational type-constraints. Relational type-constraints explicitly define the logic of relations by excluding entities from the subject or object role. 
In addition we will show that in the absence of prior knowledge about type-constraints, local closed-world assumptions can be approximated for each relation by ignoring unobserved subject or object entities in a relation. In our experiments on representative large datasets (Cora, DBpedia), which contain up to millions of entities and hundreds of type-constrained relations, we show that the proposed approach is scalable. It further significantly outperforms RESCAL without type-constraints in both runtime and prediction quality." ] }
1508.02593
2951045955
Large knowledge graphs increasingly add value to various applications that require machines to recognize and understand queries and their semantics, as in search or question answering systems. Latent variable models have increasingly gained attention for the statistical modeling of knowledge graphs, showing promising results in tasks related to knowledge graph completion and cleaning. Besides storing facts about the world, schema-based knowledge graphs are backed by rich semantic descriptions of entities and relation-types that allow machines to understand the notion of things and their semantic relationships. In this work, we study how type-constraints can generally support the statistical modeling with latent variable models. More precisely, we integrated prior knowledge in the form of type-constraints in various state-of-the-art latent variable approaches. Our experimental results show that prior knowledge on relation-types significantly improves these models up to 77% in link-prediction tasks. The achieved improvements are especially prominent when a low model complexity is enforced, a crucial requirement when these models are applied to very large datasets. Unfortunately, type-constraints are neither always available nor always complete, e.g., they can become fuzzy when entities lack proper typing. We show that in these cases, it can be beneficial to apply a local closed-world assumption that approximates the semantics of relation-types based on observations made in the data.
Further, latent variable methods have been combined with graph-feature models, which leads to an increase in prediction quality @cite_6 and a decrease in model complexity @cite_7.
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2158781217", "2016753842" ], "abstract": [ "Tensor factorization has become a popular method for learning from multi-relational data. In this context, the rank of the factorization is an important parameter that determines runtime as well as generalization ability. To identify conditions under which factorization is an efficient approach for learning from relational data, we derive upper and lower bounds on the rank required to recover adjacency tensors. Based on our findings, we propose a novel additive tensor factorization model to learn from latent and observable patterns on multi-relational data and present a scalable algorithm for computing the factorization. We show experimentally both that the proposed additive model does improve the predictive performance over pure latent variable methods and that it also reduces the required rank — and therefore runtime and memory complexity — significantly.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. 
We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods." ] }
1508.02471
2953288228
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as @math . Agents move in synchronous rounds. Each agent has a distinct integer label from the set @math . Two main efficiency measures of rendezvous are its @math (the number of rounds until the meeting) and its @math (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound @math on the time of exploration (where @math and a corresponding exploration procedure are known to both agents) and of the size @math of the label space. We present two natural rendezvous algorithms. Algorithm @math has cost @math (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly @math ) and time @math . Algorithm @math has both time and cost @math . Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically @math (i.e., of cost @math ) must have time @math . On the other hand, we show that any deterministic rendezvous algorithm with time complexity @math must have cost @math .
The problem of rendezvous has been studied under both randomized and deterministic scenarios. An extensive survey of randomized rendezvous in various models can be found in @cite_22, cf. also @cite_27 @cite_6 @cite_42 @cite_35 @cite_30. Deterministic rendezvous in networks has been surveyed in @cite_41. Several authors considered geometric scenarios (rendezvous in an interval of the real line, e.g., @cite_35 @cite_23, or in the plane, e.g., @cite_16 @cite_39). Gathering more than two agents was studied, e.g., in @cite_13 @cite_30 @cite_2 @cite_9.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_22", "@cite_41", "@cite_9", "@cite_42", "@cite_6", "@cite_39", "@cite_27", "@cite_23", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2010875998", "1971467559", "1501957312", "1988551609", "1975690872", "2103469743", "2071069929", "2014192080", "2040570464", "2030068303", "2007267487", "2096182510", "2010017329" ], "abstract": [ "", "Two players A and B are randomly placed on a line. The distribution of the distance between them is unknown except that the expected initial distance of the (two) players does not exceed some constant @math The players can move with maximal velocity 1 and would like to meet one another as soon as possible. Most of the paper deals with the asymmetric rendezvous in which each player can use a different trajectory. We find rendezvous trajectories which are efficient against all probability distributions in the above class. (It turns out that our trajectories do not depend on the value of @math ) We also obtain the minimax trajectory of player A if player B just waits for him. This trajectory oscillates with a geometrically increasing amplitude. It guarantees an expected meeting time not exceeding @math We show that, if player B also moves, then the expected meeting time can be reduced to @math The expected meeting time can be further reduced if the players use mixed strategies. We show that if player B rests, then the optimal strategy of player A is a mixture of geometric trajectories. It guarantees an expected meeting time not exceeding @math This value can be reduced even more (below @math ) if player B also moves according to a (correlated) mixed strategy. We also obtain a bound for the expected meeting time of the corresponding symmetric rendezvous problem.", "Search Theory is one of the original disciplines within the field of Operations Research. It deals with the problem faced by a Searcher who wishes to minimize the time required to find a hidden object, or “target. 
” The Searcher chooses a path in the “search space” and finds the target when he is sufficiently close to it. Traditionally, the target is assumed to have no motives of its own regarding when it is found; it is simply stationary and hidden according to a known distribution (e. g. , oil), or its motion is determined stochastically by known rules (e. g. , a fox in a forest). The problems dealt with in this book assume, on the contrary, that the “target” is an independent player of equal status to the Searcher, who cares about when he is found. We consider two possible motives of the target, and divide the book accordingly. Book I considers the zero-sum game that results when the target (here called the Hider) does not want to be found. Such problems have been called Search Games (with the “ze- sum” qualifier understood). Book II considers the opposite motive of the target, namely, that he wants to be found. In this case the Searcher and the Hider can be thought of as a team of agents (simply called Player I and Player II) with identical aims, and the coordination problem they jointly face is called the Rendezvous Search Problem.", "Two or more mobile entities, called agents or robots, starting at distinct initial positions, have to meet. This task is known in the literature as rendezvous. Among many alternative assumptions that have been used to study the rendezvous problem, two most significantly influence the methodology appropriate for its solution. The first of these assumptions concerns the environment in which the mobile entities navigate: it can be either a terrain in the plane, or a network modeled as an undirected graph. The second assumption concerns the way in which the entities move: it can be either deterministic or randomized. In this article, we survey results on deterministic rendezvous in networks. © 2012 Wiley Periodicals, Inc. 
NETWORKS, 2012.", "If two searchers are searching for a stationary target and wish to minimize the expected time until both searchers and the lost target are reunited, there is a trade-off between searching for the target and checking back to see if the other searcher has already found the target. This note solves a non-linear optimization problem to find the optimal search strategy for this problem.", "Two friends have become separated in a building or shopping mall and wish to meet as quickly as possible. There are n possible locations where they might meet. However, the locations are identical and there has been no prior agreement where to meet or how to search. Hence they must use identical strategies and must treat all locations in a symmetrical fashion. Suppose their search proceeds in discrete time. Since they wish to avoid the possibility of never meeting, they will wish to use some randomizing strategy. If each person searches one of the n locations at random at each step, then rendezvous will require n steps on average. It is possible to do better than this: although the optimal strategy is difficult to characterize for general n, there is a strategy with an expected time until rendezvous of less than 0.829 n for large enough n. For n = 2 and 3 the optimal strategy can be established and on average 2 and 8/3 steps are required respectively. There are many tantalizing variations on this problem, which we discuss with some conjectures.", "Two players are independently placed on a commonly labelled network X. They cannot see each other but wish to meet in least expected time. We consider continuous and discrete versions, in which they may move at unit speed or between adjacent distinct nodes, respectively. There are two versions of the problem (asymmetric or symmetric), depending on whether or not we allow the players to use different strategies. 
After obtaining some optimality conditions for general networks, we specialize to the interval and circle networks. In the first setting, we extend the work of J. V. Howard; in the second we prove a conjecture concerning the optimal symmetric strategy. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 256–274, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10011", "We consider rendezvous problems in which two players move on the plane and wish to cooperate to minimise their first meeting time. We begin by considering the case where both players are placed such that the vector difference is chosen equiprobably from a finite set. We also consider a situation in which they know they are a distance d apart, but they do not know the direction of the other player. Finally, we give some results for the case in which player 1 knows the initial position of player 2, while player 2 is given information only on the initial distance of player 1.
Finally, the author considers the problem faced by two people on an infinite line who each know the distribution of the distance but not the direction to each other.", "Leaving marks at the starting points in a rendezvous search problem may provide the players with important information. Many of the standard rendezvous search problems are investigated under this new framework which we call markstart rendezvous search. Somewhat surprisingly, the relative difficulties of analysing problems in the two scenarios differ from problem to problem. Symmetric rendezvous on the line seems to be more tractable in the new setting whereas asymmetric rendezvous on the line when the initial distance is chosen by means of a convex distribution appears easier to analyse in the original setting. Results are also obtained for markstart rendezvous on complete graphs and on the line when the players' initial distance is given by an unknown probability distribution. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 722–731, 2001", "Suppose that @math players are placed randomly on the real line at consecutive integers, and faced in random directions. Each player has maximum speed one, cannot see the others, and doesn't know his relative position. What is the minimum time @math required to ensure that all the players can meet together at a single point, regardless of their initial placement? We prove that @math , @math , and @math is asymptotic to @math We also consider a variant of the problem which requires players who meet to stick together, and find in this case that three players require @math time units to ensure a meeting. This paper is thus a minimax version of the rendezvous search problem, which has hitherto been studied only in terms of minimizing the expected meeting time.", "", "In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. 
Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance. In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously. In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system). Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements." ] }
1508.02471
2953288228
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as @math . Agents move in synchronous rounds. Each agent has a distinct integer label from the set @math . Two main efficiency measures of rendezvous are its @math (the number of rounds until the meeting) and its @math (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound @math on the time of exploration (where @math and a corresponding exploration procedure are known to both agents) and of the size @math of the label space. We present two natural rendezvous algorithms. Algorithm @math has cost @math (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly @math ) and time @math . Algorithm @math has both time and cost @math . Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically @math (i.e., of cost @math ) must have time @math . On the other hand, we show that any deterministic rendezvous algorithm with time complexity @math must have cost @math .
Memory required by the agents to achieve deterministic rendezvous was studied in @cite_18 for trees and in @cite_0 for general graphs. Memory needed for randomized rendezvous in the ring is discussed, e.g., in @cite_34.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_34" ], "mid": [ "1972775782", "2081055073", "2086702855" ], "abstract": [ "Two identical (anonymous) mobile agents start from arbitrary nodes in an a priori unknown graph and move synchronously from node to node with the goal of meeting. This rendezvous problem has been thoroughly studied, both for anonymous and for labeled agents, along with another basic task, that of exploring graphs by mobile agents. The rendezvous problem is known to be not easier than graph exploration. A well-known recent result on exploration, due to Reingold, states that deterministic exploration of arbitrary graphs can be performed in log-space, i.e., using an agent equipped with O(log n) bits of memory, where n is the size of the graph. In this paper we study the size of memory of mobile agents that permits us to solve the rendezvous problem deterministically. Our main result establishes the minimum size of the memory of anonymous agents that guarantees deterministic rendezvous when it is feasible. We show that this minimum size is Θ(log n), where n is the size of the graph, regardless of the delay between the starting times of the agents. More precisely, we construct identical agents equipped with Θ(log n) memory bits that solve the rendezvous problem in all graphs with at most n nodes, if they start with any delay τ, and we prove a matching lower bound Ω(log n) on the number of memory bits needed to accomplish rendezvous, even for simultaneous start. In fact, this lower bound is achieved already on the class of rings. This shows a significant contrast between rendezvous and exploration: e.g., while exploration of rings (without stopping) can be done using constant memory, rendezvous, even with simultaneous start, requires logarithmic memory. 
As a by-product of our techniques introduced to obtain log-space rendezvous we get the first algorithm to find a quotient graph of a given unlabeled graph in polynomial time, by means of a mobile agent moving around the graph.", "The aim of rendezvous in a graph is meeting of two mobile agents at some node of an unknown anonymous connected graph. In this article, we focus on rendezvous in trees, and, analogously to the efforts that have been made for solving the exploration problem with compact automata, we study the size of memory of mobile agents that permits to solve the rendezvous problem deterministically. We assume that the agents are identical, and move in synchronous rounds. We first show that if the delay between the starting times of the agents is arbitrary, then the lower bound on memory required for rendezvous is Ω(log n) bits, even for the line of length n. This lower bound meets a previously known upper bound of O(log n) bits for rendezvous in arbitrary graphs of size at most n. Our main result is a proof that the amount of memory needed for rendezvous with simultaneous start depends essentially on the number e of leaves of the tree, and is exponentially less impacted by the number n of nodes. Indeed, we present two identical agents with O(log e + log log n) bits of memory that solve the rendezvous problem in all trees with at most n nodes and at most e leaves. Hence, for the class of trees with polylogarithmically many leaves, there is an exponential gap in minimum memory size needed for rendezvous between the scenario with arbitrary delay and the scenario with delay zero. Moreover, we show that our upper bound is optimal by proving that Ω(log e + log log n) bits of memory are required for rendezvous, even in the class of trees with degrees bounded by 3.", "We present a trade-off between the expected time for two identical agents to rendezvous on a synchronous, anonymous, oriented ring and the memory requirements of the agents. 
In particular, we show there exists a 2^t-state agent which can achieve rendezvous on an n-node ring in expected time O(n^2/2^t + 2^t) and that any t/2-state agent requires expected time Ω(n^2/2^t). As a corollary we observe that Θ(log log n) bits of memory are necessary and sufficient to achieve rendezvous in linear time." ] }
1508.02471
2953288228
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as @math . Agents move in synchronous rounds. Each agent has a distinct integer label from the set @math . Two main efficiency measures of rendezvous are its @math (the number of rounds until the meeting) and its @math (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound @math on the time of exploration (where @math and a corresponding exploration procedure are known to both agents) and of the size @math of the label space. We present two natural rendezvous algorithms. Algorithm @math has cost @math (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly @math ) and time @math . Algorithm @math has both time and cost @math . Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically @math (i.e., of cost @math ) must have time @math . On the other hand, we show that any deterministic rendezvous algorithm with time complexity @math must have cost @math .
Apart from the synchronous model used in this paper, several authors investigated asynchronous rendezvous in the plane @cite_44 @cite_13 and in network environments @cite_31 @cite_1 @cite_40 @cite_21. In the latter scenario, the agent chooses the edge to traverse, but the adversary controls the speed of the agent. Under this assumption, rendezvous at a node cannot be guaranteed even in very simple graphs. Hence the rendezvous requirement is relaxed to permit the agents to meet inside an edge.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_44", "@cite_40", "@cite_31", "@cite_13" ], "mid": [ "1582826078", "2100580556", "", "2050063944", "", "2010017329" ], "abstract": [ "Two mobile agents starting at different nodes of an unknown network have to meet. This task is known in the literature as rendezvous. Each agent has a different label which is a positive integer known to it but unknown to the other agent. Agents move in an asynchronous way: the speed of agents may vary and is controlled by an adversary. The cost of a rendezvous algorithm is the total number of edge traversals by both agents until their meeting. The only previous deterministic algorithm solving this problem has cost exponential in the size of the graph and in the larger label. In this paper we present a deterministic rendezvous algorithm with cost polynomial in the size of the graph and in the length of the smaller label. Hence, we decrease the cost exponentially in the size of the graph and doubly exponentially in the labels of agents. As an application of our rendezvous algorithm we solve several fundamental problems involving teams of unknown size larger than 1 of labeled agents moving asynchronously in...", "Two mobile agents (robots) with distinct labels have to meet in an arbitrary, possibly infinite, unknown connected graph or in an unknown connected terrain in the plane. Agents are modeled as points, and the route of each of them only depends on its label and on the unknown environment. The actual walk of each agent also depends on an asynchronous adversary that may arbitrarily vary the speed of the agent, stop it, or even move it back and forth, as long as the walk of the agent in each segment of its route is continuous, does not leave it and covers all of it. 
Meeting in a graph means that both agents must be at the same time in some node or in some point inside an edge of the graph, while meeting in a terrain means that both agents must be at the same time in some point of the terrain. Does there exist a deterministic algorithm that allows any two agents to meet in any unknown environment in spite of this very powerful adversary? We give deterministic rendezvous algorithms for agents starting at arbitrary nodes of any anonymous connected graph (finite or infinite) and for agents starting at any interior points with rational coordinates in any closed region of the plane with path-connected interior. While our algorithms work in a very general setting - agents can, indeed, meet almost everywhere - we show that none of the above few limitations imposed on the environment can be removed. On the other hand, our algorithm also guarantees the following approximate rendezvous for agents starting at arbitrary interior points of a terrain as above: agents will eventually get at an arbitrarily small positive distance from each other.", "", "Two mobile agents (robots) having distinct labels and located in nodes of an unknown anonymous connected graph have to meet. We consider the asynchronous version of this well-studied rendezvous problem and we seek fast deterministic algorithms for it. Since in the asynchronous setting, meeting at a node, which is normally required in rendezvous, is in general impossible, we relax the demand by allowing meeting of the agents inside an edge as well. The measure of performance of a rendezvous algorithm is its cost: for a given initial location of agents in a graph, this is the number of edge traversals of both agents until rendezvous is achieved. 
If agents are initially situated at a distance D in an infinite line, we show a rendezvous algorithm with cost O(D|Lmin|^2) when D is known and O((D + |Lmax|)^3) if D is unknown, where |Lmin| and |Lmax| are the lengths of the shorter and longer label of the agents, respectively. These results still hold for the case of the ring of unknown size, but then we also give an optimal algorithm of cost O(n|Lmin|), if the size n of the ring is known, and of cost O(n|Lmax|), if it is unknown. For arbitrary graphs, we show that rendezvous is feasible if an upper bound on the size of the graph is known and we give an optimal algorithm of cost O(D|Lmin|) if the topology of the graph and the initial positions are known to agents.", "", "In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance. In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously. In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system). Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements." ] }
1508.02535
2952504335
Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.
With this equivalence in mind, several impossibility results for consensus directly hold for counting as well. First, consensus cannot be solved in the presence of @math or more Byzantine failures @cite_3. Second, any deterministic @math -resilient consensus algorithm needs to run for at least @math communication rounds @cite_25. Third, it is known that the connectivity of the communication network must be at least @math @cite_7. Finally, any consensus algorithm needs to communicate at least @math bits in total @cite_27.
{ "cite_N": [ "@cite_7", "@cite_27", "@cite_25", "@cite_3" ], "mid": [ "2014772227", "2077963568", "2039164882", "2126924915" ], "abstract": [ "Can unanimity be achieved in an unreliable distributed system? This problem was named "The Byzantine Generals Problem" by Lamport, Pease and Shostak [1980]. The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is: 1) less than one third of the total number of processors; and 2) less than one half of the connectivity of the system's network. In cases where unanimity is achievable, algorithms to obtain it are given. This result forms a complete characterization of networks in light of the Byzantine Problem.", "Byzantine Agreement has become increasingly important in establishing distributed properties when errors may exist in the systems. Recent polynomial algorithms for reaching Byzantine Agreement provide us with feasible solutions for obtaining coordination and synchronization in distributed systems. In this paper the amount of information exchange necessary to ensure Byzantine Agreement is studied. This is measured by the total number of messages the participating processors have to send in the worst case. In algorithms that use a signature scheme, the number of signatures appended to messages are also counted. First it is shown that O(nt) is a lower bound for the number of signatures for any algorithm using authentication, where n denotes the number of processors and t the upper bound on the number of faults the algorithm is supposed to handle. For algorithms that reach Byzantine Agreement without using authentication this is even a lower bound for the total number of messages. If n is large compared to t, these bounds match the upper bounds from previously known algorithms. For the number of messages in the authenticated case we prove the lower bound O(n + t^2). 
Finally algorithms that achieve this bound are presented.", "Abstract : The problem of 'assuring interactive consistency' is defined in (PSL). It is assumed that there are n isolated processors, of which at most m are faulty. The processors can communicate by means of two-party messages, using a medium which is reliable and of negligible delay. The sender of a message is always identifiable by the receiver. Each processor p has a private value sigma(p). The problem is to devise an algorithm that will allow each processor p to compute a value for each processor r, such that (a) if p and r are nonfaulty then p computes r's private value sigma(r), and (b) all the nonfaulty processors compute the same value for each processor r. It is shown in (PSL) that if n 3m + 1, then there is no algorithm which assures interactive consistency. On the other hand, if n or = 3m + 1, then an algorithm does exist. The algorithm presented in (PSL) uses m + 1 rounds of communication, and thus can be said to require 'time' m + 1. An obvious question is whether fewer rounds of communication suffice to solve the problem. In this paper, we answer this question in the negative. That is, we show that any algorithm which assures interactive consistency in the presence of m faulty processors requires at least m + 1 rounds of communication. (Author)", "The problem addressed here concerns a set of isolated processors, some unknown subset of which may be faulty, that communicate only by means of two-party messages. Each nonfaulty processor has a private value of information that must be communicated to each other nonfaulty processor. Nonfaulty processors always communicate honestly, whereas faulty processors may lie. The problem is to devise an algorithm in which processors communicate their own values and relay values received from others that allows each nonfaulty processor to infer a value for each other processor. 
The value inferred for a nonfaulty processor must be that processor's private value, and the value inferred for a faulty one must be consistent with the corresponding value inferred by each other nonfaulty processor. It is shown that the problem is solvable for, and only for, n ≥ 3 m + 1, where m is the number of faulty processors and n is the total number. It is also shown that if faulty processors can refuse to pass on information but cannot falsely relay information, the problem is solvable for arbitrary n ≥ m ≥ 0. This weaker assumption can be approximated in practice using cryptographic methods." ] }
1508.02535
2952504335
Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.
In terms of communication complexity, no better bound than @math on the number of communicated bits is known. While non-trivial for consensus, this bound turns out to be trivial for deterministic counting algorithms: a self-stabilising algorithm needs to verify its output, and to do that, each of the @math nodes needs to receive information from at least @math other nodes to be certain that some other non-faulty node has the same output value. Similarly, no non-constant lower bounds on the number of nodes are known; however, a non-trivial constant lower bound for the case @math is known @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1942094667" ], "abstract": [ "Consider a complete communication network on n nodes, each of which is a state machine with s states. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". We require that the solution is self-stabilising (reaching the correct operation from any initial state) and it tolerates f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of statesas. We use computational techniques to construct very compact deterministic algorithms for the first non-trivial case of f = 1. While no algorithm exists for n < 4, we show that as few as 3 states are sufficient for all values n ≤ 4. We prove that the problem cannot be solved with only 2 states for n = 4, but there is a 2-state solution for all values n ≤ 6." ] }
1508.02535
2952504335
Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.
In the case of algorithms, algorithm synthesis has been used for computer-aided design of optimal algorithms with resilience @math , but the approach does not scale due to the extremely fast-growing space of possible algorithms @cite_2 . In general, many fast-stabilising algorithms build on a connection between Byzantine consensus and synchronous counting, but require a large number of states per node @cite_11 due to, e.g., running a large number of consensus instances in parallel. Recently, in one of the preliminary conference reports @cite_0 on which this paper is based, we outlined a recursive approach in which each node needs to participate in only @math parallel instances of consensus. However, this approach resulted in a suboptimal resilience of @math .
{ "cite_N": [ "@cite_0", "@cite_11", "@cite_2" ], "mid": [ "2039651269", "1520594164", "1942094667" ], "abstract": [ "Consider a complete communication network of n nodes, in which the nodes receive a common clock pulse. We study the synchronous c-counting problem: given any starting state and up to f faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes count modulo c in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have linear stabilisation time in f, (3) use a small number of states, and (4) achieve almost-optimal resilience. Prior algorithms either resort to randomisation, use a large number of states, or have poor resilience. In particular, we achieve an exponential improvement in the state complexity of deterministic algorithms, while still achieving linear stabilisation time and almost-linear resilience.", "Consider a distributed network of n nodes that is connected to a global source of \"beats\". All nodes receive the \"beats\" simultaneously, and operate in lock-step. A scheme that produces a \"pulse\" every Cycle beats is shown. That is, the nodes agree on \"special beats\", which are spaced Cycle beats apart. Given such a scheme, a clock synchronization algorithm is built. The \"pulsing\" scheme is self-stabilized despite any transient faults and the continuous presence of up to f < n 3 Byzantine nodes. Therefore, the clock synchronization built on top of the \"pulse\" is highly fault tolerant. In addition, a highly fault tolerant general stabilizer algorithm is constructed on top of the \"pulse\" mechanism. 
Previous clock synchronization solutions, operating in the exact same model as this one, either support f < n 4 and converge in linear time, or support f < n 3 and have exponential convergence time that also depends on the value of max-clock (the clock wrap around value). The proposed scheme combines the best of both worlds: it converges in linear time that is independent of max-clock and is tolerant to up to f < n 3 Byzantine nodes. Moreover, considering problems in a self-stabilizing, Byzantine tolerant environment that require nodes to know the global state (clock synchronization, token circulation, agreement, etc.), the work presented here is the first protocol to operate in a network that is not fully connected.", "Consider a complete communication network on n nodes, each of which is a state machine with s states. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". We require that the solution is self-stabilising (reaching the correct operation from any initial state) and it tolerates f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of statesas. We use computational techniques to construct very compact deterministic algorithms for the first non-trivial case of f = 1. While no algorithm exists for n < 4, we show that as few as 3 states are sufficient for all values n ≤ 4. We prove that the problem cannot be solved with only 2 states for n = 4, but there is a 2-state solution for all values n ≤ 6." ] }
1508.02535
2952504335
Consider a complete communication network of @math nodes, where the nodes receive a common clock pulse. We study the synchronous @math -counting problem: given any starting state and up to @math faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo @math in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in @math (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.
Finally, we note that while counting algorithms are usually designed for a fully-connected communication topology, the algorithms can be extended for use in a variety of other graph classes with sufficiently high connectivity @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "1942094667" ], "abstract": [ "Consider a complete communication network on n nodes, each of which is a state machine with s states. In synchronous 2-counting, the nodes receive a common clock pulse and they have to agree on which pulses are \"odd\" and which are \"even\". We require that the solution is self-stabilising (reaching the correct operation from any initial state) and it tolerates f Byzantine failures (nodes that send arbitrary misinformation). Prior algorithms are expensive to implement in hardware: they require a source of random bits or a large number of statesas. We use computational techniques to construct very compact deterministic algorithms for the first non-trivial case of f = 1. While no algorithm exists for n < 4, we show that as few as 3 states are sufficient for all values n ≤ 4. We prove that the problem cannot be solved with only 2 states for n = 4, but there is a 2-state solution for all values n ≤ 6." ] }
1508.02096
2949563612
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
Compacting models has been a focus of research in tasks such as language modeling and machine translation, as extremely large models can be built from the large amounts of training data available for these tasks. In language modeling, it is common to prune higher-order n-grams that do not encode any additional information @cite_1 @cite_30 @cite_21 . The same can be applied in machine translation @cite_55 @cite_7 by removing longer translation pairs that can be replicated using smaller ones. In essence, our model learns regularities at the subword level that can be leveraged for building more compact word representations.
{ "cite_N": [ "@cite_30", "@cite_7", "@cite_55", "@cite_21", "@cite_1" ], "mid": [ "1797288984", "2143269898", "1725104462", "", "1903115690" ], "abstract": [ "A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This leads to a simple pruning criterion whereby all N-grams that change perplexity by less than a threshold are removed from the model. Experiments show that a production-quality Hub4 LM can be reduced to 26 its original size without increasing recognition error. We also compare the approach to a heuristic pruning criterion by Seymore and Rosenfeld (1996), and show that their approach can be interpreted as an approximation to the relative entropy criterion. Experimentally, both approaches select similar sets of N-grams (about 85 overlap), with the exact relative entropy criterion giving marginally better performance.", "When trained on very large parallel corpora, the phrase table component of a machine translation system grows to consume vast computational resources. In this paper, we introduce a novel pruning criterion that places phrase table pruning on a sound theoretical foundation. Systematic experiments on four language pairs under various data conditions show that our principled approach is superior to existing ad hoc pruning methods.", "Phrase-based machine translation models have shown to yield better translations than Word-based models, since phrase pairs encode the contextual information that is needed for a more accurate translation. 
However, many phrase pairs do not encode any relevant context, which means that the translation event encoded in that phrase pair is led by smaller translation events that are independent from each other, and can be found on smaller phrase pairs, with little or no loss in translation accuracy. In this work, we propose a relative entropy model for translation models, that measures how likely a phrase pair encodes a translation event that is derivable using smaller translation events with similar probabilities. This model is then applied to phrase table pruning. Tests show that considerable amounts of phrase pairs can be excluded, without much impact on the translation quality. In fact, we show that better translations can be obtained using our pruned models, due to the compression of the search space during decoding.", "", "When a trigram backoff language model is created from a large body of text, trigrams and bigrams that occur few times in the training text are often excluded from the model in order to decrease the model size. Generally, the elimination of n-grams with very low counts is believed to not significantly affect model performance. This project investigates the degradation of a trigram backoff model's perplexity and word error rates as bigram and trigram cutoffs are increased. The advantage of reduction in model size is compared to the increase in word error rate and perplexity scores. More importantly, this project also investigates alternative ways of excluding bigrams and trigrams from a backoff language model, using criteria other than the number of times an n-gram occurs in the training text. Specifically, a difference method has been investigated where the difference in the logs of the original and backed off trigram and bigram probabilities is used as a basis for n-gram exclusion from the model. 
We show that excluding trigrams and bigrams based on a weighted version of this difference method results in better perplexity and word error rate performance than excluding trigrams and bigrams based on counts alone." ] }
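The pruning idea summarised above can be sketched with a toy difference-of-logs criterion: an n-gram is kept only if its explicit log-probability differs enough from the backed-off estimate. This is a simplification of relative-entropy pruning; the function, threshold, and toy counts below are illustrative assumptions, not taken from the cited systems.

```python
import math

def prune_ngrams(ngram_logprob, backoff_logprob, threshold=0.5):
    """Keep only n-grams whose explicit log-probability differs from the
    backed-off (shorter-context) estimate by more than `threshold`."""
    kept = {}
    for ngram, lp in ngram_logprob.items():
        shorter = ngram[1:]                      # drop the oldest context word
        lp_backoff = backoff_logprob.get(shorter, float("-inf"))
        if abs(lp - lp_backoff) > threshold:
            kept[ngram] = lp                     # informative: worth storing
    return kept

# Toy bigram model: P(w2 | w1) with a unigram backoff P(w2).
bigrams = {("the", "cat"): math.log(0.20),       # far from backoff -> keep
           ("a", "cat"): math.log(0.05)}         # close to backoff -> prune
unigrams = {("cat",): math.log(0.06)}
pruned = prune_ngrams(bigrams, unigrams, threshold=0.5)
```

Here `("a", "cat")` is discarded because its probability is nearly what the unigram backoff already predicts, mirroring the intuition that such entries encode no additional information.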
1508.02096
2949563612
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
Finally, our work has been applied to dependency parsing, where similar improvements over word-based models were found in morphologically rich languages @cite_31 .
{ "cite_N": [ "@cite_31" ], "mid": [ "2951336364" ], "abstract": [ "We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages. Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words." ] }
1508.02096
2949563612
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
The Skip-gram model @cite_20 is an unsupervised task that is frequently used to train the projection matrix @math using raw text data. In this model, the parameters of each word @math in the input sentence @math are optimized to maximize the prediction of its context words within a fixed window of size @math . The objective of the model is to maximize the probability:
{ "cite_N": [ "@cite_20" ], "mid": [ "2950133940" ], "abstract": [ "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible." ] }
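The Skip-gram objective described above can be sketched as the average log-likelihood of each context word within the window, given the center word. The sketch below uses a full softmax for clarity (the cited work uses hierarchical softmax or negative sampling for efficiency); the toy matrices and sentence are illustrative assumptions.

```python
import numpy as np

def skipgram_log_likelihood(W_in, W_out, sentence, c=2):
    """Average of log p(w_{t+j} | w_t) over all (center, context) pairs
    with -c <= j <= c, j != 0, using a full-softmax output layer."""
    total, pairs = 0.0, 0
    for t, center in enumerate(sentence):
        scores = W_out @ W_in[center]                 # score every vocab word
        log_probs = scores - np.log(np.sum(np.exp(scores)))  # log-softmax
        for j in range(-c, c + 1):
            if j == 0 or not (0 <= t + j < len(sentence)):
                continue                              # skip center / edges
            total += log_probs[sentence[t + j]]
            pairs += 1
    return total / pairs

rng = np.random.default_rng(0)
V, d = 10, 4                                          # toy vocabulary / dims
W_in, W_out = rng.normal(size=(V, d)), rng.normal(size=(V, d))
ll = skipgram_log_likelihood(W_in, W_out, sentence=[1, 4, 2, 7, 4], c=2)
```

Training would ascend the gradient of this quantity with respect to both `W_in` (the projection matrix) and `W_out`.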
1508.02096
2949563612
We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
The work in @cite_46 proposes a window-based model for word labeling. Given a vocabulary of output labels @math , the model predicts the probability of a given label @math for word @math as @math , where @math are the input words indexed from @math to @math . The model projects each word @math into a @math -dimensional vector @math . Then, for the word to be labelled @math , the representations @math of that word and of its @math closest words @math are concatenated into a @math -dimensional vector, and a linear transformation is applied using the parameters in matrix @math , where @math is a hyperparameter that determines the dimensionality of the output of this operation. A non-linearity is then applied to the resulting vector, yielding vector @math . The output label is predicted by applying a linear transformation to this vector, parametrized by @math , which generates a vector with the size of the label vocabulary @math ; a softmax over this vector gives a probability distribution over @math .
{ "cite_N": [ "@cite_46" ], "mid": [ "2158899491" ], "abstract": [ "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements." ] }
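The window model's forward pass described above can be sketched directly: embed the word and its neighbours, concatenate, apply an affine layer with a nonlinearity, then a second affine layer and a softmax over tags. All sizes, the tanh choice, and zero-padding at sentence edges are illustrative assumptions.

```python
import numpy as np

def window_tag_probs(E, W1, b1, W2, b2, words, t, c=2, pad=0):
    """One forward pass: concatenate the (2c+1) embeddings around position t,
    apply tanh(W1 x + b1), then softmax(W2 h + b2) over the tag set."""
    window = [words[i] if 0 <= i < len(words) else pad
              for i in range(t - c, t + c + 1)]       # pad beyond the edges
    x = np.concatenate([E[w] for w in window])        # (2c+1)*d input vector
    h = np.tanh(W1 @ x + b1)                          # hidden representation
    scores = W2 @ h + b2                              # one score per tag
    exp = np.exp(scores - scores.max())               # numerically stable
    return exp / exp.sum()                            # softmax over tags

rng = np.random.default_rng(1)
V, d, c, H, T = 20, 5, 2, 8, 4                        # toy model sizes
E = rng.normal(size=(V, d))                           # word embedding table
W1, b1 = rng.normal(size=(H, (2 * c + 1) * d)), np.zeros(H)
W2, b2 = rng.normal(size=(T, H)), np.zeros(T)
p = window_tag_probs(E, W1, b1, W2, b2, words=[3, 7, 1, 12], t=1, c=c)
```

The returned vector is a valid distribution over the `T` output labels; training would minimise its negative log-likelihood against gold tags.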
1508.02285
2953104288
Previous studies have shown that health reports in social media, such as DailyStrength and Twitter, have potential for monitoring health conditions (e.g. adverse drug reactions, infectious diseases) in particular communities. However, in order for a machine to understand and make inferences on these health conditions, the ability to recognise when laymen's terms refer to a particular medical concept (i.e. text normalisation) is required. To achieve this, we propose to adapt an existing phrase-based machine translation (MT) technique and a vector representation of words to map between a social media phrase and a medical concept. We evaluate our proposed approach using a collection of phrases from tweets related to adverse drug reactions. Our experimental results show that the combination of a phrase-based MT technique and the similarity between word vector representations outperforms the baselines that apply only either of them by up to 55 .
Phrase-based MT models (e.g. @cite_16 @cite_0 ) have been shown to be effective for translation between languages, as they learn local term dependencies such as collocations, re-orderings, insertions and deletions. Phrase-based MT techniques have been shown to markedly outperform traditional word-based MT techniques on several benchmarks. In this work, we adapt an existing phrase-based MT technique for the medical text normalisation task. In particular, we use the phrase-based MT technique to translate social media phrases, before mapping the translated phrases to medical concepts based on the ranked similarity of their word vector representations.
{ "cite_N": [ "@cite_0", "@cite_16" ], "mid": [ "2119168550", "2153653739" ], "abstract": [ "A phrase-based statistical machine translation approach — the alignment template approach — is described. This translation approach allows for general many-to-many relations between words. Thereby, the context of words is taken into account in the translation model, and local changes in word order from source to target language can be learned explicitly. The model is described using a log-linear modeling approach, which is a generalization of the often used source–channel approach. Thereby, the model is easier to extend than classical statistical machine translation systems. We describe in detail the process for learning phrasal translations, the feature functions used, and the search algorithm. The evaluation of this approach is performed on three different tasks. For the German–English speech VERBMOBIL task, we analyze the effect of various system components. On the French–English Canadian HANSARDS task, the alignment template system obtains significantly better results than a single-word-based translation model. In the Chinese–English 2002 National Institute of Standards and Technology (NIST) machine translation evaluation it yields statistically significantly better NIST scores than all competing research and commercial translation systems.", "We propose a new phrase-based translation model and decoding algorithm that enables us to evaluate and compare several, previously proposed phrase-based translation models. Within our framework, we carry out a large number of experiments to understand better and explain why phrase-based models out-perform word-based models. Our empirical results, which hold for all examined language pairs, suggest that the highest levels of performance can be obtained through relatively simple means: heuristic learning of phrase translations from word-based alignments and lexical weighting of phrase translations. 
Surprisingly, learning phrases longer than three words and learning phrases from high-accuracy word-level alignment models does not have a strong impact on performance. Learning only syntactically motivated phrases degrades the performance of our systems." ] }
1508.02285
2953104288
Previous studies have shown that health reports in social media, such as DailyStrength and Twitter, have potential for monitoring health conditions (e.g. adverse drug reactions, infectious diseases) in particular communities. However, in order for a machine to understand and make inferences on these health conditions, the ability to recognise when laymen's terms refer to a particular medical concept (i.e. text normalisation) is required. To achieve this, we propose to adapt an existing phrase-based machine translation (MT) technique and a vector representation of words to map between a social media phrase and a medical concept. We evaluate our proposed approach using a collection of phrases from tweets related to adverse drug reactions. Our experimental results show that the combination of a phrase-based MT technique and the similarity between word vector representations outperforms the baselines that apply only either of them by up to 55 .
Traditional approaches for creating word vector representations treated words as atomic units @cite_7 @cite_9 . For instance, the one-hot representation uses a vector with the length of the vocabulary, in which a single dimension is set to one, to represent a particular word @cite_9 . Recently, techniques for learning high-quality word vector representations (i.e. distributed word representations) that can capture the semantic similarity between words, such as continuous bag-of-words (CBOW) @cite_7 and global vectors (GloVe) @cite_12 , have been proposed. Indeed, these distributed word representations have been effectively applied in systems that achieve state-of-the-art performance on several NLP tasks, such as MT @cite_4 and named entity recognition @cite_13 . In this work, besides using word vector representations to measure the similarity between translated Twitter phrases and medical concepts, we use the similarity between the word vector representations of the original Twitter phrase and a medical concept to augment the adapted phrase-based MT technique.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_13", "@cite_12" ], "mid": [ "2126725946", "2950133940", "2158139315", "1570587036", "" ], "abstract": [ "Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". 
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.", "" ] }
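As an illustrative sketch (not from the cited systems; the toy embeddings and function names are ours), the phrase-to-concept similarity described above can be computed by averaging pre-trained word vectors and taking the cosine similarity:

```python
import numpy as np

def phrase_vector(phrase, embeddings):
    """Average the word vectors of a phrase; out-of-vocabulary words are skipped."""
    vecs = [embeddings[w] for w in phrase.split() if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional embeddings standing in for trained CBOW/GloVe vectors.
embeddings = {
    "tummy":     np.array([0.90, 0.10, 0.00]),
    "ache":      np.array([0.80, 0.20, 0.10]),
    "abdominal": np.array([0.85, 0.15, 0.05]),
    "pain":      np.array([0.75, 0.25, 0.10]),
}

# Colloquial Twitter phrase vs. a candidate medical concept.
sim = cosine_similarity(
    phrase_vector("tummy ache", embeddings),
    phrase_vector("abdominal pain", embeddings),
)
```

With real CBOW or GloVe embeddings, the same computation scores a colloquial phrase against each candidate medical concept; near-synonymous phrases obtain similarity close to 1.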
1508.02268
2951980058
Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive theoretical and empirical studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least square problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with first-order Taylor-expansion to deal with the intractable expected non-smooth hinge loss and the nonlinearity of latent representations. Finally, we apply the similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a non-linear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs. In addition, the nonlinear SVMs further improve the prediction performance on several image datasets.
Dropout training has been recognized as an effective feature noising strategy for neural networks: hidden units are randomly dropped during training @cite_18 . One representative strategy is the standard "Monte Carlo" dropout, or explicit corruption @cite_44 @cite_6 , which has been applied in neural networks to prevent the feature co-adaptation effect and to improve prediction performance in many applications, e.g., image classification @cite_20 @cite_47 @cite_43 , document classification @cite_4 @cite_35 , named entity recognition @cite_3 , tag recommendation @cite_42 , online prediction with expert advice @cite_12 , and spoken language understanding @cite_48 . Dropout training also performs well on standard machine learning models, e.g., DART, an ensemble model of boosted regression trees trained with dropout @cite_23 .
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_4", "@cite_48", "@cite_42", "@cite_6", "@cite_3", "@cite_44", "@cite_43", "@cite_23", "@cite_47", "@cite_12", "@cite_20" ], "mid": [ "2952825952", "", "2158542502", "", "880911330", "2183112036", "2250968750", "2095705004", "2042184006", "2963877897", "35527955", "2135754715", "1904365287" ], "abstract": [ "Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.", "", "The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples--which, however, is also computationally costly. 
We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution-- essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.", "", "Tag recommendation has become one of the most important ways of organizing and indexing online resources like articles, movies, and music. Since tagging information is usually very sparse, effective learning of the content representation for these resources is crucial to accurate tag recommendation. Recently, models proposed for tag recommendation, such as collaborative topic regression and its variants, have demonstrated promising accuracy. However, a limitation of these models is that, by using topic models like latent Dirichlet allocation as the key component, the learned representation may not be compact and effective enough. Moreover, since relational data exist as an auxiliary data source in many applications, it is desirable to incorporate such data into tag recommendation models. In this paper, we start with a deep learning model called stacked denoising autoencoder (SDAE) in an attempt to learn more effective content representation. We propose a probabilistic formulation for SDAE and then extend it to a relational SDAE (RSDAE) model. RSDAE jointly performs deep representation learning and relational learning in a principled way under a probabilistic framework. 
Experiments conducted on three real datasets show that both learning more effective representation and learning from relational data are beneficial steps to take to advance the state of the art.", "", "NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. Applied to text classification and NER, our method provides a >1 absolute performance gain over use of standardL2 regularization.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. 
This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.", "MART (Friedman, 2001, 2002), an ensemble model of boosted regression trees, is known to deliver high prediction accuracy for diverse tasks, and it is widely used in practice. 
However, it suffers an issue which we call over-specialization, wherein trees added at later iterations tend to impact the prediction of only a few instances, and make negligible contribution towards the remaining instances. This negatively affects the performance of the model on unseen data, and also makes the model over-sensitive to the contributions of the few, initially added tress. We show that the commonly used tool to address this issue, that of shrinkage, alleviates the problem only to a certain extent and the fundamental issue of over-specialization still remains. In this work, we explore a different approach to address the problem that of employing dropouts, a tool that has been recently proposed in the context of learning deep neural networks (, 2012). We propose a novel way of employing dropouts in MART, resulting in the DART algorithm. We evaluate DART on ranking, regression and classification tasks, using large scale, publicly available datasets, and show that DART outperforms MART in each of the tasks, with a significant margin. We also show that DART overcomes the issue of over-specialization to a considerable extent. Appearing in Proceedings of the 18 International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38. Copyright 2015 by the authors.", "Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (, 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. 
This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations.", "We consider online prediction with expert advice. Over the course of many trials, the goal of the learning algorithm is to achieve small additional loss (i.e. regret) compared to the loss of the best from a set of K experts. The two most popular algorithms are Hedge Weighted Majority and Follow the Perturbed Leader (FPL). The latter algorithm first perturbs the loss of each expert by independent additive noise drawn from a fixed distribution, and then predicts with the expert of minimum perturbed loss (“the leader”) where ties are broken uniformly at random. To achieve the optimal worst-case regret as a function of the lossL of the best expert in hindsight, the two types of algorithms need to tune their learning rate or noise magnitude, respectively, as a function ofL . Instead of perturbing the losses of the experts with additive noise, we randomly set them to 0 or 1 before selecting the leader. We show that our perturbations are an instance of dropout — because experts may be interpreted as features — although for non-binary losses the dropout probability needs to be made dependent on the losses to get good regret bounds. We show that this simple, tuning-free version of the FPL algorithm achieves two feats: optimal worst-case O( p L lnK + lnK) regret as a function ofL , and optimalO(lnK) regret when the loss vectors are drawn i.i.d. from a fixed distribution and there is a gap between the expected loss of the best expert and all others. A number of recent algorithms from the Hedge family (AdaHedge and FlipFlop) also achieve this, but they employ sophisticated tuning regimes. 
The dropout perturbation of the losses of the experts result in different noise distributions for each expert (because they depend on the expert’s total loss) and curiously enough no additional tuning is needed: the choice of dropout probability only affects the constants.", "When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition." ] }
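For concreteness, a minimal sketch of the "Monte Carlo" (explicit-corruption) dropout described above, in the standard inverted formulation (variable names are ours, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, p, train=True):
    """Inverted dropout: zero each unit with prob p, rescale survivors by 1/(1-p)."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p   # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)

x = np.ones(10_000)
y = dropout_forward(x, p=0.5)
# Rescaling keeps the expected activation unchanged: the mean stays close to 1,
# so no rescaling is needed at test time.
```

Each forward pass samples a fresh mask, which is exactly the repeated random corruption that MCF-style marginalization avoids.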
1508.02268
2951980058
Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive theoretical and empirical studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least square problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with first-order Taylor-expansion to deal with the intractable expected non-smooth hinge loss and the nonlinearity of latent representations. Finally, we apply the similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a non-linear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs. In addition, the nonlinear SVMs further improve the prediction performance on several image datasets.
In contrast to the standard "Monte Carlo" dropout, in this paper we focus on the class of models that can be viewed as deterministic versions of dropout, obtained by marginalizing out the noise. These models are formalized as marginalized corrupted features (MCF) and do not require explicit random corruption; gradients of the marginalized loss functions can be computed directly. Representative work on MCF includes marginalized denoising autoencoders for domain adaptation @cite_7 and for learning nonlinear representations @cite_9 , and marginalized dropout noise in linear regression @cite_6 . In addition, @cite_47 explores the idea of marginalized dropout for speed-up, and @cite_4 develops several loss functions in the empirical risk minimization framework under different input noise distributions. Moreover, the MCF framework has also been extended to link prediction @cite_33 , multi-label prediction @cite_15 , image tagging @cite_1 , and distance metric learning @cite_24 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_7", "@cite_15", "@cite_9", "@cite_1", "@cite_6", "@cite_24", "@cite_47" ], "mid": [ "2158542502", "800216940", "2949821452", "2403271993", "", "170967611", "2183112036", "2093549852", "35527955" ], "abstract": [ "The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples--which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution-- essentially learning with infinitely many (corrupted) training examples. We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.", "Link prediction and multi-label learning on graphs are two important but challenging machine learning problems that have broad applications in diverse fields. Not only are the two problems inherently correlated and often appear concurrently, they are also exacerbated by incomplete data. We develop a novel algorithm to solve these two problems jointly under a unified framework, which helps reduce the impact of graph noise and benefits both tasks individually. We reduce multi-label learning problem into an additional link prediction task and solve both problems with marginalized denoising, which we co-regularize with Laplacian smoothing. 
This approach combines both learning tasks into a single convex objective function, which we optimize efficiently with iterative closed-form updates. The resulting approach performs significantly better than prior work on several important real-world applications with great consistency.", "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters ? in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB^ TM , significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks.", "", "", "Automatic image annotation is a difficult and highly relevant machine learning task. Recent advances have significantly improved the state-of-the-art in retrieval accuracy with algorithms based on nearest neighbor classification in carefully learned metric spaces. But this comes at a price of increased computational complexity during training and testing. We propose FastTag, a novel algorithm that achieves comparable results with two simple linear mappings that are co-regularized in a joint convex loss function. 
The loss function can be efficiently optimized in closed form updates, which allows us to incorporate a large number of image descriptors cheaply. On several standard real-world benchmark data sets, we demonstrate that FastTag matches the current state-of-the-art in tagging quality, yet reduces the training and testing times by several orders of magnitude and has lower asymptotic complexity.", "", "Distance metric learning (DML) aims to learn a distance metric better than Euclidean distance. It has been successfully applied to various tasks, e.g., classification, clustering and information retrieval. Many DML algorithms suffer from the over-fitting problem because of a large number of parameters to be determined in DML. In this paper, we exploit the dropout technique, which has been successfully applied in deep learning to alleviate the over-fitting problem, for DML. Different from the previous studies that only apply dropout to training data, we apply dropout to both the learned metrics and the training data. We illustrate that application of dropout to DML is essentially equivalent to matrix norm based regularization. Compared with the standard regularization scheme in DML, dropout is advantageous in simulating the structured regularizers which have shown consistently better performance than non structured regularizers. We verify, both empirically and theoretically, that dropout is effective in regulating the learned metric to avoid the over-fitting problem. Last, we examine the idea of wrapping the dropout technique in the state-of-art DML methods and observe that the dropout technique can significantly improve the performance of the original DML methods.", "Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (, 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. 
However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations." ] }
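A minimal sketch of the marginalization idea for the quadratic loss (our own toy derivation in the spirit of the MCF work cited above, not their code): under blankout (dropout) corruption with rate q, the expected loss has a closed form, which we check against explicit Monte Carlo corruption:

```python
import numpy as np

rng = np.random.default_rng(1)
q = 0.3                      # blankout (dropout) probability
x = rng.normal(size=5)
w = rng.normal(size=5)
y = 1.0

# Closed form: E[(y - w·x̃)²] = (y - w·x)² + q/(1-q) · Σ_d w_d² x_d²,
# since x̃_d = x_d·z_d/(1-q) with z_d ~ Bernoulli(1-q) is unbiased with
# per-feature variance x_d² q/(1-q).
closed = (y - w @ x) ** 2 + q / (1 - q) * np.sum(w**2 * x**2)

# Monte Carlo check with explicit corruption: no sampling loop is needed for
# the marginalized objective, but it should agree with the sampled average.
n = 200_000
z = rng.random((n, 5)) >= q
x_tilde = x * z / (1 - q)
mc = np.mean((y - x_tilde @ w) ** 2)
```

The closed form is what makes gradient computation cheap: it is an ordinary least-squares objective plus a data-dependent penalty, with no stochastic corruption at training time.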
1508.02268
2951980058
Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive theoretical and empirical studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least square problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with first-order Taylor-expansion to deal with the intractable expected non-smooth hinge loss and the nonlinearity of latent representations. Finally, we apply the similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a non-linear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs. In addition, the nonlinear SVMs further improve the prediction performance on several image datasets.
Both theoretical and empirical analyses have shown that dropout training under MCF is equivalent to adding a regularization effect to the model for controlling over-fitting. @cite_35 describes how dropout can be seen as an adaptive regularizer, and @cite_14 proposes a theoretical explanation for why dropout training has been successful on high-dimensional single-layer natural language tasks; the result is that dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions. @cite_25 develops a pseudo-ensemble by applying dropout to perturb the parent model and examines the relationship to standard ensemble methods by presenting a novel regularizer based on the noising process. Other work @cite_40 analyzes fundamental properties, e.g., when the dropout-regularized criterion has a unique minimizer and when the dropout-regularization penalty goes to infinity with the weights. @cite_39 sheds light on dropout from a Bayesian standpoint, which enables optimizing the dropout rates for better performance.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_39", "@cite_40", "@cite_25" ], "mid": [ "2952825952", "2949176378", "302823870", "1746889935", "" ], "abstract": [ "Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.", "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.", "Dropout is one of the key techniques to prevent the learning from overfitting. 
It is explained that dropout works as a kind of modified L2 regularization. Here, we shed light on the dropout from Bayesian standpoint. Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for learning of weight parameters and prediction after learning. The experiment result also encourages the optimization of the dropout.", "Dropout is a simple but effective technique for learning in neural networks and other settings. A sound theoretical understanding of dropout is needed to determine when dropout should be applied and how to use it most effectively. In this paper we continue the exploration of dropout as a regularizer pioneered by Wager, et.al. We focus on linear classification where a convex proxy to the misclassification loss (i.e. the logistic loss used in logistic regression) is minimized. We show: (a) when the dropout-regularized criterion has a unique minimizer, (b) when the dropout-regularization penalty goes to infinity with the weights, and when it remains bounded, (c) that the dropout regularization can be non-monotonic as individual weights increase from 0, and (d) that the dropout regularization penalty may not be convex. This last point is particularly surprising because the combination of dropout regularization with any convex loss proxy is always a convex function. In order to contrast dropout regularization with @math regularization, we formalize the notion of when different sources are more compatible with different regularizers. We then exhibit distributions that are provably more compatible with dropout regularization than @math regularization, and vice versa. These sources provide additional insight into how the inductive biases of dropout and @math regularization differ. We provide some similar results for @math regularization.", "" ] }
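As a hedged illustration of the adaptive-regularizer view (the formula follows the quadratic approximation described in the cited analysis of dropout for generalized linear models; the variable names and toy data are ours): for logistic regression, the dropout penalty is approximately an L2 penalty rescaled per feature by the diagonal Fisher information:

```python
import numpy as np

# Quadratic approximation of the dropout penalty for logistic regression:
# R(w) ≈ q/(2(1-q)) · Σ_i Σ_d p_i(1-p_i) w_d² x_id²,
# i.e. an L2 penalty adapted by the (diagonal) Fisher information,
# so confidently-classified points (p_i near 0 or 1) contribute little.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
w = rng.normal(size=5)
q = 0.5

p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
fisher_diag = (p * (1 - p)) @ X**2     # Σ_i p_i(1-p_i) x_id², one value per feature
penalty = q / (2 * (1 - q)) * np.sum(w**2 * fisher_diag)
```

Unlike a plain L2 penalty, this term depends on the data and the current predictions, which is the sense in which the regularization is "adaptive".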
1508.02268
2951980058
Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive theoretical and empirical studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for both linear SVMs and the nonlinear extension with latent representation learning. For linear SVMs, to deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least square problem, where the re-weights are analytically updated. For nonlinear latent SVMs, we consider learning one layer of latent representations in SVMs and extend the data augmentation technique in conjunction with first-order Taylor-expansion to deal with the intractable expected non-smooth hinge loss and the nonlinearity of latent representations. Finally, we apply the similar data augmentation ideas to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions, and we further develop a non-linear extension of logistic regression by incorporating one layer of latent representations. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of both linear and nonlinear SVMs. In addition, the nonlinear SVMs further improve the prediction performance on several image datasets.
Though much work has been done on marginalizing the quadratic loss, logistic loss, or the log-loss induced from a generalized linear model (GLM) @cite_4 @cite_35 @cite_3 , little work has been done on the margin-based hinge loss underlying the very successful support vector machines (SVMs) @cite_31 , as discussed in Section 1. The technical challenge is that the non-smoothness of the hinge loss makes it hard to compute or even approximate its expectation under a given corrupting distribution, so existing methods are not directly applicable. This paper attempts to address this challenge and fill this gap by extending dropout training as well as other feature noising schemes to SVMs. Finally, some preliminary results were reported in @cite_27 , and this paper presents a systematic extension.
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_3", "@cite_27", "@cite_31" ], "mid": [ "2952825952", "2158542502", "2250968750", "2951351492", "2156909104" ], "abstract": [ "Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features by an estimate of the inverse diagonal Fisher information matrix. We also establish a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts the performance of dropout training, improving on state-of-the-art results on the IMDB reviews dataset.", "The goal of machine learning is to develop predictors that generalize well to test data. Ideally, this is achieved by training on very large (infinite) training data sets that capture all variations in the data distribution. In the case of finite training data, an effective solution is to extend the training set with artificially created examples--which, however, is also computationally costly. We propose to corrupt training examples with noise from known distributions within the exponential family and present a novel learning algorithm, called marginalized corrupted features (MCF), that trains robust predictors by minimizing the expected value of the loss function under the corrupting distribution-- essentially learning with infinitely many (corrupted) training examples. 
We show empirically on a variety of data sets that MCF classifiers can be trained efficiently, may generalize substantially better to test data, and are more robust to feature deletion at test time.", "NLP models have many and sparse features, and regularization is key for balancing model overfitting versus underfitting. A recently repopularized form of regularization is to generate fake training data by repeatedly adding noise to real data. We reinterpret this noising as an explicit regularizer, and approximate it with a second-order formula that can be used during training without actually generating fake data. We show how to apply this method to structured prediction using multinomial logistic regression and linear-chain CRFs. We tackle the key challenge of developing a dynamic program to compute the gradient of the regularizer efficiently. The regularizer is a sum over inputs, so we can estimate it more accurately via a semi-supervised or transductive extension. Applied to text classification and NER, our method provides a >1 absolute performance gain over use of standardL2 regularization.", "Dropout and other feature noising schemes have shown promising results in controlling over-fitting by artificially corrupting the training data. Though extensive theoretical and empirical studies have been performed for generalized linear models, little work has been done for support vector machines (SVMs), one of the most successful approaches for supervised learning. This paper presents dropout training for linear SVMs. To deal with the intractable expectation of the non-smooth hinge loss under corrupting distributions, we develop an iteratively re-weighted least square (IRLS) algorithm by exploring data augmentation techniques. Our algorithm iteratively minimizes the expectation of a re-weighted least square problem, where the re-weights have closed-form solutions. 
The similar ideas are applied to develop a new IRLS algorithm for the expected logistic loss under corrupting distributions. Our algorithms offer insights on the connection and difference between the hinge loss and logistic loss in dropout training. Empirical results on several real datasets demonstrate the effectiveness of dropout training on significantly boosting the classification accuracy of linear SVMs.", "Setting of the learning problem consistency of learning processes bounds on the rate of convergence of learning processes controlling the generalization ability of learning processes constructing learning algorithms what is important in learning theory?." ] }
1508.01623
2950776723
We present our ongoing work on requirements specification and analysis for the geographically distributed software and systems. Developing software and systems within for different countries or states or even within for different organisations means that the requirements to them can differ in each particular case. These aspects naturally impact on the software architecture and on the development process as a whole. The challenge is to deal with this diversity in a systematic way, avoiding contradictions and non-compliance. In this paper, we present a formal framework for the analysis of the requirements diversity, which comes from the differences in the regulations, laws and cultural aspects for different countries or organisations. The framework also provides the corresponding architectural view and the methods for requirements structuring and optimisation.
The main purpose of the requirements specification (RS) is to elicit and document the given problem (product software system) using concepts from the problem domain, i.e., in the RE phase we are concerned only with the problem itself. In contrast, the aim of a software architecture (SA) is to design a solution for the problem described in the RS, at a high level of abstraction. Thus, there are tight interdependencies between functional and non-functional requirements and architectural elements, which makes the integration of RE and architecture crucial. The results of the empirical study conducted by @cite_3 have also shown that software architects with knowledge and experience in RE perform better, in terms of architectural quality, than those without such knowledge and experience.
{ "cite_N": [ "@cite_3" ], "mid": [ "1982782640" ], "abstract": [ "While the relationship between Requirements Engineering and software architecture (SA) has been studied increasingly in the past five years in terms of methods, tools, development models, and paradigms, that in terms of the human agents conducting these processes has barely been explored. This paper describes the impact of requirements knowledge and experience (RKE) on SA tasks. Specifically, it describes an exploratory, empirical study involving a number of architecting teams, some with requirements background and others without, all architecting from the same set of requirements. The overall results of this study suggest that architects with RKE perform better than those without, and specific areas of architecting are identified where these differences manifest. We discuss the possible implications of the findings on the areas of training, education and technology." ] }
1508.01623
2950776723
We present our ongoing work on requirements specification and analysis for the geographically distributed software and systems. Developing software and systems within for different countries or states or even within for different organisations means that the requirements to them can differ in each particular case. These aspects naturally impact on the software architecture and on the development process as a whole. The challenge is to deal with this diversity in a systematic way, avoiding contradictions and non-compliance. In this paper, we present a formal framework for the analysis of the requirements diversity, which comes from the differences in the regulations, laws and cultural aspects for different countries or organisations. The framework also provides the corresponding architectural view and the methods for requirements structuring and optimisation.
@cite_2 described a spiral model-like development cycle of requirements and architecture. @cite_7 went further and provided methodical guidance for the co-design. An experience-based approach for integrating architecture and RE is presented by @cite_8 . This approach supports the elicitation, specification and design activities by providing experience-based artifacts in the form of questionnaires, checklists, architectural patterns and rationale that have been collected in earlier successful projects and are presented to developers to support them in their task.
{ "cite_N": [ "@cite_8", "@cite_7", "@cite_2" ], "mid": [ "8477272", "1539866868", "2056134008" ], "abstract": [ "Deriving requirements and architecture in concert implies the joint elicitation and specification of the problem and the structure of the solution. In this paper we argue that such an integrated process should be fundamentally based on experience. We sketch an approach developed in the context of the EMPRESS project that shows how different kinds of experiencebased artifacts, such as questionnaires, checklists, architectural patterns, and rationale, can beneficially be applied.", "The need to co-develop requirements and architectural artefacts, especially for innovative solutions, is widely recognised and accepted. Surprisingly, no comprehensive approach exists to structure the co-design process and to support the stakeholders, requirements engineers, and system architects in co-developing innovative requirements and architectural artefacts. In this paper, we propose a method for the co-design of requirements and architectural artefacts based on two viewpoints, the system usage viewpoint and the system architecture viewpoint. Initially, the two viewpoints are nearly decoupled. The method consists of five sub-processes that support the development of each viewpoint, the comparison of the two viewpoints, the consolidation of the viewpoints, and the definition of detailed system requirements based on the two viewpoints. The consolidation of system usage and coarse-grained system architecture is driven by the refinement of system interaction scenarios into architectural scenarios and the refinement of the associated usage goals. Preliminary results of applying our method in industry are reported.", "Software development organizations often choose between alternative starting points-requirements or architectures. 
This invariably results in a waterfall development process that produces artificially frozen requirements documents for use in the next step in the development life cycle. Alternatively, this process creates systems with constrained architectures that restrict users and handicap developers by resisting inevitable and desirable changes in requirements. The spiral life-cycle model addresses many drawbacks of a waterfall model by providing an incremental development process, in which developers repeatedly evaluate changing project risks to manage unstable requirements and funding. An even finer-grain spiral life cycle reflects both the realities and necessities of modern software development. Such a life cycle acknowledges the need to develop software architectures that are stable, yet adaptable, in the presence of changing requirements. The cornerstone of this process is that developers craft a system's requirements and its architecture concurrently, and interleave their development." ] }
1508.01623
2950776723
We present our ongoing work on requirements specification and analysis for the geographically distributed software and systems. Developing software and systems within for different countries or states or even within for different organisations means that the requirements to them can differ in each particular case. These aspects naturally impact on the software architecture and on the development process as a whole. The challenge is to deal with this diversity in a systematic way, avoiding contradictions and non-compliance. In this paper, we present a formal framework for the analysis of the requirements diversity, which comes from the differences in the regulations, laws and cultural aspects for different countries or organisations. The framework also provides the corresponding architectural view and the methods for requirements structuring and optimisation.
@cite_6 investigated the problem of designing regulation-compliant systems and, in particular, the challenges in eliciting and managing legal requirements. @cite_5 reported on an industry case study in which product requirements were specified to comply with U.S. federal laws. @cite_0 performed a case study using their approach to evaluate the iTrust Medical Records System requirements for compliance with the U.S. Health Insurance Portability and Accountability Act. @cite_4 presented guiding rules and a framework for deriving compliant-by-construction requirements, also focusing on U.S. federal laws.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4", "@cite_6" ], "mid": [ "2013086879", "2139735278", "2146077699", "2144369660" ], "abstract": [ "To ensure legal compliance, requirements engineers need tools to determine existing software requirements' compliance with relevant law. We propose using a production rule model for requirements engineers to query as they check software requirements for legal compliance. In this paper, we perform a case study using our approach to evaluate the iTrust Medical Records System requirements for compliance with the u.s. Health Insurance Portability and Accountability Act (HIPAA). We identifY 12 new compliance requirements beyond the 63 functional requirements with which we began our analysis.", "In the United States, federal and state regulations prescribe stakeholder rights and obligations that must be satisfied by the requirements for software systems. These regulations are typically wrought with ambiguities, making the process of deriving system requirements ad hoc and error prone. In highly regulated domains such as healthcare, there is a need for more comprehensive standards that can be used to assure that system requirements conform to regulations. To address this need, we expound upon a process called Semantic Parameterization previously used to derive rights and obligations from privacy goals. In this work, we apply the process to the Privacy Rule from the U.S. Health Insurance Portability and Accountability Act (HIPAA). We present our methodology for extracting and prioritizing rights and obligations from regulations and show how semantic models can be used to clarify ambiguities through focused elicitation and to balance rights with obligations. 
The results of our analysis can aid requirements engineers, standards organizations, compliance officers, and stakeholders in assuring systems conform to policy and satisfy requirements.", "While new laws and regulations address organisations, with their processes and information systems, the problem of defining suitable methods and techniques to support the design of law-compliant systems is getting increasing attention. We proposed a novel requirements engineering framework that includes a systematic process to derive law-compliant system requirements taking into account laws and strategic goals of stakeholders of a given domain. In this paper, we focus on the conceptual meta-model this framework rests on, defining it and discussing its use.", "The increasing complexity of IT systems and the growing demand for regulation compliance are main issues for the design of IT systems. Addressing these issues requires the developing of effective methods to support the analysis of regulations and the elicitation of any organizational and system requirements from them. This work investigates the problem of designing regulation-compliant systems and, in particular, the challenges in eliciting and managing legal requirements." ] }
1508.01887
2140810526
This work investigates how the traditional image classification pipelines can be extended into a deep architecture, inspired by recent successes of deep neural networks. We propose a deep boosting framework based on layer-by-layer joint feature boosting and dictionary learning. In each layer, we construct a dictionary of filters by combining the filters from the lower layer, and iteratively optimize the image representation with a joint discriminative-generative formulation, i.e. minimization of empirical classification error plus regularization of analysis image generation over training images. For optimization, we perform two iterating steps: i) to minimize the classification error, select the most discriminative features using the gentle adaboost algorithm; ii) according to the feature selection, update the filters to minimize the regularization on analysis image representation using the gradient descent method. Once the optimization is converged, we learn the higher layer representation in the same way. Our model delivers several distinct advantages. First, our layer-wise optimization provides the potential to build very deep architectures. Second, the generated image representation is compact and meaningful. In several visual recognition tasks, our framework outperforms existing state-of-the-art approaches.
Another work related to this paper is learning a dictionary in an analysis prior @cite_37 @cite_33 @cite_7 . The key idea of the analysis-based model is to apply an analysis operator (also known as an analysis dictionary) to the latent clean signal, leading to a sparse outcome. In this paper, we adopt the analysis-based prior as a regularization term to learn features that are more discriminative for a certain category. Please refer to Sec. for more details about analysis dictionary learning.
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_7" ], "mid": [ "2107844156", "2162221686", "" ], "abstract": [ "The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a reconstruction of the signal as a combination of atom signals. The algebraic similarity between the two suggests that they could be strongly related; however, in the absence of a detailed study, contradicting approaches have emerged. While the computationally intensive synthesis approach is receiving ever-increasing attention and is notably preferred, other works hypothesize that the two might actually be much closer, going as far as to suggest that one can approximate the other. In this paper we describe the two prior classes in detail, focusing on the distinction between them. We show that although in the simpler complete and undercomplete formulations the two approaches are equivalent, in their overcomplete formulation they depart. Focusing on the l1 case, we present a novel approach for comparing the two types of priors based on high-dimensional polytopal geometry. We arrive at a series of theoretical and numerical results establishing the existence of an unbridgeable gap between the two.", "In this paper, we propose a new and computationally efficient framework for learning sparse models. We formulate a unified approach that contains as particular cases models promoting sparse synthesis and analysis type of priors, and mixtures thereof. The supervised training of the proposed model is formulated as a bilevel optimization problem, in which the operators are optimized to achieve the best possible performance on a specific task, e.g., reconstruction or classification. 
By restricting the operators to be shift invariant, our approach can be thought as a way of learning analysis+synthesis sparsity-promoting convolutional operators. Leveraging recent ideas on fast trainable regressors designed to approximate exact sparse codes, we propose a way of constructing feed-forward neural networks capable of approximating the learned models at a fraction of the computational cost of exact solvers. In the shift-invariant case, this leads to a principled way of constructing task-specific convolutional networks. We illustrate the proposed models on several experiments in music analysis and image processing applications.", "" ] }
1508.01745
2952013107
Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
Conventional approaches to NLG typically divide the task into sentence planning and surface realisation. Sentence planning maps input semantic symbols into an intermediary form representing the utterance, e.g. a tree-like or template structure, and surface realisation then converts the intermediate structure into the final text @cite_10 @cite_41 . Although statistical sentence planning has been explored previously, for example by generating the most likely context-free derivations given a corpus @cite_37 or by maximising the expected reward using reinforcement learning @cite_32 , these methods still rely on a pre-existing, handcrafted generator. To minimise handcrafting, proposed learning sentence planning rules directly from a corpus of utterances labelled with Rhetorical Structure Theory (RST) discourse relations @cite_1 . However, the required corpus labelling is expensive and additional handcrafting is still needed to map the sentence plan to a valid syntactic form.
{ "cite_N": [ "@cite_37", "@cite_41", "@cite_1", "@cite_32", "@cite_10" ], "mid": [ "1521413921", "2139079654", "2045738181", "1980340273", "" ], "abstract": [ "We present a simple, robust generation system which performs content selection and surface realization in a unified, domain-independent framework. In our approach, we break up the end-to-end generation process into a sequence of local decisions, arranged hierarchically and each trained discriminatively. We deployed our system in three different domains---Robocup sportscasting, technical weather forecasts, and common weather forecasts, obtaining results comparable to state-of-the-art domain-specific systems both in terms of BLEU scores and human evaluation.", "A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.", "The specification discloses a luggage carrier made up of a generally U-shaped frame. The frame has two spaced legs with a hook on the front which hooks over the bumper of an automobile. Two braces are attached to the cross member of the U-shaped member and the front portion of the braces is received on fastening means welded to the under side of the car frame. The cross members provide a supporting surface for carrying articles, boats and the like. A platform may be supported on the frame.", "We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning, given noisy feedback from the current generation context (e.g. 
a user and a surface realiser). We study its use in a standard NLG problem: how to present information (in this case a set of search results) to users, given the complex tradeoffs between utterance length, amount of information conveyed, and cognitive load. We set these trade-offs by analysing existing match data. We then train a NLG policy using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback from the current generation context. This policy is compared to several baselines derived from previous work in this area. The learned policy significantly outperforms all the prior approaches.", "" ] }
1508.01745
2952013107
Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
As noted above, corpus-based NLG aims at learning generation decisions from data with minimal dependence on rules and heuristics. A pioneer in this direction is the class-based n-gram language model (LM) approach proposed by . later addressed some of the limitations of class-based LMs in the over-generation phase by using a modified generator based on a syntactic dependency tree. proposed a phrase-based NLG system based on factored LMs that can learn from a semantically aligned corpus. Although active learning @cite_27 was also proposed to allow learning online directly from users, the requirement for human-annotated alignments limits the scalability of the system. Another similar approach casts NLG as a template extraction and matching problem, e.g., training a set of log-linear models to make a series of generation decisions to choose the most suitable template for realisation. later show that the outputs can be further improved by an SVM reranker, making them comparable to human-authored texts. However, template matching approaches do not generalise well to unseen combinations of semantic elements.
{ "cite_N": [ "@cite_27" ], "mid": [ "2161181481" ], "abstract": [ "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents Bagel, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that Bagel can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data." ] }
1508.01745
2952013107
Natural language generation (NLG) is a critical component of spoken dialogue and it has a significant impact both on usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates. With fewer heuristics, an objective evaluation in two differing test domains showed the proposed method improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems.
Training an RNN with long range dependencies is difficult because of the vanishing gradient problem @cite_38 . mitigated this problem by replacing the sigmoid activation in the RNN recurrent connection with a self-recurrent memory block and a set of multiplication gates to mimic the read, write, and reset operations in digital computers. The resulting architecture is dubbed the Long Short-term Memory (LSTM) network. It has been shown to be effective in a variety of tasks, such as speech recognition @cite_19 , handwriting recognition @cite_13 , spoken language understanding @cite_24 , and machine translation @cite_14 . Recent work by has demonstrated that an NN structure augmented with a carefully designed memory block and differentiable read write operations can learn to mimic computer programs. Moreover, the ability to train deep networks provides a more sophisticated way of exploiting relations between labels and features, thereby making predictions more accurate @cite_0 . By extending an LSTM network to be both deep in space and time, shows the resulting network can be used to synthesise handwriting indistinguishable from that of a human.
{ "cite_N": [ "@cite_38", "@cite_14", "@cite_24", "@cite_19", "@cite_0", "@cite_13" ], "mid": [ "", "2949888546", "2024632416", "2950689855", "2160815625", "2122585011" ], "abstract": [ "", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.", "Neural network based approaches have recently produced record-setting performances in natural language understanding tasks such as word labeling. 
In the word labeling task, a tagger is used to assign a label to each word in an input sequence. Specifically, simple recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been shown to significantly outperform the previous state-of-the-art - conditional random fields (CRFs). This paper investigates using long short-term memory (LSTM) neural networks, which contain input, output and forgetting gates and are more advanced than simple RNNs, for the word labeling task. To explicitly model output-label dependence, we propose a regression model on top of the LSTM un-normalized scores. We also propose to apply deep LSTM to the task. We investigated the relative importance of each gate in the LSTM by setting other gates to a constant and only learning particular gates. Experiments on the ATIS dataset validated the effectiveness of the proposed models.", "Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. 
When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.", "Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.", "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. 
In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance." ] }
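The gated recurrence summarised in the related-work paragraph above (a self-recurrent memory cell guarded by input, forget, and output gates) can be sketched numerically. The following is a minimal NumPy implementation of the standard LSTM step equations, not the semantically controlled variant the paper proposes; all weight names, sizes, and the toy input sequence are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    W (4H, D), U (4H, H), b (4H,) hold the stacked parameters for the
    input (i), forget (f), cell-candidate (g), and output (o) transforms.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # all four pre-activations at once
    i = sigmoid(z[0:H])                 # input gate: how much new content to write
    f = sigmoid(z[H:2*H])               # forget gate: how much old memory to keep
    g = np.tanh(z[2*H:3*H])             # candidate cell content
    o = sigmoid(z[3*H:4*H])             # output gate: how much memory to expose
    c = f * c_prev + i * g              # memory update (the self-recurrent block)
    h = o * np.tanh(c)                  # hidden state emitted at this step
    return h, c

# Unroll over a short random sequence with illustrative sizes.
rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

Because the forget gate is additive rather than multiplicative in the gradient path, the cell state `c` can carry information across many steps, which is what mitigates the vanishing-gradient problem described above.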
1508.01951
1911151866
Quality assurance is one of the most important challenges in crowdsourcing. Assigning tasks to several workers to increase quality through redundant answers can be expensive if asking homogeneous sources. This limitation has been overlooked by current crowdsourcing platforms resulting therefore in costly solutions. In order to achieve desirable cost-quality tradeoffs it is essential to apply efficient crowd access optimization techniques. Our work argues that optimization needs to be aware of diversity and correlation of information within groups of individuals so that crowdsourcing redundancy can be adequately planned beforehand. Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information. APM aggregates answers ensuring high quality and meaningful confidence. Moreover, we devise a greedy optimization algorithm for this model that finds a provably good approximate plan to access the crowd. We evaluate our approach on three crowdsourced datasets that illustrate various aspects of the problem. Our results show that the Access Path Model combined with greedy optimization is cost-efficient and practical to overcome common difficulties in large-scale crowdsourcing like data sparsity and anonymity.
Quality assurance and control. One of the central works in this field is presented by @cite_9 . In an experimental design with noisy observers, the authors use an Expectation Maximization algorithm @cite_27 to obtain maximum likelihood estimates for the observer variation when ground truth is missing or partially available. This has served as a foundation for several following contributions @cite_13 @cite_14 @cite_11 @cite_16 , placing Dawid and Skene's algorithm in a crowdsourcing context and enriching it for building performance-sensitive pricing schemes. The APM model enhances these quality definitions by leveraging the fact that the error rates of workers are directly affected by the access path that they follow, which allows for efficient optimization.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_27", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2134305421", "9014458", "2049633694", "2152009989", "2098865355", "2142518823" ], "abstract": [ "For many supervised learning tasks it may be infeasible (or very expensive) to obtain objective and reliable labels. Instead, we can collect subjective (possibly noisy) labels from multiple experts or annotators. In practice, there is a substantial amount of disagreement among the annotators, and hence it is of great practical interest to address conventional supervised learning problems in this scenario. In this paper we describe a probabilistic approach for supervised learning when we have multiple annotators providing (possibly noisy) labels but no absolute gold standard. The proposed algorithm evaluates the different experts and also gives an estimate of the actual hidden labels. Experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.", "In compiling a patient record many facets are subject to errors of measurement. A model is presented which allows individual error-rates to be estimated for polytomous facets even when the patient's \"true\" response is not available. The EM algorithm is shown to provide a slow but sure way of obtaining maximum likelihood estimates of the parameters of interest. Some preliminary experience is reported and the limitations of the method are described.", "", "An important way to make large training sets is to gather noisy labels from crowds of nonexperts. We propose a minimax entropy principle to improve the quality of these labels. Our method assumes that labels are generated by a probability distribution over workers, items, and labels. By maximizing the entropy of this distribution, the method naturally infers item confusability and worker expertise. 
We infer the ground truth by minimizing the entropy of this distribution, which we show minimizes the Kullback-Leibler (KL) divergence between the probability distribution and the unknown truth. We show that a simple coordinate descent scheme can optimize minimax entropy. Empirically, our results are substantially better than previously published methods for the same problem.", "Crowdsourcing services, such as Amazon Mechanical Turk, allow for easy distribution of small tasks to a large number of workers. Unfortunately, since manually verifying the quality of the submitted results is hard, malicious workers often take advantage of the verification difficulty and submit answers of low quality. Currently, most requesters rely on redundancy to identify the correct answers. However, redundancy is not a panacea. Massive redundancy is expensive, increasing significantly the cost of crowdsourced solutions. Therefore, we need techniques that will accurately estimate the quality of the workers, allowing for the rejection and blocking of the low-performing workers and spammers. However, existing techniques cannot separate the true (unrecoverable) error rate from the (recoverable) biases that some workers exhibit. This lack of separation leads to incorrect assessments of a worker's quality. We present algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error. Our algorithm generates a scalar score representing the inherent quality of each worker. We illustrate how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers. We present experimental results demonstrating the performance of the proposed algorithm under a variety of settings.", "Modern machine learning-based approaches to computer vision require very large databases of hand labeled images. 
Some contemporary vision systems already require on the order of millions of images for training (e.g., Omron face detector [9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings interesting theoretical and practical challenges: (1) The labelers may have wide ranging levels of expertise which are unknown a priori, and in some cases may be adversarial; (2) images may vary in their level of difficulty; and (3) multiple labels for the same image must be combined to provide an estimate of the actual label of the image. Probabilistic approaches provide a principled way to approach these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the difficulty of each image. On both simulated and real data, we demonstrate that the model outperforms the commonly used \"Majority Vote\" heuristic for inferring image labels, and is robust to both noisy and adversarial labelers." ] }
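The Dawid–Skene-style estimation discussed in the related-work paragraph above can be illustrated with a compact EM loop. The sketch below simplifies the full confusion-matrix model to a single accuracy per worker for binary labels with a uniform label prior; the function and variable names are our own, and the simulated worker accuracies are invented for the demo.

```python
import numpy as np

def dawid_skene_binary(votes, n_iter=50):
    """Simplified Dawid-Skene EM: one accuracy parameter per worker.

    votes: (n_items, n_workers) array of 0/1 answers (no missing votes here).
    Returns the posterior P(label=1) per item and estimated worker accuracies.
    """
    p = votes.mean(axis=1)  # E-step init: majority-vote posterior per item
    for _ in range(n_iter):
        # M-step: a worker's accuracy is their expected agreement with the
        # (soft) current label estimates.
        acc = (p[:, None] * votes + (1 - p[:, None]) * (1 - votes)).mean(axis=0)
        acc = np.clip(acc, 1e-6, 1 - 1e-6)
        # E-step: combine votes as log-odds, weighting reliable workers more.
        log_odds = np.where(votes == 1,
                            np.log(acc / (1 - acc)),
                            np.log((1 - acc) / acc)).sum(axis=1)
        p = 1.0 / (1.0 + np.exp(-log_odds))
    return p, acc

# Simulate 8 workers of varying (hidden) accuracy labelling 200 binary items.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=200)
worker_acc = np.array([0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55])
correct = rng.random((200, 8)) < worker_acc
votes = np.where(correct, truth[:, None], 1 - truth[:, None])

p, est_acc = dawid_skene_binary(votes)
pred = (p > 0.5).astype(int)
```

The key difference from plain majority voting is the E-step weighting: answers from workers whose estimated accuracy is high move the posterior further, which is exactly the per-worker error-rate modelling that the follow-up works above build on.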
1508.01951
1911151866
Quality assurance is one of the most important challenges in crowdsourcing. Assigning tasks to several workers to increase quality through redundant answers can be expensive if asking homogeneous sources. This limitation has been overlooked by current crowdsourcing platforms resulting therefore in costly solutions. In order to achieve desirable cost-quality tradeoffs it is essential to apply efficient crowd access optimization techniques. Our work argues that optimization needs to be aware of diversity and correlation of information within groups of individuals so that crowdsourcing redundancy can be adequately planned beforehand. Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information. APM aggregates answers ensuring high quality and meaningful confidence. Moreover, we devise a greedy optimization algorithm for this model that finds a provably good approximate plan to access the crowd. We evaluate our approach on three crowdsourced datasets that illustrate various aspects of the problem. Our results show that the Access Path Model combined with greedy optimization is cost-efficient and practical to overcome common difficulties in large-scale crowdsourcing like data sparsity and anonymity.
Query and crowd access optimization. In crowdsourced databases, quality assurance and crowd access optimization are envisioned as part of the query optimizer, which needs to estimate the query plans not only according to the cost but also to their accuracy and latency. Previous work @cite_8 @cite_6 @cite_10 focuses on building declarative query languages with support for processing crowdsourced data. The proposed optimizers define the execution order of operators in query plans and map crowdsourcable operators to micro-tasks. In our work, we propose a complementary approach by ensuring the quality of each single operator executed by the crowd.
{ "cite_N": [ "@cite_10", "@cite_6", "@cite_8" ], "mid": [ "1977810318", "1551839888", "2168144930" ], "abstract": [ "Crowdsourcing enables programmers to incorporate \"human computation\" as a building block in algorithms that cannot be fully automated, such as text analysis and image recognition. Similarly, humans can be used as a building block in data-intensive applications--providing, comparing, and verifying data used by applications. Building upon the decades-long success of declarative approaches to conventional data management, we use a similar approach for data-intensive applications that incorporate humans. Specifically, declarative queries are posed over stored relational data as well as data computed on-demand from the crowd, and the underlying system orchestrates the computation of query answers. We present Deco, a database system for declarative crowdsourcing. We describe Deco's data model, query language, and our prototype. Deco's data model was designed to be general (it can be instantiated to other proposed models), flexible (it allows methods for data cleansing and external access to be plugged in), and principled (it has a precisely-defined semantics). Syntactically, Deco's query language is a simple extension to SQL. Based on Deco's data model, we define a precise semantics for arbitrary queries involving both stored data and data obtained from the crowd. We then describe the Deco query processor which uses a novel push-pull hybrid execution model to respect the Deco semantics while coping with the unique combination of latency, monetary cost, and uncertainty introduced in the crowdsourcing environment. Finally, we experimentally explore the query processing alternatives provided by Deco using our current prototype.", "Amazon’s Mechanical Turk (\"MTurk\") service allows users to post short tasks (\"HITs\") that other users can receive a small amount of money for completing. 
Common tasks on the system include labelling a collection of images, combining two sets of images to identify people which appear in both, or extracting sentiment from a corpus of text snippets. Designing a workflow of various kinds of HITs for filtering, aggregating, sorting, and joining data sources together is common, and comes with a set of challenges in optimizing the cost per HIT, the overall time to task completion, and the accuracy of MTurk results. We propose Qurk, a novel query system for managing these workflows, allowing crowd-powered processing of relational databases. We describe a number of query execution and optimization challenges, and discuss some potential solutions.", "Some queries cannot be answered by machines only. Processing such queries requires human input for providing information that is missing from the database, for performing computationally difficult functions, and for matching, ranking, or aggregating results based on fuzzy criteria. CrowdDB uses human input via crowdsourcing to process queries that neither database systems nor search engines can adequately answer. It uses SQL both as a language for posing complex queries and as a way to model data. While CrowdDB leverages many aspects of traditional database systems, there are also important differences. Conceptually, a major change is that the traditional closed-world assumption for query processing does not hold for human input. From an implementation perspective, human-oriented query operators are needed to solicit, integrate and cleanse crowdsourced data. Furthermore, performance and cost depend on a number of new factors including worker affinity, training, fatigue, motivation and location. We describe the design of CrowdDB, report on an initial set of experiments using Amazon Mechanical Turk, and outline important avenues for future work in the development of crowdsourced query processing systems." ] }
1508.01951
1911151866
Quality assurance is one of the most important challenges in crowdsourcing. Assigning tasks to several workers to increase quality through redundant answers can be expensive if asking homogeneous sources. This limitation has been overlooked by current crowdsourcing platforms resulting therefore in costly solutions. In order to achieve desirable cost-quality tradeoffs it is essential to apply efficient crowd access optimization techniques. Our work argues that optimization needs to be aware of diversity and correlation of information within groups of individuals so that crowdsourcing redundancy can be adequately planned beforehand. Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information. APM aggregates answers ensuring high quality and meaningful confidence. Moreover, we devise a greedy optimization algorithm for this model that finds a provably good approximate plan to access the crowd. We evaluate our approach on three crowdsourced datasets that illustrate various aspects of the problem. Our results show that the Access Path Model combined with greedy optimization is cost-efficient and practical to overcome common difficulties in large-scale crowdsourcing like data sparsity and anonymity.
Crowd access optimization is similar to the expert selection problem in decision-making. However, the assumption that the selected individuals will answer may no longer hold. Previous studies based on this assumption are @cite_31 @cite_4 @cite_15 . The proposed methods are nevertheless effective for task recommendation and performance evaluation.
{ "cite_N": [ "@cite_31", "@cite_4", "@cite_15" ], "mid": [ "2554839354", "", "1482468045" ], "abstract": [ "Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers”, have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, based on low-rank matrix approximation, significantly outperforms majority voting and, in fact, is order-optimal through comparison to an oracle that knows the reliability of every worker.", "", "We describe methods to predict a crowd worker's accuracy on new tasks based on his accuracy on past tasks. Such prediction provides a foundation for identifying the best workers to route work to in order to maximize accuracy on the new task. Our key insight is to model similarity of past tasks to the target task such that past task accuracies can be optimally integrated to predict target task accuracy. We describe two matrix factorization (MF) approaches from collaborative filtering which not only exploit such task similarity, but are known to be robust to sparse data. Experiments on synthetic and real-world datasets provide feasibility assessment and comparative evaluation of MF approaches vs. two baseline methods. 
Across a range of data scales and task similarity conditions, we evaluate: 1) prediction error over all workers; and 2) how well each method predicts the best workers to use for each task. Results show the benefit of task routing over random assignment, the strength of probabilistic MF over baseline methods, and the robustness of methods under different conditions." ] }
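The cost-versus-reliability trade-off raised in the first cited abstract above (how many redundant task assignments are needed to reach a target overall reliability) has a simple closed form under the textbook assumption of independent, equally accurate workers aggregated by majority vote. The sketch below is that baseline calculation only, not the low-rank assignment algorithm of the cited work; function names are our own.

```python
from math import comb

def majority_reliability(n, p):
    """P(majority of n independent workers, each correct w.p. p, is correct).

    n is assumed odd so that ties cannot occur.
    """
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def min_redundancy(p, target, n_max=99):
    """Smallest odd number of assignments whose majority vote meets target."""
    for n in range(1, n_max + 1, 2):
        if majority_reliability(n, p) >= target:
            return n
    return None  # target unreachable within n_max assignments
```

For example, workers of accuracy 0.7 need 9 redundant answers before majority voting reaches 90% reliability, while 0.9-accurate workers reach 99% with only 5. This is the sense in which asking homogeneous, merely average sources is expensive, and why weighting or selecting workers (as in the cited approaches) can cut the required redundancy.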
1508.01951
1911151866
Quality assurance is one of the most important challenges in crowdsourcing. Assigning tasks to several workers to increase quality through redundant answers can be expensive if asking homogeneous sources. This limitation has been overlooked by current crowdsourcing platforms resulting therefore in costly solutions. In order to achieve desirable cost-quality tradeoffs it is essential to apply efficient crowd access optimization techniques. Our work argues that optimization needs to be aware of diversity and correlation of information within groups of individuals so that crowdsourcing redundancy can be adequately planned beforehand. Based on this intuitive idea, we introduce the Access Path Model (APM), a novel crowd model that leverages the notion of access paths as an alternative way of retrieving information. APM aggregates answers ensuring high quality and meaningful confidence. Moreover, we devise a greedy optimization algorithm for this model that finds a provably good approximate plan to access the crowd. We evaluate our approach on three crowdsourced datasets that illustrate various aspects of the problem. Our results show that the Access Path Model combined with greedy optimization is cost-efficient and practical to overcome common difficulties in large-scale crowdsourcing like data sparsity and anonymity.
Diversity for quality. Relevant studies in management science @cite_2 @cite_22 emphasize diversity and define the notion of types to refer to highly correlated forecasters. Another work that targets groups of workers is introduced by @cite_25 . This technique discards groups that do not prove to be the best ones. @cite_19 , instead, refers to groups as communities and all of them are used for aggregation but not for optimization. Other systems like CrowdSearcher by @cite_7 and CrowdSTAR by @cite_5 support cross-community task allocation.
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_19", "@cite_2", "@cite_5", "@cite_25" ], "mid": [ "2103425354", "", "2141649520", "2158392329", "2139973573", "2055733113" ], "abstract": [ "This paper characterizes the optimal composition of a group for making a combined forecast. In the model, individual forecasters have types defined according to a statistical criterion we call type coherence. Members of the same type have identical expected accuracy, and forecasters within a type have higher covariance than forecasters of different types. We derive the optimal group composition as a function of predictive accuracy, between-and within-type covariance, and group size. Group size plays a critical role in determining the optimal group: in small groups the most accurate type should be in the majority, whereas in large groups the type with the least within-type covariance should dominate. This paper was accepted by Peter Wakker, decision analysis.", "", "This paper addresses the problem of extracting accurate labels from crowdsourced datasets, a key challenge in crowdsourcing. Prior work has focused on modeling the reliability of individual workers, for instance, by way of confusion matrices, and using these latent traits to estimate the true labels more accurately. However, this strategy becomes ineffective when there are too few labels per worker to reliably estimate their quality. To mitigate this issue, we propose a novel community-based Bayesian label aggregation model, CommunityBCC, which assumes that crowd workers conform to a few different types, where each type represents a group of workers with similar confusion matrices. We assume that each worker belongs to a certain community, where the worker's confusion matrix is similar to (a perturbation of) the community's confusion matrix. 
Our model can then learn a set of key latent features: (i) the confusion matrix of each community, (ii) the community membership of each user, and (iii) the aggregated label of each item. We compare the performance of our model against established aggregation methods on a number of large-scale, real-world crowdsourcing datasets. Our experimental results show that our CommunityBCC model consistently outperforms state-of-the-art label aggregation methods, requiring, on average, 50% less data to pass the 90% accuracy mark.", "We introduce a general framework for modeling functionally diverse problem-solving agents. In this framework, problem-solving agents possess representations of problems and algorithms that they use to locate solutions. We use this framework to establish a result relevant to group composition. We find that when selecting a problem-solving team from a diverse population of intelligent agents, a team of randomly selected agents outperforms a team comprised of the best-performing agents. This result relies on the intuition that, as the initial pool of problem solvers becomes large, the best-performing agents necessarily become similar in the space of problem solvers. Their relatively greater ability is more than offset by their lack of problem-solving diversity.", "The online communities available on the Web have shown to be significantly interactive and capable of collectively solving difficult tasks. Nevertheless, it is still a challenge to decide how a task should be dispatched through the network due to the high diversity of the communities and the dynamically changing expertise and social availability of their members. We introduce CrowdSTAR, a framework designed to route tasks across and within online crowds. CrowdSTAR indexes the topic-specific expertise and social features of the crowd contributors and then uses a routing algorithm, which suggests the best sources to ask based on the knowledge vs. availability trade-offs. 
We experimented with the proposed framework for question and answering scenarios by using two popular social networks as crowd candidates: Twitter and Quora.", "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real world tasks and can be easily integrated in current crowdsourcing platforms." ] }
1508.01651
1936507281
The SCION (Scalability, Control, and Isolation on Next-generation Networks) inter-domain network architecture was proposed to address the availability, scalability, and security shortcomings of the current Internet. This paper presents a retrospective of the SCION goals and design decisions, its attacker model and limitations, and research highlights of work conducted in the 5 years following SCION's initial publication.
The idea of clustering the network into domains has been attempted since the early days of the Internet. The Nimrod routing architecture @cite_22 , to our knowledge, is the first published description of these concepts. Nimrod describes a hierarchy of clusters of hosts, routers, or networks. A secure version of Nimrod was later proposed @cite_55 . FARA @cite_15 proposes a general notion of an entity to include clusters of computers that can be reached as a communication endpoint.
{ "cite_N": [ "@cite_55", "@cite_22", "@cite_15" ], "mid": [ "2170941317", "", "2154345358" ], "abstract": [ "This paper describes the work undertaken to secure Nimrod, a complex and sophisticated routing system that unifies interior and exterior routing functions. The focus of this work is countering attacks that would degrade or deny service to network subscribers. The work began with an analysis of security requirements for Nimrod, based on a hybrid approach that refines top-down requirements generation with an understanding of attack scenarios and the capabilities and limitations of countermeasures. The countermeasures selected for use here include several newly developed sequence integrity mechanisms, plus a protocol for shared secret establishment. A novel aspect of this work is the protection of subscriber traffic in support of the overall communication availability security goal.", "", "sloppy This paper describes FARA, a new organization of, network architecture concepts. FARA (Forwarding directive, Association, and Rendezvous Architecture) defines an abstract model with considerable generality and flexibility, based upon the decoupling of end-system names from network addresses. The paper explores the implications of FARA and the range of architecture instantiations that may be derived from FARA. As an illustration, the paper outlines a particular derived architecture, M-FARA, which features support for generalized mobility and multiple realms of network addressing." ] }
1508.01651
1936507281
The SCION (Scalability, Control, and Isolation on Next-generation Networks) inter-domain network architecture was proposed to address the availability, scalability, and security shortcomings of the current Internet. This paper presents a retrospective of the SCION goals and design decisions, its attacker model and limitations, and research highlights of work conducted in the 5 years following SCION's initial publication.
MobilityFirst @cite_9 is an architecture that quickly maps billions of identities to their locations, yet it does not propose a fundamental change to the underlying forwarding architecture in terms of security and availability. Nebula @cite_25 addresses security problems in the current Internet. Nebula takes a so-called default-off approach to reaching a specific service: a sender can send packets only if an approved path to the service is available. The network architecture helps the service verify whether a packet followed the approved path (i.e., it supports path verification). However, Nebula achieves this property at a high cost: all routers on the path need to perform computationally expensive path verification for every single packet and need to keep per-flow state, limiting its usage to highly specialized services.
{ "cite_N": [ "@cite_9", "@cite_25" ], "mid": [ "1981293710", "66821545" ], "abstract": [ "This paper presents an overview of the MobilityFirst network architecture, currently under development as part of the US National Science Foundation's Future Internet Architecture (FIA) program. The proposed architecture is intended to directly address the challenges of wireless access and mobility at scale, while also providing new services needed for emerging mobile Internet application scenarios. After briefly outlining the original design goals of the project, we provide a discussion of the main architectural concepts behind the network design, identifying key features such as separation of names from addresses, public-key based globally unique identifiers (GUIDs) for named objects, global name resolution service (GNRS) for dynamic binding of names to addresses, storage-aware routing and late binding, content- and context-aware services, optional in-network compute layer, and so on. This is followed by a brief description of the MobilityFirst protocol stack as a whole, along with an explanation of how the protocol works at end-user devices and inside network routers. Example of specific advanced services supported by the protocol stack, including multi-homing, mobility with disconnection, and content retrieval caching are given for illustration. Further design details of two key protocol components, the GNRS name resolution service and the GSTAR routing protocol, are also described along with sample results from evaluation. In conclusion, a brief description of an ongoing multi-site experimental proof-of-concept deployment of the MobilityFirst protocol stack on the GENI testbed is provided.", "The NEBULA Future Internet Architecture (FIA) project is focused on a future network that enables the vision of cloud computing [8,12] to be realized. 
With computation and storage moving to data centers, networking to these data centers must be several orders of magnitude more resilient for some applications to trust cloud computing and enable their move to the cloud." ] }
1508.01651
1936507281
The SCION (Scalability, Control, and Isolation on Next-generation Networks) inter-domain network architecture was proposed to address the availability, scalability, and security shortcomings of the current Internet. This paper presents a retrospective of the SCION goals and design decisions, its attacker model and limitations, and research highlights of work conducted in the 5 years following SCION's initial publication.
Serval @cite_29 proposes name-based service discovery and routing, and introduces a new service-access layer that enables late binding of a service to its location. Late binding provides flexibility in migrating and distributing services, yet Serval aims to optimize networking for specialized application services (especially in data-center networks) built on top of the current Internet.
{ "cite_N": [ "@cite_29" ], "mid": [ "2103376620" ], "abstract": [ "Internet services run on multiple servers in different locations, serving clients that are often mobile and multihomed. This does not match well with today's network stack, designed for communication between fixed hosts with topology-dependent addresses. As a result, online service providers resort to clumsy and management-intensive work-arounds--forfeiting the scalability of hierarchical addressing to support virtual server migration, directing all client traffic through dedicated load balancers, restarting connections when hosts move, and so on. In this paper, we revisit the design of the network stack to meet the needs of online services. The centerpiece of our Serval architecture is a new Service Access Layer (SAL) that sits above an unmodified network layer, and enables applications to communicate directly on service names. The SAL provides a clean service-level control data plane split, enabling policy, control, and in-stack name-based routing that connects clients to services via diverse discovery techniques. By tying active sockets to the control plane, applications trigger updates to service routing state upon invoking socket calls, ensuring up-to-date service resolution. With Serval, end-points can seamlessly change network addresses, migrate flows across interfaces, or establish additional flows for efficient and uninterrupted service access. Experiments with our high-performance in-kernel prototype, and several example applications, demonstrate the value of a unified networking solution for online services." ] }
1508.01651
1936507281
The SCION (Scalability, Control, and Isolation on Next-generation Networks) inter-domain network architecture was proposed to address the availability, scalability, and security shortcomings of the current Internet. This paper presents a retrospective of the SCION goals and design decisions, its attacker model and limitations, and research highlights of work conducted in the 5 years following SCION's initial publication.
XIA @cite_53 proposes an evolvable network architecture that can easily adapt to the evolution of networks by supporting various principal types (where principals include, but are not limited to, services, content, hosts, domains, and paths). Due to its flexibility, yet lack of a specific data-plane mechanism, XIA can use SCION for secure and available data forwarding.
{ "cite_N": [ "@cite_53" ], "mid": [ "1755020405" ], "abstract": [ "Motivated by limitations in today's host-centric IP network, recent studies have proposed clean-slate network architectures centered around alternate first-class principals, such as content, services, or users. However, much like the host-centric IP design, elevating one principal type above others hinders communication between other principals and inhibits the network's capability to evolve. This paper presents the eXpressive Internet Architecture (XIA), an architecture with native support for multiple principals and the ability to evolve its functionality to accommodate new, as yet unforeseen, principals over time. We describe key design requirements, and demonstrate how XIA's rich addressing and forwarding semantics facilitate flexibility and evolvability, while keeping core network functions simple and efficient. We describe case studies that demonstrate key functionality XIA enables." ] }
1508.01651
1936507281
The SCION (Scalability, Control, and Isolation on Next-generation Networks) inter-domain network architecture was proposed to address the availability, scalability, and security shortcomings of the current Internet. This paper presents a retrospective of the SCION goals and design decisions, its attacker model and limitations, and research highlights of work conducted in the 5 years following SCION's initial publication.
All the aforementioned new Internet architectures attempt to solve issues facing applications built on top of the current Internet, yet they do not address the fundamental architectural problems that hamper available and private data communication in the presence of malicious parties. The Framework for Internet Innovation (FII) @cite_38 also proposes a new architecture to enable evolution, diversity, and continuous innovation, such that the Internet can be composed of a heterogeneous conglomerate of architectures. The ChoiceNet @cite_37 architecture proposes an "economy plane" to enable network providers to offer new network-based services to customers, providing a network environment that fosters innovation and competition.
{ "cite_N": [ "@cite_38", "@cite_37" ], "mid": [ "2164363093", "1992658674" ], "abstract": [ "We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural \"framework\" in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) --- which is derived from the simple observation that network interfaces should be extensible and abstract --- allows for a diversity of architectures to coexist, communicate, and evolve. We demonstrate FII's ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces.", "The Internet has been a key enabling technology for many new distributed applications and services. However, the deployment of new protocols and services in the Internet infrastructure itself has been sluggish, especially where economic incentives for network providers are unclear. In our work, we seek to develop an \"economy plane\" for the Internet that enables network providers to offer new network-based services (QoS, storage, etc.) for sale to customers. The explicit connection between economic relationships and network services across various time scales enables users to select among service alternatives. The resulting competition among network service providers will lead to overall better technological solutions and more competitive prices. In this paper, we present the architectural aspects of our ChoiceNet economy plane as well as some of the technological problems that need to be addressed in a practical deployment." ] }
1508.01707
2952935849
Many millions of users routinely use their Google accounts to log in to relying party (RP) websites supporting the Google OpenID Connect service. OpenID Connect, a newly standardised single-sign-on protocol, builds an identity layer on top of the OAuth 2.0 protocol, which has itself been widely adopted to support identity management services. It adds identity management functionality to the OAuth 2.0 system and allows an RP to obtain assurances regarding the authenticity of an end user. A number of authors have analysed the security of the OAuth 2.0 protocol, but whether OpenID Connect is secure in practice remains an open question. We report on a large-scale practical study of Google's implementation of OpenID Connect, involving forensic examination of 103 RP websites which support its use for sign-in. Our study reveals serious vulnerabilities of a number of types, all of which allow an attacker to log in to an RP website as a victim user. Further examination suggests that these vulnerabilities are caused by a combination of Google's design of its OpenID Connect service and RP developers making design decisions which sacrifice security for simplicity of implementation. We also give practical recommendations for both RPs and OPs to help improve the security of real world OpenID Connect systems.
Meanwhile, a range of work exploring the security properties of real-world implementations of OAuth 2.0 has also been conducted. @cite_49 examined a number of deployed SSO systems, focussing on a logic flaw present in many such systems, including OpenID. In parallel, Sun and Beznosov @cite_13 also studied deployed OAuth 2.0 systems providing services in English. Li and Mitchell @cite_20 examined the security of deployed OAuth 2.0 systems providing services in Chinese. In parallel, Zhou and Evans @cite_28 conducted a large-scale study of the security of Facebook's OAuth 2.0 implementation. @cite_24 and Shehab and Mohsen @cite_30 have looked at the security of OAuth 2.0 implementations on mobile platforms. However, despite all the work on OAuth, very little research has been conducted on the security of OpenID Connect systems.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_24", "@cite_49", "@cite_13", "@cite_20" ], "mid": [ "2073828651", "2217843339", "", "2089775132", "2133723082", "88388190" ], "abstract": [ "With the roaring growth and wide adoption of smart mobile devices, users are continuously integrating with culture of the mobile applications (apps). These apps are not only gaining access to information on the smartphone but they are also able gain users' authorization to access remote servers on their behalf. The Open standard for Authorization (OAuth) is widely used in mobile apps for gaining access to user's resources on remote service providers. In this work, we analyze the different OAuth implementations adopted by some SDKs of the popular resource providers on smartphones and identify possible attacks on most OAuth implementations. We give some statistics on the trends followed by the service providers and by mobile applications developers. In addition, we propose an application-based OAuth Manager framework, that provides a secure OAuth flow in smartphones that is based on the concept of privilege separation and does not require high overhead.", "Correctly integrating third-party services into web applications is challenging, and mistakes can have grave consequences when third-party services are used for security-critical tasks such as authentication and authorization. Developers often misunderstand integration requirements and make critical mistakes when integrating services such as single sign-on APIs. Since traditional programming techniques are hard to apply to programs running inside black-box web servers, we propose to detect vulnerabilities by probing behaviors of the system. This paper describes the design and implementation of SSOScan, an automatic vulnerability checker for applications using Facebook Single Sign-On (SSO) APIs. We used SSOScan to study the twenty thousand top-ranked websites for five SSO vulnerabilities. 
Of the 1660 sites in our study that employ Facebook SSO, over 20 were found to suffer from at least one serious vulnerability.", "", "With the boom of software-as-a-service and social networking, web-based single sign-on (SSO) schemes are being deployed by more and more commercial websites to safeguard many web resources. Despite prior research in formal verification, little has been done to analyze the security quality of SSO schemes that are commercially deployed in the real world. Such an analysis faces unique technical challenges, including lack of access to well-documented protocols and code, and the complexity brought in by the rich browser elements (script, Flash, etc.). In this paper, we report the first \"field study\" on popular web SSO systems. In every studied case, we focused on the actual web traffic going through the browser, and used an algorithm to recover important semantic information and identify potential exploit opportunities. Such opportunities guided us to the discoveries of real flaws. In this study, we discovered 8 serious logic flaws in high-profile ID providers and relying party websites, such as Open ID (including Google ID and Pay Pal Access), Face book, Jan Rain, Freelancer, Farm Ville, Sears.com, etc. Every flaw allows an attacker to sign in as the victim user. We reported our findings to affected companies, and received their acknowledgements in various ways. All the reported flaws, except those discovered very recently, have been fixed. This study shows that the overall security quality of SSO deployments seems worrisome. We hope that the SSO community conducts a study similar to ours, but in a larger scale, to better understand to what extent SSO is insecurely deployed and how to respond to the situation.", "Millions of web users today employ their Facebook accounts to sign into more than one million relying party (RP) websites. 
This web-based single sign-on (SSO) scheme is enabled by OAuth 2.0, a web resource authorization protocol that has been adopted by major service providers. The OAuth 2.0 protocol has proven secure by several formal methods, but whether it is indeed secure in practice remains an open question. We examine the implementations of three major OAuth identity providers (IdP) (Facebook, Microsoft, and Google) and 96 popular RP websites that support the use of Facebook accounts for login. Our results uncover several critical vulnerabilities that allow an attacker to gain unauthorized access to the victim user's profile and social graph, and impersonate the victim on the RP website. Closer examination reveals that these vulnerabilities are caused by a set of design decisions that trade security for implementation simplicity. To improve the security of OAuth 2.0 SSO systems in real-world settings, we suggest simple and practical improvements to the design and implementation of IdPs and RPs that can be adopted gradually by individual sites.", "Many Chinese websites (relying parties) use OAuth 2.0 as the basis of a single sign-on service to ease password management for users. Many sites support five or more different OAuth 2.0 identity providers, giving users choice in their trust point. However, although OAuth 2.0 has been widely implemented (particularly in China), little attention has been paid to security in practice. In this paper we report on a detailed study of OAuth 2.0 implementation security for ten major identity providers and 60 relying parties, all based in China. This study reveals two critical vulnerabilities present in many implementations, both allowing an attacker to control a victim user’s accounts at a relying party without knowing the user’s account name or password. We provide simple, practical recommendations for identity providers and relying parties to enable them to mitigate these vulnerabilities. 
The vulnerabilities have been reported to the parties concerned." ] }
1508.01448
2951535568
Network interdiction problems are a natural way to study the sensitivity of a network optimization problem with respect to the removal of a limited set of edges or vertices. One of the oldest and best-studied interdiction problems is minimum spanning tree (MST) interdiction. Here, an undirected multigraph with nonnegative edge weights and positive interdiction costs on its edges is given, together with a positive budget B. The goal is to find a subset of edges R, whose total interdiction cost does not exceed B, such that removing R leads to a graph where the weight of an MST is as large as possible. Frederickson and Solis-Oba (SODA 1996) presented an O(log m)-approximation for MST interdiction, where m is the number of edges. Since then, no further progress has been made regarding approximations, and the question whether MST interdiction admits an O(1)-approximation remained open. We answer this question in the affirmative, by presenting a 14-approximation that overcomes two main hurdles that hindered further progress so far. Moreover, based on a well-known 2-approximation for the metric traveling salesman problem (TSP), we show that our O(1)-approximation for MST interdiction implies an O(1)-approximation for a natural interdiction version of metric TSP.
Many interdiction problems beyond the minimum spanning tree setting have been studied. This includes interdiction versions of the maximum @math - @math flow problem @cite_7 @cite_3 @cite_14 (a setting often called network flow interdiction), the shortest path problem @cite_26 @cite_5, the maximum matching problem @cite_21 @cite_31, interdicting the connectivity of a graph @cite_37, interdiction of packings @cite_31, stable set interdiction @cite_24, and variants of facility location @cite_38. However, the theoretical understanding of most interdiction problems still seems rather limited. A good example for which a large gap remains between the best known hardness results and approximation algorithms is network flow interdiction. Network flow interdiction is a strongly NP-hard problem @cite_7 for which no approximation results are known, except for a pseudo-approximation @cite_22 which is allowed to violate the budget by a factor of @math .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_21", "@cite_3", "@cite_24", "@cite_5", "@cite_31" ], "mid": [ "1968766531", "", "", "1999293689", "", "2133494553", "", "1986538507", "2037935057", "2121815516", "183464634" ], "abstract": [ "Facilities and their services can be lost due to natural disasters as well as to intentional strikes, either by terrorism or an army. An intentional strike against a system is called interdiction. The geographical distribution of facilities in a supply or service system may be particularly vulnerable to interdiction, and the resulting impacts of the loss of one or more facilities may be substantial. Critical infrastructure can be defined as those elements of infrastructure that, if lost, could pose a significant threat to needed supplies (e.g., food, energy, medicines), services (e.g., police, fire, and EMS), and communication or a significant loss of service coverage or efficiency. In this article we introduce two new spatial optimization models called the r-interdiction median problem and the r-interdiction covering problem. Both models identify for a given service supply system, that set of facilities that, if lost, would affect service delivery the most, depending upon the type of service protocol. These models can then be used to identify the most critical facility assets in a service supply system. Results of both models applied to spatial data are also presented. Several solutions derived from these two interdiction models are presented in greater detail and demonstrate the degree to which the loss of one or more facilities disrupts system efficiencies or coverage. Recommendations for further research are also made.", "", "", "Let N = (V, A) be a directed, arc weighted network with node set V and arc set A. Associated with each arc [email protected] is a non-negative distance d(e) and a non-negative cost c(e) of removing it from N. 
Let B be the total amount available to spend on removing arcs. The most vital arcs problem (MVAP) is the problem of finding a subset K of arcs such that @S\"e\"@e\"Kc(e) =< B and whose removal from N results in the greatest increase in the shortest distance between two specified nodes s and t in V. We show this problem to be NP-hard. In addition, we show that a closely related problem, whose solution provides a lower bound on the optimal solution value of MVAP, is polynomially solvable.", "", "Interest in network interdiction has been rekindled because of attempts to reduce the flow of drugs and precursor chemicals moving through river and road networks in South America. This paper considers a problem in which an enemy attempts to maximize flow through a capacitated network while an interdictor tries to minimize this maximum flow by interdicting (stopping flow on) network arcs using limited resources. This problem is shown to be NP-complete even when the interdiction of an arc requires exactly one unit of resource. New, flexible, integer programming models are developed for the problem and its variations and valid inequalities and a reformulation are derived to tighten the LP relaxations of some of these models. A small computational example from the literature illustrates a hybrid (partly directed and partly undirected) model and the usefulness of the valid inequalities and the reformulation.", "", "", "Given an undirected graph with weights on its vertices, the k most vital nodes independent set (k most vital nodes vertex cover) problem consists of determining a set of k vertices whose removal results in the greatest decrease in the maximum weight of independent sets (minimum weight of vertex covers, respectively). 
We also consider the complementary problems, minimum node blocker independent set (minimum node blocker vertex cover) that consists of removing a subset of vertices of minimum size such that the maximum weight of independent sets (minimum weight of vertex covers, respectively) in the remaining graph is at most a specified value. We show that these problems are NP-hard on bipartite graphs but polynomial-time solvable on unweighted bipartite graphs. Furthermore, these problems are polynomial also on cographs and graphs of bounded treewidth. Results on the non-existence of ptas are presented, too.", "Given a directed graph G=(V,A) with a non-negative weight (length) function on its arcs w:A→ℝ+ and two terminals s,t∈V, our goal is to destroy all short directed paths from s to t in G by eliminating some arcs of A. This is known as the short paths interdiction problem. We consider several versions of it, and in each case analyze two subcases: total limited interdiction, when a fixed number k of arcs can be removed, and node-wise limited interdiction, when for each node v∈V a fixed number k(v) of out-going arcs can be removed. Our results indicate that the latter subcase is always easier than the former one. In particular, we show that the short paths node-wise interdiction problem can be efficiently solved by an extension of Dijkstra’s algorithm. In contrast, the short paths total interdiction problem is known to be NP-hard. We strengthen this hardness result by deriving the following inapproximability bounds: Given k, it is NP-hard to approximate within a factor c<2 the maximum s–t distance d(s,t) obtainable by removing (at most) k arcs from G. Furthermore, given d, it is NP-hard to approximate within a factor @math the minimum number of arcs which has to be removed to guarantee d(s,t)≥d. 
Finally, we also show that the same inapproximability bounds hold for undirected graphs and or node elimination.", "In the Packing Interdiction problem we are given a packing LP together with a separate interdiction cost for each LP variable and a global interdiction budget. Our goal is to harm the LP: which variables should we forbid the LP from using (subject to forbidding variables of total interdiction cost at most the budget) in order to minimize the value of the resulting LP? Interdiction problems on graphs (interdicting the maximum flow, the shortest path, the minimum spanning tree, etc.) have been considered before; here we initiate a study of interdicting packing linear programs. Zenklusen showed that matching interdiction, a special case, is NP-hard and gave a 4-approximation for unit edge weights. We obtain an constant-factor approximation to the matching interdiction problem without the unit weight assumption. This is a corollary of our main result, an O(logq · min q, logk )-approximation to Packing Interdiction where q is the row-sparsity of the packing LP and k is the column-sparsity." ] }
1508.01448
2951535568
Network interdiction problems are a natural way to study the sensitivity of a network optimization problem with respect to the removal of a limited set of edges or vertices. One of the oldest and best-studied interdiction problems is minimum spanning tree (MST) interdiction. Here, an undirected multigraph with nonnegative edge weights and positive interdiction costs on its edges is given, together with a positive budget B. The goal is to find a subset of edges R, whose total interdiction cost does not exceed B, such that removing R leads to a graph where the weight of an MST is as large as possible. Frederickson and Solis-Oba (SODA 1996) presented an O(log m)-approximation for MST interdiction, where m is the number of edges. Since then, no further progress has been made regarding approximations, and the question whether MST interdiction admits an O(1)-approximation remained open. We answer this question in the affirmative, by presenting a 14-approximation that overcomes two main hurdles that hindered further progress so far. Moreover, based on a well-known 2-approximation for the metric traveling salesman problem (TSP), we show that our O(1)-approximation for MST interdiction implies an O(1)-approximation for a natural interdiction version of metric TSP.
A related line of research is the study of a continuous version of interdiction problems, where the weight of edges can be increased continuously at a given weight-per-cost ratio that depends on the edge. These models are typically much more tractable than their discrete counterparts, i.e., the classical interdiction problems. The reason is that they can often be written as a single linear program. In particular, efficient algorithms for continuous interdiction have been obtained for the maximum weight independent set in a matroid @cite_25, maximum weight common independent sets in two matroids, and the minimum cost circulation problem @cite_39.
{ "cite_N": [ "@cite_25", "@cite_39" ], "mid": [ "2047684506", "2067993964" ], "abstract": [ "Abstract The problems of computing the maximum increase in the weight of the minimum spanning trees of a graph caused by the removal of a given number of edges, or by finite increases in the weights of the edges, are investigated. For the case of edge removals, the problem is shown to be NP-hard and an Ω(1 log k )-approximation algorithm is presented for it, where (input parameter) k > 1 is the number of edges to be removed. The second problem is studied, assuming that the increase in the weight of an edge has an associated cost proportional to the magnitude of the change. An O ( n 3 m 2 log( n 2 m )) time algorithm is presented to solve it.", "In this paper we give a method for solving certain budgeted optimization problems in strongly polynomial time. The method can be applied to several known budgeted problems, and in addition we show two new applications. The first one extends Frederickson’s and Solis-Oba’s result [G. N. Frederickson and R. Solis-Oba, Combinatorica, 18 (1998), pp. 503-518] to (poly)matroid intersections from single matroids. The second one is the budgeted version of the minimum cost circulation problem." ] }
1508.01448
2951535568
Network interdiction problems are a natural way to study the sensitivity of a network optimization problem with respect to the removal of a limited set of edges or vertices. One of the oldest and best-studied interdiction problems is minimum spanning tree (MST) interdiction. Here, an undirected multigraph with nonnegative edge weights and positive interdiction costs on its edges is given, together with a positive budget B. The goal is to find a subset of edges R, whose total interdiction cost does not exceed B, such that removing R leads to a graph where the weight of an MST is as large as possible. Frederickson and Solis-Oba (SODA 1996) presented an O(log m)-approximation for MST interdiction, where m is the number of edges. Since then, no further progress has been made regarding approximations, and the question whether MST interdiction admits an O(1)-approximation remained open. We answer this question in the affirmative, by presenting a 14-approximation that overcomes two main hurdles that hindered further progress so far. Moreover, based on a well-known 2-approximation for the metric traveling salesman problem (TSP), we show that our O(1)-approximation for MST interdiction implies an O(1)-approximation for a natural interdiction version of metric TSP.
We highlight that @cite_10 claims to present a @math -approximation for the @math most vital edges problem for MST. However, the results in @cite_10 are based on an erroneous lemma about spanning trees. We provide details on this erroneous lemma in the appendix of the long version of the paper.
{ "cite_N": [ "@cite_10" ], "mid": [ "2076304834" ], "abstract": [ "For a connected, undirected and weighted graph G = (V, E), the problem of finding the k most vital edges of G with respect to minimum spanning tree is to find k edges in G whose removal will cause greatest weight increase in the minimum spanning tree of the remaining graph. This problem is known to be N P-hard for arbitrary k. In this paper, we first describe a simple exact algorithm for this problem, based on the approach of edge replacement in the minimum spanning tree of G. Next we present polynomial-time randomized algorithms that produce optimal and approximate solutions to this problem. For |V| = n and |E| = m, our algorithm producing optimal solution has a time complexity of O(mn) with probability of success at least e - √ k-2 k-2 , which is 0.90 fork > 200 and asymptotically 1 when k goes to infinity. The algorithm producing approximate solution runs in O(mn+nk 2 log k) time with probability of success at least 1- 1 4 (2 n ) k 2-2 , which is 0.998 for k > 10, and produces solution within factor 2 to the optimal one. Finally we show that both of our randomized algorithms can be easily parallelized. On a CREW PRAM, the first algorithm runs in O(n) time using n 2 processors, and the second algorithm runs in O(log 2 n) time using mn log n processors and hence is RNC." ] }
1508.01239
2293297835
We optimize multiway equijoins on relational tables using degree information. We give a new bound that uses degree information to more tightly bound the maximum output size of a query. On real data, our bound on the number of triangles in a social network can be up to @math times tighter than existing worst case bounds. We show that using only a constant amount of degree information, we are able to obtain join algorithms with a running time that has a smaller exponent than existing algorithms-- for any database instance . We also show that this degree information can be obtained in nearly linear time, which yields asymptotically faster algorithms in the serial setting and lower communication algorithms in the MapReduce setting. In the serial setting, the data complexity of join processing can be expressed as a function @math in terms of input size @math and output size @math in which @math depends on the query. An upper bound for @math is given by fractional hypertreewidth. We are interested in situations in which we can get algorithms for which @math is strictly smaller than the fractional hypertreewidth. We say that a join can be processed in subquadratic time if @math . Building on the AYZ algorithm for processing cycle joins in quadratic time, for a restricted class of joins which we call @math -series-parallel graphs, we obtain a complete decision procedure for identifying subquadratic solvability (subject to the @math -SUM problem requiring quadratic time). Our @math -SUM based quadratic lower bound is tight, making it the only known tight bound for joins that does not require any assumption about the matrix multiplication exponent @math . We also give a MapReduce algorithm that meets our improved communication bound and handles essentially optimal parallelism.
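As a minimal illustration of the kind of degree-aware computation that the bounds above tighten, the standard degree-ordered triangle enumeration runs in O(m^(3/2)) time versus O(n^3) naively. The function name is hypothetical and this is not the paper's algorithm, only a sketch of why degree information helps:

```python
# Count triangles by orienting each edge toward its higher-degree endpoint,
# then intersecting out-neighbourhoods; each vertex's out-degree is bounded
# by O(sqrt(m)), which yields the O(m^(3/2)) bound.
from collections import defaultdict


def count_triangles(edges):
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Rank by (degree, id): the id tie-break makes the orientation acyclic.
    rank = {v: (deg[v], v) for v in deg}
    out = {v: set() for v in deg}
    for u, v in edges:
        if rank[u] < rank[v]:
            out[u].add(v)
        else:
            out[v].add(u)
    return sum(len(out[u] & out[v]) for u in out for v in out[u])


# The complete graph K4 contains 4 triangles.
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(count_triangles(k4))  # → 4
```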
Marx's work @cite_16 uses a stronger partitioning technique to fully characterize the fixed-parameter tractability of joins in terms of the submodular width of their hypergraphs. Marx achieves degree-uniformity within all small projections of the output, while we only achieve uniform degrees within relations. Marx's preprocessing is expensive; the technique as written in Section 4 of his paper @cite_16 takes time @math where @math is the submodular width of the join hypergraph. This preprocessing is potentially more expensive than the join processing itself. Our algorithms run in time @math with @math for several joins. Marx did not attempt to minimize this exponent, as his application was concerned with fixed parameter tractability. We were unable to find an easy way to achieve @math runtime for Marx's technique.
{ "cite_N": [ "@cite_16" ], "mid": [ "2953012544" ], "abstract": [ "An important question in the study of constraint satisfaction problems (CSP) is understanding how the graph or hypergraph describing the incidence structure of the constraints influences the complexity of the problem. For binary CSP instances (i.e., where each constraint involves only two variables), the situation is well understood: the complexity of the problem essentially depends on the treewidth of the graph of the constraints. However, this is not the correct answer if constraints with unbounded number of variables are allowed, and in particular, for CSP instances arising from query evaluation problems in database theory. Formally, if H is a class of hypergraphs, then let CSP(H) be CSP restricted to instances whose hypergraph is in H. Our goal is to characterize those classes of hypergraphs for which CSP(H) is polynomial-time solvable or fixed-parameter tractable, parameterized by the number of variables. Note that in the applications related to database query evaluation, we usually assume that the number of variables is much smaller than the size of the instance, thus parameterization by the number of variables is a meaningful question. The most general known property of H that makes CSP(H) polynomial-time solvable is bounded fractional hypertree width. Here we introduce a new hypergraph measure called submodular width, and show that bounded submodular width of H implies that CSP(H) is fixed-parameter tractable. In a matching hardness result, we show that if H has unbounded submodular width, then CSP(H) is not fixed-parameter tractable, unless the Exponential Time Hypothesis fails." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
It has been almost 25 years since Chidamber and Kemerer published their influential paper on OO metrics at OOPSLA'91 @cite_10 . Since then, OO metrics have been used pervasively in research and development. Here, we review and discuss the main issues with OO metrics, and the research community's attempts to understand the empirically-based principles of software.
{ "cite_N": [ "@cite_10" ], "mid": [ "2047345132" ], "abstract": [ "While software metrics are a generally desirable feature in the software management functions of project planning and project evaluation, they are of especial importance with a new technology such as the object-oriented approach. This is due to the significant need to train software engineers in generally accepted object-oriented principles. This paper presents theoretical work that builds a suite of metrics for object-oriented design. In particular, these metrics are based upon measurement theory and are informed by the insights of experienced object-oriented software developers. The proposed metrics are formally evaluated against a widelyaccepted list of software metric evaluation criteria." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
For quite some time, however, size has been known to be a potential confounding factor in empirical studies of software artifacts. For example, in a study designed to verify whether a multivariate logistic regression model based on OO metrics can predict faults in OO programs, the authors of @cite_25 reported strong correlations between class size and several OO software metrics; they then compensated for that correlation by computing partial correlations. Another study of a large C++ system @cite_11 also reported such correlations. In 2001, the authors of @cite_13 presented a comprehensive analysis of the effect of class size on several OO metrics, and suggested that this effect might have confounded prior studies. We refer readers to @cite_13 for an extensive list of studies that, the authors suggest, may have reached invalid conclusions by neglecting to compensate for size. They then presented their own study of a large C++ framework showing that strong correlations found in univariate analyses were neutralized when a multivariate analysis including class size was used. Another, more recent study reached the same conclusions when examining the relation between internal software attributes and component utilization @cite_27 .
{ "cite_N": [ "@cite_27", "@cite_13", "@cite_25", "@cite_11" ], "mid": [ "2047988765", "2160538621", "2015729052", "2139085711" ], "abstract": [ "One of the perceived values of open source software is the idea that many eyes can increase code quality and reduce the amount of bugs. This perception, however, has been questioned by some due the lack of supporting evidence. This paper presents an empirical analysis focusing on the relationship between the utilization of open source components and their engineering quality. In this study, we determine the popularity of 2,406 Maven components by calculating their usage across 55,191 open source Java projects. As a proxy of code quality for a component, we calculate (i) its defect density using the set of bug patterns reported by Find Bugs, and (ii) 9 popular software quality metrics from the SQO-OSS quality model. We then look for correlations between (i) popularity and defect density, and (ii) popularity and software quality metrics. In most of the cases, no correlations were found. In cases where minor correlations exist, they are driven by component size. Statistically speaking, and using the methods in this study, the Maven repository does not seem to support the \"many eyeballs\" effect. We conjecture that the utilization of open source components is driven by factors other than their engineering quality, an interpretation that is supported by the findings in this study.", "Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. 
We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine if size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies. The metrics that are expected to be validated are indeed associated with fault-proneness.", "Abstract One goal of this paper is to empirically explore the relationships between existing object-oriented (OO) coupling, cohesion, and inheritance measures and the probability of fault detection in system classes during testing. In other words, we wish to better understand the relationship between existing design measurement in OO systems and the quality of the software developed. The second goal is to propose an investigation and analysis strategy to make these kind of studies more repeatable and comparable, a problem which is pervasive in the literature on quality measurement. Results show that many of the measures capture similar dimensions in the data set, thus reflecting the fact that many of them are based on similar principles and hypotheses. However, it is shown that by using a subset of measures, accurate models can be built to predict which classes most of the faults are likely to lie in. When predicting fault-prone classes, the best model shows a percentage of correct classifications higher than 80 and finds more than 90 of faulty classes. 
Besides the size of classes, the frequency of method invocations and the depth of inheritance hierarchies seem to be the main driving factors of fault-proneness.", "The paper describes an empirical investigation into an industrial object oriented (OO) system comprised of 133000 lines of C++. The system was a subsystem of a telecommunications product and was developed using the Shlaer-Mellor method (S. Shlaer and S.J. Mellor, 1988; 1992). From this study, we found that there was little use of OO constructs such as inheritance, and therefore polymorphism. It was also found that there was a significant difference in the defect densities between those classes that participated in inheritance structures and those that did not, with the former being approximately three times more defect-prone. We were able to construct useful prediction systems for size and number of defects based upon simple counts such as the number of states and events per class. Although these prediction systems are only likely to have local significance, there is a more general principle that software developers can consider building their own local prediction systems. Moreover, we believe this is possible, even in the absence of the suites of metrics that have been advocated by researchers into OO technology. As a consequence, measurement technology may be accessible to a wider group of potential users." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
However, this argument has drawn criticism from the point of view that multivariate analyses of the kind proposed in these papers produce ill-specified, logically inconsistent statistical models @cite_3 . Specifically, the partial correlation of @math and @math controlling for a third variable @math , written @math , is a measure of the relationship between X and Y if we statistically hold Z constant. But trying to predict, for example, the effect on post-release defects @math of increasing the coupling value @math while holding the number of lines of code @math constant does not make sense, because in the world the data comes from, increasing coupling usually requires additional lines of code (e.g. field and variable declarations). As Evanco points out @cite_3 , such a model is inconsistent with the reality of the data. The suggestion following this criticism is that prediction models should use either the metric in question ( @math ) or the size metric ( @math ), whichever gives more predictive power, but not both.
{ "cite_N": [ "@cite_3" ], "mid": [ "2100157906" ], "abstract": [ "It has been proposed by El (ibid. vol.27 (7), 2001) that size should be taken into account as a confounding variable when validating object-oriented metrics. We take issue with this perspective since the ability to measure size does not temporally precede the ability to measure many of the object-oriented metrics that have been proposed. Hence, the condition that a confounding variable must occur causally prior to another explanatory variable is not met. In addition, when specifying multivariate models of defects that incorporate object-oriented metrics, entering size as an explanatory variable may result in misspecified models that lack internal consistency. Examples are given where this misspecification occurs." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
In their study of slice-based cohesion and coupling metrics over 63 C programs, Meyers and Binkley @cite_28 include correlation coefficients between several coupling and cohesion metrics and Lines of Code (LOC), and show that they are not correlated. We noted that their correlation analysis was performed on the entire dataset, which contained components of considerably different sizes; this made the analysis prone to skewness-related errors. In subsequent email exchanges, one of the authors kindly shared the data with us; we then verified that, indeed, the distribution of component sizes was not normal but log-normal. Once the transformation to log scale was performed, the data showed a moderate-to-strong positive linear correlation between log(size) and their coupling metric.
{ "cite_N": [ "@cite_28" ], "mid": [ "1993126296" ], "abstract": [ "Software reengineering is a costly endeavor, due in part to the ambiguity of where to focus reengineering effort. Coupling and Cohesion metrics, particularly quantitative cohesion metrics, have the potential to aid in this identification and to measure progress. The most extensive work on such metrics is with slice-based cohesion metrics. While their use of semantic dependence information should make them an excellent choice for cohesion measurement, their wide spread use has been impeded in part by a lack of empirical study. Recent advances in software tools make, for the first time, a large-scale empirical study of slice-based cohesion and coupling metrics possible. Four results from such a study are presented. First, “head-to-head” qualitative and quantitative comparisons of the metrics identify which metrics provide similar views of a program and which provide unique views of a program. This study includes statistical analysis showing that slice-based metrics are not proxies for simple size-based metrics such as lines of code. Second, two longitudinal studies show that slice-based metrics quantify the deterioration of a program as it ages. This serves to validate the metrics: the metrics quantify the degradation that exists during development; turning this around, the metrics can be used to measure the progress of a reengineering effort. Third, baseline values for slice-based metrics are provided. These values act as targets for reengineering efforts with modules having values outside the expected range being the most in need of attention. Finally, slice-based coupling is correlated and compared with slice-based cohesion." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
In recent years, there has been an increasing number of empirical studies on increasingly larger collections of software projects, aimed at understanding the way developers use programming languages in real projects. For example, @cite_20 studied the way Java programs use inheritance in the 100 projects of the Qualitas corpus @cite_29 . The criteria for inclusion of projects in that corpus are relatively strict, requiring, for example, distribution in both source and binary forms. See https://www.cs.auckland.ac.nz/~ewan/corpus/docs/criteria.html While their findings fall within the results reported here, the Qualitas corpus contains only 100 projects. The results reported in @cite_20 show that the data does not follow a normal distribution. Another study on the same corpus explored the simulated use of multiple dispatch via cascading instanceof statements @cite_2 . A study by Gil and Lenz @cite_6 examined the use of overloading in Java programs, also using the Qualitas corpus. Some of the conclusions in these studies (e.g. whether a project is an outlier or not) may be missing the effect of project size.
{ "cite_N": [ "@cite_29", "@cite_6", "@cite_20", "@cite_2" ], "mid": [ "2095938258", "1570764682", "1598215108", "2123528931" ], "abstract": [ "In order to increase our ability to use measurement to support software development practise we need to do more analysis of code. However, empirical studies of code are expensive and their results are difficult to compare. We describe the Qualitas Corpus, a large curated collection of open source Java systems. The corpus reduces the cost of performing large empirical studies of code and supports comparison of measurements of the same artifacts. We discuss its design, organisation, and issues associated with its development.", "Method overloading is a controversial language feature, especially in the context of Object Oriented languages, where its interaction with overriding may lead to confusing semantics. One of the main arguments against overloading is that it can be abused by assigning the same identity to conceptually different methods. This paper describes a study of the actual use of overloading in JAVA. To this end, we developed a taxonomy of classification of the use of overloading, and applied it to a large JAVA corpus comprising more than 100,000 user defined types. We found that more than 14 of the methods in the corpus are overloaded. Using sampling and evaluation by human raters we found that about 60 of overloaded methods follow one of the \"non ad hoc use of overloading patterns\" and that additional 20 can be easily rewritten in this form. The most common pattern is the use of overloading as an emulation of default arguments, a mechanism which does not exist in JAVA.", "Inheritance is a crucial part of object-oriented programming, but its use in practice, and the resulting large-scale inheritance structures in programs, remain poorly understood. 
Previous studies of inheritance have been relatively small and have generally not considered issues such as Java's distinction between classes and interfaces, nor have they considered the use of external libraries. In this paper we present the first substantial empirical study of the large-scale use of inheritance in a contemporary OO programming language. We present a suite of structured metrics for quantifying inheritance in Java programs. We present the results of performing a corpus analysis using those metrics to over 90 applications consisting of over 100,000 separate classes and interfaces. Our analysis finds higher use of inheritance than anticipated, variation in the use of inheritance between interfaces and classes, and differences between inheritance within application types compared with inheritance from external libraries.", "Multiple dispatch uses the run time types of more than one argument to a method call to determine which method body to run. While several languages over the last 20 years have provided multiple dispatch, most object-oriented languages still support only single dispatch forcing programmers to implement multiple dispatch manually when required. This paper presents an empirical study of the use of multiple dispatch in practice, considering six languages that support multiple dispatch, and also investigating the potential for multiple dispatch in Java programs. We hope that this study will help programmers understand the uses and abuses of multiple dispatch; virtual machine implementors optimise multiple dispatch; and language designers to evaluate the choice of providing multiple dispatch in new programming languages." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
The authors of @cite_16 performed a statistical analysis of 1,000 Smalltalk projects found in SqueakSource in order to understand the use of certain dynamic features of Smalltalk. They do not report the distribution of project sizes. The study was designed to gather bulk statistics along an existing taxonomy, so the results are reported as simple counts of feature occurrences across the whole corpus or within a category of projects (e.g. out of 652,990 methods, only 8,349 use dynamic features, with a breakdown shown per category). While the taxonomy is taken into account in the analysis of the data, project size is not. It would be interesting to see whether there is a correlation between the categories and the size of the projects.
{ "cite_N": [ "@cite_16" ], "mid": [ "2112939580" ], "abstract": [ "The dynamic and reflective features of programming languages are powerful constructs that programmers often mention as extremely useful. However, the ability to modify a program at runtime can be both a boon-in terms of flexibility-, and a curse-in terms of tool support. For instance, usage of these features hampers the design of type systems, the accuracy of static analysis techniques, or the introduction of optimizations by compilers. In this paper, we perform an empirical study of a large Smalltalk codebase- often regarded as the poster-child in terms of availability of these features-, in order to assess how much these features are actually used in practice, whether some are used more than others, and in which kinds of projects. These results are useful to make informed decisions about which features to consider when designing language extensions or tool support." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
The authors of @cite_22 randomly collected 1,132 jar files from the Internet and analyzed them (at the bytecode level) using a tool they developed. The purpose of that study was to inform Java language designers and implementers about how developers actually use the language. The study reports summary statistics for the entire dataset without taking the distribution of jar sizes into account. Most distributions shown in the paper are not normal, so the summary statistics are somewhat misleading. Some of the reported metrics in that study are the same metrics we use in our study; for example, they found on average 9 methods per class, with a median of 5. The reported values fall within the range of ours, but particularly close to the values for large projects, which leads us to believe that their dataset was biased towards large projects.
{ "cite_N": [ "@cite_22" ], "mid": [ "2136836627" ], "abstract": [ "We present a study of the static structure of real Java bytecode programs. A total of 1132 Java jar-files were collected from the Internet and analyzed. In addition to simple counts (number of methods per class, number of bytecode instructions per method, etc.), structural metrics such as the complexity of control-flow and inheritance graphs were computed. We believe this study will be valuable in the design of future programming languages and virtual machine instruction sets, as well as in the efficient implementation of compilers and other language processors. Copyright © 2006 John Wiley & Sons, Ltd." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
In another large study, @cite_23 conducted an empirical assessment of 2,080 Java projects randomly selected from Sourceforge, and discovered several facts about the projects' use of Java. The size of the projects is not reported, and only simple statistics are given. For example, the reported mean and median number of methods per class are 3.5 and 4, respectively. Given that the data does not follow a normal distribution on project size, these values are, again, somewhat misleading and at odds with the findings of @cite_22. Like so many large open source code repositories, Sourceforge is severely skewed towards small to medium projects; the reported summary statistics are consistent with our findings for small projects.
{ "cite_N": [ "@cite_22", "@cite_23" ], "mid": [ "2136836627", "2028889016" ], "abstract": [ "We present a study of the static structure of real Java bytecode programs. A total of 1132 Java jar-files were collected from the Internet and analyzed. In addition to simple counts (number of methods per class, number of bytecode instructions per method, etc.), structural metrics such as the complexity of control-flow and inheritance graphs were computed. We believe this study will be valuable in the design of future programming languages and virtual machine instruction sets, as well as in the efficient implementation of compilers and other language processors. Copyright © 2006 John Wiley & Sons, Ltd.", "Getting insight into different aspects of source code artifacts is increasingly important -- yet there is little empirical research using large bodies of source code, and subsequently there are not much statistically significant evidence of common patterns and facts of how programmers write source code. We pose 32 research questions, explain rationale behind them, and obtain facts from 2,080 randomly chosen Java applications from Sourceforge. Among these facts we find that most methods have one or zero arguments or they do not return any values, few methods are overridden, most inheritance hierarchies have the depth of one, close to 50 of classes are not explicitly inherited from any classes, and the number of methods is strongly correlated with the number of fields in a class." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
Ours is not the first study to try to unveil internal mathematical structures of software, and the software research community is not the only one looking for mathematical principles in existing software; communities that study complex systems and networks have long found software intriguing. One of the first studies of this kind was by @cite_12, which analyzed the types and dependencies in the JDK and noticed the existence of power laws and small-world behavior. Soon after, Myers @cite_1 explored what he called "collaboration graphs" (i.e., dependencies) in three C++ and three C applications. Many more studies of this kind followed. For example, @cite_5, @cite_4, @cite_15 and @cite_24 all study the evolution of software networks, finding evidence of known mathematical principles that also exist in natural systems and that might serve as predictive models for software evolution.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_24", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "1984486859", "2145416273", "2119324190", "2034251024", "2017303659", "2026500287" ], "abstract": [ "Software systems represent one of the most complex man-made artifacts. Understanding the structure of software systems can provide useful insights into software engineering efforts and can potentially help the development of complex system models applicable to other domains. In this paper, we analyze one of the most popular open-source Linux meta packages distributions called the Gentoo Linux. In our analysis, we model software packages as nodes and dependencies among them as edges. Our empirical results show that the resulting Gentoo network cannot be easily explained by existing complex network models. This in turn motivates our research in developing two new network growth models in which a new node is connected to an old node with the probability that depends not only on the degree but also on the “age” of the old node. Through computational and empirical studies, we demonstrate that our models have better explanatory power than the existing ones. In an effort to further explore the properties of these new models, we also present some related analytical results.", "Software systems emerge from mere keystrokes to form intricate functional networks connecting many collaborating modules, objects, classes, methods, and subroutines. Building on recent advances in the study of complex networks, I have examined software collaboration graphs contained within several open-source software systems, and have found them to reveal scale-free, small-world networks similar to those identified in other technological, sociological, and biological systems. I present several measures of these network topologies, and discuss their relationship to software engineering practices. 
I also present a simple model of software system evolution based on refactoring processes which captures some of the salient features of the observed systems. Some implications of object-oriented design for questions about network robustness, evolvability, degeneracy, and organization are discussed in the wake of these findings.", "The development of a complex system depends on the self-coordinated action of a large number of agents, often determining unexpected global behavior. The case of software evolution has great practical importance: knowledge of what is to be considered atypical can guide developers in recognizing and reacting to abnormal behavior. Although the initial framework of a theory of software exists, the current theoretical achievements do not fully capture existing quantitative data or predict future trends. Here we show that two elementary laws describe the evolution of package sizes in a Linux-based operating system: first, relative changes in size follow a random walk with non-Gaussian jumps; second, each size change is bounded by a limit that is dependent on the starting size, an intriguing behavior that we call “soft bound.” Our approach is based on data analysis and on a simple theoretical model, which is able to reproduce empirical details without relying on any adjustable parameter and generates definite predictions. The same analysis allows us to formulate and support the hypothesis that a similar mechanism is shaping the distribution of mammalian body sizes, via size-dependent constraints during cladogenesis. Whereas generally accepted approaches struggle to reproduce the large-mass shoulder displayed by the distribution of extant mammalian species, this is a natural consequence of the softly bounded nature of the process. 
Additionally, the hypothesis that this model is valid has the relevant implication that, contrary to a common assumption, mammalian masses are still evolving, albeit very slowly.", "In a recent paper, Krapivsky and Redner (Phys. Rev. E, 71 (2005) 036118) proposed a new growing network model with new nodes being attached to a randomly selected node, as well to all ancestors of the target node. The model leads to a sparse graph with an average degree growing logarithmically with the system size. Here we present compeling evidence for software networks being the result of a similar class of growing dynamics. The predicted pattern of network growth, as well as the stationary in- and out-degree distributions are consistent with the model. Our results confirm the view of large-scale software topology being generated through duplication-rewiring mechanisms. Implications of these findings are outlined.", "“Evolution behaves like a tinkerer” (Francois Jacob, Science, 1977). Software systems provide a singular opportunity to understand biological processes using concepts from network theory. The Debian GNU Linux operating system allows us to explore the evolution of a complex network in a unique way. The modular design detected during its growth is based on the reuse of existing code in order to minimize costs during programming. The increase of modularity experienced by the system over time has not counterbalanced the increase in incompatibilities between software packages within modules. This negative effect is far from being a failure of design. A random process of package installation shows that the higher the modularity, the larger the fraction of packages working properly in a local computer. The decrease in the relative number of conflicts between packages from different modules avoids a failure in the functionality of one package spreading throughout the entire system. 
Some potential analogies with the evolutionary and ecological processes determining the structure of ecological networks of interacting species are discussed.", "A large number of complex networks, both natural and artificial, share the presence of highly heterogeneous, scale-free degree distributions. A few mechanisms for the emergence of such patterns have been suggested, optimization not being one of them. In this letter we present the first evidence for the emergence of scaling (and the presence of small-world behavior) in software architecture graphs from a well-defined local optimization process. Although the rules that define the strategies involved in software engineering should lead to a tree-like structure, the final net is scale-free, perhaps reflecting the presence of conflicting constraints unavoidable in a multidimensional optimization process. The consequences for other complex networks are outlined." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
Closer to our work, a study presented in 2006 by @cite_17 also targeted the "Lego Hypothesis," as coined by the authors. That study, which built on an earlier one by the same group @cite_21, searched for the existence of power laws and other mathematical functions in a collection of 56 Java applications using 17 OO metrics, such as the number of methods per type and the number of dependencies per type. For each of those 56 applications, the study revealed whether the 17 metrics' data points could fit the mathematical functions of interest. The study found that very few projects, and in only very few metrics, had strict power-law distributions; most projects, and in most metrics, revealed only reasonable fits (at the 80% level) to the functions they were searching for. Another study, by @cite_14, examined the existence of power laws in a variety of applications written in a variety of languages.
{ "cite_N": [ "@cite_14", "@cite_21", "@cite_17" ], "mid": [ "2124418175", "2070787093", "1972458945" ], "abstract": [ "A single statistical framework, comprising power law distributions and scale-free networks, seems to fit a wide variety of phenomena. There is evidence that power laws appear in software at the class and function level. We show that distributions with long, fat tails in software are much more pervasive than previously established, appearing at various levels of abstraction, in diverse systems and languages. The implications of this phenomenon cover various aspects of software engineering research and practice.", "Though conventional OO design suggests programs should be built from many small objects, like Lego bricks, they are instead built from objects that are scale-free, like fractals, and unlike Lego bricks.", "Large amounts of Java software have been written since the language's escape into unsuspecting software ecology more than ten years ago. Surprisingly little is known about the structure of Java programs in the wild: about the way methods are grouped into classes and then into packages, the way packages relate to each other, or the way inheritance and composition are used to put these programs together. We present the results of the first in-depth study of the structure of Java programs. We have collected a number of Java programs and measured their key structural attributes. We have found evidence that some relationships follow power-laws, while others do not. We have also observed variations that seem related to some characteristic of the application itself. This study provides important information for researchers who can investigate how and why the structural relationships we find may have originated, what they portend, and how they can be managed." ] }
1508.00628
2953267737
Many internal software metrics and external quality attributes of Java programs correlate strongly with program size. This knowledge has been used pervasively in quantitative studies of software through practices such as normalization on size metrics. This paper reports size-related super- and sublinear effects that have not been known before. Findings obtained on a very large collection of Java programs -- 30,911 projects hosted at Google Code as of Summer 2011 -- unveils how certain characteristics of programs vary disproportionately with program size, sometimes even non-monotonically. Many of the specific parameters of nonlinear relations are reported. This result gives further insights for the differences of "programming in the small" vs. "programming in the large." The reported findings carry important consequences for OO software metrics, and software research in general: metrics that have been known to correlate with size can now be properly normalized so that all the information that is left in them is size-independent.
All of these studies largely ignore application size, and focus on the modules themselves (i.e., classes and interfaces). In the study by @cite_17, the results are ordered by application size, and even grouped within size ranges; but no insights are given regarding the effect, if any, that application size may have on the observations. We believe our study is complementary to all of these prior studies in the search for mathematical laws in software applications, because it focuses on the size of the application as a whole, not just on the size of each OO module.
{ "cite_N": [ "@cite_17" ], "mid": [ "1972458945" ], "abstract": [ "Large amounts of Java software have been written since the language's escape into unsuspecting software ecology more than ten years ago. Surprisingly little is known about the structure of Java programs in the wild: about the way methods are grouped into classes and then into packages, the way packages relate to each other, or the way inheritance and composition are used to put these programs together. We present the results of the first in-depth study of the structure of Java programs. We have collected a number of Java programs and measured their key structural attributes. We have found evidence that some relationships follow power-laws, while others do not. We have also observed variations that seem related to some characteristic of the application itself. This study provides important information for researchers who can investigate how and why the structural relationships we find may have originated, what they portend, and how they can be managed." ] }
1508.00998
2953314481
In this paper we present a method for the estimation of the color of the illuminant in RAW images. The method includes a Convolutional Neural Network that has been specially designed to produce multiple local estimates. A multiple illuminant detector determines whether or not the local outputs of the network must be aggregated into a single estimate. We evaluated our method on standard datasets with single and multiple illuminants, obtaining lower estimation errors with respect to those obtained by other general purpose methods in the state of the art.
Since the only information available is the set of sensor responses @math across the image, color constancy is an under-determined problem @cite_8, and thus further assumptions and/or knowledge are needed to solve it. Several computational color constancy algorithms have been proposed, each based on different assumptions. The most common assumption is that the color of the light source is uniform across the scene, i.e. @math. The next two sections review single- and multiple-illuminant estimation algorithms in the state of the art.
{ "cite_N": [ "@cite_8" ], "mid": [ "1491192762" ], "abstract": [ "This paper presents a negative result: current machine colour constancy algorithms are not good enough for colour-based object recognition. This result has surprised us since we have previously used the better of these algorithms successfully to correct the colour balance of images for display. Colour balancing has been the typical application of colour constancy, rarely has it been actually put to use in a computer vision system, so our goal was to show how well the various methods would do on an obvious machine colour vision task, namely, object recognition. Although all the colour constancy methods we tested proved insufficient for the task, we consider this an important finding in itself. In addition we present results showing the correlation between colour constancy performance and object recognition performance, and as one might expect, the better the colour constancy the better the recognition rate." ] }
1508.00776
2139125359
Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality large scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution consists in re-using object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT.
approaches consist in building object tracks by associating detections precomputed over the whole video sequence. Recent works include the network flow approach of Pirsiavash et al. @cite_49 (DP), global energy minimization @cite_14 (CEM), two-granularity tracking @cite_11, Hungarian matching @cite_53, and the hybrid stochastic-deterministic approach of Collins and Carr @cite_12. These approaches rely heavily on the quality of the pre-trained detector, as tracks are formed only from pre-determined detections. Furthermore, they are generally applied off-line and are not always applicable to the streaming scenario.
{ "cite_N": [ "@cite_14", "@cite_53", "@cite_49", "@cite_12", "@cite_11" ], "mid": [ "2083049794", "", "2016135469", "2141781852", "58749160" ], "abstract": [ "Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets.", "", "We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. 
Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance.", "Although ‘tracking-by-detection’ is a popular approach when reliable object detectors are available, missed detections remain a difficult hurdle to overcome. We present a hybrid stochastic deterministic optimization scheme that uses RJMCMC to perform stochastic search over the space of detection configurations, interleaved with deterministic computation of the optimal multi-frame data association for each proposed detection hypothesis. Since object trajectories do not need to be estimated directly by the sampler, our approach is more efficient than traditional MCMCDA techniques. Moreover, our holistic formulation is able to generate longer, more reliable trajectories than baseline tracking-by-detection approaches in challenging multi-target scenarios.", "We propose a tracking framework that mediates grouping cues from two levels of tracking granularities, detection tracklets and point trajectories, for segmenting objects in crowded scenes. Detection tracklets capture objects when they are mostly visible. They may be sparse in time, may miss partially occluded or deformed objects, or contain false positives. Point trajectories are dense in space and time. Their affinities integrate long range motion and 3D disparity information, useful for segmentation. Affinities may leak though across similarly moving objects, since they lack model knowledge. We establish one trajectory and one detection tracklet graph, encoding grouping affinities in each space and associations across. 
Two-granularity tracking is cast as simultaneous detection tracklet classification and clustering (cl2) in the joint space of tracklets and trajectories. We solve cl2 by explicitly mediating contradictory affinities in the two graphs: Detection tracklet classification modifies trajectory affinities to reflect object specific dis-associations. Non-accidental grouping alignment between detection tracklets and trajectory clusters boosts or rejects corresponding detection tracklets, changing accordingly their classification.We show our model can track objects through sparse, inaccurate detections and persistent partial occlusions. It adapts to the changing visibility masks of the targets, in contrast to detection based bounding box trackers, by effectively switching between the two granularities according to object occlusions, deformations and background clutter." ] }
1508.00776
2139125359
Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality large scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution consists in re-using object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT.
approaches are similar to CFT ones in that they learn independent instance-specific trackers from automatic detections, but the target-specific models correspond to specializations of the generic category-level model. This requires the pre-trained detector and the target-specific trackers to have the same parametric form (i.e., the same features and classifier) that works well both at the category and instance levels. This idea was recently introduced by Hall and Perona @cite_36 to track pedestrians and faces by intersecting detections from a generic boosted cascade with a target-specific, fine-tuned version of the cascade.
{ "cite_N": [ "@cite_36" ], "mid": [ "1930202536" ], "abstract": [ "A method for online, real-time tracking of objects is presented. Tracking is treated as a repeated detection problem where potential target objects are identified with a pre-trained category detector and object identity across frames is established by individual-specific detectors. The individual detectors are (re-)trained online from a single positive example whenever there is a coincident category detection. This ensures that the tracker is robust to drift. Real-time operation is possible since an individual-object detector is obtained through elementary manipulations of the thresholds of the category detector and therefore only minimal additional computations are required. Our tracking algorithm is benchmarked against nine state-of-the-art trackers on two large, publicly available and challenging video datasets. We find that our algorithm is 10 more accurate and nearly as fast as the fastest of the competing algorithms, and it is as accurate but 20 times faster than the most accurate of the competing algorithms." ] }
1508.00776
2139125359
Automatically detecting, labeling, and tracking objects in videos depends first and foremost on accurate category-level object detectors. These might, however, not always be available in practice, as acquiring high-quality large scale labeled training datasets is either too costly or impractical for all possible real-world application scenarios. A scalable solution consists in re-using object detectors pre-trained on generic datasets. This work is the first to investigate the problem of on-line domain adaptation of object detectors for causal multi-object tracking (MOT). We propose to alleviate the dataset bias by adapting detectors from category to instances, and back: (i) we jointly learn all target models by adapting them from the pre-trained one, and (ii) we also adapt the pre-trained model on-line. We introduce an on-line multi-task learning algorithm to efficiently share parameters and reduce drift, while gradually improving recall. Our approach is applicable to any linear object detector, and we evaluate both cheap "mini-Fisher Vectors" and expensive "off-the-shelf" ConvNet features. We quantitatively measure the benefit of our domain adaptation strategy on the KITTI tracking benchmark and on a new dataset (PASCAL-to-KITTI) we introduce to study the domain mismatch problem in MOT.
Related to our work, Breitenstein et al. @cite_31 track automatically detected pedestrians using a boosted classifier on low-level features to learn target-specific appearance models. Another related approach @cite_56 uses a multi-task objective to learn jointly a generic object model and trackers. It, however, does not use a pre-trained detector, but initializes targets by hand for each video, assuming that instances form a crowd of slow-moving near duplicates. Other related works @cite_10 @cite_51 @cite_34 include approaches for domain adaptation from generic to specific scene detectors for similar scenarios, although they do not learn trackers. Some other works @cite_20 @cite_23 @cite_57 do not address MOT but nonetheless perform detector adaptation specifically for videos via other means. For instance, @cite_55 puts forth a procedure to self-learn object detectors for unlabeled video streams by making use of a similar multi-task learning formulation. On the other hand, @cite_20 relies on unsupervised multiple instance learning to collect online samples for incremental learning. Finally, adaptive tracking methods often adopt selective update strategies to avoid drift, for instance by integrating unlabeled data in the model in a semi-supervised manner @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_10", "@cite_55", "@cite_56", "@cite_57", "@cite_23", "@cite_31", "@cite_34", "@cite_51", "@cite_20" ], "mid": [ "1807914171", "2133434696", "2168930216", "1991807052", "", "", "2148958980", "", "", "2143629212" ], "abstract": [ "Recently, on-line adaptation of binary classifiers for tracking have been investigated. On-line learning allows for simple classifiers since only the current view of the object from its surrounding background needs to be discriminiated. However, on-line adaption faces one key problem: Each update of the tracker may introduce an error which, finally, can lead to tracking failure (drifting). The contribution of this paper is a novel on-line semi-supervised boosting method which significantly alleviates the drifting problem in tracking applications. This allows to limit the drifting problem while still staying adaptive to appearance changes. The main idea is to formulate the update process in a semi-supervised fashion as combined decision of a given prior and an on-line classifier. This comes without any parameter tuning. In the experiments, we demonstrate real-time tracking of our SemiBoost tracker on several challenging test sequences where our tracker outperforms other on-line tracking methods.", "Typical object detectors trained on images perform poorly on video, as there is a clear distinction in domain between the two types of data. In this paper, we tackle the problem of adapting object detectors learned from images to work well on videos. We treat the problem as one of unsupervised domain adaptation, in which we are given labeled data from the source domain (image), but only unlabeled data from the target domain (video). Our approach, self-paced domain adaptation, seeks to iteratively adapt the detector by re-training the detector with automatically discovered target domain examples, starting with the easiest first. 
At each iteration, the algorithm adapts by considering an increased number of target domain examples, and a decreased number of source domain examples. To discover target domain examples from the vast amount of video data, we introduce a simple, robust approach that scores trajectory tracks instead of bounding boxes. We also show how rich and expressive features specific to the target domain can be incorporated under the same framework. We show promising results on the 2011 TRECVID Multimedia Event Detection [1] and LabelMe Video [2] datasets that illustrate the benefit of our approach to adapt object detectors to video.", "Learning object detectors requires massive amounts of labeled training samples from the specific data source of interest. This is impractical when dealing with many different sources (e.g., in camera networks), or constantly changing ones such as mobile cameras (e.g., in robotics or driving assistant systems). In this paper, we address the problem of self-learning detectors in an autonomous manner, i.e. (i) detectors continuously updating themselves to efficiently adapt to streaming data sources (contrary to transductive algorithms), (ii) without any labeled data strongly related to the target data stream (contrary to self-paced learning), and (iii) without manual intervention to set and update hyper-parameters. To that end, we propose an unsupervised, on-line, and self-tuning learning algorithm to optimize a multi-task learning convex objective. Our method uses confident but laconic oracles (high-precision but low-recall off-the-shelf generic detectors), and exploits the structure of the problem to jointly learn on-line an ensemble of instance-level trackers, from which we derive an adapted category-level object detector. Our approach is validated on real-world publicly available video object datasets.", "In this paper, we propose a label propagation framework to handle the multiple object tracking (MOT) problem for a generic object type (cf. 
pedestrian tracking). Given a target object by an initial bounding box, all objects of the same type are localized together with their identities. We treat this as a problem of propagating bi-labels, i.e. a binary class label for detection and individual object labels for tracking. To propagate the class label, we adopt clustered Multiple Task Learning (cMTL) while enforcing spatio-temporal consistency and show that this improves the performance when given limited training data. To track objects, we propagate labels from trajectories to detections based on affinity using appearance, motion, and context. Experiments on public and challenging new sequences show that the proposed method improves over the current state of the art on this task.", "", "", "In this paper, we address the problem of automatically detecting and tracking a variable number of persons in complex scenes using a monocular, potentially moving, uncalibrated camera. We propose a novel approach for multiperson tracking-by-detection in a particle filtering framework. In addition to final high-confidence detections, our algorithm uses the continuous confidence of pedestrian detectors and online-trained, instance-specific classifiers as a graded observation model. Thus, generic object category knowledge is complemented by instance-specific information. The main contribution of this paper is to explore how these unreliable information sources can be used for robust multiperson tracking. The algorithm detects and tracks a large number of dynamically moving people in complex scenes with occlusions, does not rely on background modeling, requires no camera or ground plane calibration, and only makes use of information from the past. Hence, it imposes very few restrictions and is suitable for online applications. Our experiments show that the method yields good tracking performance in a large variety of highly dynamic scenarios, such as typical surveillance videos, webcam footage, or sports sequences. 
We demonstrate that our algorithm outperforms other methods that rely on additional information. Furthermore, we analyze the influence of different algorithm components on the robustness.", "", "", "Most common approaches for object detection collect thousands of training examples and train a detector in an offline setting, using supervised learning methods, with the objective of obtaining a generalized detector that would give good performance on various test datasets. However, when an offline trained detector is applied on challenging test datasets, it may fail in some cases by not being able to detect some objects or by producing false alarms. We propose an unsupervised multiple instance learning (MIL) based incremental solution to deal with this issue. We introduce an MIL loss function for Real Adaboost and present a tracking based effective unsupervised online sample collection mechanism to collect the online samples for incremental learning. Experiments demonstrate the effectiveness of our approach by improving the performance of a state of the art offline trained detector on the challenging datasets for pedestrian category." ] }
1508.00715
1922773239
We study the extent to which online social networks can be connected to open knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word and network embeddings. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities---i.e., social network users and knowledge concepts---in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, a large-scale online academic search system with a network of 38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate in an online A/B test with live users.
Variants of topic models @cite_13 @cite_4 represent each word as a vector of topic-specific probabilities. Although Corr-LDA @cite_1 and the author-topic model @cite_16 can be used for multi-modal modeling, these topic models use discrete representations for observed variables and are therefore unable to exploit the continuous semantics of words and authors.
{ "cite_N": [ "@cite_1", "@cite_16", "@cite_13", "@cite_4" ], "mid": [ "2020842694", "2949169239", "2107743791", "1880262756" ], "abstract": [ "We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.", "We introduce the author-topic model, a generative model for documents that extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include authorship information. Each author is associated with a multinomial distribution over topics and each topic is associated with a multinomial distribution over words. A document with multiple authors is modeled as a distribution over topics that is a mixture of the distributions associated with the authors. We apply the model to a collection of 1,700 NIPS conference papers and 160,000 CiteSeer abstracts. Exact inference is intractable for these datasets and we use Gibbs sampling to estimate the topic and author distributions. We compare the performance with two other generative models for documents, which are special cases of the author-topic model: LDA (a topic model) and a simple author model in which each author is associated with a distribution over words rather than a distribution over topics. 
We show topics recovered by the author-topic model, and demonstrate applications to computing similarity between authors and entropy of author output.", "Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized model is able to deal with domain specific synonymy as well as with polysemous words. In contrast to standard Latent Semantic Indexing (LSI) by Singular Value Decomposition, the probabilistic variant has a solid statistical foundation and defines a proper generative data model. Retrieval experiments on a number of test collections indicate substantial performance gains over direct term matching methods as well as over LSI. In particular, the combination of models with different dimensionalities has proven to be advantageous.", "We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model." ] }
1508.00715
1922773239
We study the extent to which online social networks can be connected to open knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word and network embeddings. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities---i.e., social network users and knowledge concepts---in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, a large-scale online academic search system with a network of 38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate in an online A/B test with live users.
Learning embeddings @cite_10 @cite_23 @cite_0 is effective at modeling continuous semantics with large-scale unlabeled data, e.g., knowledge bases and network structure. Neural tensor networks @cite_12 are expressive models for mapping the embeddings to the prediction targets. However, GenVector can better model multi-modal data by basing the embeddings on a generative process from latent topics.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_12", "@cite_23" ], "mid": [ "2154851992", "2950133940", "2127426251", "2125031621" ], "abstract": [ "We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10 higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60 less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. 
We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively.", "We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by , and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. 
We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization." ] }
1508.00715
1922773239
We study the extent to which online social networks can be connected to open knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word and network embeddings. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities---i.e., social network users and knowledge concepts---in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, a large-scale online academic search system with a network of 38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate in an online A/B test with live users.
Recently, a few works @cite_18 @cite_27 have proposed hybrid models that combine the advantages of topic models and embeddings. Gaussian embedding models @cite_9 learn word representations via a Gaussian generative process to encode the hierarchical structure of words. However, these models were proposed to address other issues in semantic modeling and cannot be directly used for multi-modal data.
{ "cite_N": [ "@cite_27", "@cite_18", "@cite_9" ], "mid": [ "2295103979", "2250753706", "2127265454" ], "abstract": [ "This paper introduces a hybrid model that combines a neural network with a latent topic model. The neural network provides a lowdimensional embedding for the input data, whose subsequent distribution is captured by the topic model. The neural network thus acts as a trainable feature extractor while the topic model captures the group structure of the data. Following an initial pretraining phase to separately initialize each part of the model, a unified training scheme is introduced that allows for discriminative training of the entire model. The approach is evaluated on visual data in scene classification task, where the hybrid model is shown to outperform models based solely on neural networks or topic models, as well as other baseline methods.", "Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. 
Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents.", "Current work in lexical distributed representations maps each word to a point vector in low-dimensional space. Mapping instead to a density provides many interesting advantages, including better capturing uncertainty about a representation and its relationships, expressing asymmetries more naturally than dot product or cosine similarity, and enabling more expressive parameterization of decision boundaries. This paper advocates for density-based distributed embeddings and presents a method for learning representations in the space of Gaussian distributions. We compare performance on various word embedding benchmarks, investigate the ability of these embeddings to model entailment and other asymmetric relationships, and explore novel properties of the representation." ] }
1508.00715
1922773239
We study the extent to which online social networks can be connected to open knowledge bases. The problem is referred to as learning social knowledge graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn latent topics that generate word and network embeddings. GenVector leverages large-scale unlabeled data with embeddings and represents data of two modalities---i.e., social network users and knowledge concepts---in a shared latent topic space. Experiments on three datasets show that the proposed method clearly outperforms state-of-the-art methods. We then deploy the method on AMiner, a large-scale online academic search system with a network of 38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our method significantly decreases the error rate in an online A/B test with live users.
Learning social knowledge graphs is also related to keyword extraction. Unlike conventional keyword extraction methods @cite_3 @cite_15 @cite_24 @cite_25 , our method is based on topic models and embedding learning.
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_25", "@cite_3" ], "mid": [ "2158018156", "2558393237", "1172683740", "2146769536" ], "abstract": [ "We present a new keyword extraction algorithm that applies to a single document without using a corpus. Frequent terms are extracted first, then a set of cooccurrence between each term and the frequent terms, i.e., occurrences in the same sentences, is generated. Co-occurrence distribution shows importance of a term in the documentas follows. If probability distribution of co-occurrence between term a and the frequent terms is biased to a particular subset of frequent terms, then term a is likely to be a keyword. The degree of biases of distribution is measured by the χ 2 -measure. Our algorithm shows comparable performance to tfidf without using a corpus.", "Best practices and invaluable advice from world-renowned data warehouse expertsIn this book, leading data warehouse experts from the Kimball Group share best practices for using the upcoming Business Intelligence release of SQL Server, referred to as SQL Server 2008 R2.In this new edition, the authors explain how SQL Server 2008 R2 provides a collection of powerful new tools that extend the power of its BI toolset to Excel and SharePoint users and they show how to use SQL Server to build a successful data warehouse that supports the business intelligence requirements that are common to most organizations. 
Covering the complete suite of data warehousing and BI tools that are part of SQL Server 2008 R2, as well as Microsoft Office, the authors walk you through a full project lifecycle, including design, development, deployment and maintenance. Features more than 50 percent new and revised material that covers the rich new feature set of the SQL Server 2008 R2 release, as well as the Office 2010 release. Includes brand new content that focuses on PowerPivot for Excel and SharePoint, Master Data Services, and discusses updated capabilities of SQL Server Analysis, Integration, and Reporting Services. Shares detailed case examples that clearly illustrate how to best apply the techniques described in the book. The accompanying Web site contains all code samples as well as the sample database used throughout the case studies. The Microsoft Data Warehouse Toolkit, Second Edition provides you with the knowledge of how and when to use BI tools such as Analysis Services and Integration Services to accomplish your most essential data warehousing tasks.", "In the menagerie of tasks for information extraction, entity linking is a new beast that has drawn a lot of attention from NLP practitioners and researchers recently. Entity Linking, also referred to as record linkage or entity resolution, involves aligning a textual mention of a named-entity to an appropriate entry in a knowledge base, which may or may not contain the entity. This has manifold applications ranging from linking patient health records to maintaining personal credit files, prevention of identity crimes, and supporting law enforcement. We discuss the key challenges present in this task and we present a high-performing system that links entities using max-margin ranking. We also summarize recent work in this area and describe several open research problems.", "This paper explores several unsupervised approaches to automatic keyword extraction using meeting transcripts. 
In the TFIDF (term frequency, inverse document frequency) weighting framework, we incorporated part-of-speech (POS) information, word clustering, and sentence salience score. We also evaluated a graph-based approach that measures the importance of a word based on its connection with other sentences or words. The system performance is evaluated in different ways, including comparison to human annotated keywords using F-measure and a weighted score relative to the oracle system performance, as well as a novel alternative human evaluation. Our results have shown that the simple unsupervised TFIDF approach performs reasonably well, and the additional information from POS and sentence score helps keyword extraction. However, the graph method is less effective for this domain. Experiments were also performed using speech recognition output and we observed degradation and different patterns compared to human transcripts." ] }
1508.00778
2179481075
In this note we prove that for any compact subset S of a Busemann surface (S, d) (in particular, for any simple polygon with geodesic metric) and any positive number r, the minimum number of closed balls of radius r with centers at S and covering the set S is at most 19 times the maximum number of disjoint closed balls of radius r centered at points of S: ν(S) ≤ ρ(S) ≤ 19ν(S), where ρ(S) and ν(S) are the covering and the packing numbers of S by r-balls. Busemann surfaces represent a far-reaching generalization not only of simple polygons, but also of Euclidean and hyperbolic planes and of all planar polygonal complexes of global non-positive curvature. Roughly speaking, a Busemann surface is a geodesic metric space homeomorphic to R^2 in which the distance function is convex.
Due to this analogy, one can formulate the previous question about @math for their continuous counterparts @math --- polygons in @math endowed with the (intrinsic) geodesic metric. It turns out that this question had not yet been considered even for simple polygons (in this case, only a factor-2 approximation algorithm for the packing number was recently given in @cite_13 ). The geodesic metric on simple polygons was studied in several papers in connection with algorithmic problems. In particular, it was shown in @cite_23 that balls are convex, implying that simple polygons are Busemann spaces. In this paper, we consider the relationship between the packing and covering numbers not only for simple polygons in the Euclidean or hyperbolic planes but also for (compact subsets of) general Busemann surfaces.
{ "cite_N": [ "@cite_13", "@cite_23" ], "mid": [ "2407537893", "1975883249" ], "abstract": [ "Given a polygon @math , for two points @math and @math contained in the polygon, their is the length of the shortest @math -path within @math . A of radius @math centered at a point @math is the set of points in @math whose geodesic distance to @math is at most @math . We present a polynomial time @math -approximation algorithm for finding a densest geodesic unit disk packing in @math . Allowing arbitrary radii but constraining the number of disks to be @math , we present a @math -approximation algorithm for finding a packing in @math with @math geodesic disks whose minimum radius is maximized. We then turn our focus on of @math and present a @math -approximation algorithm for covering @math with @math geodesic disks whose maximal radius is minimized. Furthermore, we show that all these problems are @math -hard in polygons with holes. Lastly, we present a polynomial time exact algorithm which covers a polygon with two geodesic disks of minimum maximal radius.", "The geodesic center of a simple polygon is a point inside the polygon which minimizes the maximum internal distance to any point in the polygon. We present an algorithm which calculates the geodesic center of a simple polygon withn vertices in timeO(n logn)." ] }
1508.00835
2953100850
In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel to training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model is able to successfully annotate even highly challenging real scenes. Importantly, our trained network is able to understand noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, our neural network is only moderately deep and computes class, pose and position in tandem, so the overall run-time is significantly faster than existing methods, estimating all output parameters simultaneously in parallel on a GPU in seconds.
Couprie et al. @cite_14 take a different approach, instead using a multi-scale CNN to classify the full image, and then use superpixels to aggregate and smooth prediction outputs. While this allows them to extract a per-pixel semantic segmentation, they fail to achieve very high scores in important classes, such as table and chair. Hariharan et al. @cite_3 also predict pixel-level class associations, but classify region proposals instead of the full image. They also use a CNN as a feature extractor on these regions, before classifying into categories with an SVM and aggregating onto a coarse mask. They then use a second classifier stage on this coarse mask projected onto superpixels to extract a detailed segmentation. While these results are interesting, we question the overall utility of such a fine-grained segmentation, as it does not provide pose with respect to a class-level representation.
{ "cite_N": [ "@cite_14", "@cite_3" ], "mid": [ "1485037422", "2950612966" ], "abstract": [ "This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. We obtain state-of-the-art on the NYU-v2 depth dataset with an accuracy of 64.5 . We illustrate the labeling of indoor scenes in videos sequences that could be processed in real-time using appropriate hardware such as an FPGA.", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top- down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work." ] }
1508.00835
2953100850
In this work we address the problem of indoor scene understanding from RGB-D images. Specifically, we propose to find instances of common furniture classes, their spatial extent, and their pose with respect to generalized class models. To accomplish this, we use a deep, wide, multi-output convolutional neural network (CNN) that predicts class, pose, and location of possible objects simultaneously. To overcome the lack of large annotated RGB-D training sets (especially those with pose), we use an on-the-fly rendering pipeline that generates realistic cluttered room scenes in parallel to training. We then perform transfer learning on the relatively small amount of publicly available annotated RGB-D data, and find that our model is able to successfully annotate even highly challenging real scenes. Importantly, our trained network is able to understand noisy and sparse observations of highly cluttered scenes with a remarkable degree of accuracy, inferring class and pose from a very limited set of cues. Additionally, our neural network is only moderately deep and computes class, pose and position in tandem, so the overall run-time is significantly faster than existing methods, estimating all output parameters simultaneously in parallel on a GPU in seconds.
Guo and Hoiem @cite_10 predict support surfaces (such as tables and desks) in single-view RGB-D images using a bottom-up approach which aggregates low-level features (edges, voxel occupancy). These features are used to propose planar surfaces, which are then classified using a linear SVM. While they provide object-class pose annotations for the NYUv2 set which we use in this paper, they do not classify objects or their pose themselves.
{ "cite_N": [ "@cite_10" ], "mid": [ "2005175937" ], "abstract": [ "In this paper, we present an approach to predict the extent and height of supporting surfaces such as tables, chairs, and cabinet tops from a single RGBD image. We define support surfaces to be horizontal, planar surfaces that can physically support objects and humans. Given a RGBD image, our goal is to localize the height and full extent of such surfaces in 3D space. To achieve this, we created a labeling tool and annotated 1449 images with rich, complete 3D scene models in NYU dataset. We extract ground truth from the annotated dataset and developed a pipeline for predicting floor space, walls, the height and full extent of support surfaces. Finally we match the predicted extent with annotated scenes in training scenes and transfer the the support surface configuration from training scenes. We evaluate the proposed approach in our dataset and demonstrate its effectiveness in understanding scenes in 3D space." ] }
1508.00092
2190186811
We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.
In the last few years, there has been intense research on remote sensing scene classification, focusing both on the use of suitable image descriptors and on the choice of a proper classification task. Local descriptors, in fact, like local binary patterns (LBP) @cite_21 , scale-invariant feature transform (SIFT) @cite_59 , or histograms of oriented gradients (HOG) @cite_42 , with their invariance to geometric and photometric transformations, have proven effective in a variety of computer vision applications, especially object recognition. They can be extracted both in a sparse (keypoint-based) and a dense way. In any case, given the high dimensionality of the feature space, they need a subsequent coding phase in order to obtain an expressive but compact representation of the image.
{ "cite_N": [ "@cite_21", "@cite_42", "@cite_59" ], "mid": [ "2163352848", "2161969291", "2151103935" ], "abstract": [ "Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed \"uniform,\" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \"uniform\" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance." ] }
1508.00092
2190186811
We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.
The basic version of BOVW, however, neglects information on the spatial distribution of visual words. Hence, there have been several efforts in the literature to make up for this deficiency. One popular approach is the spatial pyramid match kernel (SPMK) proposed in @cite_33 for object and scene categorization. It consists in partitioning the image at different levels of resolution and computing weighted histograms of the number of matches of local features at each level. Another alternative, considered in @cite_3 , is to perform a randomized spatial partition (RSP), aiming at a better characterization of the spatial layout of the images. These partition patterns are then weighted according to their discriminative abilities, and boosted into a robust classifier.
{ "cite_N": [ "@cite_33", "@cite_3" ], "mid": [ "2162915993", "2113740552" ], "abstract": [ "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "The spatial layout of images plays a critical role in natural scene analysis. Despite previous work, e.g., spatial pyramid matching, how to design optimal spatial layout for scene classification remains an open problem due to the large variations of scene categories. This paper presents a novel image representation method, with the objective to characterize the image layout by various patterns, in the form of randomized spatial partition (RSP). The RSP-based image representation makes it possible to mine the most descriptive image layout pattern for each category of scenes, and then combine them by training a discriminative classifier, i.e., the proposed ORSP classifier. Besides RSP image representation, another powerful classifier, called the BRSP classifier, is also proposed. By weighting and boosting a sequence of various partition patterns, the BRSP classifier is more robust to the intra-class variations hence leads to a more accurate classification. Both RSP-based classifiers are tested on three publicly available scene datasets. The experimental results highlight the effectiveness of the proposed methods." ] }
1508.00092
2190186811
We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.
Most of the features proposed in the current literature are extracted only from the gray-scale image. However, Yang and Newsam @cite_48 showed that color histogram descriptors, evaluated on hue, lightness and saturation (Color-HLS), may provide very good performance. The interactions among RGB color bands are also exploited in mCENTRIST @cite_58 , an extension of the CENTRIST algorithm @cite_53 , based in turn on LBP histograms and principal component analysis. LBP features are also used in @cite_17 , where the maximal conditional mutual information (MCMI) scheme is proposed to select an optimal subset of these features. In @cite_12 , instead, feature selection is performed separately for each class, with both one-versus-all and one-versus-one strategies.
{ "cite_N": [ "@cite_48", "@cite_53", "@cite_58", "@cite_12", "@cite_17" ], "mid": [ "", "2113855951", "2071309849", "2064125725", "2132633146" ], "abstract": [ "", "CENsus TRansform hISTogram (CENTRIST), a new visual descriptor for recognizing topological places or scene categories, is introduced in this paper. We show that place and scene recognition, especially for indoor environments, require its visual descriptor to possess properties that are different from other vision domains (e.g., object recognition). CENTRIST satisfies these properties and suits the place and scene recognition task. It is a holistic representation and has strong generalizability for category recognition. CENTRIST mainly encodes the structural properties within an image and suppresses detailed textural information. Our experiments demonstrate that CENTRIST outperforms the current state of the art in several place and scene recognition data sets, compared with other descriptors such as SIFT and Gist. Besides, it is easy to implement and evaluates extremely fast.", "mCENTRIST, a new multichannel feature generation mechanism for recognizing scene categories, is proposed in this paper. mCENTRIST explicitly captures the image properties that are encoded jointly by two image channels, which is different from popular multichannel descriptors. In order to avoid the curse of dimensionality, tradeoffs at both feature and channel levels have been executed to make mCENTRIST computationally practical. As a result, mCENTRIST is both efficient and easy to implement. In addition, a hyperopponent color space is proposed by embedding Sobel information into the opponent color space for further performance improvements. Experiments show that mCENTRIST outperforms established multichannel descriptors on four RGB and RGB-near infrared data sets, including aerial orthoimagery, indoor, and outdoor scene category recognition tasks. Experiments also verify that the hyper opponent color space enhances descriptors' performance effectively.", "Generally, some object-based features are more relevant to a thematic class than other features. These strongly relevant features, termed as class-specific features, would significantly contribute to thematic information extraction for very high resolution (VHR) images. However, many existing feature selection methods have been designed to select a good feature subset for all classes, rather than an independent feature subset for the thematic class. The latter might better meet the requirement of thematic information extraction than the former. In addition, the lack of quantitative evaluation of the contribution of the selected features to thematic classes also weakens our understandability of these features. To address the problems, class-specific feature selection methods are developed to measure the effectiveness of features for extracting thematic information from VHR images. First, the one-versus-all scheme is combined with traditional feature selection methods, such as ReliefF and LeastC. Also, one-versus-one scheme is utilized for alleviating the negative impact of a class imbalance problem arising from the one-versus-all scheme. Then, the relative contributions of features to thematic classes are obtained by the class-specific feature selection methods to describe the effectiveness of features for thematic information extraction. Finally, the class-specific feature selection methods are compared with the original methods on three different VHR image data sets by the nearest neighbor and support vector machine. Experimental results show that the class-specific feature selection methods outperform the corresponding conventional methods, and the one-versus-one scheme surpasses one-versus-all scheme. Additionally, many features are evaluated by the class-specific feature selection methods, to provide end users advice on effectiveness of the features.", "Local binary patterns of more bits extracted in a large structure have shown promising results in visual recognition applications. This results in very high-dimensional data so that it is not feasible to directly extract features from the LBP histogram, especially for a large-scale database. Instead of extracting features from the LBP histogram, we propose a new approach to learn discriminative LBP structures for a specific application. Our objective is to select an optimal subset of binarized-pixel-difference features to compose the LBP structure. As these features are strongly correlated, conventional feature-selection methods may not yield a desirable performance. Thus, we propose an incremental Maximal-Conditional-Mutual-Information scheme for LBP structure learning. The proposed approach has demonstrated a superior performance over the state-of-the-arts results on classifying both spatial patterns such as texture classification, scene recognition and face recognition, and spatial-temporal patterns such as dynamic texture recognition. Highlights: We propose a new approach to tackle high-dimensional LBP features. It discovers optimal LBP structure to generate discriminative features. We propose a MCMI scheme for LBP structure learning to handle pixel correlation. It demonstrates a superior performance to SOTA on various visual applications." ] }
1508.00092
2190186811
We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.
We conclude this review by mentioning the very recent work of @cite_2 , the only one, to the best of our knowledge, considering ConvNets for land use classification. As already said, however, CNNs are only used to produce shallow feature vectors for SVM classification, and no training on remote sensing data is carried out. Nonetheless, the performance is competitive with the previous state of the art.
{ "cite_N": [ "@cite_2" ], "mid": [ "1912954554" ], "abstract": [ "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset." ] }
1508.00536
2276278723
Estimating mutual information (MI) from samples is a fundamental problem in statistics, machine learning, and data analysis. Recently it was shown that a popular class of non-parametric MI estimators perform very poorly for strongly dependent variables and have sample complexity that scales exponentially with the true MI. This undesired behavior was attributed to the reliance of those estimators on local uniformity of the underlying (and unknown) probability density function. Here we present a novel semi-parametric estimator of mutual information, where at each sample point, densities are locally approximated by a Gaussian distribution. We demonstrate that the estimator is asymptotically unbiased. We also show that the proposed estimator has a superior performance compared to several baselines, and is able to accurately measure relationship strengths over many orders of magnitude.
Mutual Information Estimators. Recently, there has been a significant amount of work on estimating information-theoretic quantities such as entropy, mutual information, and divergences, from i.i.d. samples. Methods include k-nearest-neighbors , , , ; minimum spanning trees ; kernel density estimate , ; maximum likelihood density ratio ; ensemble methods , @cite_0 , etc. As pointed out earlier, all of those methods underestimate the mutual information when two variables have strong dependency. addressed this shortcoming by introducing a local non-uniformity correction, but their estimator depended on a heuristically defined threshold parameter and lacked performance guarantees.
{ "cite_N": [ "@cite_0" ], "mid": [ "2129905273" ], "abstract": [ "Introduction. Survey of Existing Methods. The Kernel Method for Univariate Data. The Kernel Method for Multivariate Data. Three Important Methods. Density Estimation in Action." ] }
1508.00451
2267320693
Tackling pattern recognition problems in areas such as computer vision, bioinformatics, speech or text recognition is often done best by taking into account task-specific statistical relations between output variables. In structured prediction, this internal structure is used to predict multiple outputs simultaneously, leading to more accurate and coherent predictions. Structural support vector machines (SSVMs) are nonprobabilistic models that optimize a joint input-output function through margin-based learning. Because SSVMs generally disregard the interplay between unary and interaction factors during the training phase, final parameters are suboptimal. Moreover, its factors are often restricted to linear combinations of input features, limiting its generalization power. To improve prediction accuracy, this paper proposes: (i) joint inference and learning by integration of back-propagation and loss-augmented inference in SSVM subgradient descent; (ii) extending SSVM factors to neural networks that form highly nonlinear functions of input features. Image segmentation benchmark results demonstrate improvements over conventional SSVM training methods in terms of accuracy, highlighting the feasibility of end-to-end SSVM training with neural factors. Highlights: A novel structured prediction model is proposed and applied to image segmentation. SSVM factors are modeled by highly nonlinear functions through neural networks. Back-propagation and loss-augmented inference are integrated in subgradient descent. Segmentation benchmark accuracy results show benefits over standard SSVM methods.
Although the combination of neural networks and structured or probabilistic graphical models dates back to the early '90s @cite_46 @cite_40 , interest in this topic is resurging. Several recent works introduce nonlinear unary potentials into structured models. For the task of image segmentation, @cite_15 train a convolutional neural network as a unary classifier, followed by the training of a dense random field over the input pixels. Similarly, @cite_25 combine the output maps of a convolutional network with a CRF for image segmentation, while Li and Zemel @cite_48 propose semi-supervised max-margin learning with nonlinear unary potentials. Contrary to these works, we trade the bifurcated training approach for integrated inference and training of unary and interaction factors. Several works @cite_52 @cite_31 @cite_20 @cite_39 focus on linear-chain graphs, using an independently trained deep learning model whose output serves as unary input features. Contrary to these works, we focus on more general graphs. Other works propose kernels to obtain nonlinear SSVMs @cite_45 @cite_10 ; we approach nonlinearity by representing SSVM factors by arbitrarily deep neural networks.
{ "cite_N": [ "@cite_31", "@cite_15", "@cite_48", "@cite_52", "@cite_39", "@cite_40", "@cite_45", "@cite_46", "@cite_10", "@cite_25", "@cite_20" ], "mid": [ "", "1923697677", "2140045824", "2158899491", "", "", "", "1902568950", "", "2022508996", "" ], "abstract": [ "", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "Semi-supervised learning, which uses unlabeled data to help learn a discriminative model, is especially important for structured output problems, as considerably more effort is needed to label its multi-dimensional outputs versus standard single output problems. We propose a new max-margin framework for semi-supervised structured output learning, that allows the use of powerful discrete optimization algorithms and high order regularizers defined directly on model predictions for the unlabeled examples. We show that our framework is closely related to Posterior Regularization, and the two frameworks optimize special cases of the same objective. The new framework is instantiated on two image segmentation tasks, using both a graph regularizer and a cardinality regularizer. Experiments also demonstrate that this framework can utilize unlabeled data from a different source than the labeled data to significantly improve performance while saving labeling effort.", "We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.", "", "", "", "We propose a new machine learning paradigm called Graph Transformer Networks that extends the applicability of gradient-based learning algorithms to systems composed of modules that take graphs as inputs and produce graphs as output. Training is performed by computing gradients of a global objective function with respect to all the parameters in the system using a kind of back-propagation procedure. A complete check reading system based on these concepts is described. The system uses convolutional neural network character recognizers, combined with global training techniques to provide record accuracy on business and personal checks. It is presently deployed commercially and reads million of checks per month.", "", "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.", "" ] }
1508.00451
2267320693
Tackling pattern recognition problems in areas such as computer vision, bioinformatics, speech or text recognition is often done best by taking into account task-specific statistical relations between output variables. In structured prediction, this internal structure is used to predict multiple outputs simultaneously, leading to more accurate and coherent predictions. Structural support vector machines (SSVMs) are nonprobabilistic models that optimize a joint input-output function through margin-based learning. Because SSVMs generally disregard the interplay between unary and interaction factors during the training phase, final parameters are suboptimal. Moreover, its factors are often restricted to linear combinations of input features, limiting its generalization power. To improve prediction accuracy, this paper proposes: (i) joint inference and learning by integration of back-propagation and loss-augmented inference in SSVM subgradient descent; (ii) extending SSVM factors to neural networks that form highly nonlinear functions of input features. Image segmentation benchmark results demonstrate improvements over conventional SSVM training methods in terms of accuracy, highlighting the feasibility of end-to-end SSVM training with neural factors. HighlightsA novel structured prediction model is proposed and applied to image segmentation.SSVM factors are modeled by highly nonlinear functions through neural networks.Back-propagation and loss-augmented inference are integrated in subgradient descent.Segmentation benchmark accuracy results show benefits over standard SSVM methods.
Do and Artières @cite_55 propose a CRF in which potentials are represented by multilayer networks. The performance of their linear-chain probabilistic model is demonstrated by optical character and speech recognition using two-hidden-layer neural network outputs as unary potentials. Furthermore, joint inference and learning in linear-chain models is also proposed by @cite_44 , however, the application to more general graphs remains an open problem @cite_11 . In contrast to these works, we propose a nonprobabilistic approach for general graphs by also modeling nonlinear interaction factors. More recently, Schwing and Urtasun @cite_28 train a convolutional network as a unary classifier jointly with a fully-connected CRF for the task of image segmentation, similar to @cite_30 @cite_5 . @cite_27 advocate a joint learning and reasoning approach, in which a structured model is probabilistically trained using loopy belief propagation for the task of optical character recognition and image tagging. Other related work includes Domke @cite_53 , who uses relaxations for combined message-passing and learning.
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_55", "@cite_53", "@cite_44", "@cite_27", "@cite_5", "@cite_11" ], "mid": [ "2136391815", "2102492119", "2226111737", "2105681376", "", "2950774024", "", "" ], "abstract": [ "This paper proposes a new hybrid architecture that consists of a deep Convolu-tional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.", "Convolutional neural networks with many layers have recently been shown to achieve excellent results on many high-level tasks such as image classification, object detection and more recently also semantic segmentation. Particularly for semantic segmentation, a two-stage procedure is often employed. Hereby, convolutional networks are trained to provide good local pixel-wise features for the second step being traditionally a more global graphical model. In this work we unify this two-stage process into a single joint training algorithm. We demonstrate our method on the semantic image segmentation task and show encouraging results on the challenging PASCAL VOC 2012 dataset.", "We propose a non-linear graphical model for structured prediction. It combines the power of deep neural networks to extract high level features with the graphical framework of Markov networks, yielding a powerful and scalable probabilistic model that we apply to signal labeling tasks.", "A successful approach to structured learning is to write the learning objective as a joint function of linear parameters and inference messages, and iterate between updates to each. 
This paper observes that if the inference problem is \"smoothed\" through the addition of entropy terms, for fixed messages, the learning objective reduces to a traditional (non-structured) logistic regression problem with respect to parameters. In these logistic regression problems, each training example has a bias term determined by the current set of messages. Based on this insight, the structured energy function can be extended from linear factors to any function class where an \"oracle\" exists to minimize a logistic loss.", "", "Many problems in real-world applications involve predicting several random variables which are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains.", "", "" ] }
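The SSVM training scheme discussed in the record above — loss-augmented inference inside a subgradient descent loop — can be illustrated with a toy multiclass example. This is a minimal sketch with linear factors and hypothetical helper names, not the cited authors' implementation (which additionally back-propagates into neural factors):

```python
def joint_feature(x, y, n_labels=3):
    # toy joint feature map: copy the input vector into the block for label y
    phi = [0.0] * (len(x) * n_labels)
    for i, v in enumerate(x):
        phi[i * n_labels + y] = v
    return phi

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def ssvm_step(w, x, y_true, lr=0.1, n_labels=3):
    """One margin-rescaled subgradient step of a structural SVM."""
    hamming = lambda y: 0.0 if y == y_true else 1.0
    # loss-augmented inference: argmax_y [ Delta(y, y_true) + w . phi(x, y) ]
    y_hat = max(range(n_labels),
                key=lambda y: hamming(y) + dot(w, joint_feature(x, y)))
    if y_hat != y_true:
        phi_t = joint_feature(x, y_true)
        phi_h = joint_feature(x, y_hat)
        # push the score of the true output up and the violator's score down
        w = [wi + lr * (t - h) for wi, t, h in zip(w, phi_t, phi_h)]
    return w
```

In the full neural-factor setting, the update to `w` would instead be back-propagated through the networks that compute the unary and interaction potentials.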
1508.00451
2267320693
Tackling pattern recognition problems in areas such as computer vision, bioinformatics, speech or text recognition is often done best by taking into account task-specific statistical relations between output variables. In structured prediction, this internal structure is used to predict multiple outputs simultaneously, leading to more accurate and coherent predictions. Structural support vector machines (SSVMs) are nonprobabilistic models that optimize a joint input-output function through margin-based learning. Because SSVMs generally disregard the interplay between unary and interaction factors during the training phase, final parameters are suboptimal. Moreover, its factors are often restricted to linear combinations of input features, limiting its generalization power. To improve prediction accuracy, this paper proposes: (i) joint inference and learning by integration of back-propagation and loss-augmented inference in SSVM subgradient descent; (ii) extending SSVM factors to neural networks that form highly nonlinear functions of input features. Image segmentation benchmark results demonstrate improvements over conventional SSVM training methods in terms of accuracy, highlighting the feasibility of end-to-end SSVM training with neural factors. HighlightsA novel structured prediction model is proposed and applied to image segmentation.SSVM factors are modeled by highly nonlinear functions through neural networks.Back-propagation and loss-augmented inference are integrated in subgradient descent.Segmentation benchmark accuracy results show benefits over standard SSVM methods.
Other related work aiming to improve conventional SSVMs includes the works of @cite_23 and @cite_0 , in which a hierarchical part-based model is proposed for multiclass object recognition and shape detection, focusing on model reconfigurability through compositional alternatives in And-Or graphs. @cite_59 propose the use of convolutional neural networks to model an end-to-end relation between input images and structured outputs in active template regression. @cite_56 propose the learning of a structured model with multilayer deformable parts for action understanding, while @cite_37 propose a hierarchical structured model for action segmentation.
{ "cite_N": [ "@cite_37", "@cite_56", "@cite_0", "@cite_59", "@cite_23" ], "mid": [ "1912148408", "1907753563", "1993164181", "1973255633", "2072801280" ], "abstract": [ "Detailed analysis of human action, such as action classification, detection and localization has received increasing attention from the community; datasets like JHMDB have made it plausible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has comparatively been unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment-scales. Our single layer model significantly outperforms the current state-of-the-art on actionness, and our full model improves upon the single layer baselines in action segmentation.", "The focus of the action understanding literature has predominately been classification, how- ever, there are many applications demanding richer action understanding such as mobile robotics and video search, with solutions to classification, localization and detection. In this paper, we propose a compositional model that leverages a new mid-level representation called compositional trajectories and a locally articulated spatiotemporal deformable parts model (LALSDPM) for fully action understanding. Our methods is advantageous in capturing the variable structure of dynamic human activity over a long range. 
First, the compositional trajectories capture long-ranging, frequently co-occurring groups of trajectories in space time and represent them in discriminative hierarchies, where human motion is largely separated from camera motion; second, LASTDPM learns a structured model with multi-layer deformable parts to capture multiple levels of articulated motion. We implement our methods and demonstrate state of the art performance on all three problems: action detection, localization, and recognition.", "In this paper, we investigate a novel reconfigurable part-based model, namely And-Or graph model, to recognize object shapes in images. Our proposed model consists of four layers: leaf-nodes at the bottom are local classifiers for detecting contour fragments; or-nodes above the leaf-nodes function as the switches to activate their child leaf-nodes, making the model reconfigurable during inference; and-nodes in a higher layer capture holistic shape deformations; one root-node on the top, which is also an or-node, activates one of its child and-nodes to deal with large global variations (e.g. different poses and views). We propose a novel structural optimization algorithm to discriminatively train the And-Or model from weakly annotated data. This algorithm iteratively determines the model structures (e.g. the nodes and their layouts) along with the parameter learning. On several challenging datasets, our model demonstrates the effectiveness to perform robust shape-based object detection against background clutter and outperforms the other state-of-the-art approaches. 
We also release a new shape database with annotations, which includes more than @math challenging shape instances, for recognition and detection.", "In this work, the human parsing task, namely decomposing a human image into semantic fashion body regions, is formulated as an active template regression (ATR) problem, where the normalized mask of each fashion body item is expressed as the linear combination of the learned mask templates, and then morphed to a more precise mask with the active shape parameters, including position, scale and visibility of each semantic region. The mask template coefficients and the active shape parameters together can generate the human parsing results, and are thus called the structure outputs for human parsing. The deep Convolutional Neural Network (CNN) is utilized to build the end-to-end relation between the input human image and the structure outputs for human parsing. More specifically, the structure outputs are predicted by two separate networks. The first CNN network is with max-pooling, and designed to predict the template coefficients for each label mask, while the second CNN network is without max-pooling to preserve sensitivity to label mask position and accurately predict the active shape parameters. For a new image, the structure outputs of the two networks are fused to generate the probability of each label for each pixel, and super-pixel smoothing is finally used to refine the human parsing result. Comprehensive evaluations on a large dataset well demonstrate the significant superiority of the ATR framework over other state-of-the-arts for human parsing. In particular, the F1-score reaches @math percent by our ATR framework, significantly higher than @math percent based on the state-of-the-art algorithm [28] .", "This paper proposes a reconfigurable model to recognize and detect multiclass (or multiview) objects with large variation in appearance. 
Compared with well acknowledged hierarchical models, we study two advanced capabilities in hierarchy for object modeling: (i) switch'' variables(i.e. or-nodes) for specifying alternative compositions, and (ii) making local classifiers (i.e. leaf-nodes) shared among different classes. These capabilities enable us to account well for structural variabilities while preserving the model compact. Our model, in the form of an And-Or Graph, comprises four layers: a batch of leaf-nodes with collaborative edges in bottom for localizing object parts, the or-nodes over bottom to activate their children leaf-nodes, the and-nodes to classify objects as a whole, one root-node on the top for switching multiclass classification, which is also an or-node. For model training, we present an EM-type algorithm, namely dynamical structural optimization (DSO), to iteratively determine the structural configuration, (e.g., leaf-node generation associated with their parent or-nodes and shared across other classes), along with optimizing multi-layer parameters. The proposed method is valid on challenging databases, e.g., PASCAL VOC 2007 and UIUC-People, and it achieves state-of-the-arts performance." ] }
1508.00192
2949654213
WaveCluster is an important family of grid-based clustering algorithms that are capable of finding clusters of arbitrary shapes. In this paper, we investigate techniques to perform WaveCluster while ensuring differential privacy. Our goal is to develop a general technique for achieving differential privacy on WaveCluster that accommodates different wavelet transforms. We show that straightforward techniques based on synthetic data generation and introduction of random noise when quantizing the data, though generally preserving the distribution of data, often introduce too much noise to preserve useful clusters. We then propose two optimized techniques, PrivTHR and PrivTHREM, which can significantly reduce data distortion during two key steps of WaveCluster: the quantization step and the significant grid identification step. We conduct extensive experiments based on four datasets that are particularly interesting in the context of clustering, and show that PrivTHR and PrivTHREM achieve high utility when privacy budgets are properly allocated.
The syntactic approach to privacy-preserving clustering @cite_30 is to output @math -anonymous clusters. @cite_38 presented an algorithm to output @math -anonymous clusters by using a minimum spanning tree. @cite_23 created @math -anonymous clusters by merging clusters so that each cluster contains at least @math key values of the records. @cite_39 proposed an approach that converts the anonymity problem for cluster analysis to the counterpart problem for classification analysis. @cite_37 proposed a perturbation method called @math -gather clustering, which releases the cluster centers, together with their sizes, radii, and a set of associated sensitive values. However, these approaches only satisfy syntactic privacy notions such as k-anonymity, and cannot provide the formal privacy guarantees that differential privacy does.
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_37", "@cite_39", "@cite_23" ], "mid": [ "2142406320", "2154026994", "2120263102", "2144326136", "2095029525" ], "abstract": [ "The collection of digital information by governments, corporations, and individuals has created tremendous opportunities for knowledge- and information-based decision making. Driven by mutual benefits, or by regulations that require certain data to be published, there is a demand for the exchange and publication of data among various parties. Data in its original form, however, typically contains sensitive information about individuals, and publishing such data will violate individual privacy. The current practice in data publishing relies mainly on policies and guidelines as to what types of data can be published and on agreements on the use of published data. This approach alone may lead to excessive data distortion or insufficient protection. Privacy-preserving data publishing (PPDP) provides methods and tools for publishing useful information while preserving data privacy. Recently, PPDP has received considerable attention in research communities, and many approaches have been proposed for different data publishing scenarios. In this survey, we will systematically summarize and evaluate different approaches to PPDP, study the challenges in practical data publishing, clarify the differences and requirements that distinguish PPDP from other related problems, and propose future research directions.", "In this paper we present extended definitions of k-anonymity and use them to prove that a given data mining model does not violate the k-anonymity of the individuals represented in the learning examples. Our extension provides a tool that measures the amount of anonymity retained during data mining. We show that our model can be applied to various data mining problems, such as classification, association rule mining and clustering. 
We describe two data mining algorithms which exploit our extension to guarantee they will generate only k-anonymous output, and provide experimental results for one of them. Finally, we show that our method contributes new and efficient ways to anonymize data and preserve patterns during anonymization.", "Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. 
Hence, we extend our algorithms to allow an e fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well.", "Releasing person-specific data could potentially reveal sensitive information about individuals. k-anonymization is a promising privacy protection mechanism in data publishing. Although substantial research has been conducted on k-anonymization and its extensions in recent years, only a few prior works have considered releasing data for some specific purpose of data analysis. This paper presents a practical data publishing framework for generating a masked version of data that preserves both individual privacy and information usefulness for cluster analysis. Experiments on real-life data suggest that by focusing on preserving cluster structure in the masking process, the cluster quality is significantly better than the cluster quality of the masked data without such focus. The major challenge of masking data for cluster analysis is the lack of class labels that could be used to guide the masking process. Our approach converts the problem into the counterpart problem for classification analysis, wherein class labels encode the cluster structure in the data, and presents a framework to evaluate the cluster quality on the masked data.", "Privacy Preserving Record Linkage is an emerging field of research which attempts to deal with the classical linkage problem from a privacy preserving point of view. In this paper we propose a novel approach for performing Privacy Preserving Blocking in order to minimize the computational cost of Privacy Preserving Record Linkage. 
We achieve this without compromising privacy by using Nearest Neighbors clustering, a well-known clustering algorithm and by using a reference table. A reference table is a publicly known table the contents of which are used as intermediate references. The combination of Nearest Neighbors and a reference table offers our approach k-anonymity characteristics." ] }
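The k-anonymous clustering idea surveyed in the record above — merging clusters until each one covers at least @math records — can be sketched with a simple greedy heuristic. The function name and nearest-centroid merge rule are illustrative assumptions, not any of the cited algorithms:

```python
def k_anonymous_clusters(points, k):
    """Greedily merge clusters until every cluster holds >= k records.

    Illustrative sketch: releasing only the centers of such clusters
    gives each record a crowd of at least k - 1 others.
    """
    clusters = [[p] for p in points]

    def centroid(c):
        dim = len(c[0])
        return tuple(sum(p[i] for p in c) / len(c) for i in range(dim))

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while any(len(c) < k for c in clusters) and len(clusters) > 1:
        # take an under-sized cluster and fold it into its nearest neighbour
        small = min((c for c in clusters if len(c) < k), key=len)
        rest = [c for c in clusters if c is not small]
        nearest = min(rest, key=lambda c: dist2(centroid(c), centroid(small)))
        clusters.remove(small)
        clusters.remove(nearest)
        clusters.append(small + nearest)
    return clusters
```

As the paragraph above notes, the output satisfies only the syntactic k-anonymity notion; it carries no differential-privacy guarantee.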
1508.00192
2949654213
WaveCluster is an important family of grid-based clustering algorithms that are capable of finding clusters of arbitrary shapes. In this paper, we investigate techniques to perform WaveCluster while ensuring differential privacy. Our goal is to develop a general technique for achieving differential privacy on WaveCluster that accommodates different wavelet transforms. We show that straightforward techniques based on synthetic data generation and introduction of random noise when quantizing the data, though generally preserving the distribution of data, often introduce too much noise to preserve useful clusters. We then propose two optimized techniques, PrivTHR and PrivTHREM, which can significantly reduce data distortion during two key steps of WaveCluster: the quantization step and the significant grid identification step. We conduct extensive experiments based on four datasets that are particularly interesting in the context of clustering, and show that PrivTHR and PrivTHREM achieve high utility when privacy budgets are properly allocated.
In this work, our goal is to perform WaveCluster under differential privacy. Initial work on differential privacy @cite_35 @cite_13 @cite_5 @cite_9 @cite_20 focused on theoretical proofs of its feasibility for various data analysis tasks, e.g., histograms and logistic regression.
{ "cite_N": [ "@cite_35", "@cite_9", "@cite_5", "@cite_13", "@cite_20" ], "mid": [ "2109426455", "2138865266", "2245160765", "2951011752", "" ], "abstract": [ "Over the past five years a new approach to privacy-preserving data analysis has born fruit [13, 18, 7, 19, 5, 37, 35, 8, 32]. This approach differs from much (but not all!) of the related literature in the statistics, databases, theory, and cryptography communities, in that a formal and ad omnia privacy guarantee is defined, and the data analysis techniques presented are rigorously proved to satisfy the guarantee. The key privacy guarantee that has emerged is differential privacy. Roughly speaking, this ensures that (almost, and quantifiably) no risk is incurred by joining a statistical database. In this survey, we recall the definition of differential privacy and two basic techniques for achieving it. We then show some interesting applications of these techniques, presenting algorithms for three specific tasks and three general results on differentially private learning.", "We show by means of several examples that robust statistical estimators present an excellent starting point for differentially private estimators. Our algorithms use a new paradigm for differentially private mechanisms, which we call Propose-Test-Release (PTR), and for which we give a formal definition and general composition theorems.", "Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. 
Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.", "We present an approach to differentially private computation in which one does not scale up the magnitude of noise for challenging queries, but rather scales down the contributions of challenging records. 
While scaling down all records uniformly is equivalent to scaling up the noise magnitude, we show that scaling records non-uniformly can result in substantially higher accuracy by bypassing the worst-case requirements of differential privacy for the noise magnitudes. This paper details the data analysis platform wPINQ, which generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a few simple operators (including a non-uniformly scaling Join operator) wPINQ can reproduce (and improve) several recent results on graph analysis and introduce new generalizations (e.g., counting triangles with given degrees). We also show how to integrate probabilistic inference techniques to synthesize datasets respecting more complicated (and less easily interpreted) measurements.", "" ] }
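The histogram task mentioned in the record above is the canonical differential-privacy example: since one individual changes at most one bin count by 1, adding Laplace noise of scale 1/ε to every bin yields ε-differential privacy. A minimal sketch (hypothetical function names, inverse-CDF sampling rather than a library call):

```python
import math
import random

def laplace_noise(scale, rng):
    """Inverse-CDF sample from Laplace(0, scale)."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_histogram(counts, epsilon, rng=random):
    # L1 sensitivity of a histogram is 1, so Laplace(1/epsilon) per bin
    # suffices for epsilon-differential privacy
    scale = 1.0 / epsilon
    return [c + laplace_noise(scale, rng) for c in counts]
```

The later work surveyed in this record (consistency constraints, wavelet transforms, noisy trees) can be read as post-processing or re-parameterizations that keep this guarantee while shrinking the effective noise.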
1508.00192
2949654213
WaveCluster is an important family of grid-based clustering algorithms that are capable of finding clusters of arbitrary shapes. In this paper, we investigate techniques to perform WaveCluster while ensuring differential privacy. Our goal is to develop a general technique for achieving differential privacy on WaveCluster that accommodates different wavelet transforms. We show that straightforward techniques based on synthetic data generation and introduction of random noise when quantizing the data, though generally preserving the distribution of data, often introduce too much noise to preserve useful clusters. We then propose two optimized techniques, PrivTHR and PrivTHREM, which can significantly reduce data distortion during two key steps of WaveCluster: the quantization step and the significant grid identification step. We conduct extensive experiments based on four datasets that are particularly interesting in the context of clustering, and show that PrivTHR and PrivTHREM achieve high utility when privacy budgets are properly allocated.
More recent work has focused on practical applications of differential privacy for privacy-preserving data publishing. An approach proposed by @cite_11 encoded marginals with Fourier coefficients and then added noise to the released coefficients. @cite_14 exploited consistency constraints to reduce noise for histogram counts. @cite_28 proposed a technique that uses wavelet transforms to reduce noise for histogram counts. @cite_22 indexed data by quad-trees and kd-trees, developing effective budget allocation strategies for building the noisy trees and obtaining noisy counts for the tree nodes. @cite_8 proposed uniform-grid and adaptive-grid methods to derive an appropriate partition granularity in differentially private synopsis publishing. @cite_16 proposed techniques for constructing optimal noisy histograms using dynamic programming and the Exponential mechanism. These data publishing techniques are specifically crafted for answering range queries. Unfortunately, synthesizing a dataset with them and applying WaveCluster on top of it often renders the WaveCluster results useless, since these differentially private data publishing techniques do not capture the essence of WaveCluster and introduce too much unnecessary noise for it.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_8", "@cite_28", "@cite_16", "@cite_11" ], "mid": [ "2057576485", "2122290076", "2949105717", "1990503647", "2022542865", "2123733729" ], "abstract": [ "We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.", "Differential privacy has recently emerged as the de facto standard for private data release. This makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well-understood how to release data based on counts and simple functions under this guarantee, it remains to provide general purpose techniques to release data that is useful for a variety of queries. In this paper, we focus on spatial data such as locations and more generally any multi-dimensional data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. We propose instead the class of private spatial decompositions'': these adapt standard spatial indexing methods such as quad trees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. 
Various basic steps, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions, and analyze some key examples. A major contribution of our work is to provide new techniques for parameter setting and post-processing the output to improve the accuracy of query answers. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and use them to answer a variety of queries privately with high accuracy.", "In this paper, we tackle the problem of constructing a differentially private synopsis for two-dimensional datasets such as geospatial datasets. The current state-of-the-art methods work by performing recursive binary partitioning of the data domains, and constructing a hierarchy of partitions. We show that the key challenge in partition-based synopsis methods lies in choosing the right partition granularity to balance the noise error and the non-uniformity error. We study the uniform-grid approach, which applies an equi-width grid of a certain size over the data domain and then issues independent count queries on the grid cells. This method has received no attention in the literature, probably due to the fact that no good method for choosing a grid size was known. Based on an analysis of the two kinds of errors, we propose a method for choosing the grid size. Experimental results validate our method, and show that this approach performs as well as, and often times better than, the state-of-the-art methods. We further introduce a novel adaptive-grid method. The adaptive grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. 
This method exploits the need to have finer granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. Through extensive experiments on real-world datasets, we show that this approach consistently and significantly outperforms the uniform-grid method and other state-of-the-art methods.", "Privacy-preserving data publishing has attracted considerable research interest in recent years. Among the existing solutions, ∈-differential privacy provides the strongest privacy guarantee. Existing data publishing methods that achieve ∈-differential privacy, however, offer little data utility. In particular, if the output data set is used to answer count queries, the noise in the query answers can be proportional to the number of tuples in the data, which renders the results useless. In this paper, we develop a data publishing technique that ensures ∈-differential privacy while providing accurate answers for range-count queries, i.e., count queries where the predicate on each attribute is a range. The core of our solution is a framework that applies wavelet transforms on the data before adding noise to it. We present instantiations of the proposed framework for both ordinal and nominal data, and we provide a theoretical analysis on their privacy and utility guarantees. In an extensive experimental study on both real and synthetic data, we show the effectiveness and efficiency of our solution.", "Differential privacy (DP) is a promising scheme for releasing the results of statistical queries on sensitive data, with strong privacy guarantees against adversaries with arbitrary background knowledge. Existing studies on differential privacy mostly focus on simple aggregations such as counts. This paper investigates the publication of DP-compliant histograms, which is an important analytical tool for showing the distribution of a random variable, e.g., hospital bill size for certain patients. 
Compared to simple aggregations whose results are purely numerical, a histogram query is inherently more complex, since it must also determine its structure, i.e., the ranges of the bins. As we demonstrate in the paper, a DP-compliant histogram with finer bins may actually lead to significantly lower accuracy than a coarser one, since the former requires stronger perturbations in order to satisfy DP. Moreover, the histogram structure itself may reveal sensitive information, which further complicates the problem. Motivated by this, we propose two novel mechanisms, namely NoiseFirst and StructureFirst, for computing DP-compliant histograms. Their main difference lies in the relative order of the noise injection and the histogram structure computation steps. NoiseFirst has the additional benefit that it can improve the accuracy of an already published DP-compliant histogram computed using a naive method. For each of the proposed mechanisms, we design algorithms for computing the optimal histogram structure with two different objectives: minimizing the mean square error and the mean absolute error, respectively. Going one step further, we extend both mechanisms to answer arbitrary range queries. Extensive experiments, using several real datasets, confirm that our two proposals output highly accurate query answers and consistently outperform existing competitors.", "The contingency table is a work horse of official statistics, the format of reported data for the US Census, Bureau of Labor Statistics, and the Internal Revenue Service. In many settings such as these privacy is not only ethically mandated, but frequently legally as well. Consequently there is an extensive and diverse literature dedicated to the problems of statistical disclosure control in contingency table release. However, all current techniques for reporting contingency tables fall short on at least one of privacy, accuracy, and consistency (among multiple released tables). 
We propose a solution that provides strong guarantees for all three desiderata simultaneously. Our approach can be viewed as a special case of a more general approach for producing synthetic data: Any privacy-preserving mechanism for contingency table release begins with raw data and produces a (possibly inconsistent) privacy-preserving set of marginals. From these tables alone-and hence without weakening privacy--we will find and output the \"nearest\" consistent set of marginals. Interestingly, this set is no farther than the tables of the raw data, and consequently the additional error introduced by the imposition of consistency is no more than the error introduced by the privacy mechanism itself. The privacy mechanism of [20] gives the strongest known privacy guarantees, with very little error. Combined with the techniques of the current paper, we therefore obtain excellent privacy, accuracy, and consistency among the tables. Moreover, our techniques are surprisingly efficient. Our techniques apply equally well to the logical cousin of the contingency table, the OLAP cube." ] }
1508.00192
2949654213
WaveCluster is an important family of grid-based clustering algorithms that are capable of finding clusters of arbitrary shapes. In this paper, we investigate techniques to perform WaveCluster while ensuring differential privacy. Our goal is to develop a general technique for achieving differential privacy on WaveCluster that accommodates different wavelet transforms. We show that straightforward techniques based on synthetic data generation and introduction of random noise when quantizing the data, though generally preserving the distribution of data, often introduce too much noise to preserve useful clusters. We then propose two optimized techniques, PrivTHR and PrivTHREM, which can significantly reduce data distortion during two key steps of WaveCluster: the quantization step and the significant grid identification step. We conduct extensive experiments based on four datasets that are particularly interesting in the context of clustering, and show that PrivTHR and PrivTHREM achieve high utility when privacy budgets are properly allocated.
Another important line of prior work focuses on integrating differential privacy into other practical data analysis tasks, such as regression analysis, model fitting, and classification. @cite_27 proposed a differentially private regularized logistic regression algorithm that balances privacy with learnability. @cite_24 proposed a differentially private approach for logistic and linear regression that perturbs the objective function of the regression model rather than simply introducing noise into the results. @cite_42 incorporated differential privacy into several types of decision trees and demonstrated the tradeoff among privacy, accuracy, and sample size. Using decision trees as an example application, @cite_31 investigated a generalization-based algorithm for achieving differential privacy in classification problems.
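As a toy illustration of the objective-perturbation idea attributed to @cite_24, the sketch below perturbs the coefficients of a one-dimensional squared loss rather than the fitted weight itself. The clipping assumption |x|, |y| ≤ 1 and the loose sensitivity bound of 2 are simplifications for illustration, not the construction of the cited paper.

```python
import math
import random

def laplace_noise(scale):
    """One draw from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_linear_regression_1d(xs, ys, epsilon):
    """Perturb the loss coefficients, then minimise the noisy loss.

    With |x|, |y| <= 1 the squared loss is
        L(w) = Sxx * w^2 - 2 * Sxy * w + Syy,
    and one record changes Sxx and Sxy by at most 1 each, giving an L1
    sensitivity of 2 over the two coefficients that determine the argmin.
    """
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    noisy_xx = Sxx + laplace_noise(2.0 / epsilon)
    noisy_xy = Sxy + laplace_noise(2.0 / epsilon)
    # Argmin of the noisy quadratic; guard against a non-positive
    # leading coefficient, where the quadratic has no minimum.
    return noisy_xy / noisy_xx if noisy_xx > 0 else 0.0
```

Perturbing the loss coefficients once, up front, is what lets the subsequent optimization run without any further privacy cost.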
{ "cite_N": [ "@cite_24", "@cite_27", "@cite_42", "@cite_31" ], "mid": [ "2162379889", "2112380340", "2109135024", "2005107218" ], "abstract": [ "e-differential privacy is the state-of-the-art model for releasing sensitive information while protecting privacy. Numerous methods have been proposed to enforce e-differential privacy in various analytical tasks, e.g., regression analysis. Existing solutions for regression analysis, however, are either limited to non-standard types of regression or unable to produce accurate regression results. Motivated by this, we propose the Functional Mechanism, a differentially private method designed for a large class of optimization-based analyses. The main idea is to enforce e-differential privacy by perturbing the objective function of the optimization problem, rather than its results. As case studies, we apply the functional mechanism to address two most widely used regression models, namely, linear regression and logistic regression. Both theoretical analysis and thorough experimental evaluations show that the functional mechanism is highly effective and efficient, and it significantly outperforms existing solutions.", "This paper addresses the important tradeoff between privacy and learnability, when designing algorithms for learning from private databases. We focus on privacy-preserving logistic regression. First we apply an idea of [6] to design a privacy-preserving logistic regression algorithm. This involves bounding the sensitivity of regularized logistic regression, and perturbing the learned classifier with noise proportional to the sensitivity. We then provide a privacy-preserving regularized logistic regression algorithm based on a new privacy-preserving technique: solving a perturbed optimization problem. We prove that our algorithm preserves privacy in the model due to [6]. 
We provide learning guarantees for both algorithms, which are tighter for our new algorithm, in cases in which one would typically apply logistic regression. Experiments demonstrate improved learning performance of our method, versus the sensitivity method. Our privacy-preserving technique does not depend on the sensitivity of the function, and extends easily to a class of convex loss functions. Our work also reveals an interesting connection between regularization and privacy.", "We consider the problem of data mining with formal privacy guarantees, given a data access interface based on the differential privacy framework. Differential privacy requires that computations be insensitive to changes in any particular individual's record, thereby restricting data leaks through the results. The privacy preserving interface ensures unconditionally safe access to the data and does not require from the data miner any expertise in privacy. However, as we show in the paper, a naive utilization of the interface to construct privacy preserving data mining algorithms could lead to inferior data mining results. We address this problem by considering the privacy and the algorithmic requirements simultaneously, focusing on decision tree induction as a sample application. The privacy mechanism has a profound effect on the performance of the methods chosen by the data miner. We demonstrate that this choice could make the difference between an accurate classifier and a completely useless one. Moreover, an improved algorithm can achieve the same level of accuracy and privacy as the naive implementation but with an order of magnitude fewer learning samples.", "Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ∈-differential privacy provides one of the strongest privacy guarantees and has no assumptions about an adversary's background knowledge. 
Most of the existing solutions that ensure ∈-differential privacy are based on an interactive model, where the data miner is only allowed to pose aggregate queries to the database. In this paper, we propose the first anonymization algorithm for the non-interactive setting based on the generalization technique. The proposed solution first probabilistically generalizes the raw data and then adds noise to guarantee ∈-differential privacy. As a sample application, we show that the anonymized data can be used effectively to build a decision tree induction classifier. Experimental results demonstrate that the proposed non-interactive anonymization algorithm is scalable and performs better than the existing solutions for classification analysis." ] }
1508.00192
2949654213
WaveCluster is an important family of grid-based clustering algorithms that are capable of finding clusters of arbitrary shapes. In this paper, we investigate techniques to perform WaveCluster while ensuring differential privacy. Our goal is to develop a general technique for achieving differential privacy on WaveCluster that accommodates different wavelet transforms. We show that straightforward techniques based on synthetic data generation and introduction of random noise when quantizing the data, though generally preserving the distribution of data, often introduce too much noise to preserve useful clusters. We then propose two optimized techniques, PrivTHR and PrivTHREM, which can significantly reduce data distortion during two key steps of WaveCluster: the quantization step and the significant grid identification step. We conduct extensive experiments based on four datasets that are particularly interesting in the context of clustering, and show that PrivTHR and PrivTHREM achieve high utility when privacy budgets are properly allocated.
Differentially private cluster analysis has also been studied in prior work. @cite_2 proposed differentially private model fitting based on genetic algorithms, with applications to k-means clustering. McSherry @cite_34 introduced the PINQ framework, which has been applied to achieve differential privacy for k-means clustering using an iterative algorithm @cite_1 . @cite_25 proposed the sample-aggregate framework, which calibrates the noise magnitude according to the smooth sensitivity of a function; they showed that the framework can be applied to k-means clustering under the assumption that the dataset is well separated. These research efforts primarily focus on centroid-based clustering, such as k-means, which is best suited to separating convex clusters and provides insufficient spatial information to detect clusters with complex shapes, e.g., concave shapes. In contrast to these research efforts, we propose techniques that enforce differential privacy on WaveCluster, which is not restricted to well-separated datasets and can detect clusters of arbitrary shapes.
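A hand-rolled sketch of the iterative noisy-aggregate style of differentially private k-means mentioned above: each Lloyd iteration releases only a noisy count and noisy coordinate sums per cluster. This is not PINQ's actual API; the [0, 1]^2 domain assumption and the even budget split are illustrative choices.

```python
import math
import random

def laplace_noise(scale):
    """One draw from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_kmeans_step(points, centers, epsilon):
    """One Lloyd iteration from noisy per-cluster counts and sums.

    Assumes points lie in [0, 1]^2, so one record shifts each released
    aggregate by at most 1; the per-iteration budget is split evenly
    over the three aggregate vectors (count, sum_x, sum_y).
    """
    k = len(centers)
    counts = [0.0] * k
    sums = [[0.0, 0.0] for _ in range(k)]
    for px, py in points:
        j = min(range(k),
                key=lambda i: (px - centers[i][0]) ** 2 + (py - centers[i][1]) ** 2)
        counts[j] += 1.0
        sums[j][0] += px
        sums[j][1] += py
    eps = epsilon / 3.0
    new_centers = []
    for j in range(k):
        c = counts[j] + laplace_noise(1.0 / eps)
        sx = sums[j][0] + laplace_noise(1.0 / eps)
        sy = sums[j][1] + laplace_noise(1.0 / eps)
        # Keep the old centre when the noisy count is too small to divide by.
        new_centers.append((sx / c, sy / c) if c > 1.0 else centers[j])
    return new_centers
```

Only the aggregates are perturbed, so repeated iterations compose: running T iterations at budget ε per iteration costs Tε in total under basic composition.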
{ "cite_N": [ "@cite_34", "@cite_1", "@cite_25", "@cite_2" ], "mid": [ "2012873258", "1981330950", "2101771965", "2100761657" ], "abstract": [ "Privacy Integrated Queries (PINQ) is an extensible data analysis platform designed to provide unconditional privacy guarantees for the records of the underlying data sets. PINQ provides analysts with access to records through an SQL-like declarative language (LINQ) amidst otherwise arbitrary C# code. At the same time, the design of PINQ's analysis language and its careful implementation provide formal guarantees of differential privacy for any and all uses of the platform. PINQ's guarantees require no trust placed in the expertise or diligence of the analysts, broadening the scope for design and deployment of privacy-preserving data analyses, especially by privacy nonexperts.", "We demonstrate Damson, a novel and powerful tool for publishing the results of biomedical research with strong privacy guarantees. Damson is developed based on the theory of differential privacy, which ensures that the adversary cannot infer the presence or absence of any individual from the published results, even with substantial background knowledge. Damson supports a variety of analysis tasks that are common in biomedical studies, including histograms, marginals, data cubes, classification, regression, clustering, and ad-hoc selection-counts. Additionally, Damson contains an effective query optimization engine, which obtains high accuracy for analysis results, while minimizing the privacy costs of performing such analysis.", "We introduce a new, generic framework for private data analysis.The goal of private data analysis is to release aggregate information about a data set while protecting the privacy of the individuals whose information the data set contains.Our framework allows one to release functions f of the data withinstance-based additive noise. 
That is, the noise magnitude is determined not only by the function we want to release, but also bythe database itself. One of the challenges is to ensure that the noise magnitude does not leak information about the database. To address that, we calibrate the noise magnitude to the smoothsensitivity of f on the database x --- a measure of variabilityof f in the neighborhood of the instance x. The new frameworkgreatly expands the applicability of output perturbation, a technique for protecting individuals' privacy by adding a smallamount of random noise to the released statistics. To our knowledge, this is the first formal analysis of the effect of instance-basednoise in the context of data privacy. Our framework raises many interesting algorithmic questions. Namely,to apply the framework one must compute or approximate the smoothsensitivity of f on x. We show how to do this efficiently for several different functions, including the median and the cost ofthe minimum spanning tree. We also give a generic procedure based on sampling that allows one to release f(x) accurately on manydatabases x. This procedure is applicable even when no efficient algorithm for approximating smooth sensitivity of f is known orwhen f is given as a black box. We illustrate the procedure by applying it to k-SED (k-means) clustering and learning mixtures of Gaussians.", "epsilon-differential privacy is rapidly emerging as the state-of-the-art scheme for protecting individuals' privacy in published analysis results over sensitive data. The main idea is to perform random perturbations on the analysis results, such that any individual's presence in the data has negligible impact on the randomized results. This paper focuses on analysis tasks that involve model fitting, i.e., finding the parameters of a statistical model that best fit the dataset. 
For such tasks, the quality of the differentially private results depends upon both the effectiveness of the model fitting algorithm, and the amount of perturbations required to satisfy the privacy guarantees. Most previous studies start from a state-of-the-art, non-private model fitting algorithm, and develop a differentially private version. Unfortunately, many model fitting algorithms require intensive perturbations to satisfy -differential privacy, leading to poor overall result quality. Motivated by this, we propose PrivGene, a general-purpose differentially private model fitting solution based on genetic algorithms (GA). PrivGene needs significantly less perturbations than previous methods, and it achieves higher overall result quality, even for model fitting tasks where GA is not the first choice without privacy considerations. Further, PrivGene performs the random perturbations using a novel technique called the enhanced exponential mechanism, which improves over the exponential mechanism by exploiting the special properties of model fitting tasks. As case studies, we apply PrivGene to three common analysis tasks involving model fitting: logistic regression, SVM classification, and k-means clustering. Extensive experiments using real data confirm the high result quality of PrivGene, and its superiority over existing methods." ] }
1508.00488
1858390837
This paper introduces LABurst, a general technique for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.
One of the most well-known works on detecting events from microblog streams is Sakaki, Okazaki, and Matsuo's 2010 paper on detecting earthquakes in Japan using Twitter @cite_1 . They show not only that one can detect earthquakes on Twitter but also that it can be done simply by tracking the frequencies of earthquake-related tokens. Surprisingly, this approach can outperform geological earthquake detection tools, since digital data propagates faster than tremor waves in the Earth's crust. Though this research is limited in that it requires pre-specified tokens and is highly domain- and location-specific (Japan has a high density of Twitter users, so earthquake detection may perform less well in areas with fewer Twitter users), it demonstrates a significant use case and the potential of such applications.
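The frequency-tracking core of this style of detector can be sketched in a few lines. The trailing-mean baseline and the threshold factor below are illustrative stand-ins, not Sakaki et al.'s trained classifier or their probabilistic spatiotemporal model.

```python
def is_burst(history, current, factor=3.0):
    """Flag a burst when the current per-interval count of a tracked
    token (e.g. "earthquake") exceeds `factor` times the trailing mean.

    `history` holds counts from previous intervals; the floor of 1.0 on
    the baseline keeps near-silent tokens from triggering on tiny counts.
    """
    if not history:
        return False
    baseline = max(sum(history) / len(history), 1.0)
    return current > factor * baseline
```

For example, a token averaging 2-3 mentions per interval that suddenly draws 20 mentions trips the detector, while a rise to 4 does not.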
{ "cite_N": [ "@cite_1" ], "mid": [ "2124499489" ], "abstract": [ "Twitter, a popular microblogging service, has received much attention recently. An important characteristic of Twitter is its real-time nature. For example, when an earthquake occurs, people make many Twitter posts (tweets) related to the earthquake, which enables detection of earthquake occurrence promptly, simply by observing the tweets. As described in this paper, we investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We consider each Twitter user as a sensor and apply Kalman filtering and particle filtering, which are widely used for location estimation in ubiquitous pervasive computing. The particle filter works better than other comparable methods for estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (96 of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and sends e-mails to registered users. Notification is delivered much faster than the announcements that are broadcast by the JMA." ] }
1508.00488
1858390837
This paper introduces LABurst, a general technique for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.
Similar to Petrović, Weng and Lee's 2011 paper on EDCoW, short for Event Detection with Clustering of Wavelet-based Signals, identifies events from Twitter without seed keywords @cite_14 . After stringent filtering (removing stop words, common words, and non-English tokens), EDCoW uses wavelet analysis to isolate and identify bursts in token usage as a sliding window advances along the social media stream. Beyond the heavy filtering of the input data, this approach exhibits notable similarities with the language-agnostic method we describe herein in its reliance on bursts to detect event-related tokens. These methods, however, operate retrospectively, focusing on daily news rather than the breaking event detection on which our research focuses. Becker, Naaman, and Gravano's 2011 paper on identifying events in Twitter also falls under retrospective analysis, but their findings demonstrate reasonable performance in identifying events by using classification to separate tweets about real-world events from non-event messages @cite_12 . Similarly, the authors of @cite_4 employ a retrospective technique to separate tweets into global, event-related topics and personal topics.
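The wavelet-feature step that EDCoW builds on can be sketched with a single level of the Haar transform. This shows only the burst-energy cue; EDCoW's autocorrelation filtering and modularity-based clustering are omitted, and the half-scaling convention is just one common normalization.

```python
def haar_detail_energy(signal):
    """Energy of the level-1 Haar detail coefficients of a token's
    per-interval frequency signal.

    A flat signal scores zero; bursty jumps between adjacent intervals
    show up as large detail coefficients and hence high energy.
    """
    details = [(signal[i] - signal[i + 1]) / 2.0
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)
```

Ranking tokens by this energy inside a sliding window surfaces the bursty candidates that a clustering stage would then group into events.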
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_12" ], "mid": [ "11244355", "2137553870", "" ], "abstract": [ "Twitter, as a form of social media, is fast emerging in recent years. Users are using Twitter to report real-life events. This paper focuses on detecting those events by analyzing the text stream in Twitter. Although event detection has long been a research topic, the characteristics of Twitter make it a non-trivial task. Tweets reporting such events are usually overwhelmed by high flood of meaningless “babbles”. Moreover, event detection algorithm needs to be scalable given the sheer amount of tweets. This paper attempts to tackle these challenges with EDCoW (Event Detection with Clustering of Wavelet-based Signals). EDCoW builds signals for individual words by applying wavelet analysis on the frequencybased raw signals of the words. It then filters away the trivial words by looking at their corresponding signal autocorrelations. The remaining words are then clustered to form events with a modularity-based graph partitioning technique. Experimental results show promising result of EDCoW.", "Microblogs such as Twitter reflect the general public's reactions to major events. Bursty topics from microblogs reveal what events have attracted the most online attention. Although bursty event detection from text streams has been studied before, previous work may not be suitable for microblogs because compared with other text streams such as news articles and scientific publications, microblog posts are particularly diverse and noisy. To find topics that have bursty patterns on microblogs, we propose a topic model that simultaneously captures two observations: (1) posts published around the same time are more likely to have the same topic, and (2) posts published by the same user are more likely to have the same topic. The former helps find event-driven posts while the latter helps identify and filter out \"personal\" posts. 
Our experiments on a large Twitter dataset show that there are more meaningful and unique bursty topics in the top-ranked results returned by our model than an LDA baseline and two degenerate variations of our model. We also show some case studies that demonstrate the importance of considering both the temporal information and users' personal interests for bursty topic detection from microblogs.", "" ] }
1508.00488
1858390837
This paper introduces LABurst, a general technique for identifying key moments, or moments of high impact, in social media streams without the need for domain-specific information or seed keywords. We leverage machine learning to model temporal patterns around bursts in Twitter's unfiltered public sample stream and build a classifier to identify tokens experiencing these bursts. We show LABurst performs competitively with existing burst detection techniques while simultaneously providing insight into and detection of unanticipated moments. To demonstrate our approach's potential, we compare two baseline event-detection algorithms with our language-agnostic algorithm to detect key moments across three major sporting competitions: 2013 World Series, 2014 Super Bowl, and 2014 World Cup. Our results show LABurst outperforms a time series analysis baseline and is competitive with a domain-specific baseline even though we operate without any domain knowledge. We then go further by transferring LABurst's models learned in the sports domain to the task of identifying earthquakes in Japan and show our method detects large spikes in earthquake-related tokens within two minutes of the actual event.
More recently, the 2013 paper on TopicSketch seeks to perform real-time event detection from Twitter streams ``without pre-defined topical keywords'' by maintaining acceleration features across three levels of granularity: individual token, bigram, and total stream @cite_5 . As with Petrović's use of LSH, the authors leverage ``sketches'' and dimensionality reduction to facilitate event detection, and their method also relies on language-specific similarities. Furthermore, they focus only on tweets from Singapore rather than the worldwide stream. In contrast, our approach is differentiated primarily by its language-agnosticism and its use of the unfiltered stream from Twitter's global network.
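The acceleration feature itself is just a discrete second difference of a token's frequency series; the sketch below shows that computation without the sketch-based dimensionality reduction that makes TopicSketch scale.

```python
def acceleration(counts):
    """Discrete second difference of a per-interval frequency series,
    i.e. the rate of change of the token's velocity.

    A steady (even a steadily growing) token yields zeros; a sudden
    surge in the final interval produces a large positive value.
    """
    return [counts[i + 1] - 2 * counts[i] + counts[i - 1]
            for i in range(1, len(counts) - 1)]
```

Maintaining this quantity per token, per bigram, and for the whole stream gives the three granularities of signal that the detection stage thresholds.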
{ "cite_N": [ "@cite_5" ], "mid": [ "2011949237" ], "abstract": [ "Twitter has become one of the largest platforms for users around the world to share anything happening around them with friends and beyond. A bursty topic in Twitter is one that triggers a surge of relevant tweets within a short time, which often reflects important events of mass interest. How to leverage Twitter for early detection of bursty topics has therefore become an important research problem with immense practical value. Despite the wealth of research work on topic modeling and analysis in Twitter, it remains a huge challenge to detect bursty topics in real-time. As existing methods can hardly scale to handle the task with the tweet stream in real-time, we propose in this paper Topic Sketch, a novel sketch-based topic model together with a set of techniques to achieve real-time detection. We evaluate our solution on a tweet stream with over 30 million tweets. Our experiment results show both efficiency and effectiveness of our approach. Especially it is also demonstrated that Topic Sketch can potentially handle hundreds of millions tweets per day which is close to the total number of daily tweets in Twitter and present bursty event in finer-granularity." ] }
1507.08888
2253256285
Recently, assurance cases have received much attention in the field of software-based computer systems and IT services. However, software changes very often and there are no strong regulations for software. These facts are the two main challenges to be addressed in software assurance cases. We propose a development method for assurance cases based on continuous revision at every stage of the system lifecycle, including in-operation and service recovery in failure cases. The quality of dependability arguments is improved by multiple stakeholders who check with each other. This paper reports our experience with the proposed method in a case study of the ASPEN education service. The case study demonstrates that the continuous updates create a significant amount of active risk communication between stakeholders. This gives us a promising perspective for the long-term improvement of service dependability with lifecycle assurance cases.
Since modern software is well modeled in design, there has been increasing interest in a model-driven approach to the development of software assurance cases. Denney and Pai show an automated assembly of safety cases from lightweight formal models of software development @cite_20 @cite_5 . @cite_7 shows how to generate an assurance argument for a system using information extracted directly from design, analysis, and development models of that system.
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_20" ], "mid": [ "2170178247", "2093830757", "1498923079" ], "abstract": [ "We present AdvoCATE, an Assurance Case Automation ToolsEt, to support the automated construction and assessment of safety cases. In addition to manual creation and editing, it has a growing suite of automated features. In this paper, we highlight its capabilities for (i) inclusion of specific metadata, (ii) translation to and from various formats, including those of other widely used safety case tools, (iii) composition, with auto-generated safety case fragments, and (iv) computation of safety case metrics which, we believe, will provide a transparent, quantitative basis for assessment of the state of a safety case as it evolves. The tool primarily supports the Goal Structuring Notation (GSN), is compliant with the GSN Community Standard Version 1, and the Object Modeling Group Argumentation Metamodel (OMG ARM).", "Assurance cases are used to demonstrate confidence in properties of interest for a system, e.g. For safety or security. A model-based assurance case seeks to bring the benefits of model-driven engineering, such as automation, transformation and validation, to what is currently a lengthy and informal process. In this paper we develop a model-based assurance approach, based on a weaving model, which allows integration between assurance case, design and process models and meta-models. In our approach, the assurance case itself is treated as a structured model, with the aim that all entities in the assurance case become linked explicitly to the models that represent them. We show how it is possible to exploit the weaving model for automated generation of assurance cases. 
Building upon these results, we discuss how a seamless model-driven approach to assurance cases can be achieved and examine the utility of increased formality and automation.", "We describe a lightweight methodology to support the automatic assembly of safety cases from tabular requirements specifications. The resulting safety case fragments provide an alternative, graphical, view of the requirements. The safety cases can be modified and augmented with additional information. In turn, these modifications can be mapped back to extensions of the tabular requirements, with which they are kept consistent, thus avoiding the need for engineers to maintain an additional artifact. We formulate our approach on top of an idealized process, and illustrate the applicability of the methodology on excerpts of requirements specifications for an experimental Unmanned Aircraft System." ] }
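The automated assembly of safety-case fragments from tabular requirements, as described above, can be sketched as a simple mapping from requirement rows to goal/solution nodes. The node labels, field names, and requirements below are illustrative assumptions and do not reproduce the notation of the cited tools (GSN is only loosely imitated).

```python
# Sketch: auto-assemble a minimal goal-structure fragment from a
# tabular requirements specification. Each row becomes a sub-goal
# supported by a solution (evidence) node.
requirements = [
    {"id": "R1", "claim": "Altitude limit is enforced", "evidence": "Test suite T-7"},
    {"id": "R2", "claim": "Link loss triggers return-home", "evidence": "Simulation S-2"},
]

def assemble_case(top_goal, rows):
    lines = [f"Goal G0: {top_goal}"]
    for i, row in enumerate(rows, 1):
        lines.append(f"  Goal G{i} ({row['id']}): {row['claim']}")
        lines.append(f"    Solution Sn{i}: {row['evidence']}")
    return "\n".join(lines)

print(assemble_case("System safety requirements hold", requirements))
```

Keeping the generated fragment in sync with the table, rather than maintaining a second artifact, is exactly the consistency property the cited methodology argues for.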
1507.08956
2083970834
Digital maps have become a part of our daily life with a number of commercial and free map services. These services still have huge potential for enhancement with rich semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowdsensing approach to automatically enrich digital maps with different road semantics like tunnels, bumps, bridges, footbridges, crosswalks, and road capacity, among others. Our analysis shows that cell-phone sensors carried by humans in vehicles or walking are affected by the different road features, which can be mined to extend the features of both free and commercial mapping services. We present the design and implementation of Map++ and evaluate it in a large city. Our evaluation shows that we can detect the different semantics accurately with at most a 3% false positive rate and a 6% false negative rate for both vehicle- and pedestrian-based features. Moreover, we show that Map++ has a small energy footprint on cell-phones, highlighting its promise as a ubiquitous digital map enriching service.
For indoor maps, the authors of @cite_21 @cite_16 @cite_25 recently proposed constructing indoor maps and inferring indoor structures like elevators using sensors available on smart-phones. Map++ uses a similar approach for semantics inference. However, outdoor maps have completely different and more diverse semantics.
{ "cite_N": [ "@cite_21", "@cite_25", "@cite_16" ], "mid": [ "1764672635", "2145259302", "2164729653" ], "abstract": [ "Although GPS has been considered a ubiquitous outdoor localization technology, we are still far from a similar technology for indoor environments. While a number of technologies have been proposed for indoor localization, they are isolated efforts that are far from a true ubiquitous localization system. A ubiquitous indoor positioning system is envisioned to be deployed on a large scale worldwide, with minimum overhead, to work with heterogeneous devices, and to allow users to roam seamlessly from indoor to outdoor environments. Such a system will enable a wide set of applications including worldwide seamless direction finding between indoor locations, enhancing first responders' safety by providing anywhere localization and floor plans, and providing a richer environment for location-aware social networking applications. We describe an architecture for the ubiquitous indoor positioning system (IPS) and the challenges that have to be addressed to materialize it. We then focus on the feasibility of automating the construction of a worldwide indoor floor plan and fingerprint database which, as we believe, is one of the main challenges that limit the existence of a ubiquitous IPS system. Our proof of concept uses a crowd-sourcing approach that leverages the embedded sensors in today's cell phones as a worldwide distributed floor plan generation tool. This includes constructing the floor plans and determining the areas of interest (corridors, offices, meeting rooms, elevators, etc). The cloud computing concepts are also adopted for the processing and storage of the huge amount of data generated and requested by the system's users. Our results show the ability of the system to construct an accurate floor plan and identify the areas of interest with more than 90% accuracy. 
We also identify different research directions for addressing the challenges of realizing a true ubiquitous IPS system.", "The existence of a worldwide indoor floor-plans database can lead to significant growth in pervasive computing, especially for indoor environments. In this demonstration, we show CrowdInside: a crowdsourcing-based system for the automatic construction of buildings floor-plans. CrowdInside leverages the smart phones sensors that are ubiquitously available with humans who use a building to automatically and transparently construct accurate motion traces. CrowdInside processes the collected motion traces from different visitors of a building to detect its overall floor-plan shape as well as higher level semantics such as number of rooms, their shapes and locations, and corridors shapes along with a variety of points of interest in the environment. The goal of this demo is to show how the accurate motion traces are constructed as well as how the building floor-plan can be automatically generated.", "The existence of a worldwide indoor floorplans database can lead to significant growth in location-based applications, especially for indoor environments. In this paper, we present CrowdInside: a crowdsourcing-based system for the automatic construction of buildings floorplans. CrowdInside leverages the smart phones sensors that are ubiquitously available with humans who use a building to automatically and transparently construct accurate motion traces. These accurate traces are generated based on a novel technique for reducing the errors in the inertial motion traces by using the points of interest in the indoor environment, such as elevators and stairs, for error resetting. The collected traces are then processed to detect the overall floorplan shape as well as higher level semantics such as detecting rooms and corridors shapes along with a variety of points of interest in the environment. 
Implementation of the system in two testbeds, using different Android phones, shows that CrowdInside can detect the points of interest accurately with a 0.2% false positive rate and a 1.3% false negative rate. In addition, the proposed error resetting technique leads to more than 12 times enhancement in the median distance error compared to the state-of-the-art. Moreover, the detailed floorplan can be accurately estimated with a relatively small number of traces. This number is amortized over the number of users of the building. We also discuss possible extensions to CrowdInside for inferring even higher level semantics about the discovered floorplans." ] }
1507.08956
2083970834
Digital maps have become a part of our daily life with a number of commercial and free map services. These services still have huge potential for enhancement with rich semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowdsensing approach to automatically enrich digital maps with different road semantics like tunnels, bumps, bridges, footbridges, crosswalks, and road capacity, among others. Our analysis shows that cell-phone sensors carried by humans in vehicles or walking are affected by the different road features, which can be mined to extend the features of both free and commercial mapping services. We present the design and implementation of Map++ and evaluate it in a large city. Our evaluation shows that we can detect the different semantics accurately with at most a 3% false positive rate and a 6% false negative rate for both vehicle- and pedestrian-based features. Moreover, we show that Map++ has a small energy footprint on cell-phones, highlighting its promise as a ubiquitous digital map enriching service.
Recently, inertial sensors embedded in smart phones have enabled detection of different road features. In @cite_3 , we showed that cell-phone inertial sensors can be used to recognize different physical and logical landmarks (e.g., bridges, turns, and cellular signal anomalies). The goal was to provide an accurate and energy-efficient GPS replacement. Map++ extends this work by adding more semantic features, such as roundabouts, and by using a classifier-based approach, which provides the semantics in a more intuitive way with a simpler implementation and a more compact representation. In addition, Map++ enriches the road semantics with a novel class of pedestrian-based semantic features such as underpasses, footbridges, and road capacity, among others.
{ "cite_N": [ "@cite_3" ], "mid": [ "1983363341" ], "abstract": [ "We present Dejavu, a system that uses standard cell-phone sensors to provide accurate and energy-efficient outdoor localization suitable for car navigation. Our analysis shows that different road landmarks have a unique signature on cell-phone sensors; for example, going inside tunnels, moving over bumps, going up a bridge, and even potholes all affect the inertial sensors on the phone in a unique pattern. Dejavu employs a dead-reckoning localization approach and leverages these road landmarks, among other automatically discovered abundant virtual landmarks, to reset the accumulated error and achieve accurate localization. To maintain a low energy profile, Dejavu uses only energy-efficient sensors or sensors that are already running for other purposes. We present the design of Dejavu and how it leverages crowd-sourcing to automatically learn virtual landmarks and their locations. Our evaluation results from implementation on different android devices in both city and highway driving show that Dejavu can localize cell phones to within 8.4 m median error in city roads and 16.6 m on highways. Moreover, compared to GPS and other state-of-the-art systems, Dejavu can extend the battery lifetime by 347%, achieving even better localization results than GPS in the more challenging in-city driving conditions." ] }
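The dead-reckoning-with-landmark-reset idea behind Dejavu can be illustrated with a one-dimensional sketch: noisy speed integration drifts linearly, and snapping the estimate to a landmark of known position bounds the accumulated error. All values, the constant noise bias, and the landmark schedule are illustrative assumptions.

```python
# Sketch: 1-D dead reckoning with landmark-based error resetting.
def dead_reckon(true_speeds, noise, landmarks, dt=1.0):
    """Integrate noisy speeds; landmarks maps time_step -> known
    position, used to reset the drifting estimate."""
    est = 0.0
    trace = []
    for t, v in enumerate(true_speeds):
        est += (v + noise[t]) * dt   # noisy integration accumulates drift
        if t in landmarks:
            est = landmarks[t]       # reset at a known landmark
        trace.append(est)
    return trace

speeds = [1.0] * 6
noise = [0.2] * 6                    # constant bias -> linear drift
print(dead_reckon(speeds, noise, landmarks={2: 3.0}))
```

With the reset at step 2 the final error stays at the drift accumulated since the last landmark, instead of growing with the full trace length.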
1507.08956
2083970834
Digital maps have become a part of our daily life with a number of commercial and free map services. These services still have huge potential for enhancement with rich semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowdsensing approach to automatically enrich digital maps with different road semantics like tunnels, bumps, bridges, footbridges, crosswalks, and road capacity, among others. Our analysis shows that cell-phone sensors carried by humans in vehicles or walking are affected by the different road features, which can be mined to extend the features of both free and commercial mapping services. We present the design and implementation of Map++ and evaluate it in a large city. Our evaluation shows that we can detect the different semantics accurately with at most a 3% false positive rate and a 6% false negative rate for both vehicle- and pedestrian-based features. Moreover, we show that Map++ has a small energy footprint on cell-phones, highlighting its promise as a ubiquitous digital map enriching service.
Monitoring road conditions using inertial sensors was proposed in @cite_26 @cite_9 . These systems mainly use inertial sensors to detect potholes and traffic conditions and use GPS to localize the sensed road problems. Both systems use external sensor chips, which have higher sampling rates and lower noise than the chips in typical cell-phones on the market. In addition, they depend on the energy-hungry GPS. Map++, on the other hand, detects a significantly richer set of features, based on both vehicle and pedestrian traces, using lower energy-profile sensors available in commodity cell-phones.
{ "cite_N": [ "@cite_9", "@cite_26" ], "mid": [ "2144169341", "2168463792" ], "abstract": [ "We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.). To monitor road and traffic conditions in such a setting, we present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in normal course. In this paper, we focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio, and or GPS sensors in these phones to detect potholes, bumps, braking, and honking. Nericell addresses several challenges including virtually reorienting the accelerometer on a phone that is at an arbitrary orientation, and performing honk detection and localization in an energy efficient manner. We also touch upon the idea of triggered sensing, where dissimilar sensors are used in tandem to conserve energy. We evaluate the effectiveness of the sensing functions in Nericell based on experiments conducted on the roads of Bangalore, with promising results.", "This paper investigates an application of mobile sensing: detecting and reporting the surface conditions of roads. We describe a system and associated algorithms to monitor this important civil infrastructure using a collection of sensor-equipped vehicles. 
This system, which we call the Pothole Patrol (P2), uses the inherent mobility of the participating vehicles, opportunistically gathering data from vibration and GPS sensors, and processing the data to assess road surface conditions. We have deployed P2 on 7 taxis running in the Boston area. Using a simple machine-learning approach, we show that we are able to identify potholes and other severe road surface anomalies from accelerometer data. Via careful selection of training data and signal features, we have been able to build a detector that misidentifies good road segments as having potholes less than 0.2% of the time. We evaluate our system on data from thousands of kilometers of taxi drives, and show that it can successfully detect a number of real potholes in and around the Boston area. After clustering to further reduce spurious detections, manual inspection of reported potholes shows that over 90% contain road anomalies in need of repair." ] }
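The core of the accelerometer-based pothole detection discussed in this record is a spike test on the vertical acceleration. The sketch below is a deliberately simplified version: mean subtraction stands in for gravity removal, and the threshold and signal values are illustrative assumptions, not parameters from the cited systems.

```python
# Sketch: detect pothole-like events as spikes in the vertical
# accelerometer signal after removing its mean (gravity component).
def detect_spikes(z_accel, threshold):
    mean = sum(z_accel) / len(z_accel)
    return [i for i, z in enumerate(z_accel) if abs(z - mean) > threshold]

# Mostly smooth ride (values near 9.8 m/s^2) with a jolt at sample 4.
ride = [9.8, 9.7, 9.9, 9.8, 15.2, 9.8, 9.7]
print(detect_spikes(ride, threshold=3.0))  # → [4]
```

Production systems add filtering, speed-dependent thresholds, and clustering of nearby detections, but the thresholded-spike idea is the starting point.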
1507.08754
2198087598
This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs' performance on image classification. The widely used standard Dropout has the advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both, rather than discarding one subset. We also introduce two rotational convolution techniques, i.e., rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC), to boost CNNs' robustness to rotation transformations. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on the ImageNet 2012 classification task demonstrate that sDropout not only enhances performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust to rotation transformations. Overall, FRPC together with sDropout brings a @math (model of Zeiler and Fergus zeiler2013visualizing , 10-view, top-1) accuracy increase on the ImageNet 2012 classification task compared to the original network.
In the last few years, we have witnessed tremendous performance improvements in CNNs. These improvements are mainly of two types: building more powerful architectures and designing effective regularization techniques. To be more capable of fitting training datasets, especially large-scale ones, models are made deeper and larger, e.g., in the work of Simonyan et al. @cite_10 and Szegedy et al. @cite_24 . Using smaller strides to capture more image information also helps, as in the work of Zeiler and Fergus @cite_21 . The Rectified Linear Unit (ReLU) @cite_7 and the Parametric Rectified Linear Unit (PReLU) @cite_1 contribute to the recent success of improved activation functions. On the other hand, regularization is an important aspect of boosting the testing performance of CNNs. Data augmentation @cite_0 @cite_18 and the Dropout technique @cite_6 are currently the common ways to regularize models. One major contribution of this paper is an improvement of Dropout.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_21", "@cite_1", "@cite_6", "@cite_24", "@cite_0", "@cite_10" ], "mid": [ "", "", "2952186574", "1677182931", "1904365287", "2950179405", "", "1686810756" ], "abstract": [ "", "", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). 
To our knowledge, our result is the first to surpass the reported human-level performance (5.1% [26]) on this dataset.", "When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition.", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. 
Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision." ] }
1507.08754
2198087598
This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs' performance on image classification. The widely used standard Dropout has the advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both, rather than discarding one subset. We also introduce two rotational convolution techniques, i.e., rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC), to boost CNNs' robustness to rotation transformations. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on the ImageNet 2012 classification task demonstrate that sDropout not only enhances performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust to rotation transformations. Overall, FRPC together with sDropout brings a @math (model of Zeiler and Fergus zeiler2013visualizing , 10-view, top-1) accuracy increase on the ImageNet 2012 classification task compared to the original network.
In standard Dropout @cite_13 , each element of a layer's output is kept with probability p and otherwise set to 0 with probability (1 - p) (usually p = 0.5). It can be seen as a stochastic regularization technique. Since the successful application of Dropout in feedforward neural networks for speech and object recognition @cite_13 , several works have improved and analyzed this technique. A generalization of Dropout, 'standout', proposed by Ba et al. @cite_4 , uses a binary belief network to compute the keep probability for each hidden variable, motivated by the observation that several hidden units are highly correlated in the pre-dropout activities. DropConnect, proposed by Wan et al. @cite_8 , sets a randomly selected subset of weights, rather than activations, to zero. To speed up the training process, Wang et al. @cite_12 propose fast dropout, which uses an objective function approximately equivalent to that of real standard dropout training but does not actually sample the inputs. sDropout is an extension of Dropout and can easily be combined with the methods mentioned above without conflict.
{ "cite_N": [ "@cite_13", "@cite_12", "@cite_4", "@cite_8" ], "mid": [ "", "35527955", "2136836265", "4919037" ], "abstract": [ "", "Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (, 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do fast dropout training by sampling from or integrating a Gaussian approximation, instead of doing Monte Carlo optimization of this objective. This approximation, justified by the central limit theorem and empirical evidence, gives an order of magnitude speedup and more stability. We show how to do fast dropout training for classification, regression, and multilayer neural networks. Beyond dropout, our technique is extended to integrate out other types of noise and small image transformations.", "Recently, it was shown that deep neural networks can perform very well if the activities of hidden units are regularized during learning, e.g., by randomly dropping out 50% of their activities. We describe a method called 'standout' in which a binary belief network is overlaid on a neural network and is used to regularize its hidden units by selectively setting activities to zero. This 'adaptive dropout network' can be trained jointly with the neural network by approximately computing local expectations of binary dropout variables, computing derivatives using back-propagation, and using stochastic gradient descent. Interestingly, experiments show that the learnt dropout network parameters recapitulate the neural network parameters, suggesting that a good dropout network regularizes activities according to magnitude. 
When evaluated on the MNIST and NORB datasets, we found that our method achieves lower classification error rates than other feature learning methods, including standard dropout, denoising auto-encoders, and restricted Boltzmann machines. For example, our method achieves 0.80% and 5.8% errors on the MNIST and NORB test sets, which is better than state-of-the-art results obtained using feature learning methods, including those that use convolutional architectures.", "We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models." ] }
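The standard Dropout rule discussed in this record (keep each activation with probability p, zero it otherwise) can be sketched in a few lines. The inverted-scaling formulation below (dividing kept units by p during training so no rescaling is needed at test time) is one common variant and an assumption here, not necessarily the exact formulation of the cited papers.

```python
import random

def dropout(x, p, training=True, rng=random):
    """Keep each element with probability p; scale kept units by 1/p
    (inverted dropout) so the expected activation matches test time."""
    if not training or p >= 1.0:
        return list(x)          # no-op at test time or with p = 1
    return [xi / p if rng.random() < p else 0.0 for xi in x]

random.seed(0)
print(dropout([1.0, 2.0, 3.0, 4.0], p=0.5))
```

Each output element is therefore either 0.0 or the input scaled by 1/p, and at test time the activations pass through unchanged.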
1507.08754
2198087598
This paper presents a new version of Dropout called Split Dropout (sDropout) and rotational convolution techniques to improve CNNs' performance on image classification. The widely used standard Dropout has the advantage of preventing deep neural networks from overfitting by randomly dropping units during training. Our sDropout randomly splits the data into two subsets and keeps both, rather than discarding one subset. We also introduce two rotational convolution techniques, i.e., rotate-pooling convolution (RPC) and flip-rotate-pooling convolution (FRPC), to boost CNNs' robustness to rotation transformations. These two techniques encode rotation invariance into the network without adding extra parameters. Experimental evaluations on the ImageNet 2012 classification task demonstrate that sDropout not only enhances performance but also converges faster. Additionally, RPC and FRPC make CNNs more robust to rotation transformations. Overall, FRPC together with sDropout brings a @math (model of Zeiler and Fergus zeiler2013visualizing , 10-view, top-1) accuracy increase on the ImageNet 2012 classification task compared to the original network.
A more direct way to add invariance is data augmentation. Sermanet et al. @cite_5 build a jittered dataset by adding transformed versions of the original training set, including translation, scaling, and rotation. Many more works transform input data to obtain invariance, e.g., Howard @cite_9 and Dosovitskiy et al. @cite_14 . But augmenting the training set with rotated versions does not achieve the same effect as ours, as it cannot be restricted to upper layers. The rotation invariance in our work is achieved by pooling over systematically transformed versions of filters. It is closely related to the recent work of Gens et al. @cite_11 , which pools features over symmetry groups within a neural network.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_14", "@cite_11" ], "mid": [ "", "2949194345", "1804629308", "2136026194" ], "abstract": [ "", "We investigate multiple techniques to improve upon the current state of the art deep convolutional neural network based image classification pipeline. The techniques include adding more image transformations to training data, adding more transformations to generate additional predictions at test time and using complementary models applied to higher resolution images. This paper summarizes our entry in the Imagenet Large Scale Visual Recognition Challenge 2013. Our system achieved a top 5 classification error rate of 13.55% using no external data, which is over a 20% relative improvement on the previous year's winner.", "When deep learning is applied to visual object recognition, data augmentation is often used to generate additional training data without extra labeling cost. It helps to reduce overfitting and increase the performance of the algorithm. In this paper we investigate if it is possible to use data augmentation as the main component of an unsupervised feature learning architecture. To that end we sample a set of random image patches and declare each of them to be a separate single-image surrogate class. We then extend these trivial one-element classes by applying a variety of transformations to the initial 'seed' patches. Finally we train a convolutional neural network to discriminate between these surrogate classes. The feature representation learned by the network can then be used in various vision tasks. We find that this simple feature learning algorithm is surprisingly successful, achieving competitive classification results on several popular vision datasets (STL-10, CIFAR-10, Caltech-101).", "The chief difficulty in object recognition is that objects' classes are obscured by a large number of extraneous sources of variability, such as pose and part deformation. 
These sources of variation can be represented by symmetry groups, sets of composable transformations that preserve object identity. Convolutional neural networks (convnets) achieve a degree of translational invariance by computing feature maps over the translation group, but cannot handle other groups. As a result, these groups' effects have to be approximated by small translations, which often requires augmenting datasets and leads to high sample complexity. In this paper, we introduce deep symmetry networks (symnets), a generalization of convnets that forms feature maps over arbitrary symmetry groups. Symnets use kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension. Like convnets, they are trained with backpropagation. The composition of feature transformations through the layers of a symnet provides a new approach to deep learning. Experiments on NORB and MNIST-rot show that symnets over the affine group greatly reduce sample complexity relative to convnets by better capturing the symmetries in the data." ] }
1507.08610
2225255140
We address a declarative construction of abstract syntax trees with Parsing Expression Grammars. AST operators (constructor, connector, and tagging) are newly defined to specify flexible AST constructions. A new challenge coming with PEGs is the consistency management of ASTs under backtracking and packrat parsing. We introduce the transactional AST machine in order to perform AST operations in the context of the speculative parsing of PEGs. All the consistency control is automated by the analysis of AST operators. The proposed approach is implemented in the Nez parser, written in Java. The performance study shows that the transactional AST machine requires approximately 25% more time in CSV, XML, and C grammars.
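The consistency problem that such a transactional AST machine addresses can be sketched as follows (a minimal hypothetical model, not the actual Nez implementation): AST operations are appended to a log, and when a parsing alternative fails and the parser backtracks, the log is rolled back to the mark taken when that alternative started.

```python
class ASTLog:
    """Toy sketch of transactional AST construction for a backtracking
    parser: node operations are logged speculatively, and a failed
    alternative rolls the log back to a saved mark."""
    def __init__(self):
        self.ops = []           # committed + speculative AST operations

    def mark(self):
        return len(self.ops)    # transaction start point

    def emit(self, op):
        self.ops.append(op)     # speculative AST operation

    def abort(self, mark):
        del self.ops[mark:]     # discard the failed alternative's operations

# The parser tries alternative A, fails and backtracks, then succeeds with B:
log = ASTLog()
m = log.mark()
log.emit(("new", "A"))
log.abort(m)                    # A failed: its AST operations vanish
log.emit(("new", "B"))
print(log.ops)                  # [('new', 'B')]
```

In the real system this bookkeeping is derived automatically from the analysis of the AST operators rather than managed by hand.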
Since Ford presented a formalism of PEGs @cite_1 , many researchers and practitioners have developed PEG-based parser generators: Leg/Peg (for C), Rats! @cite_4 and Mouse @cite_6 (for Java), PEG.js (for JavaScript), and LPeg @cite_17 (for Lua). Basically, these tools rely on language-dependent semantic actions for AST construction. Notably, LPeg provides substring capturing, similarly to our approach, but its other AST constructions depend on semantic actions written in the Lua programming language. In semantic actions, consistency management is the user's responsibility.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_4", "@cite_17" ], "mid": [ "2018045485", "1542200249", "2098396599", "2046801051" ], "abstract": [ "For decades we have been using Chomsky's generative system of grammars, particularly context-free grammars (CFGs) and regular expressions (REs), to express the syntax of programming languages and protocols. The power of generative grammars to express ambiguity is crucial to their original purpose of modelling natural languages, but this very power makes it unnecessarily difficult both to express and to parse machine-oriented languages using CFGs. Parsing Expression Grammars (PEGs) provide an alternative, recognition-based formal foundation for describing machine-oriented syntax, which solves the ambiguity problem by not introducing ambiguity in the first place. Where CFGs express nondeterministic choice between alternatives, PEGs instead use prioritized choice. PEGs address frequently felt expressiveness limitations of CFGs and REs, simplifying syntax definitions and making it unnecessary to separate their lexical and hierarchical components. A linear-time parser can be built for any PEG, avoiding both the complexity and fickleness of LR parsers and the inefficiency of generalized CFG parsing. While PEGs provide a rich set of operators for constructing grammars, they are reducible to two minimal recognition schemas developed around 1970, TS TDPL and gTS GTDPL, which are here proven equivalent in effective recognition power.", "Two recent developments in the field of formal languages are Parsing Expression Grammar (PEG) and packrat parsing. The PEG formalism is similar to BNF, but defines syntax in terms of recognizing strings, rather than constructing them. It is, in fact, precise specification of a backtracking recursive-descent parser. Packrat parsing is a general method to handle backtracking in recursive-descent parsers. It ensures linear working time, at a huge memory cost. 
This paper reports an experiment that consisted of defining the syntax of Java 1.5 in PEG formalism, and literally transcribing the PEG definitions into parsing procedures (accidentally, also in Java). The resulting primitive parser shows an acceptable behavior, indicating that packrat parsing might be an overkill for practical languages. The exercise with defining the Java syntax suggests thatmore work is needed on PEG as a language specification tool.", "We explore how to make the benefits of modularity available for syntactic specifications and present Rats!, a parser generator for Java that supports easily extensible syntax. Our parser generator builds on recent research on parsing expression grammars (PEGs), which, by being closed under composition, prioritizing choices, supporting unlimited lookahead, and integrating lexing and parsing, offer an attractive alternative to context-free grammars. PEGs are implemented by so-called packrat parsers, which are recursive descent parsers that memoize all intermediate results (hence their name). Memoization ensures linear-time performance in the presence of unlimited lookahead, but also results in an essentially lazy, functional parsing technique. In this paper, we explore how to leverage PEGs and packrat parsers as the foundation for extensible syntax. In particular, we show how make packrat parsing more widely applicable by implementing this lazy, functional technique in a strict, imperative language, while also generating better performing parsers through aggressive optimizations. Next, we develop a module system for organizing, modifying, and composing large-scale syntactic specifications. Finally, we describe a new technique for managing (global) parsing state in functional parsers. Our experimental evaluation demonstrates that the resulting parser generator succeeds at providing extensible syntax. In particular, Rats! 
enables other grammar writers to realize real-world language extensions in little time and code, and it generates parsers that consistently out-perform parsers created by two GLR parser generators.", "Current text pattern-matching tools are based on regular expressions. However, pure regular expressions have proven too weak a formalism for the task: many interesting patterns either are difficult to describe or cannot be described by regular expressions. Moreover, the inherent non-determinism of regular expressions does not fit the need to capture specific parts of a match. Motivated by these reasons, most scripting languages nowadays use pattern-matching tools that extend the original regular-expression formalism with a set of ad hoc features, such as greedy repetitions, lazy repetitions, possessive repetitions, ‘longest-match rule,’ lookahead, etc. These ad hoc extensions bring their own set of problems, such as lack of a formal foundation and complex implementations. In this paper, we propose the use of Parsing Expression Grammars (PEGs) as a basis for pattern matching. Following this proposal, we present LPEG, a pattern-matching tool based on PEGs for the Lua scripting language. LPEG unifies the ease of use of pattern-matching tools with the full expressive power of PEGs. Because of this expressive power, it can avoid the myriad of ad hoc constructions present in several current pattern-matching tools. We also present a Parsing Machine that allows a small and efficient implementation of PEGs for pattern matching. Copyright © 2008 John Wiley & Sons, Ltd." ] }
1507.08578
2200911645
We study the persistence exponent for the first passage time of a random walk below the trajectory of another random walk. More precisely, let @math and @math be two centered, weakly dependent random walks. We establish that @math for a non-random @math . In the classical setting, @math , it is well-known that @math . We prove that for any non-trivial @math one has @math and the exponent @math depends only on @math . Our result holds also in the continuous setting, when @math and @math are independent and possibly perturbed Brownian motions or Ornstein-Uhlenbeck processes. In the latter case the probability decays at exponential rate.
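The persistence probability studied here can be illustrated with a toy Monte Carlo experiment (our own sketch; the choice of walks, starting points and trial counts are illustrative assumptions, not the paper's setting): estimate the probability that one simple random walk stays above another up to time n, and observe that it decays as n grows.

```python
import random

def survival_prob(n, trials=4000, seed=0):
    """Monte Carlo estimate of P(X_k > Y_k for all k <= n), where X and Y
    are independent simple random walks with X_0 = 1 and Y_0 = 0.
    A toy stand-in for the persistence probability discussed above."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y, alive = 1, 0, True
        for _ in range(n):
            x += rng.choice((-1, 1))
            y += rng.choice((-1, 1))
            if x <= y:          # X dropped to or below Y: persistence fails
                alive = False
                break
        hits += alive
    return hits / trials

# The survival probability decays polynomially in n; here the classical
# exponent 1/2 applies, since X - Y is itself a centered random walk:
p_100, p_400 = survival_prob(100), survival_prob(400)
print(p_100, p_400)             # p_400 is noticeably smaller than p_100
```

The paper's point is that replacing the barrier by a dependent or perturbed process changes this exponent in a non-trivial way.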
Our result can be understood from various perspectives. One of them is the so-called entropic repulsion. This question was asked in @cite_0 in the context of the Gaussian free field for @math . Namely, the authors studied the repulsive effect on the interface of a wall which is a fixed realization of an i.i.d. field @math . They observe that the tail of @math plays a fundamental role. When it is sub-Gaussian, the effect of the wall is essentially equivalent to that of the wall given by @math , while when the tail is heavier than Gaussian, the interface is pushed upwards much more strongly. It would be interesting to ask an analogous question in our case. By Fact we already know that the disorder has a negligible effect when @math for @math . We expect that when @math the repulsion becomes much stronger.
{ "cite_N": [ "@cite_0" ], "mid": [ "1480598051" ], "abstract": [ "We consider the harmonic crystal, or massless free field, ( ), ( ), that is the centered Gaussian field with covariance given by the Green function of the simple random walk on ℤ d . Our main aim is to obtain quantitative information on the repulsion phenomenon that arises when we condition ( ) to be larger than ( ), ( ) is an IID field (which is also independent of ϕ), for every x in a large region ( ), with N a positive integer and D a bounded subset of ℝ d . We are mostly motivated by results for given typical realizations of σ (quenched set–up), since the conditioned harmonic crystal may be seen as a model for an equilibrium interface, living in a (d+1)–dimensional space, constrained not to go below an inhomogeneous substrate that acts as a hard wall. We consider various types of substrate and we observe that the interface is pushed away from the wall much more than in the case of a flat wall as soon as the upward tail of σ 0 is heavier than Gaussian, while essentially no effect is observed if the tail is sub–Gaussian. In the critical case, that is the one of approximately Gaussian tail, the interplay of the two sources of randomness, ϕ and σ, leads to an enhanced repulsion effect of additive type. This generalizes work done in the case of a flat wall and also in our case the crucial estimates are optimal Large Deviation type asymptotics as ( ) of the probability that ϕ lies above σ in D N ." ] }
1507.08578
2200911645
We study the persistence exponent for the first passage time of a random walk below the trajectory of another random walk. More precisely, let @math and @math be two centered, weakly dependent random walks. We establish that @math for a non-random @math . In the classical setting, @math , it is well-known that @math . We prove that for any non-trivial @math one has @math and the exponent @math depends only on @math . Our result holds also in the continuous setting, when @math and @math are independent and possibly perturbed Brownian motions or Ornstein-Uhlenbeck processes. In the latter case the probability decays at exponential rate.
The paper @cite_0 was followed by @cite_4 , which can be seen as an analogue of our work. Namely, that paper studies a Gaussian free field interface conditioned to lie above a fixed realization of another Gaussian free field. The authors obtain precise estimates for the probability of this event and for the entropic repulsion induced by the conditioning.
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "1480598051", "2002513116" ], "abstract": [ "We consider the harmonic crystal, or massless free field, ( ), ( ), that is the centered Gaussian field with covariance given by the Green function of the simple random walk on ℤ d . Our main aim is to obtain quantitative information on the repulsion phenomenon that arises when we condition ( ) to be larger than ( ), ( ) is an IID field (which is also independent of ϕ), for every x in a large region ( ), with N a positive integer and D a bounded subset of ℝ d . We are mostly motivated by results for given typical realizations of σ (quenched set–up), since the conditioned harmonic crystal may be seen as a model for an equilibrium interface, living in a (d+1)–dimensional space, constrained not to go below an inhomogeneous substrate that acts as a hard wall. We consider various types of substrate and we observe that the interface is pushed away from the wall much more than in the case of a flat wall as soon as the upward tail of σ 0 is heavier than Gaussian, while essentially no effect is observed if the tail is sub–Gaussian. In the critical case, that is the one of approximately Gaussian tail, the interplay of the two sources of randomness, ϕ and σ, leads to an enhanced repulsion effect of additive type. This generalizes work done in the case of a flat wall and also in our case the crucial estimates are optimal Large Deviation type asymptotics as ( ) of the probability that ϕ lies above σ in D N .", "We analyze a model of an interface fluctuating above a rough substrate. It is based on harmonic crystals, or lattice free fields, indexed by ℤ d , d ≥ 3. The phenomenon for which we want to get precise quantitative estimates is the repulsion effect of the substrate on the interface: the substrate is itself a random field, but its randomness is quenched (this generalizes the widely considered case of a flat deterministic substrate). 
With respect to [2] in which the substrate has been taken to be an IID field, here the substrate is an harmonic crystal, as the interface, and as such it is strongly correlated. We obtain the leading asymptotic behavior of the model in the limit of a very extended substrate: we show in particular that, to leading order, the effect of an IID substrate cannot be distinguished from the effect of an harmonic crystal substrate. We observe however that, unlike in the IID substrate case, annealed and quenched models display sharply different features." ] }
1507.08578
2200911645
We study the persistence exponent for the first passage time of a random walk below the trajectory of another random walk. More precisely, let @math and @math be two centered, weakly dependent random walks. We establish that @math for a non-random @math . In the classical setting, @math , it is well-known that @math . We prove that for any non-trivial @math one has @math and the exponent @math depends only on @math . Our result holds also in the continuous setting, when @math and @math are independent and possibly perturbed Brownian motions or Ornstein-Uhlenbeck processes. In the latter case the probability decays at exponential rate.
This perspective was the initial motivation for analyzing the problems in this paper (more precisely the result given in Theorem ). In fact, the question arises from studies of extremal particles of a branching random walk in a time-inhomogeneous random environment. In the companion paper @cite_1 we show that the randomness of the environment has a slowing effect on the position of the maximal particle. Namely, the logarithmic correction to the speed is bigger than in the standard (time-homogeneous) case, which is a consequence of ).
{ "cite_N": [ "@cite_1" ], "mid": [ "70502602" ], "abstract": [ "A method is disclosed of preferentially accessing a group of positions (12 in FIG. 1) for a first type of call over a second type of call in a telephone system (FIG. 1) in which the positions (12) are served by a uniform or automatic call distributing office (1). In accordance with one aspect of the invention, selected ones of the positions (12) are marked unavailable for serving calls after the completion of a call of the second type. Thereafter the selected positions are accessed by overriding the unavailable state for a prescribed number of calls of the first type and, at the completion of the prescribed number of calls, the selected positions are again marked available for serving calls. In accordance with a second aspect of the invention, responsive to a first prescribed traffic delay condition in placing calls of the first type to the positions, ones of the positions are successively marked unavailable for serving calls as each completes a call. This occurs as long as the traffic delay state exists. Calls of the first class are placed to the positions that are marked unavailable by overriding the unavailable state. The positions are successively again made available for serving calls as each becomes idle for serving a new call responsive to a second more desirable prescribed traffic delay condition encountered in placing first type calls." ] }
1507.08257
2241614293
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
Most database systems keep statistics allowing them to estimate the selectivity of single attributes fairly accurately. For the joint selectivity of multiple attributes, much early work and many systems make the attribute value independence (AVI) assumption. This assumes that the selectivity of a set of operators @math is equal to @math . If instead a system stores (some) joint selectivities (it is infeasible for it to store all of them), we can use the AVI assumption to "fill in the gaps" or use the estimation approach advocated in @cite_20 .
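The AVI assumption itself is a one-liner; a minimal sketch with illustrative selectivity values (not taken from any particular system):

```python
from math import prod

def joint_selectivity_avi(selectivities):
    """Joint selectivity of a set of predicates under the attribute value
    independence (AVI) assumption: simply the product of the individual
    selectivities."""
    return prod(selectivities)

# Predicates individually keeping 50%, 20% and 10% of the rows are assumed
# to jointly keep roughly 1%:
print(joint_selectivity_avi([0.5, 0.2, 0.1]))
```

When attributes are correlated, this product can be off by orders of magnitude, which is exactly why stored joint selectivities or the maximum-entropy approach of @cite_20 are needed.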
{ "cite_N": [ "@cite_20" ], "mid": [ "2143672210" ], "abstract": [ "Cost-based query optimizers need to estimate the selectivity of conjunctive predicates when comparing alternative query execution plans. To this end, advanced optimizers use multivariate statistics to improve information about the joint distribution of attribute values in a table. The joint distribution for all columns is almost always too large to store completely, and the resulting use of partial distribution information raises the possibility that multiple, non-equivalent selectivity estimates may be available for a given predicate. Current optimizers use cumbersome ad hoc methods to ensure that selectivities are estimated in a consistent manner. These methods ignore valuable information and tend to bias the optimizer toward query plans for which the least information is available, often yielding poor results. In this paper we present a novel method for consistent selectivity estimation based on the principle of maximum entropy (ME). Our method exploits all available information and avoids the bias problem. In the absence of detailed knowledge, the ME approach reduces to standard uniformity and independence assumptions. Experiments with our prototype implementation in DB2 UDB show that use of the ME approach can improve the optimizer’s cardinality estimates by orders of magnitude, resulting in better plan quality and significantly reduced query execution times. For almost all queries, these improvements are obtained while adding only tens of milliseconds to the overall time required for query optimization." ] }
1507.08257
2241614293
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
Assuming we have accurate values for the selectivity @math and cost @math of selection operator @math , we can calculate the rank @math of @math : Given a set of selection operators, sorting and executing them in non-decreasing order of their ranks results in the minimal expected pipelined processing cost @cite_3 under the AVI assumption. Clearly, the computation of the ranks and the sorting can be done in polynomial time. A similar argument applies if a query uses a conjunction of predicates on the same relation, and query evaluation uses a simple table scan. In such a case, the optimiser should test the predicates in the order which minimises the total number of tests. Basically, ordering selection operators optimally is a solved problem, but only when given exact values for the @math and @math .
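With exact selectivities and costs, the ordering step is straightforward; the sketch below uses the rank form (selectivity - 1) / cost, which is the standard choice in the selection-ordering literature (the rank expression itself is elided in the text above, so this concrete form is our assumption, and the operator values are made up):

```python
def order_by_rank(operators):
    """Sort selection operators into non-decreasing rank order.

    Each operator is a (name, selectivity, cost) triple.  With the rank
    (selectivity - 1) / cost, cheap and highly selective operators get
    the most negative rank and therefore execute first.
    """
    return sorted(operators, key=lambda op: (op[1] - 1.0) / op[2])

# Illustrative operators:
ops = [("p1", 0.9, 1.0), ("p2", 0.1, 2.0), ("p3", 0.5, 1.0)]
print([name for name, _, _ in order_by_rank(ops)])  # ['p3', 'p2', 'p1']
```

The whole procedure is a single sort, i.e. O(k log k) for k operators, which is why the problem only becomes hard once the selectivities are intervals rather than exact values.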
{ "cite_N": [ "@cite_3" ], "mid": [ "166430344" ], "abstract": [ "State-of-the-art optimization approaches for relational database systems, e.g., those used in systems such as OBE, SQL DS, and commercial INGRES. when used for queries in non-traditional database applications, suffer from two problems. First, the time complexity of their optimization algorithms, being combinatoric, is exponential in the number of relations to be joined in the query. Their cost is therefore prohibitive in situations such as deductive databases and logic oriented languages for knowledge bases, where hundreds of joins may be required. The second problem with the traditional approaches is that, albeit effective in their specific domain, it is not clear whether they can be generalized to different scenarios (e.g. parallel evaluation) since they lack a formal model to define the assumptions and critical factors on which their valiclity depends. This paper proposes a solution to these problems by presenting (i) a formal model and a precise statement of the optimization problem that delineates the assumptions and limitations of the previous approaches, and (ii) a quadratic-tinie algorithm th& determines the optimum join order for acyclic queries. The approach proposed is robust; in particular, it is shown that it remains heuristically effective for cyclic queries as well." ] }
1507.08257
2241614293
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
In parametric query optimisation, several plans can be precompiled and then, depending on the query parameters, selected for execution @cite_5 . However, if there is a large number of optimal plans, each covering a small region of the parameter space, this becomes problematic. First of all, we have to store all of these plans. In addition, constantly switching from one plan to another in a dynamic environment (such as stream processing) just because the parameters change slightly introduces a considerable overhead. To remedy this, researchers have proposed reducing the number of plans at the cost of slightly decreasing the quality of query execution @cite_7 . Our approach can be seen as an extreme form of parametric query optimisation: it finds a single plan that covers the whole parameter space.
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "113324792", "2116727378" ], "abstract": [ "Query optimizers normally compile queries into one optimal plan by assuming complete knowledge of all cost parameters such as selectivity and resource availability. The execution of such plans could be sub-optimal when cost parameters are either unknown at compile time or change significantly between compile time and runtime [Loh89, GrW89]. Parametric query optimization [INS+92, CG94, GK94] optimizes a query into a number of candidate plans, each optimal for some region of the parameter space. In this paper, we present parametric query optimization algorithms. Our approach is based on the property that for linear cost functions, each parametric optimal plan is optimal in a convex polyhedral region of the parameter space. This property is used to optimize linear and non-linear cost functions. We also analyze the expected sizes of the parametric optimal set of plans and the number of plans produced by the Cole and Graefe algorithm [CG94].", "A \"plan diagram\" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing useability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. 
minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPCH-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to \"anorexic\" (small absolute number of plans) levels incurring only marginal increases in the estimated query processing costs." ] }
1507.08257
2241614293
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
Notions of robustness in query optimisation have been considered in @cite_0 @cite_2 @cite_26 . Babcock and Chaudhuri @cite_0 use probability distributions derived from sampling, as well as user preferences, in order to tune the predictability (or robustness) of query plans versus their performance. For @cite_26 , robustness means not continuing to execute to completion a query plan which is found to be suboptimal during evaluation; instead, re-optimisation is performed. On the other hand, @cite_2 consider a plan to be robust only if its cost stays within, e.g., 20% of the cost of an optimal plan. However, these techniques need additional statistical information to work.
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_2" ], "mid": [ "2026966899", "2041563709", "" ], "abstract": [ "Research on query optimization has focused almost exclusively on reducing query execution time, while important qualities such as consistency and predictability have largely been ignored, even though most database users consider these qualities to be at least as important as raw performance. In this paper, we explore how the query optimization process can be made more robust, focusing on the important subproblem of cardinality estimation. The robust cardinality estimation technique that we propose allows for a user- or application-specified trade-off between performance and predictability, and it captures multi-dimensional correlations while remaining space- and time-efficient.", "Virtually every commercial query optimizer chooses the best plan for a query using a cost model that relies heavily on accurate cardinality estimation. Cardinality estimation errors can occur due to the use of inaccurate statistics, invalid assumptions about attribute independence, parameter markers, and so on. Cardinality estimation errors may cause the optimizer to choose a sub-optimal plan. We present an approach to query processing that is extremely robust because it is able to detect and recover from cardinality estimation errors. We call this approach \"progressive query optimization\" (POP). POP validates cardinality estimates against actual values as measured during query execution. If there is significant disagreement between estimated and actual values, execution might be stopped and re-optimization might occur. Oscillation between optimization and execution steps can occur any number of times. A re-optimization step can exploit both the actual cardinality and partial results, computed during a previous execution step. Checkpoint operators (CHECK) validate the optimizer's cardinality estimates against actual cardinalities. 
Each CHECK has a condition that indicates the cardinality bounds within which a plan is valid. We compute this validity range through a novel sensitivity analysis of query plan operators. If the CHECK condition is violated, CHECK triggers re-optimization. POP has been prototyped in a leading commercial DBMS. An experimental evaluation of POP using TPC-H queries illustrates the robustness POP adds to query processing, while incurring only negligible overhead. A case-study applying POP to a real-world database and workload shows the potential of POP, accelerating complex OLAP queries by almost two orders of magnitude.", "" ] }
1507.08257
2241614293
Optimising queries in real-world situations under imperfect conditions is still a problem that has not been fully solved. We consider finding the optimal order in which to execute a given set of selection operators under partial ignorance of their selectivities. The selectivities are modelled as intervals rather than exact values and we apply a concept from decision theory, the minimisation of the maximum regret, as a measure of optimality. We show that the associated decision problem is NP-hard, which renders a brute-force approach to solving it impractical. Nevertheless, by investigating properties of the problem and identifying special cases which can be solved in polynomial time, we gain insight that we use to develop a novel heuristic for solving the general problem. We also evaluate minmax regret query optimisation experimentally, showing that it outperforms a currently employed strategy of optimisers that uses mean values for uncertain parameters.
Minmax regret optimisation (MRO) has been applied to a number of optimisation problems where some of the parameters are (partially) unknown @cite_29 . The complexity of the MRO version of a problem is often higher than that of the original problem. Many optimisation problems with polynomial-time solutions turn out to be NP-hard in their MRO versions @cite_29 .
{ "cite_N": [ "@cite_29" ], "mid": [ "2028589351" ], "abstract": [ "Min-max and min-max regret criteria are commonly used to define robust solutions. After motivating the use of these criteria, we present general results. Then, we survey complexity results for the min-max and min-max regret versions of some combinatorial optimization problems: shortest path, spanning tree, assignment, min cut, min s-t cut, knapsack. Since most of these problems are NP-hard, we also investigate the approximability of these problems. Furthermore, we present algorithms to solve these problems to optimality." ] }
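The minmax regret criterion used in the record above can be illustrated with a brute-force sketch. Everything here is an assumption for illustration, not the paper's method: the unit-cost selection model, the grid discretisation of the selectivity intervals, and all function names are hypothetical, and the paper develops a heuristic precisely because this kind of enumeration over permutations is exponential.

```python
from itertools import permutations, product

def plan_cost(order, selectivities):
    """Toy cost model: each operator examines the fraction of tuples
    surviving the operators before it, at unit cost per tuple."""
    cost, frac = 0.0, 1.0
    for op in order:
        cost += frac
        frac *= selectivities[op]
    return cost

def max_regret(order, intervals, grid=5):
    """Worst-case regret of `order`, approximated over a grid of
    selectivity scenarios drawn from the given intervals."""
    ops = list(intervals)
    axes = [[lo + i * (hi - lo) / (grid - 1) for i in range(grid)]
            for lo, hi in intervals.values()]
    worst = 0.0
    for point in product(*axes):
        sel = dict(zip(ops, point))
        # Regret of a scenario = our cost minus the best achievable cost.
        best = min(plan_cost(p, sel) for p in permutations(ops))
        worst = max(worst, plan_cost(order, sel) - best)
    return worst

def minmax_regret_order(intervals):
    """Order of selection operators minimising the maximum regret."""
    return min(permutations(intervals), key=lambda p: max_regret(p, intervals))
```

With degenerate intervals (exact selectivities) this reduces to the classic rule of running the most selective operator first; widening an interval shifts the choice toward orders that hedge against the worst case.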
1507.08234
2952042755
We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that: (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work in the integration of document cohesiveness or comprehensibility to ranking [5, 56].
A text can be coherent at a local and at a global level @cite_26. Local coherence is measured by examining the similarity between neighbouring text spans, e.g., the well-connectedness of adjacent sentences through lexical cohesion @cite_14 or entity repetition @cite_50. Global coherence, on the other hand, is measured through discourse-level relations connecting remote text spans across a whole text, e.g. sentences @cite_9 @cite_27.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_9", "@cite_27", "@cite_50" ], "mid": [ "2015933299", "", "1586232984", "2045738181", "2124741472" ], "abstract": [ "Cohesion in English is concerned with a relatively neglected part of the linguistic system: its resources for text construction, the range of meanings that are speciffically associated with relating what is being spoken or written to its semantic environment. A principal component of these resources is 'cohesion'. This book studies the cohesion that arises from semantic relations between sentences. Reference from one to the other, repetition of word meanings, the conjunctive force of but, so, then and the like are considered. Further, it describes a method for analysing and coding sentences, which is applied to specimen texts.", "", "Introduction 1. A theory of discourse coherence 2. Coherence and verb phrase ellipsis 3. Coherence and gapping 4. Coherence and extraction 5. Coherence and pronoun interpretation 6. Coherence and tense interpretation 7. Conclusion.", "The specification discloses a luggage carrier made up of a generally U-shaped frame. The frame has two spaced legs with a hook on the front which hooks over the bumper of an automobile. Two braces are attached to the cross member of the U-shaped member and the front portion of the braces is received on fastening means welded to the under side of the car frame. The cross members provide a supporting surface for carrying articles, boats and the like. A platform may be supported on the frame.", "This paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. It presents a framework and initial theory of centering intended to model the local component of attentional state. The paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions, given a particular attentional state. It demonstrates that the attentional state properties modeled by centering can account for these differences." ] }
1507.08234
2952042755
We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that: (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work in the integration of document cohesiveness or comprehensibility to ranking [5, 56].
There is extensive work on local coherence that uses different approaches, including bag of words methods at sentence level @cite_17 , sequences of content words (of length @math ) at paragraph level @cite_30 , local lexical cohesion information @cite_37 , local syntactic cues @cite_42 , and combining local lexical and syntactic features, e.g., term co-occurrence @cite_25 @cite_40 . Overall, various aspects of CT have long been used to model local coherence @cite_45 @cite_13 , including the well-known entity approaches that rank the repetition and syntactic realisation of entities in adjacent sentences @cite_51 @cite_42 .
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_42", "@cite_40", "@cite_45", "@cite_51", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "2006987417", "2110568314", "", "2120465901", "2050277298", "2140676672", "2170524863", "2169546346", "2089150068" ], "abstract": [ "We propose a method for dealing with semantic complexities occurring in information retrieval systems on the basis of linguistic observations. Our method follows from an analysis indicating that long runs of content words appear in a stopped document cluster, and our observation that these long runs predominately originate from the prepositional phrase and subject complement positions and as such, may be useful predictors of semantic coherence. From this linguistic basis, we test three statistical hypotheses over a small collection of documents from different genre. By coordinating thesaurus semantic categories (SEMCATs) of the long run words to the semantic categories of paragraphs, we conclude that for paragraphs containing both long runs and short runs, the SEMCAT weight of long runs of content words is a strong predictor of the semantic coherence of the paragraph.", "The problem of organizing information for multidocument summarization so that the generated summary is coherent has received relatively little attention. While sentence ordering for single document summarization can be determined from the ordering of sentences in the input article, this is not the case for multidocument summarization where summary sentences may be drawn from different input articles. In this paper, we propose a methodology for studying the properties of ordering information in the news genre and describe experiments done on a corpus of multiple acceptable orderings we developed for the task. Based on these experiments, we implemented a strategy for ordering information that combines constraints from chronological order of events and topical relatedness. Evaluation of our augmented algorithm shows a significant improvement of the ordering over two baseline strategies.", "", "We describe a generic framework for integrating various stochastic models of discourse coherence in a manner that takes advantage of their individual strengths. An integral part of this framework are algorithms for searching and training these stochastic coherence models. We evaluate the performance of our models and algorithms and show empirically that utility-trained log-linear coherence models outperform each of the individual coherence models considered.", "This article describes an implemented system which uses centering theory for planning of coherent texts and choice of referring expressions. We argue that text and sentence planning need to be driven in part by the goal of maintaining referential continuity and thereby facilitating pronoun resolution: Obtaining a favorable ordering of clauses, and of arguments within clauses, is likely to increase opportunities for nonambiguous pronoun use. Centering theory provides the basis for such an integrated approach. Generating coherent texts according to centering theory is treated as a constraint satisfaction problem. The well-known Rule 2 of centering theory is reformulated in terms of a set of constraints—cohesion, salience, cheapness, and continuity—and we show sample outputs obtained under a particular weighting of these constraints. This framework facilitates detailed research into evaluation metrics and will therefore provide a productive research tool in addition to the immediate practical benefit of improving the fluency and readability of generated texts. The technique is generally applicable to natural language generation systems, which perform hierarchical text structuring based on a theory of coherence relations with certain additional assumptions.", "This article proposes a novel framework for representing and measuring local coherence. Central to this approach is the entity-grid representation of discourse, which captures patterns of entity distribution in a text. The algorithm introduced in the article automatically abstracts a text into a set of entity transition sequences and records distributional, syntactic, and referential information about discourse entities. We re-conceptualize coherence assessment as a learning task and show that our entity-based representation is well-suited for ranking-based generation and text classification tasks. Using the proposed representation, we achieve good performance on text ordering, summary coherence evaluation, and readability assessment.", "Centering theory is the best-known framework for theorizing about local coherence and salience; however, its claims are articulated in terms of notions which are only partially specified, such as \"utterance,\" \"realization,\" or \"ranking.\" A great deal of research has attempted to arrive at more detailed specifications of these parameters of the theory; as a result, the claims of centering can be instantiated in many different ways. We investigated in a systematic fashion the effect on the theory's claims of these different ways of setting the parameters. Doing this required, first of all, clarifying what the theory's claims are (one of our conclusions being that what has become known as \"Constraint 1\" is actually a central claim of the theory). Secondly, we had to clearly identify these parametric aspects: For example, we argue that the notion of \"pronoun\" used in Rule 1 should be considered a parameter. Thirdly, we had to find appropriate methods for evaluating these claims. We found that while the theory's main claim about salience and pronominalization, Rule 1—a preference for pronominalizing the backward-looking center (CB)—is verified with most instantiations, Constraint 1–a claim about (entity) coherence and CB uniqueness—is much more instantiation-dependent: It is not verified if the parameters are instantiated according to very mainstream views (\"vanilla instantiation\"), it holds only if indirect realization is allowed, and is violated by between 20 and 25 of utterances in our corpus even with the most favorable instantiations. We also found a trade-off between Rule 1, on the one hand, and Constraint 1 and Rule 2, on the other: Setting the parameters to minimize the violations of local coherence leads to increased violations of salience, and vice versa. Our results suggest that \"entity\" coherence—continuous reference to the same entities—must be supplemented at least by an account of relational coherence.", "Ordering information is a critical task for natural language generation applications. In this paper we propose an approach to information ordering that is particularly suited for text-to-text generation. We describe a model that learns constraints on sentence order from a corpus of domain-specific texts and an algorithm that yields the most likely order among several alternatives. We evaluate the automatically generated orderings against authored texts from our corpus and against human subjects that are asked to mimic the model's task. We also assess the appropriateness of such a model for multidocument summarization.", "Latent Semantic Analysis (LSA) is used as a technique for measuring the coherence of texts. By comparing the vectors for 2 adjoining segments of text in a high‐dimensional semantic space, the method provides a characterization of the degree of semantic relatedness between the segments. We illustrate the approach for predicting coherence through reanalyzing sets of texts from 2 studies that manipulated the coherence of texts and assessed readers’ comprehension. The results indicate that the method is able to predict the effect of text coherence on comprehension and is more effective than simple term‐term overlap measures. In this manner, LSA can be applied as an automated method that produces coherence predictions similar to propositional modeling. We describe additional studies investigating the application of LSA to analyzing discourse structure and examine the potential of LSA as a psychological model of coherence effects in text comprehension." ] }
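The entity n-gram entropy idea from the first coherence model above can be sketched as follows. This is a minimal illustration under assumptions: the function name is hypothetical, discourse entities are given as a flat list of strings (e.g. one subject/object per sentence), and bigrams are used for concreteness; the paper's exact formulation may differ.

```python
import math
from collections import Counter

def entity_ngram_entropy(entities, n=2):
    """Shannon entropy (bits) of the n-gram distribution over a sequence of
    discourse entities. Higher entropy means new entity transitions keep
    appearing, i.e. the topic drifts and coherence decreases."""
    grams = [tuple(entities[i:i + n]) for i in range(len(entities) - n + 1)]
    if not grams:  # sequence shorter than n
        return 0.0
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A text that keeps referring to the same entity yields a single repeated bigram and entropy 0, while a text introducing a new entity in every sentence approaches the maximum entropy for its length, matching the intuition that topic drift lowers coherence.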